40% of global organisations could be hit by security breaches due to “shadow AI” by 2030, according to analyst firm Gartner.
Shadow AI – the use of artificial intelligence tools by employees without a company’s approval and oversight – is becoming a significant cybersecurity risk.
Unlike traditional “shadow IT,” which involves workers installing unauthorised software or plugging in unapproved devices, shadow AI typically requires nothing more than visiting a website in a browser.
Many workers will open an AI chatbot, paste in a document or upload a spreadsheet, and ask an AI to summarise it. To the employee, it may seem harmless and time-saving, but if the data includes customer information, salary details, source code, or sensitive company plans, then it has just been shared with a third-party system.
And it’s not as if the people taking advantage of shadow AI can claim to be clueless about the associated security and compliance risks.
For instance, a recent report by security firm UpGuard revealed an eyebrow-raising 90% of security leaders themselves report using unapproved AI tools at work, with 69% of CISOs incorporating them into their daily workflows.
According to Gartner’s research, there is already significant unauthorised use of generative AI (GenAI) in the workplace, and that usage is only expected to grow.
Microsoft agrees. Its own research, published last month, found that 71% of UK employees admitted to using unapproved AI tools at work, with 51% doing so at least once a week.
Many employees rely upon AI to help them write emails, prepare presentations, or tackle financial and HR tasks. Under pressure to work faster and more effectively, many staff members will turn to the tool that is the easiest to use, regardless of whether it has been approved or not.
Since the shadow AI problem is unlikely to disappear any time soon, organisations need to act now to ensure they do not suffer a breach.
“To address these risks, CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” advises Gartner’s Arun Chandrasekaran.
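One practical way to start the kind of shadow AI audit Gartner describes is to scan outbound traffic logs for requests to known AI tool domains that are not on the organisation’s approved list. The sketch below is illustrative only: the domain lists and the simple `timestamp user host` log format are assumptions, not any specific proxy vendor’s schema.

```python
# Minimal sketch of a shadow AI audit over proxy-style logs.
# Assumptions: each log line is "timestamp user destination-host";
# the domain lists below are examples, not a complete inventory.

APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}  # tools sanctioned by IT
KNOWN_AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for traffic to unapproved AI tools."""
    hits = []
    for line in log_lines:
        timestamp, user, host = line.split()
        if host in KNOWN_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
            hits.append((user, host))
    return hits

sample = [
    "2025-01-01T09:00 alice chatgpt.com",
    "2025-01-01T09:05 bob copilot.microsoft.com",   # approved, not flagged
    "2025-01-01T09:10 carol claude.ai",
]
print(flag_shadow_ai(sample))  # [('alice', 'chatgpt.com'), ('carol', 'claude.ai')]
```

In a real deployment the domain list would need regular maintenance as new AI services appear, which is exactly why Gartner recommends making such audits a recurring exercise rather than a one-off.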
Businesses cannot simply ban the use of AI and hope for the best; that approach is likely to drive staff to conceal their AI use from IT departments.
Read the full article here
_______
If this information is helpful to you, read our blog for more interesting and useful content, tips, and guidelines on similar topics. Contact the team of COMPUTER 2000 Bulgaria now if you have a specific question. Our specialists will assist you with your query.
Content curated by the team of COMPUTER 2000 on the basis of news in reputable media and marketing materials provided by our partners, companies, and other vendors.
Follow us to learn more
CONTACT US
Let’s walk through the journey of digital transformation together.

