In contrast, 44% of respondents to another survey said they used AI tools in both professional and personal settings. That survey, which looked specifically at French companies, estimated that 28% of employees were using AI without company supervision.
In other words, despite the buzz surrounding generative text and image tools over the past few years, businesses have been slow to come up with regulations for AI.
Why Shadow AI Is Dangerous
It’s a challenge to nail down the dangers of a practice that, by definition, isn’t monitored or fully understood. Here are the biggest areas of concern.
Internal or external misinformation
Our report found that 49% of senior leaders are concerned about the risk of large language models generating false information. We’ve already seen reports of faulty AI-powered legal briefs, among other blunders, so it’s easy to imagine the same happening with an internal business report or an email to an important client.
Cybersecurity risk
Using AI to write code is one popular use case, but if an IT support team relies on it, that code might contain AI-generated bugs or vulnerabilities that let hackers slip a malware logic bomb past your security protocols.
Exposed data
Many AI users are also unaware that their prompts may be recorded by the company behind their free AI tool. Any private company data included in a prompt could therefore be exposed. That’s one reason, among others, that sensitive company data should never be shared with an AI platform.
Compliance failures
Governments around the globe are rolling out AI restrictions and guidelines of their own. Without someone at your company tracking federal and state regulation, you can’t be sure that employees aren’t opening your business up to an investigation from a regulatory watchdog down the road.
How Your Company Can Combat Shadow AI Use
Ultimately, shadow AI’s threat stems from non-existent or limited business policies surrounding AI use in the workplace. So, the answer is relatively simple: You’ll need to create guidelines that limit AI use to specific tasks within specific roles.
And, with 50% of U.S. companies saying that they are currently “updating their internal policies to govern the use of ChatGPT and end Shadow GPT,” this solution appears to be rolling out already, albeit slowly.
The safest option is a total ban on AI use: Apple, Amazon, Samsung, and Goldman Sachs are among the companies that have banned at least some AI tools during work hours. However, a ban also means you forgo the technology’s benefits.
You’ll likely want to include a caveat about future AI use within your guidelines: Pending approval, workers should be able to expand AI use beyond your initial guidelines, since AI tools will continue evolving.