A Breeze and a Breach: The Hidden Risks of Shadow AI

Jamie, Fireflies, and Otter. Sounds like a perfect romantic date night at a nearby lake, with some good food and a warm summer breeze. Right? Right??

I'm sorry to admit it, but I'm talking about AI. More specifically, shadow AI. Otter, Fireflies, and Jamie are all AI note-taking tools. They are just one example of the many AI applications that employees use to make their lives easier and their work more effective. All too often, these tools have not been formally approved by the IT or security department (which is what makes them shadow AI), so they can pose a real threat to your company.

But why is that? How can using a seemingly harmless AI-powered web service impact decisions, lead to data leaks or privacy concerns, and affect the integrity of your data?

Let's consider a hypothetical incident involving a large financial institution. Imagine an employee working with customer data including personally identifiable information (PII) who wants to eliminate the need for tedious Excel calculations and decides to use ChatGPT to analyze the data. The employee uploads the full customer dataset to the US-based web service so it can be analyzed to develop new marketing strategies. The dataset contains not only financial records but also personal identification details such as account numbers, addresses, phone numbers, and ID numbers.

It's not clear whether the data is being handled in a secure manner when uploaded to an unauthorized web service. It could be stored or used for training purposes. If the web service were to be hacked, the data could be stolen and leaked without the security department even being aware of it.

Large Language Models are prone to providing incorrect results and hallucinating. If the employee does not check the output and decisions are made based on it, the company could end up launching a completely useless marketing strategy that wastes a lot of money. Alternatively, the strategy could address the wrong people because the LLM was not given enough context, miscalculated a complex financial formula, or simply hallucinated.

Now imagine the employee sharing their ChatGPT chat with their team to demonstrate how effectively the AI analyzed the data. If you followed the cybersecurity news closely at the end of July 2025, you might be aware of the major ChatGPT leak, in which shared private chats were exposed on the internet and could be easily found using specific search parameters on Google. As of August 1, zeit.de reported that around 110,000 chats were still indexed on Google, despite OpenAI actively working with search engines to remove the indexed chats from their databases. Among these leaked chats, people with malicious intent targeting the financial company could easily find related chats by using this search: `site:chat.openai.com/share "name of the company"`. They would then find all the data that the employee uploaded. Depending on the information present, this could lead to sophisticated phishing attempts against customers, ransom demands, or the sale of the data to competitors.
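By the way, you don't have to wait for an attacker to run that search for you. The snippet below is a minimal, hypothetical sketch of a proactive check: it assumes you have set up a Google Programmable Search Engine and an API key (the `GOOGLE_API_KEY` and `SEARCH_ENGINE_ID` environment variables are placeholders, not real credentials, and "Example Financial Corp" is an invented company name), and it simply runs the same kind of `site:` query against the Custom Search JSON API.

```python
import os
import requests

# Hypothetical placeholders - supply your own Google Programmable Search
# Engine ID and API key via environment variables.
API_KEY = os.environ["GOOGLE_API_KEY"]
SEARCH_ENGINE_ID = os.environ["SEARCH_ENGINE_ID"]

COMPANY_NAME = "Example Financial Corp"  # the term an attacker would search for

# The same kind of dork an attacker could type into Google manually.
query = f'site:chat.openai.com/share "{COMPANY_NAME}"'

response = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": SEARCH_ENGINE_ID, "q": query},
    timeout=10,
)
response.raise_for_status()

# Print every indexed shared chat that mentions the company name.
for item in response.json().get("items", []):
    print(item["link"], "-", item.get("title", ""))
```

If a little script like this returns anything at all, you want your incident response process to start before an attacker finds the same links.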

This is just one of the dangers that shadow AI introduces. For example, AI note-taking tools that have not been verified by the security team could collect voice samples that can later be abused for vishing and impersonation attempts, take screenshots of classified presentations or PII, or collect information about attendees in sensitive meetings. Unverified AI web services could read sensitive data from an invoice and feed it to a Google Workspace spreadsheet, where another unknown LLM could run automations on this sensitive data. Think about the AI tools you are familiar with and I am sure you will identify more potential security threats.

Nobody knows where the data is flowing. Where is it being processed? Will it be stored somewhere? If so, in which country and with which provider? Will it be encrypted? How is the security and privacy of your data guaranteed? What about the security of the underlying infrastructure? Will you be informed if your data is involved in an incident? How will you know what data has been given to unauthorized tools? When it comes to shadow AI, there are so many questions you cannot answer.

So, what can you do? The solution is quite simple. It's all about communication and building bridges between your security team, management, and end users. Firstly, everyone should be aware of the dangers of unauthorized AI tools and the potential reputational and financial damage a data loss could cause. Secondly, ensure that your IT or security departments are aware of the tools that could benefit employees. AI is on the rise for good reason, since it can automate tedious tasks and provide support in many areas. Denying access to AI for security reasons doesn't make sense, since employees will always find a way to use it, even if it means copying sensitive data to private, unmanaged, and potentially insecure devices to work with their favorite AI tools. Ask your employees what they need and evaluate possible solutions or build them yourself. This gives you more control over the tools being used, makes it easier and faster to fix and respond to security issues, decreases the risk of privacy concerns, and enables your employees to use AI to work more effectively with significantly less danger.
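Getting that visibility doesn't have to start with an expensive platform. Here is a rough illustrative sketch (the log file name, its one-hostname-per-line format, and the list of AI-tool domains are all assumptions, not a definitive inventory) of how a security team could scan an outbound DNS or proxy log for traffic to well-known AI services, so you at least know which shadow AI tools are in use before deciding what to approve, replace, or block.

```python
from collections import Counter
from pathlib import Path

# Assumed: a plain-text proxy/DNS log with one requested hostname per line.
LOG_FILE = Path("outbound_dns.log")

# Illustrative, deliberately incomplete list of domains behind popular AI tools.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "otter.ai",
    "fireflies.ai",
    "gemini.google.com",
    "claude.ai",
}

hits = Counter()
for line in LOG_FILE.read_text().splitlines():
    hostname = line.strip().lower()
    # Count both exact matches and subdomains, e.g. api.fireflies.ai.
    for domain in AI_DOMAINS:
        if hostname == domain or hostname.endswith("." + domain):
            hits[domain] += 1

# A simple overview of which AI services are actually being used.
for domain, count in hits.most_common():
    print(f"{domain}: {count} requests")
```

Even a crude overview like this turns "nobody knows what's being used" into a concrete list you can discuss with employees and management.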

Of course, in certain special cases you may not be able to provide a solution or prevent the use of unauthorized AI services. However, every small security risk you eliminate improves your overall security. If you communicate the importance of security effectively and explain the risks of shadow AI in a way that is easy to understand, you will benefit in many ways: happy employees who can use AI in their daily work, increased trust in the security department because it takes care of the needs of employees instead of only checking on its KPIs, and happy management because both security and productivity have increased without major restrictions.

So yes, Jamie, Fireflies, and Otter could still be part of a perfect summer evening - but in your workplace, they’re not bringing wine and sunsets. They’re bringing risk. Keep shadow AI in the open, set clear rules, and choose trusted tools. That way, your next date with AI ends in productivity, not a breach.


There are plenty of cybersecurity blogs out there - but this one’s a little different. Think of it as your personal cyber bedtime story: a calm(ish), reflective read to end your day, with just the right mix of insight, realism and a touch of provocation.

I’m thrilled to introduce The Luna(r) Brief, a new monthly blog series that the brilliant Luna-Marika Dahl will be writing for Cybersecurity Redefined - published on the second Monday of each month at 9 PM CE(S)T.

Why late? Because cybersecurity doesn’t sleep - and neither do the thoughts that keep us up at night.

Each post is designed to be a thoughtful end-of-day read - short enough to digest after work, deep enough to spark new thinking.
