A Breeze and a Breach: The Hidden Risks of Shadow AI
Jamie, Fireflies, and Otter. Sounds like a perfect romantic date night at a nearby lake, with some good food and a warm summer breeze. Right? Right??
I'm sorry to admit it, but I'm talking about AI. More specifically, shadow AI. Otter, Fireflies, and Jamie are all AI note-taking tools. They are just one example of the many AI applications that employees use to make their lives easier and their work more effective. All too often, these tools have not been formally approved by the IT or security department (which is what makes them shadow AI), so they can pose a real threat to your company.
But why is that? How can using a seemingly harmless AI-powered web service impact decisions, lead to data leaks or privacy concerns, and affect the integrity of your data?
Let's consider a hypothetical incident involving a large financial institution. Imagine an employee working with customer data, including personally identifiable information (PII), who wants to eliminate the need for tedious Excel calculations and decides to use ChatGPT to analyze the data. The employee grants the US-based web service complete access to the customer data that needs to be analyzed to develop new marketing strategies. This dataset contains a lot of financial data, but also personal identification details such as account numbers, addresses, phone numbers, and ID numbers.
It's not clear whether the data is being handled in a secure manner when uploaded to an unauthorized web service. It could be stored or used for training purposes. If the web service were to be hacked, the data could be stolen and leaked without the security department even being aware of it.
Large Language Models are prone to providing incorrect results and hallucinating. If the employee does not check the output and decisions are made based on it, the company could end up with a completely useless marketing strategy that wastes a lot of money. Alternatively, the strategy could target the wrong people because the LLM was not given enough context, miscalculated a complex financial formula, or simply hallucinated.
Now imagine the employee sharing their ChatGPT chat with their team to demonstrate how effectively the AI analyzed the data. If you followed the cybersecurity news closely at the end of July 2025, you might be aware of the major ChatGPT leak, in which shared private chats were exposed on the internet and could easily be found using specific search parameters on Google. As of August 1, zeit.de reported that 110,000 chats were still indexed on Google, despite OpenAI actively working with search engines to remove them from their databases. Among these leaked chats, people with malicious intent targeting the financial company could easily find related ones by using a search like `site:chat.openai.com/share "name of the company"`. They would then find all the data that the employee uploaded. Depending on the information present, this could lead to sophisticated phishing attempts against customers, ransom demands, or the sale of the data to competitors.
This is just one of the dangers that shadow AI introduces. For example, AI note-taking tools that have not been verified by the security team could collect voice samples that could be used for vishing attempts and impersonation, take screenshots of classified presentations or PII, or collect information about attendees in sensitive meetings. Unverified AI web services could read sensitive data from an invoice and feed it into a Google Workspace spreadsheet, where another unknown LLM could run automations on this sensitive data. Think about the AI tools you are familiar with, and I am sure you will identify more potential security threats.
Nobody knows where the data is flowing. Where is it being processed? Will it be stored somewhere? If so, in which country and with which provider? Will it be encrypted? How is the security and privacy of your data guaranteed? What about the security of the underlying infrastructure? Will you be informed if your data is involved in an incident? How will you know what data has been given to unauthorized tools? When it comes to shadow AI, there are so many questions you cannot answer.
So, what can you do? The solution is quite simple: it's all about communication and building bridges between your security team, management, and end users. Firstly, everyone should be aware of the dangers of unauthorized AI tools and the potential reputational and financial damage a data loss could cause. Secondly, ensure that your IT or security departments are aware of the tools that could benefit employees. AI is on the rise for good reason: it can automate tedious tasks and provide support in many areas. Denying access to AI for security reasons doesn't make sense, since employees will always find a way to use it, even if that means copying sensitive data to private, unmanaged, and potentially insecure devices to work with their favorite AI tools. Ask your employees what they need and evaluate possible solutions or build them yourself. This gives you more control over the tools being used, makes it easier and faster to react to and fix security issues, decreases the risk of privacy concerns, and enables your employees to use AI to work more effectively with significantly less danger.
Of course, for certain special cases, you may not be able to provide a solution or prevent the use of unauthorized AI services. However, every small security risk you eliminate increases your overall security. If you communicate the importance of security effectively and explain the risks of shadow AI in a way that is easy to understand, you will benefit in many ways: happy employees who can use AI in their daily work, increased trust in the security department because it takes care of employees' needs instead of only watching its own KPIs, and happy management because both security and productivity have increased without any major restrictions.
So yes, Jamie, Fireflies, and Otter could still be part of a perfect summer evening - but in your workplace, they’re not bringing wine and sunsets. They’re bringing risk. Keep shadow AI in the open, set clear rules, and choose trusted tools. That way, your next date with AI ends in productivity, not a breach.
There are plenty of cybersecurity blogs out there - but this one’s a little different. Think of it as your personal cyber bedtime story: a calm(ish), reflective read to end your day, with just the right mix of insight, realism and a touch of provocation.
I’m thrilled to introduce The Luna(r) Brief, a new monthly blog series that the brilliant Luna-Marika Dahl will be writing for Cybersecurity Redefined - published on the second Monday of each month at 9PM CE(S)T.
Why late? Because cybersecurity doesn’t sleep - and neither do the thoughts that keep us up at night.
Each post is designed to be a thoughtful end-of-day read - short enough to digest after work, deep enough to spark new thinking.
The Human Factor: Why Holistic Security Beats Every Innovation
New cyber trends emerge almost daily. They promise to solve all your problems with minimal effort, offering maximum flexibility, scalability, and reliability - with AI integrated throughout. But what would happen if we focused solely on such solutions?
Picture a company where cybersecurity isn't a major focus. Employees may think cybersecurity is magic and nothing can go wrong, since, if stuff hits the fan, AI will save them all. Then someone finds a USB stick, is curious to see what's on it, and plugs it in. Maybe you're lucky this time and your fancy AI solution catches the malware, because it was a digital threat known to your AI. But now think of some real-world threats: passwords stuck to your employees' monitors, devices left unlocked and unattended, or, more sophisticated, attackers pretending to be cleaning personnel or facility management in order to enter your office building or access specific rooms. And, even more physical, an attack on insecure infrastructure: bad locks, weak fences, no cameras, or even no security at all.
And who is it that you consider trustworthy? Real human beings run our health systems and produce our food - at least I hope this is still the case. Why, then, should we choose to put our entire trust in machines when it comes to our data, our money, or the safety of our connected homes?
Don't get me wrong; IT innovations are great. AI is a superpower that we can and should utilize. What is more, focusing solely on digital security is in itself flawed, since cybersecurity is not only about implementing password policies or setting up firewalls, but involves physical security, personnel security, your IT systems, and your OT systems.
Sweden's Säkerhetsskyddslag (2018:585), or Protective Security Act, is a good example of legislation that defines (cyber)security as a combination of digital security, physical security, and personnel security.
You cannot create and embed proper security for human beings without humans being involved. Surely, at first glance, security often focuses on infrastructure, tech systems, and similar areas. But what is the reason for their existence? Our usage of them - or, more specifically, the services that run on them. We use them heavily every day for fun and for business, and much of the infrastructure runs critical services on which our modern society is built: electricity, banking, healthcare, and the internet, for example. On the downside, this is why companies that run critical infrastructure are often targeted. Attackers can easily extort money from them by threatening to open a water dam, shut down the mobile network in a region, or cut off electricity to a hospital. And you know what's really shocking? We are talking about critical infrastructure, yet its security is often critical as well. Critically bad. Legacy systems are connected to the internet with no security measures in place, or with weak ones. What could possibly go wrong?
And that's exactly what we must keep in mind when talking about security. Knowing what could happen is a good start; so is knowing exactly what to keep in mind when planning or improving your security. Don't focus on tools or trainings first; make sure you have a proper overview of your IT landscape and assets. And even though it sounds wrong, treat your employees as assets as well. Once you have that good overview, you can start thinking about how to implement security for your different types of assets. By the way, securing OT sensors in the field is a different story than securing a web server, and awareness sessions are far removed from technical security implementations.
Consider the potential threats you may face. The consequences can vary depending on who attacks you and their motivation. If the attackers want to disrupt your business, they may destroy anything and everything, whether physically or digitally. If they want to take control, they may have been sitting silently in your networks for months, or they may break into your location. Do they want to demand a ransom? They will trick your employees, access your networks, steal data, and threaten to leak it unless you pay. Be prepared for these scenarios and implement the appropriate measures. Compare their effectiveness and ease of implementation, and start with the most effective and easiest to implement. Additionally, make sure you know where to seek help if needed.
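To make that prioritization tangible, here is a minimal sketch in Python. The measures, the 1-to-5 ratings, and the weighting are purely illustrative assumptions, not recommendations; in practice, the numbers would come from your own risk assessment.

```python
# Illustrative only: ranking candidate security measures by a simple
# effectiveness-vs-effort score. All measures and ratings below are
# made-up example values, not recommendations.

# (effectiveness 1-5, implementation effort 1-5; lower effort is easier)
measures = {
    "Phishing-resistant MFA":   (5, 2),
    "Offline, tested backups":  (5, 3),
    "Network segmentation":     (4, 4),
    "Awareness training":       (3, 2),
    "Physical access controls": (4, 3),
}

def score(effectiveness: int, effort: int) -> float:
    # Favor high effectiveness and low effort; the 0.5 weight is arbitrary.
    return effectiveness - 0.5 * effort

ranked = sorted(measures.items(), key=lambda kv: score(*kv[1]), reverse=True)

for name, (effectiveness, effort) in ranked:
    print(f"{name}: effectiveness={effectiveness}, "
          f"effort={effort}, score={score(effectiveness, effort):.1f}")
```

Whatever scoring you use, the point is to make the trade-off explicit instead of picking measures by gut feeling.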
So, we have now discussed why security is about much more than AI innovations, technical implementations, and policies. However, as I also mentioned, AI is a superpower that we can and must utilize to improve our security. Machine learning systems are ideal for identifying patterns or spotting the odd one out in tons of logs. Integrated LLMs can be used to support the development of automations and security tools, craft queries for SIEM systems, and write detection rules for EDR/XDR. They can also explain potentially malicious code found during incidents, support penetration tests, and summarize reports generated by sandbox solutions. Finally, they can support incident communication by collecting important information, turning it into readable text that can be understood by both tech and non-tech people, and providing the analyst with a comprehensive summary.
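As a small illustration of the "spotting the odd one out in tons of logs" idea, here is a minimal sketch using scikit-learn's Isolation Forest. The per-user features and the sample values are invented for demonstration; a real pipeline would aggregate them from your SIEM and tune the model carefully.

```python
# Minimal sketch: flagging anomalous login behavior with an Isolation Forest.
# The features and sample data are invented for demonstration purposes.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user features: [failed logins per hour,
# distinct source IPs, off-hours logins]
X = np.array([
    [1, 1, 0],
    [2, 1, 0],
    [0, 1, 1],
    [3, 2, 0],
    [40, 12, 9],  # the odd one out: looks like a brute-force attempt
])

model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

for features, label in zip(X, labels):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{features.tolist()} -> {status}")
```

The model isolates the outlier without needing labeled attack data, which is exactly why such techniques scale to log volumes no human could ever review by hand.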
From a technical perspective, yes, AI can be useful in each and every area of cybersecurity. However, it cannot replace human beings. Neglecting the vital role of humans may lead you off course, preventing a truly holistic approach to security - one that integrates not just IT concerns, but all essential dimensions of protection.