Annika Wägenbauer

MFA: Moderately Frustrating Authentication

Is the answer 42? Your authenticator says yes. Multifactor authentication, also known as two-factor authentication (2FA) or, more briefly, MFA: security loves it, users hate it, and management wants efficient, happy employees and bulletproof security at the same time.

Before we explore the importance of targeted communication in cybersecurity and cybersecurity awareness, let's take a technical look at MFA. How does it work? Is it truly bulletproof? How can attackers still access accounts even when MFA is enabled?

Multifactor authentication is a technical solution designed to enhance the security of basic logins that use a username or email address and a password to verify your identity. It works by adding a second layer to the login process, one that is independent of wherever your initial credentials are stored, be that a sticky note, your memory, or, in the best case, a password manager. This second layer can take the form of a hardware key (such as a USB stick), one-time codes sent to you via email or SMS, or something called an HOTP (HMAC-based one-time password) or a TOTP (time-based one-time password). The latter two are the most common: an authenticator app stores a secret key provided by a web service, and to log in to that service, you enter the number shown in your authenticator app, which is derived from the shared secret key. After 30 seconds, the code changes. This means that even if adversaries know your credentials, they cannot authenticate because they are missing the second factor.
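To make the mechanism concrete, here is a minimal TOTP sketch following RFC 6238. The secret below is the standard RFC test secret, not one from any real service; production authenticators typically use exactly this scheme with a base32 secret, 30-second windows, and six digits:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """Derive a time-based one-time password (RFC 6238) from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `period`-second windows since the Unix epoch.
    counter = int((time.time() if now is None else now) // period)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter (HOTP input)
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HOTP's default MAC is HMAC-SHA1
    offset = digest[-1] & 0x0F                          # dynamic truncation, RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret ("12345678901234567890" in base32) at Unix time 59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082 (RFC test vector)
```

The 30-second rollover described above is the `period` parameter: your phone and the server each run this same computation on the shared secret, which is why no network round-trip is needed to verify the code.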

From a technological perspective, this is not rocket science and is quite easy to understand and implement. However, as you might have guessed, it doesn't make online accounts completely secure. There are ways to bypass MFA. The most well-known method is advanced phishing. Modern phishing kits can perfectly replicate hundreds of legitimate services. You'll receive an email that looks legitimate, click the link, and a copy of your bank's website will open. You probably won't check the URL, as everything else looks trustworthy. If you enter your credentials, the phishing kit relays them to the legitimate service. If they are correct, the service responds with an MFA challenge. You are asked to enter your code, and the kit relays that to the legitimate service as well. If you have entered everything correctly, you are redirected to the real page and logged in. Everything looks normal: no dodgy errors or weird-looking websites anymore. Meanwhile, the phishing kit has collected the session cookie and shared it with the attacker, who can then import the cookie into their browser and gain access. Other attacks on MFA include SIM swapping and fatigue attacks. In SIM swapping, the attacker clones your SIM and authenticates with a web service using harvested credentials. If you receive an SMS with the MFA code, the attacker receives it too, logs in, and hijacks your account. In fatigue attacks, attackers rely on apps where you simply accept the login request (LinkedIn does this, for example). They send lots of login requests until you accidentally accept one. Again, your account has been hijacked by just one wrong click.

Aside from the methods of circumventing MFA, there is another problem: user acceptance. If employees or private individuals do not understand why MFA is important and are simply forced to use it, they will find ways to avoid the need for a second device, such as a smartphone, by storing the 2FA secret in their password manager so that they only have to copy the code from there. While this may sound convenient, it poses a security risk: if your device is compromised and attackers steal the master password for your vault, they can log in to your account even though you have MFA enabled. This is why MFA apps on a separate device are so important. Even if your credentials have been stolen or your laptop compromised, attackers won't be able to answer the MFA challenge unless they also hack your phone. Even more secure MFA options exist, such as hardware tokens that store a cryptographic key on a USB-like device. Adversaries would have to physically steal the device to access your account, which would be quite difficult.

We already covered it in our first Luna(r) Brief: everything is an asset, even though it sounds a bit strange to call human beings assets. Every asset can either benefit or threaten the security of your organisation. It's relatively easy to keep machines secure. However, when it comes to human beings, psychology plays a significant role. You can't enforce security from above; it won't be accepted. You need role models who embody security, even if they don't work in that field. You need very good, transparent communication that everyone can understand and access. Explain to your employees why it is necessary to enforce MFA and other user-unfriendly security features. Teach them how to use them. Explain what could go wrong if the feature were not active or if it were circumvented. Make it real and tangible. Show them what happens if an account gets hacked using harvested credentials. Show them what happens to their device and explain that they will be slowed down at work. Be transparent and explain that MFA is not bulletproof. Show them how attackers can bypass these security measures. Teach them how to spot phishing and how to report security issues. Provide them with contacts to discuss security concerns, such as unusual calls or text messages. And don't just stick to the business context. Security outside of the corporate world is equally important. Take banking, insurance and cloud storage with PII, for example. All of these can be secured, and improving security in people's private lives can also benefit corporate cybersecurity, as acceptance of security features will increase.

While 42 may be the answer to life, the key to user acceptance in cybersecurity is transparent and honest communication that is targeted at the right people.


There are plenty of cybersecurity blogs out there - but this one’s a little different. Think of it as your personal cyber bedtime story: a calm(ish), reflective read to end your day, with just the right mix of insight, realism and a touch of provocation.

I’m thrilled to introduce The Luna(r) Brief, a new monthly blog series that the brilliant Luna-Marika Dahl will be writing for Cybersecurity Redefined - published on the second Monday of each month at 9PM CE(S)T.

Why late? Because cybersecurity doesn’t sleep - and neither do the thoughts that keep us up at night.

Each post is designed to be a thoughtful end-of-day read - short enough to digest after work, deep enough to spark new thinking.

Annika Wägenbauer

A Breeze and a Breach: The Hidden Risks of Shadow AI

Jamie, Fireflies, and Otter. Sounds like a perfect romantic date night at a nearby lake, with some good food and a warm summer breeze. Right? Right??

I'm sorry to admit it, but I'm talking about AI. More specifically, shadow AI. Otter, Fireflies, and Jamie are all AI note-taking tools. They are just one example of the many AI applications that employees use to make their lives easier and their work more effective. All too often, these tools have not been formally approved by the IT or security department (which is what makes them shadow AI), so they can pose a real threat to your company.

But why is that? How can using a seemingly harmless AI-powered web service impact decisions, lead to data leaks or privacy concerns, and affect the integrity of your data?

Let's consider a hypothetical incident involving a large financial institution. Imagine an employee working with customer data including personally identifiable information (PII) who wants to eliminate the need for tedious Excel calculations and decides to use ChatGPT to analyze the data. The employee grants the US-based web service complete access to the customer data that needs to be analyzed to develop new marketing strategies. This data includes a lot of financial data, but also personal identification details such as account numbers, addresses, phone numbers and ID numbers.

It's not clear whether the data is being handled in a secure manner when uploaded to an unauthorized web service. It could be stored or used for training purposes. If the web service were to be hacked, the data could be stolen and leaked without the security department even being aware of it.

Large Language Models are prone to providing incorrect results and hallucinating. If the employee does not check the output and decisions are made based on it, the LLM could initiate a completely useless marketing strategy that would waste a lot of money. Alternatively, the strategy could address the wrong people because the LLM was not given enough context, miscalculated a complex financial formula, or simply hallucinated. 

Now imagine the employee sharing the ChatGPT chat with their team to demonstrate how effectively the AI analyzed the data. If you followed the cybersecurity news closely at the end of July 2025, you might be aware of the major ChatGPT leak, in which shared private chats were exposed on the internet and could be easily found using specific search parameters on Google. As of August 1, zeit.de reported that 110k chats were still indexed on Google, despite OpenAI actively working with search engines to remove the indexed chats from their databases. Among these leaked chats, people with malicious intent targeting the financial company could easily find related chats using a search like `site:chat.openai.com/share "name of the company"`. They would then find all the data that the employee uploaded. Depending on the information present, this could lead to sophisticated phishing attempts against customers, demands for ransom, or sale to competitors.

This is just one of the dangers that shadow AI introduces. For example, AI note-taking tools that have not been verified by the security team could collect voice samples that could be used for vishing (voice phishing) attempts and impersonation, take screenshots of classified presentations or personally identifiable information (PII), or collect information about attendees in sensitive meetings. Unverified AI web services could read sensitive data from an invoice and feed it to a Google Workspace spreadsheet, where another unknown LLM could run automations on this sensitive data. Think about the AI tools you are familiar with, and I am sure you will identify more potential security threats.

Nobody knows where the data is flowing. Where is it being processed? Will it be stored somewhere? If so, in which country and with which provider? Will it be encrypted? How is the security and privacy of your data guaranteed? What about the security of the underlying infrastructure? Will you be informed if your data is involved in an incident? How will you know what data has been given to unauthorized tools? When it comes to shadow AI, there are so many questions you cannot answer.

So, what can you do? The solution is quite simple: it's all about communication and building bridges between your security team, management, and end users. Firstly, everyone should be aware of the dangers of unauthorized AI tools and the potential reputational and financial damage a data loss could cause. Secondly, ensure that your IT or security departments are aware of the tools that could benefit employees. AI is on the rise for good reason, since it can automate tedious tasks and provide support in many areas. Denying access to AI for security reasons doesn't make sense, since employees will always find a way to use it, even if it means copying sensitive data to private, unmanaged, and potentially insecure devices to work with their favorite AI tools. Ask your employees what they need and evaluate possible solutions, or build them yourself. This gives you more control over the tools being used, makes it easier and faster to fix security issues and react to them, decreases the risk of privacy concerns, and enables your employees to use AI to work more effectively with significantly less danger.

Of course, for certain special cases, you may not be able to provide a solution or prevent the use of unauthorized AI services. However, eliminating every small security risk improves your overall security. If you communicate the importance of security effectively and explain the risks of shadow AI in a way that is easy to understand, you will benefit in many ways: happy employees who can use AI in their daily work, increased trust in the security department because it takes care of employees' needs instead of only checking its KPIs, and happy management because both security and productivity have increased without any major restrictions.

So yes, Jamie, Fireflies, and Otter could still be part of a perfect summer evening - but in your workplace, they’re not bringing wine and sunsets. They’re bringing risk. Keep shadow AI in the open, set clear rules, and choose trusted tools. That way, your next date with AI ends in productivity, not a breach.


Annika Wägenbauer

The Human Factor: Why Holistic Security Beats Every Innovation

New cyber trends emerge almost daily. They promise to solve all your problems with minimal effort, offering maximum flexibility, scalability, and reliability - with AI integrated throughout. But what would happen if we focused solely on such solutions?

Picture a company where cybersecurity isn't a major focus. Employees may think cybersecurity is magic and nothing can go wrong, since, if stuff hits the fan, AI will save them all. Then someone finds a USB stick, is curious to see what's on it, and plugs it in, hoping your fancy AI solution detects any malware on it. Maybe you're lucky this time: it was a digital threat known to your AI. But now think of some real-world threats: passwords sticking to your employee's monitor, devices being unlocked and left alone, or, more sophisticated, attackers pretending to be cleaning personnel or facility management who want to enter your office building or access specific rooms. And even more physical: an attack on insecure infrastructure. Bad locks, weak fences, no cameras, or even no security at all.

And who is it that you consider trustworthy? Real human beings run our health systems, produce our food - at least I hope this is still the case. Why should we then choose to put our entire trust in some machines when it comes to our data, our money, or the safety in our connected homes?

Don't get me wrong; IT innovations are great. AI is a superpower that we can and should utilize. What is more, focusing solely on digital security is in itself flawed, since cybersecurity is not only about implementing password policies or setting up firewalls, but involves physical security, personnel security, your IT systems, and your OT systems. 

Sweden's Säkerhetsskyddslag (2018:585), or Protective Security Act is a good example of legislation that specifies (cyber)security as a combination of digital security, physical security, and personnel security.

You cannot create and embed proper security for human beings without humans being involved. Surely, at first glance, security often focuses on infrastructure, tech systems, and similar areas. But what is the reason for their existence? Our usage of them - or, more specifically, the services that run on them. We use them heavily every day for fun and business purposes, and much of the infrastructure runs critical services on which our modern society is built: electricity, banking, healthcare, and the internet, for example. On the downside, this is why companies that run critical infrastructure are often targeted. Attackers can easily extort money from them by threatening to open a water dam, shut down the mobile network in a region, or cut off electricity to a hospital. And you know what's really shocking? Even though we are talking about critical infrastructure, its security is often critical as well. Critically bad. Legacy systems are connected to the internet with no security measures in place, or with weak ones. What could possibly go wrong?

And that's exactly what we must keep in mind when talking about security. Knowing what could happen is a good start, as is knowing exactly what to consider when planning or improving your security. Don't focus on tools or training first; make sure you have a proper overview of your IT landscape and assets. And even though it sounds wrong, treat your employees as assets as well. Once you have that overview, you can start thinking about how to implement security for your different types of assets. By the way, securing OT sensors in the field is a different story than securing a web server, and awareness sessions are far away from technical security implementations.

Consider the potential threats you may face. The consequences can vary depending on who attacks you and their motivation. If the attackers want to disrupt your business, they may destroy anything and everything, whether physically or digitally. If they want to take control, they may have been sitting silently in your networks for months, or they may break into your location. Do they want to demand a ransom? They will trick your employees, access your networks, steal data, and threaten to keep it unless you pay. Be prepared for these scenarios and implement the appropriate measures. Compare their effectiveness and ease of implementation, and start with the most effective and easiest to implement. Additionally, make sure you know where to seek help if needed.
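That prioritisation step can be made concrete. As a sketch only (the measures and scores below are hypothetical, invented for this example, not taken from any standard), you could rate each candidate measure for effectiveness and ease of implementation and rank by the combined score:

```python
# Hypothetical security measures with illustrative 1-5 ratings.
measures = [
    {"name": "Enforce MFA on all accounts", "effectiveness": 5, "ease": 4},
    {"name": "Awareness training",          "effectiveness": 3, "ease": 5},
    {"name": "Network segmentation",        "effectiveness": 4, "ease": 2},
    {"name": "Full EDR rollout",            "effectiveness": 5, "ease": 2},
]

# Rank so the most effective and easiest-to-implement measures come first.
ranked = sorted(measures, key=lambda m: m["effectiveness"] + m["ease"], reverse=True)
for m in ranked:
    print(f'{m["name"]}: score {m["effectiveness"] + m["ease"]}')
```

In practice you would weight the two dimensions to suit your risk appetite and budget; the point is simply to make "most effective and easiest first" an explicit, comparable ordering rather than a gut feeling.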

So, we have now discussed why security is about much more than AI innovations, technical implementations, and policies. However, as I also mentioned, AI is a superpower that we can and must utilize to improve our security. Machine learning systems are ideal for identifying patterns or spotting the odd in tons of logs. Integrated LLMs can be used to support the development of automations or security tools, and to craft queries for SIEM systems and detection rules for EDR/XDR. They can also explain potentially malicious code found during incidents, support penetration tests, and summarize reports generated by sandbox solutions. Finally, they can support incident communication by collecting important information, turning it into readable text that both technical and non-technical people can understand, and providing the analyst with a comprehensive summary.
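As a toy illustration of "spotting the odd in tons of logs" (the events and the 20% threshold are invented for this example; real detection systems use far richer features and models), even a simple frequency baseline flags a user logging in from an unusual source:

```python
from collections import Counter

# Invented authentication events as (user, source IP) pairs.
# In reality these would be parsed from your SIEM or auth logs.
events = [
    ("alice", "10.0.0.5"), ("alice", "10.0.0.5"), ("alice", "10.0.0.5"),
    ("bob",   "10.0.0.7"), ("bob",   "10.0.0.7"),
    ("alice", "203.0.113.9"),  # a login from an IP alice has never used before
]

counts = Counter(events)
total = len(events)

# Flag (user, IP) pairs that account for less than 20% of all events as unusual.
anomalies = [pair for pair, n in counts.items() if n / total < 0.2]
print(anomalies)  # the single alice/203.0.113.9 login stands out
```

The same idea, scaled up with better baselines and statistical or ML models, is what lets detection systems surface the one odd login among millions of routine ones, which a human analyst then investigates.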

From a technical perspective, yes, AI can be useful in each and every area of cybersecurity. However, it cannot replace human beings. Neglecting the vital role of humans may lead you off course, preventing a truly holistic approach to security - one that integrates not just IT concerns, but all essential dimensions of protection.

