The Haunted Supply Chain: Why Spooky Season Never Really Ends in Cybersecurity
European Cyber Security Month (ECSM) has come to an end. Awareness has been raised, so what could possibly go wrong now? Nobody will click on phishing emails or dodgy adverts, connect USB sticks found in the car park outside the grocery store or install shady software to convert PDFs to Word documents. Right?
Even if you're extremely careful, is the supplier of your highly specialised software taking care of their security? What about the logistics company you exchange emails with daily and have trusted for years?
This is a well-known phenomenon called a supply chain attack: one of the pipelines you rely on is compromised without anyone noticing. This can be a software pipeline, where a manufacturer is hacked and its software injected with malicious code; an open-source project into which attackers smuggle malware; or a company you work with that has weak email security, allowing malicious actors to take over accounts and distribute legitimate-looking emails with harmful attachments or links.
So, essentially, cybersecurity isn't a silo. It's a vast, interconnected ecosystem comprising your own cybersecurity, that of your suppliers and subcontractors, and everyone you have ever worked with, or will ever work with. If we trust the theory of Six Degrees of Separation, which says that everyone is connected to everyone else via a surprisingly short chain of nodes, it is probably one huge, globe-spanning ecosystem. Thanks to global players and tech celebrities, as well as open-source libraries - the small, unseen brothers and sisters of those celebrities - the number of nodes may be even smaller.
Just think of Log4j and the panic that spread through every security department. There's a good chance that you didn't even know about it until CVE-2021-44228 was announced. Or consider the recent attacks on the npm (Node Package Manager) ecosystem, in which attackers phished a popular package's maintainer and injected malicious code into several widely used Node packages for application development and cryptography. There are many such stories, some of which read like thrillers. If you're into crazy cyber stories, check out the XZ backdoor; Seytonic (EN) and Simplicissimus (DE) have informative videos on YouTube that delve into the details of that attack.
Scary, isn't it? If one piece of software is compromised or has a critical security bug, large parts of the internet can suddenly be affected. Were you expecting good news? Well, I've got none for you. We can't really avoid this. We do have better security tests integrated into GitHub and GitLab, and AI can help to find issues in code. However, copiloted code can also introduce severe security bugs. And could you also tell me if the servers of the supplier of one of your suppliers use software from a manufacturer that uses a third-party software component whose update pipeline has been compromised?
These are important questions, but there are no good answers. It is not the responsibility of a single company to ensure the cybersecurity of its partners. Everyone needs to understand cybersecurity better, including how important it is and what can happen if things go wrong. ECSM is a good place to start, but awareness is not only for end users in terms of phishing emails and USB sticks. It is also for developers and admins.
However, we can implement focused security checks and audits. Make sure you have an up-to-date list of vulnerabilities that you can map to your assets and the software running on them, so you can react quickly if a new CVE appears. Keep your firewalls and security rules up to date to limit access and network traffic. Ensure that your software and devices are up to date. If you are willing to accept the risk of having unpatched assets, limit access, and work on alternatives. Make sure you have SBOMs (software bills of materials) for the tools you use, so that you can detect issues in your dependencies effectively. And most importantly, ensure that your cybersecurity team can do a good job.
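To make the "map vulnerabilities to your dependencies" part a bit more tangible, here is a minimal sketch in Python that checks a short dependency list against the public OSV.dev vulnerability database. The package names and versions are illustrative examples; in practice, you would feed in the components listed in your SBOMs.

```python
# Minimal sketch: map a dependency list to known vulnerabilities via the OSV.dev API.
# The package names, versions and ecosystems below are illustrative examples.
import requests

dependencies = [
    {"name": "log4j-core", "version": "2.14.1", "ecosystem": "Maven"},
    {"name": "lodash", "version": "4.17.20", "ecosystem": "npm"},
]

for dep in dependencies:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={
            "version": dep["version"],
            "package": {"name": dep["name"], "ecosystem": dep["ecosystem"]},
        },
        timeout=10,
    )
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])
    if vulns:
        print(f"{dep['name']} {dep['version']}: " + ", ".join(v["id"] for v in vulns))
    else:
        print(f"{dep['name']} {dep['version']}: no known advisories")
```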
To limit the risk of being attacked via one of your suppliers, only work with those that take cybersecurity seriously. Unfortunately, you can't control whether their suppliers do the same, but you should focus on what is possible. Only contract a company if they comply with your security standards. If they don't and it happens often enough, they may improve their security measures to avoid losing more customers. Cybersecurity is a community effort. We don't have different silos; we live in a hyperconnected world where the number of electrical devices far exceeds the number of people on Earth. Therefore, it is everyone's responsibility to improve overall security, even if only by a small amount.
But don't blame others if something goes wrong. Support them and explain how they could prevent the same mistake in future. It's understandable if you're angry with them, but getting emotional in the event of an incident won't help. The most common issue I've seen so far is disabled MFA, which leads to successful phishing attacks. After such an attack, the attacker spreads phishing links from a seemingly trustworthy account to your employees. If your employees are aware of phishing, you might be lucky, and they might report suspicious emails to you. Of course, it is not your job to implement a third party's cybersecurity, but pointing out what is wrong is a good idea. This way, you can ensure they understand the issue. And if they make the same mistake again, you might want to consider working with another company.
We once had an incident involving signed phishing emails. It turned out that the company had configured their Exchange server so that all outgoing emails were signed. When the hackers compromised the system, they used the server to distribute phishing emails, and it automatically signed these malicious emails. Did the hackers know their luck? I don't know. Is it a convenient configuration? Arguable. Did this true/false setting have a significant impact on security? Definitely! It's not only about crazy stories like XZ, Log4j and so on. Sometimes it's just a tiny parameter that has been configured incorrectly, or user-friendliness has been prioritised over security.
Be mindful of your security, only contract companies that understand the importance of cybersecurity, and consider how seemingly convenient configuration changes could increase risk.
Because in the end, every chain tells a story - of trust, connection, and the quiet fragility between them. Our job isn’t to break the chain, but to understand it. And maybe, just maybe, keep the ghosts from finding a way in.
There are plenty of cybersecurity blogs out there - but this one’s a little different. Think of it as your personal cyber bedtime story: a calm(ish), reflective read to end your day, with just the right mix of insight, realism and a touch of provocation.
I’m thrilled to introduce The Luna(r) Brief, a new monthly blog series that the brilliant Luna-Marika Dahl will be writing for Cybersecurity Redefined - published on the second Monday of each month at 9 PM CE(S)T.
Why late? Because cybersecurity doesn’t sleep - and neither do the thoughts that keep us up at night.
Each post is designed to be a thoughtful end-of-day read - short enough to digest after work, deep enough to spark new thinking.
Slow Travel, Fast Insights: Cyber Lessons on the Road to Athens
Athens: Women4Cyber conference. Either a two-hour flight away, or a 55-hour journey by train, bus and ferry. A couple of days full of learning about cyber security, people, fears and myself. As I opted for slow travel due to my aversion to flying, I had plenty of time to read, think, enjoy the views of the Albanian coast and the Balkans' mountains, as well as observe my surroundings and the people around me.
And during all that time, I did not forget about my passion: cyber security. So I had some interesting insights. The principles of cyber security can be found everywhere, since every principle is just an idea of security with a small grain of bits and bytes. Travelling involves many techniques that we know from cyber security; city life and even hotels have them as well. If you understand these techniques, you will have a good grasp of fundamental cyber security concepts.
Athens is a city with a rich history. It gave birth to democracy, and many famous mathematicians, architects and philosophers have come from there. It is a city of many different neighbourhoods and contradictions, with many beautiful places, as well as some odd ones. Just like your IT infrastructure. There are ancient (legacy) systems that have been running for 20 years and haven't received updates since then. There are also new, state-of-the-art systems that fill the gaps that the legacy systems could not. Taxis are everywhere, keeping the city alive, just like your IT department. Police are everywhere, keeping the city secure, just like your security units. There are potholes, big streets without traffic lights and steep, slippery paths. The credo is: if it works, it works. Or, "never change a running system".
Fixing an ancient building in such a way that it doesn't lose its charm is really difficult, and the same goes for fixing legacy systems. Chances are high it will break. However, you can secure an ancient building by putting up fences and restricting access, and that's what we do with legacy systems: we allow only the necessary connections, trying to keep them safe from malicious actors while they can still operate and serve the business.
Why repair small things that are still working? There's no time for it, no real need for it and maybe not enough money for it either. Holes in the street are like minor vulnerabilities that don't pose a significant risk, so they might get fixed one day when someone has time, but it's OK to leave them open in the meantime. And a street without streetlights for pedestrians is like a hardware security bug. Use it at your own risk. If you want to get rid of it, you have to replace it, which costs a lot of money and affects your work and the work of others for an unknown amount of time.
When you check in to a hotel, you prove who you are and, in return, you are given a keycard to access your room. It's a kind of login system, including role-based access. Only you and the staff can access your room. As a guest, you get access to a room, suite or whatever you have paid for; the staff are like admins or superusers.
Even the mentality of the people and the unspoken rules reminded me of the daily challenges we face in Cyber Security. Things that aren't changed, even though they're inefficient or inconvenient for customers, because they've always done it that way. Some extra olives? Not possible. Small salad? Order a large one; we don't do small.
On my way back, I travelled through Bulgaria, Serbia, Croatia, Slovenia and Austria. The EU countries trusted their colleagues at the border. There was just one check, where police officers walked quickly through the bus, looking at our passports to see whether we were all authorised to enter the country. It was a bit like logging in, where we proved to be the users by providing information that only we have: our ID.
As a non-EU country, Serbia was a bit more cautious. There were two checks at the border with Bulgaria and two at the border with Croatia, including luggage checks. You could call it packet inspection as well. It's not just about checking whether someone is authorised to enter a country; it's also about seeing if they're carrying something forbidden or dangerous. It's similar to checking a network packet or a file being downloaded to see if it's suspicious or carries harmful content, like a virus. The same applies to luggage being scanned at airports or ferry ports. Packet inspection. Is there anything that is not allowed by law to be carried? If so, access is denied. If you really dig into what happens during border control in terms of the information processed, the decisions made and the actions taken, you'll see that it incorporates many principles of a cyber security model called Zero Trust.
You see? Even if you think you know nothing about cyber security, you already know a lot. Security is often very logical and straightforward. Cyber is then the cherry on top. It's the technical implementation of what we're seeing daily in our non-digital life. Cybersecurity is by no means only a field for nerdy tech people in hoodies. There's so much more to it than that. Who defines security standards? Who communicates them? Who talks to employees about security awareness? Who informs customers about security features? Who informs management about the importance of security? Who makes these topics accessible to non-tech people? While understanding cyber security principles is essential if you work in this field, you don't need to be a forensic specialist or seasoned SOC professional to understand what's going on and have an impact in your role.
Keep your eyes open on your next trip and you’ll see real-world examples of security concepts everywhere. Once you have mapped them to their cyber equivalents, you will see that it is not as nerdy as you might have expected, since the basics are clear and simple to understand.
MFA: Moderately Frustrating Authentication
Is the answer 42? Your authenticator says yes. Multifactor authentication, also known as second-factor authentication or, more briefly, MFA. Security loves it, users hate it, and management wants efficient, happy employees and bulletproof security at the same time.
Before we explore the importance of targeted communication in cybersecurity and cybersecurity awareness, let's take a technical look at multi-factor authentication (MFA). How does it work? Is it truly bulletproof? How can attackers still access accounts even when MFA is enabled?
Multifactor authentication is a technical solution designed to enhance the security of basic logins that use a username or email address and a password to verify your identity. It works by adding a second layer to the login process that is independent of the storage of your initial credentials - be that a sticky note, your memory, or, in the best case, a password manager. This second layer can take the form of a hardware key (such as a USB stick), random numbers sent to you via email or SMS, or something called a HOTP (HMAC-based one-time password) or a TOTP (time-based one-time password). The latter two are the most common. An authenticator app stores a secret key provided by a web service. To log in to that service, you'll need to enter the number shown in your authenticator app, which is derived from the shared secret key and, in the case of TOTP, the current time. After 30 seconds, the code will change. This means that even if adversaries know your credentials, they cannot authenticate because they are missing the second factor.
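For the technically curious, here is a rough sketch of how such a time-based code is derived, following RFC 6238 with the common defaults (HMAC-SHA1, 30-second steps, six digits). The Base32 secret below is a made-up example; real services generate their own and hand it to your authenticator app, usually via a QR code.

```python
# Sketch of TOTP code generation (RFC 6238) with common default parameters.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    # Decode the shared secret that the web service handed to the authenticator app
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second intervals since the Unix epoch
    counter = int(time.time()) // time_step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick four bytes based on the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the service and your authenticator app compute the same six digits,
# so the code never has to travel anywhere before you type it in.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one
```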
From a technological perspective, this is not rocket science and is quite easy to understand and implement. However, as you might have guessed, it doesn't make online accounts completely secure. There are ways to bypass MFA. The most well-known method is to use advanced phishing techniques. Phishing kits nowadays can perfectly replicate hundreds of legitimate services. You'll receive an email that looks legitimate, click the link and a copy of your bank's website will open. You probably won't check the URL, as everything else looks trustworthy. If you enter your credentials, the phishing kit will relay them to the legitimate service. If they are correct, the service will present you with an MFA challenge. You will then be asked to enter your code, and the phishing site will relay it to the legitimate service again. If you have entered everything correctly, you will be redirected to the real page and will be logged in. Everything looks normal, with no dodgy errors or weird-looking websites anymore. Meanwhile, the phishing kit collects the session cookie and shares it with the attacker, who can then import the cookie into their browser and gain access. Other attacks on MFA include SIM swapping and fatigue attacks. In SIM swapping, the attacker takes over your phone number, for example by tricking your mobile provider into porting it to their own SIM, and authenticates with a web service using harvested credentials. If you receive an SMS with the MFA code, the attacker will receive it too, log in and hijack your account. In fatigue attacks, attackers rely on apps where you simply accept the login request (LinkedIn does this, for example). They send lots of login requests until you accidentally accept one. Again, your account has been hijacked by just one wrong click.
Aside from the methods of circumventing MFA, there is another problem: user acceptance. If employees or private individuals do not understand why MFA is important and are simply forced to use it, they will find ways to circumvent the need for a second device, such as a smartphone, by storing the 2FA secret in their password manager so that they only have to copy it from there. While this may sound convenient, it can pose a security risk. If your device is compromised and attackers steal the master password for your key vault, they can log in to your account even if you have MFA enabled. This is why MFA apps on a separate device are so important. Even if your credentials have been stolen or your laptop compromised, attackers won't be able to get the MFA challenge right unless they also hack your phone. Even more secure MFA options exist, such as hardware tokens that store a cryptographic key on a USB-like device. Adversaries would have to steal the device physically to access your account, which would be quite difficult.
We already covered it in our first Luna(r) Brief: everything is an asset, even though it sounds a bit strange to call human beings assets. Every asset can either benefit or threaten the security of your organisation. It's relatively easy to keep machines secure. However, when it comes to human beings, psychology plays a significant role. You can't enforce security from above; it won't be accepted. You need role models who embody security, even if they don't work in that field. You need very good, transparent communication that everyone can understand and access. Explain to your employees why it is necessary to enforce MFA and other user-unfriendly security features. Teach them how to use them. Explain what could go wrong if the feature were not active or if it were circumvented. Make it real and tangible. Show them what happens if an account gets hacked using harvested credentials. Show them what happens to their device and explain that they will be slowed down at work. Be transparent and explain that MFA is not bulletproof. Show them how attackers can bypass these security measures. Teach them how to spot phishing and how to report security issues. Provide them with contacts to discuss security concerns, such as unusual calls or text messages. And don't just stick to the business context. Security outside of the corporate world is equally important. Take banking, insurance and cloud storage with PII, for example. All of these can be secured, and improving security in people's private lives can also benefit corporate cybersecurity, as acceptance of security features will increase.
While 42 may be the answer to life, the key to user acceptance in cybersecurity is transparent and honest communication that is targeted at the right people.
A Breeze and a Breach: The Hidden Risks of Shadow AI
Jamie, Fireflies, and Otter. Sounds like a perfect romantic date night at a nearby lake, with some good food and a warm summer breeze. Right? Right??
I'm sorry to admit it, but I'm talking about AI. More specifically, shadow AI. Otter, Fireflies, and Jamie are all AI note-taking tools. They are just one example of the many AI applications that employees use to make their lives easier and their work more effective. All too often, these tools have not been formally approved by the IT or security department (which is what makes them shadow AI), so they can pose a real threat to your company.
But why is that? How can using a seemingly harmless AI-powered web service impact decisions, lead to data leaks or privacy concerns, and affect the integrity of your data?
Let's consider a hypothetical incident involving a large financial institution. Imagine an employee working with customer data including personally identifiable information (PII) who wants to eliminate the need for tedious Excel calculations and decides to use ChatGPT to analyze the data. The employee grants the US-based web service complete access to the customer data that needs to be analyzed to develop new marketing strategies. This data includes a lot of financial data, but also personal identification details such as account numbers, addresses, phone numbers and ID numbers.
It's not clear whether the data is being handled in a secure manner when uploaded to an unauthorized web service. It could be stored or used for training purposes. If the web service were to be hacked, the data could be stolen and leaked without the security department even being aware of it.
Large Language Models are prone to providing incorrect results and hallucinating. If the employee does not check the output and decisions are made based on it, the LLM could initiate a completely useless marketing strategy that would waste a lot of money. Alternatively, the strategy could address the wrong people because the LLM was not given enough context, miscalculated a complex financial formula, or simply hallucinated.
Now imagine the employee sharing the ChatGPT chat with their team to demonstrate how effectively the AI analyzed the data. If you followed the cybersecurity news closely at the end of July 2025, you might be aware of the major ChatGPT leak, in which shared private chats were exposed on the internet and could be easily found using specific search parameters on Google. As of August 1, zeit.de reported that 110k chats were still indexed on Google, despite OpenAI actively working with search engines to remove the indexed chats from their databases. Among these leaked chats, people with malicious intent targeting the financial company could easily find related chats by using this search: `site:chat.openai.com/share "name of the company"`. They would then find all the data that the employee uploaded. Depending on the information present, this could lead to sophisticated phishing attempts against customers, ransom demands, or the sale of the data to competitors.
This is just one of the dangers that shadow AI introduces. For example, AI note-taking tools that have not been verified by the security team could collect voice samples that could be used for vishing attempts and impersonation, take screenshots of classified presentations or personally identifiable information (PII), or collect information about attendees in sensitive meetings. Unverified AI web services could read sensitive data from an invoice and feed it into a Google Workspace spreadsheet, where another unknown LLM could run automations on this sensitive data. Think about the AI tools you are familiar with, and I am sure you will identify more potential security threats.
Nobody knows where the data is flowing. Where is it being processed? Will it be stored somewhere? If so, in which country and with which provider? Will it be encrypted? How is the security and privacy of your data guaranteed? What about the security of the underlying infrastructure? Will you be informed if your data is involved in an incident? How will you know what data has been given to unauthorized tools? When it comes to shadow AI, there are so many questions you cannot answer.
So, what can you do? The solution is quite simple. It's all about communication and building bridges between your security team, management, and end users. Firstly, everyone should be aware of the dangers of unauthorized AI tools and the potential reputational and financial damage the data loss could cause. Secondly, ensure that your IT or security departments are aware of the tools that could benefit employees. AI is on the rise for good reason, since it can automate tedious tasks and provide support in many areas. Denying access to AI for security reasons doesn't make sense, since employees will always find a way to use it, even if it means copying sensitive data to private, unmanaged and potentially insecure devices to work with their favorite AI tools. Ask your employees what they need and evaluate possible solutions or build them yourself. This gives you more control over the tools being used, makes it easier and faster to fix and react to security issues, decreases the risk of privacy concerns, and enables your employees to use AI to work more effectively with significantly less danger.
Of course, for certain special cases, you may not be able to provide a solution or prevent the use of unauthorized AI services. However, eliminating every small security risk will increase your overall security. If you communicate the importance of security effectively and explain the risks of shadow AI in a way that is easy to understand, you will benefit in many ways: happy employees who can use AI in their daily work, increased trust in the security department because they're taking care of employees' needs and not only checking their KPIs, and happy management because both security and productivity have increased without any major restrictions.
So yes, Jamie, Fireflies, and Otter could still be part of a perfect summer evening - but in your workplace, they’re not bringing wine and sunsets. They’re bringing risk. Keep shadow AI in the open, set clear rules, and choose trusted tools. That way, your next date with AI ends in productivity, not a breach.
The Human Factor: Why Holistic Security Beats Every Innovation
New cyber trends emerge almost daily. They promise to solve all your problems with minimal effort, offering maximum flexibility, scalability, and reliability - with AI integrated throughout. But what would happen if we focused solely on such solutions?
Picture a company where cybersecurity isn't a major focus. Employees may think cybersecurity is magic and nothing can go wrong, because if stuff hits the fan, AI will save them all. Then someone finds a USB stick, is curious to see what's on it, plugs it in, and your fancy AI solution doesn't detect the malware on it. Maybe you're lucky this time: it was a digital threat known to your AI. But now think of some real-world threats. Passwords stuck to your employees' monitors, devices being unlocked and left alone, or, more sophisticated, attackers pretending to be cleaning personnel or facility management who want to enter your office building or access specific rooms. And, even more physical, an attack on insecure infrastructure: bad locks, weak fences, no cameras, or even no security at all.
And who is it that you consider trustworthy? Real human beings run our health systems and produce our food - at least I hope this is still the case. Why, then, should we choose to put our entire trust in machines when it comes to our data, our money, or the safety of our connected homes?
Don't get me wrong; IT innovations are great. AI is a superpower that we can and should utilize. What is more, focusing solely on digital security is in itself flawed, since cybersecurity is not only about implementing password policies or setting up firewalls, but involves physical security, personnel security, your IT systems, and your OT systems.
Sweden's Säkerhetsskyddslag (2018:585), or Protective Security Act, is a good example of legislation that defines (cyber)security as a combination of digital security, physical security, and personnel security.
You cannot create and embed proper security for human beings without humans being involved. Surely, at first glance, security often focuses on infrastructure, tech systems, and similar areas. But what is the reason for their existence? Our usage of them - or, more specifically, the services that run on them. We use them heavily every day for fun and business purposes, and much of the infrastructure runs critical services on which our modern society is built. Electricity, banking, healthcare, and the internet, for example. This, on the downside, is why companies that run critical infrastructure are often targeted. Attackers can easily extort money from them by threatening to open a water dam, shut down the mobile network in a region or cut off electricity to a hospital. And you know what's really shocking? Despite us talking about critical infrastructure right now, their security is often critical as well. Critically bad. Legacy systems are connected to the internet with no security measures in place, or with weak ones. What could possibly go wrong?
And that's exactly what we must keep in mind when talking about security. Knowing what could happen is a good start, as is knowing what exactly you need to keep in mind when planning or improving your security. Don't focus on tools or training first; make sure you have a proper overview of your IT landscape and assets. And even though it sounds wrong, treat your employees as assets as well. Once you have that good overview, you can start thinking about how to implement security for your different types of assets - by the way, securing OT sensors in the field is a different story than securing a web server, and awareness sessions are far away from technical security implementations.
Consider the potential threats you may face. The consequences can vary depending on who attacks you and their motivation. If the attackers want to disrupt your business, they may destroy anything and everything, whether physically or digitally. If they want to take control, they may have been sitting silently in your networks for months, or they may break into your location. Do they want to demand a ransom? They will trick your employees, access your networks, steal data, and threaten to keep it unless you pay. Be prepared for these scenarios and implement the appropriate measures. Compare their effectiveness and ease of implementation, and start with the most effective and easiest to implement. Additionally, make sure you know where to seek help if needed.
So, we have now discussed why security is about much more than AI innovations, technical implementations, and policies. However, as I also mentioned, AI is a superpower that we can and must utilize to improve our security. Machine learning systems are ideal for identifying patterns or spotting the odd one out in tons of logs. Integrated LLMs can be used to support the development of automations or security tools, and to craft queries for SIEM systems and detection rules for EDR/XDR. They can also explain potentially malicious code found in incidents, support penetration tests and summarize reports generated by sandbox solutions. Finally, they can support incident communication by collecting important information, putting it into readable text that can be understood by both tech and non-tech people, and providing the analyst with a comprehensive summary.
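To give a small flavour of the "spotting the odd one out in tons of logs" idea, here is a toy sketch using scikit-learn's Isolation Forest on a few made-up per-user features; a real pipeline would of course derive its features from your actual log sources and tune the model carefully.

```python
# Toy sketch: flag unusual login behaviour with an Isolation Forest.
# The feature rows (failed logins, source countries, MB uploaded) are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per user and day: [failed_logins, distinct_source_countries, mb_uploaded]
events = np.array([
    [0, 1, 12], [1, 1, 9], [0, 1, 15], [2, 1, 11], [1, 1, 8],
    [0, 1, 14], [1, 1, 10], [0, 1, 13],
    [25, 4, 950],  # burst of failures from several countries plus a big upload
])

model = IsolationForest(contamination=0.1, random_state=42).fit(events)
labels = model.predict(events)  # -1 = anomaly, 1 = normal

for row, label in zip(events, labels):
    if label == -1:
        print("Needs a closer look:", row)
```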
From a technical perspective, yes, AI can be useful in each and every area of cybersecurity. However, it cannot replace human beings. Neglecting the vital role of humans may lead you off course, preventing a truly holistic approach to security - one that integrates not just IT concerns, but all essential dimensions of protection.