- Cyberattacks are on the rise
- The MOVEit data breach
- Could AI become our ally in the fight against cybercrime?
- Darktrace’s Cyber AI Loop can autonomously identify and stop an attack
Artificial intelligence (AI) technology has attracted a great deal of attention in recent years, especially following the launch of generative AI. In a relatively short period of time, generative AI models like ChatGPT, DALL-E, and Stable Diffusion achieved widespread adoption, with people mainly using them to produce text and images that are nearly indistinguishable from human-generated content. Unsurprisingly, it wasn’t long before the technology found its way into the workplace as well, where it is used to streamline various workflows and improve the efficiency of human workers. However, the growing influence of generative AI across numerous spheres of our lives has inevitably raised serious concerns about its potential negative effects on people and society as a whole.
While most of the discussion has revolved around generative AI’s impact on human labour, this groundbreaking technology may pose even greater threats to our society. Many industry experts are concerned that generative AI could lead to a dramatic increase in the number of cybersecurity threats in the coming years, as it enables almost anyone to launch sophisticated cyberattacks, even with little to no programming experience. Perhaps more worryingly still, generative AI could enable cybercriminals to develop entirely new types of attacks that our existing security measures would not be able to cope with, causing significant damage to individuals and organisations alike. So, is there anything we can do to protect ourselves?
“Cybercriminals are exploiting the biggest vulnerability within any organisation: humans. As progress in artificial intelligence (AI) and analytics continues to advance, hackers will find more inventive and effective ways to capitalise on human weakness in areas of (mis)trust, the desire for expediency, and convenient rewards”.
Amy Larsen DeCarlo, principal analyst at GlobalData
Cyberattacks are on the rise
The truth is that hackers have never shied away from embracing emerging technologies and exploiting them to achieve their nefarious goals, so it was only a matter of time before generative AI caught their attention as well. Generative AI’s ability to produce written text, audio, and even video content that looks authentic has made it easier than ever for cybercriminals to trick people into disclosing sensitive information. This is reflected in a notable surge in social engineering attacks, which increased not only in frequency but also in severity in 2023, a trend that is expected to continue in 2024, according to data and analytics company GlobalData. “Cybercriminals are exploiting the biggest vulnerability within any organisation: humans”, says Amy Larsen DeCarlo, principal analyst at GlobalData. “As progress in artificial intelligence (AI) and analytics continues to advance, hackers will find more inventive and effective ways to capitalise on human weakness in areas of (mis)trust, the desire for expediency, and convenient rewards”.
Similarly, the Google Cloud Cybersecurity Forecast 2024 report predicts that cybercriminals will increasingly use generative AI and large language models (LLMs) to execute a wide range of cyberattacks, including SMS and phishing attacks. In the past, such attacks were relatively easy to identify thanks to misspelt words and grammatical errors. Now that generative AI can mimic natural language with a high degree of accuracy, however, this will no longer be the case. Generative AI models could also be used to disseminate fake news and create incredibly realistic deepfake photos and videos, making it increasingly difficult to differentiate fact from fiction and further eroding public trust, even in legitimate news sources. It’s not just external threats that companies will have to contend with, though. Employees could inadvertently expose their own company’s sensitive information by feeding it into a generative AI model, potentially enabling an attacker to retrieve that information simply by entering the right prompt. To prevent this from happening, companies will need to implement appropriate security measures and educate their employees on the proper use of generative AI in the workplace.
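As a minimal illustration of what such a safeguard might look like, the Python sketch below screens an outgoing prompt for obvious markers of sensitive data before it is handed to an external model. The patterns and the key format are hypothetical placeholders; a real deployment would rely on a dedicated data-loss-prevention tool rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a production system would use a proper
# data-loss-prevention engine with far more robust detection.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key (hypothetical format)": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def find_sensitive_data(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in the prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Block the prompt if it appears to contain sensitive information."""
    findings = find_sensitive_data(prompt)
    for label in findings:
        print(f"Blocked: prompt appears to contain a {label}")
    return not findings

if __name__ == "__main__":
    print(safe_to_send("Summarise this press release for me."))   # True
    print(safe_to_send("Email jane.doe@example.com her card "
                       "number 4111 1111 1111 1111."))            # False
```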
The MOVEit data breach
The past year has seen an endless stream of cyberattacks on a wide range of businesses, but one that stands out for the sheer number of victims it affected is the MOVEit data breach. By exploiting a zero-day vulnerability in this popular file transfer software, the ransomware group Clop was able to steal data from numerous businesses and even some government agencies. Some of the more high-profile victims of the attack included Shell, British Airways, and the US Department of Energy. Although the developer, Progress Software, reacted quickly and issued a fix for the vulnerability, this came too late to prevent significant damage. Moreover, it took months for the full extent of the damage to become evident, with new names being added to the list of victims week after week.
According to an investigation conducted by the antivirus company Emsisoft, a total of 2,167 organisations across the world have been affected by the data breach so far. Some 88 per cent of these are from the US, with the remainder coming largely from Germany, Canada, and the UK. Furthermore, Emsisoft estimates that personal data from more than 62 million individuals was exposed in the breach. However, it’s important to point out that the real figures could be much higher, possibly reaching hundreds of millions. “It’s inevitable that there are corporate victims that don’t yet know they’re victims, and there are individuals out there who don’t yet know they’ve been impacted”, says Brett Callow, a threat analyst at Emsisoft. “MOVEit is especially significant simply because of the number of victims, who those victims are, the sensitivity of the data that was obtained, and the multitude of ways that data can be used”. The overall cost of the breach has been estimated at around $11 billion. Perhaps the most worrying aspect of the whole incident is that even some organisations that didn’t use the MOVEit software directly have had their data stolen in the breach, because a third party or vendor they collaborated with did use it, enabling attackers to reach data from other companies in their network.
“Attackers are increasingly using AI and ML to develop more sophisticated attacks, but AI can also be used to counter these attacks. This arms race between AI-driven defence and AI-assisted offence will drive innovation in the cybersecurity industry, resulting in ever more advanced security solutions”.
Brian Roche, chief product officer at Veracode
Could AI become our ally in the fight against cybercrime?
It’s undeniable that AI has enabled cybercriminals to make their attacks more potent and difficult to recognise. However, there is no reason why we couldn’t also leverage the power of AI to keep those new threats at bay. Many industry experts are convinced that AI is going to become a potent new ally in our ongoing fight against cybercrime. “Attackers are increasingly using AI and ML to develop more sophisticated attacks, but AI can also be used to counter these attacks. This arms race between AI-driven defence and AI-assisted offence will drive innovation in the cybersecurity industry, resulting in ever more advanced security solutions,” says Brian Roche, chief product officer at Veracode. “AI-powered security solutions are already being used to identify and prioritise threats, automate incident response, and personalise security controls. In the future, these solutions will become even more sophisticated as they learn from experience and adapt to new threats in real time. This will enable AI-driven cyber defence systems to proactively identify and neutralise automated attacks fuelled by AI before they cause damage. In this evolving cybersecurity landscape, organisations need to embrace AI and ML to stay ahead of the curve”.
A recent survey conducted by CyberRisk Alliance, which included 800 senior IT and cybersecurity decision-makers from various US and UK organisations, reveals that 22 per cent of organisations already have the majority of their cybersecurity budget dedicated to AI-powered solutions, while 64 per cent are highly likely to implement one such solution within the next year. When asked about possible use cases for AI, 61 per cent of respondents said that they believed AI was better than humans at identifying threats, while 46 per cent said that the most notable advantage of AI is that it allows us to automate various response actions or repetitive tasks, such as alert triage. The vast majority of respondents agree, however, that humans will continue to have an important role to play in cybersecurity, as there are still some things they do better than machines, such as understanding and explaining the context of threats. “This survey reveals that the role artificial intelligence will play in enhancing threat detection and response is undeniable, yet it is crucial to recognise that technology alone cannot protect businesses against modern threats”, says Dan Schiappa, chief product officer at Arctic Wolf. “As threat actors become more advanced and leverage AI tools themselves, humans will have an essential role investigating novel attacks, explaining their context within their business, and most importantly, leveraging their knowledge and expertise to train the very AI and machine learning models that will become deeply embedded within next-generation cybersecurity solutions”.
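To make the alert-triage use case concrete, the sketch below shows, in deliberately simplified form, the kind of scoring logic such automation might apply: combining a model’s anomaly score with business context to decide which alerts reach a human analyst. All fields, weights, and thresholds here are invented for illustration and would in practice be tuned to an organisation’s own telemetry and risk appetite.

```python
from dataclasses import dataclass

# Hypothetical weights and thresholds, purely for illustration.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}
ESCALATION_THRESHOLD = 12

@dataclass
class Alert:
    source: str             # e.g. "EDR", "firewall"
    severity: str           # "low" | "medium" | "high" | "critical"
    anomaly_score: float    # 0.0-1.0, e.g. from an ML detection model
    asset_criticality: int  # 1 (test box) to 5 (crown-jewel system)

def triage_score(alert: Alert) -> float:
    """Combine model output and business context into one priority score."""
    return (SEVERITY_WEIGHT[alert.severity] * (1 + alert.anomaly_score)
            + alert.asset_criticality)

def route(alert: Alert) -> str:
    """Auto-close low scores, queue the middle band, escalate the worst."""
    score = triage_score(alert)
    if score >= ESCALATION_THRESHOLD:
        return "escalate to analyst"
    if score >= ESCALATION_THRESHOLD / 2:
        return "queue for review"
    return "auto-close with log entry"

if __name__ == "__main__":
    print(route(Alert("EDR", "critical", 0.9, 5)))  # escalate to analyst
    print(route(Alert("firewall", "low", 0.1, 1)))  # auto-close with log entry
```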
Darktrace’s Cyber AI Loop can autonomously identify and stop an attack
For example, the Cambridge-based cybersecurity company Darktrace has developed a comprehensive set of security solutions called the Cyber AI Loop, which can autonomously identify a potential attack and stop it before any damage is done. The Cyber AI Loop consists of four separate components — PREVENT, DETECT, RESPOND, and HEAL — that work together to keep a company’s internal and external data safe from intruders. At the heart of each component is Darktrace’s proprietary Self-Learning AI, which sits in the background and monitors everything that goes on in the company’s network, including user behaviour and device activity. This enables it to build an understanding of what constitutes normal behaviour for that particular company and to identify potential threats.
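Darktrace has not published the internals of its Self-Learning AI, but the underlying idea of learning a behavioural baseline and flagging deviations from it can be illustrated with a generic anomaly detector. The sketch below trains scikit-learn’s IsolationForest on made-up per-device activity features; it demonstrates the general technique, not Darktrace’s actual models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device features: [connections/hour, MB uploaded,
# distinct destination IPs, out-of-hours logins]. Purely illustrative.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[50, 20, 10, 0.2],
                             scale=[10, 5, 3, 0.5],
                             size=(500, 4))

# Learn what "normal" looks like for this particular network
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A device suddenly contacting hundreds of new IPs and uploading gigabytes
suspicious = np.array([[400.0, 900.0, 250.0, 6.0]])
print(model.predict(suspicious))           # [-1] -> flagged as anomalous
print(model.predict(normal_activity[:1]))  # [1]  -> consistent with baseline
```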
The entire process unfolds in four stages. First, Darktrace PREVENT uses AI to search through the company’s servers, networks, and IP addresses to identify all of the assets that belong to it. It then looks for potential vulnerabilities by emulating attacks on each individual asset and identifying those that could be exploited to gain access to the company’s networks. These findings are also forwarded to the DETECT and RESPOND components, which use them to fortify the company’s defences. In the second stage, Darktrace DETECT uses anomaly detection techniques, behavioural analysis, and threat emulation to spot any unusual activity within the company’s network, including new threats that may not have been identified before. This is where Darktrace RESPOND enters the picture: it nullifies the threat by isolating the affected devices or parts of the network before the attack can spread to other areas. Any data related to the attack is then fed back to the earlier stages of the loop to minimise the chances of similar incidents occurring in the future. The fourth and final stage involves Darktrace HEAL, which allows the company to restore its assets, devices, and networks to the state they were in before the attack took place. HEAL also generates a detailed report of the incident to provide all stakeholders with valuable insights into everything that happened. Another important feature of HEAL is that it allows companies to simulate various attacks on their systems, ensuring that their employees are prepared and know what to do when an attack occurs in real life.
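Conceptually, the four stages form a feedback loop in which each stage’s output informs the next and HEAL’s lessons flow back into future PREVENT and DETECT runs. The sketch below captures that control flow only; every name and data structure is hypothetical and bears no relation to Darktrace’s actual architecture or APIs.

```python
# A conceptual prevent/detect/respond/heal feedback loop. All names and
# structures are illustrative and do not represent Darktrace's products.

def prevent(assets: list[str]) -> dict[str, list[str]]:
    """Emulate attacks against each asset and record exposed weaknesses."""
    return {asset: ["open port", "stale credentials"] for asset in assets}

def detect(weaknesses: dict[str, list[str]],
           activity: dict[str, float]) -> list[str]:
    """Flag devices whose activity deviates strongly from the baseline.
    In a real system the weaknesses found by the prevent stage would
    sharpen detection; here they are passed through for illustration."""
    return [device for device, score in activity.items() if score > 0.9]

def respond(flagged: list[str]) -> list[str]:
    """Isolate affected devices so the incident cannot spread."""
    return [f"isolated:{device}" for device in flagged]

def heal(actions: list[str]) -> dict:
    """Restore devices, report the incident, and feed lessons back."""
    return {"restored": actions,
            "lessons": ["tighten baseline around flagged devices"]}

# One pass around the loop
weaknesses = prevent(["web-server", "hr-laptop"])
flagged = detect(weaknesses, {"web-server": 0.2, "hr-laptop": 0.95})
report = heal(respond(flagged))
print(report)  # heal's lessons would inform the next prevent/detect cycle
```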
Closing thoughts
The widespread integration of artificial intelligence technology into various aspects of our lives may have enhanced our productivity in ways we couldn’t imagine before, but it has also introduced new threats that could limit our ability to stay safe in this digital age. In the wrong hands, AI could indeed be a potent weapon used to launch sophisticated cyberattacks against unsuspecting victims. At the same time, AI also has the potential to be a force for good, a vigilant guardian that keeps our digital assets safe from both internal and external threats and ensures the continuity of our business operations. As we move forward, we need to ask ourselves the following question: how do we ensure that we harness the power of AI while safeguarding ourselves and our assets against the vulnerabilities it could introduce? The answer to this question will shape the trajectory of cybersecurity practices and the future security landscape of our increasingly connected world.