Cybersecurity 2024: unmasking the tactics of AI-powered cybercriminals

From AI-powered disinformation campaigns that undermine the very foundations of democracy to the threat of social engineering attacks that prey on the vulnerabilities of the human psyche, the challenges we’ll face in 2024 are multifaceted and deeply concerning.
In this article:
  • Is AI-powered disinformation a threat to democracy?
  • How AI amplifies social engineering attacks
  • The public sector and critical infrastructure under attack

The world of cybercrime has undergone a dramatic shift in recent years. Cybercriminals, once primarily motivated by the desire to cause mischief and chaos, have evolved into profit-driven entities that operate with the strategic precision of legitimate businesses. As we look ahead, 2024 is shaping up to be the most challenging year yet for cybersecurity, with individuals, organisations, and governments all facing unprecedented threats.

Cybercriminals now use cutting-edge technologies, including artificial intelligence (AI), to conduct sophisticated and well-funded attacks against online infrastructure. These malicious actors exploit vulnerabilities and find new ways to bypass traditional security measures, making it ever harder to defend against them. From growing ransomware payouts to the rise of imposter fraud, business email compromise, identity theft, and crypto fraud, the threats posed by cybercrime are becoming more diverse and complex.

Adding to these challenges is the emergence of AI-powered deepfakes. These highly realistic forgeries blur the line between truth and deception, making it increasingly difficult to distinguish genuine content from malicious fabrications. As fake kidnappings, ransom demands, and money-wiring scams become more common, the need for heightened vigilance and proactive measures has never been greater.

“Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will highly likely use such tools to conduct information operations against elections in 2024”.

CrowdStrike 2024 Global Threat Report

Is AI-powered disinformation a threat to democracy?

One of the most troubling aspects of the developments in cybercrime is how AI-powered disinformation campaigns can be used to undermine the democratic process and influence election results. The 2016 and 2020 US presidential elections are particularly alarming examples of how the spread of false information can cause discord, fuel conspiracy theories, and erode trust in the electoral system. As we look ahead to the 2024 election, the concern is not only that these tactics will be used again but that they will be augmented by the latest advances in AI technology, enabling the creation of even more convincing and targeted disinformation.

Of course, the threat of AI-powered disinformation is not limited to the United States. As 50 per cent of the world’s population prepares to vote in 2024, the potential for generative AI to be used as a tool for electoral manipulation is a growing concern among cybersecurity experts globally. Adam Meyers from CrowdStrike warns that the growing accessibility of these technologies could enable more actors to launch sophisticated disinformation campaigns, escalating information warfare.

“Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will highly likely use such tools to conduct information operations against elections in 2024”, reads the latest CrowdStrike report. “Politically active partisans within those countries holding elections will also likely use generative AI to create disinformation to disseminate within their own circles”.

In fact, the use of AI in disinformation campaigns is not just a theoretical possibility; it is already a reality. In Slovakia, AI-generated deepfake audio supposedly featuring a well-known journalist and a political party leader discussing election fraud was widely shared on social media just before the country’s elections. Despite swift efforts to debunk the audio, the timing of its release, which coincided with the pre-election silence period, made it challenging for media organisations and politicians to effectively counter the false narrative before voters went to the polls.

It’s important to point out that the implications of AI-powered disinformation extend far beyond politics. In 2024, we can expect to see a surge in both politically and financially motivated disinformation campaigns targeting a wide range of sectors, from healthcare and finance to technology, education, and media. The potential consequences of these campaigns are far-reaching, as they can weaken trust in institutions, manipulate public opinion, and even put lives at risk by spreading false information about critical issues such as public health and safety.

“They work with emotions. When they put us in the right mood and trigger anger or fear, we forget all the advice. In those cases, we lose common sense, and that’s where attackers get us”.

Richard Werner, cybersecurity advisor at Trend Micro

How AI amplifies social engineering attacks

The rise of generative AI has enabled cybercriminals to make their social engineering attacks more convincing, more personalised, and harder to detect. Experts anticipate that the coming year will see a significant rise in AI-based predictive social engineering: highly targeted, emotionally manipulative attacks that prey on the vulnerabilities of the human psyche. By taking advantage of the vast amounts of personal data available online and using sophisticated algorithms to analyse patterns and behaviours, attackers can create phishing campaigns that are almost indistinguishable from real communications.

One of the most cunning aspects of AI-enhanced social engineering is its ability to create a false sense of familiarity and trust. Using natural language processing and machine learning, AI can generate phishing emails that are not only grammatically flawless but also mimic the unique communication styles of targeted individuals. This level of personalisation makes it increasingly difficult for even the most perceptive recipients to detect fraudulent messages, as they appear to come from a trusted colleague, friend, or family member.
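No single control stops this kind of impersonation, but basic email authentication still raises the bar. The sketch below, a minimal illustration that assumes the third-party dnspython package, checks whether a sender’s domain publishes a DMARC policy; a missing or “none” policy means mail spoofed to look like it comes from that domain is easier to deliver. It is one defensive signal among many, not a detector of AI-written text.

```python
# Minimal sketch: look up a domain's DMARC policy as one signal of how
# easily that domain can be spoofed. Assumes the dnspython package
# (pip install dnspython); domain names below are placeholders.
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the domain's published DMARC policy ('none', 'quarantine',
    'reject'), or None if no DMARC record exists."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for record in answers:
        txt = b"".join(record.strings).decode()
        if txt.startswith("v=DMARC1"):
            # Extract the policy tag, e.g. "p=reject"
            for tag in txt.split(";"):
                tag = tag.strip()
                if tag.startswith("p="):
                    return tag[2:]
    return None

if __name__ == "__main__":
    policy = dmarc_policy("example.com")
    if policy in (None, "none"):
        print("Weak or missing DMARC: treat mail from this domain with caution")
    else:
        print(f"DMARC policy: {policy}")
```

A mail gateway would combine this with SPF and DKIM verification; the point of the sketch is simply that sender authenticity can be checked mechanically rather than judged from the tone of the message, which AI now imitates convincingly.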

The threat of AI-powered deception extends beyond email, as advances in deepfake technology enable attackers to create stunningly realistic video and audio content. By analysing a target’s facial expressions, mannerisms, and vocal patterns, AI can generate synthetic media that is nearly impossible to differentiate from the real thing. This opens the door to a new breed of social engineering attacks in which cybercriminals can impersonate high-level executives, trusted partners, or even loved ones to manipulate victims into giving out sensitive information or transferring funds.

The ability of AI to analyse vast amounts of data and identify patterns of vulnerability is another critical factor in the growing threat of AI-based social engineering. By analysing an individual’s digital footprint, including social media posts, online purchases, and browsing history, AI can produce a detailed psychological profile that highlights potential weaknesses and triggers. Armed with this knowledge, attackers can tailor their approach to exploit specific emotions, such as fear, greed, or curiosity, increasing the likelihood of a successful attack.

As the use of AI in social engineering attacks becomes more prevalent, even the most tech-savvy individuals may find themselves at risk. In one recent case, a couple in Brooklyn, Steve and Robin, fell victim to a scam in which an attacker used AI to replicate the voice of Steve’s mother, Mona. The attacker claimed to be holding Mona at gunpoint and demanded money through Venmo. Despite his background in law enforcement, Steve was manipulated by the emotional distress caused by the seemingly genuine plea for help. “Because we are so tech-centric, we forget that actually these scam tactics are old — predating even Internet scams — and very proven”, explains Richard Werner, cybersecurity advisor at Trend Micro. “They work with emotions. When they put us in the right mood and trigger anger or fear, we forget all the advice. In those cases, we lose common sense, and that’s where attackers get us”.

In another incident, a financial worker in Hong Kong was tricked into transferring $25 million after participating in a video conference with people he thought were senior staff members of his firm. The attackers had used AI to create convincing deepfakes of the staff members, successfully deceiving the worker and leading to a significant financial loss.

Even companies at the forefront of technological innovation are not immune to the threat of AI-powered social engineering. The software development company Retool was targeted by a multi-pronged attack that compromised the accounts of 27 of its cloud customers. The attackers, pretending to be IT personnel, used a combination of SMS phishing, credential theft, and AI-generated voice deepfakes to gain access to sensitive customer data. The incident resulted in the theft of approximately $15 million worth of cryptocurrency from one of Retool’s clients, underscoring the devastating potential of AI-enhanced attacks.

The public sector and critical infrastructure under attack

The public sector is also dealing with growing cybersecurity complexities and challenges in 2024. From ideologically driven hacktivists to financially motivated cybercriminals and state-sponsored actors, the sector is under attack from all sides. Public institutions have become prime targets because they hold vast amounts of sensitive data and provide critical services that can be disrupted.

The rapid digitalisation of the public sector, while undoubtedly beneficial to efficiency and accessibility, has also opened up new avenues for attack. As more sensitive information is stored digitally and critical services are delivered through online platforms, the attack surface has expanded significantly. This has not gone unnoticed by cybercriminals, who have been quick to capitalise on the opportunities presented by this digital transformation.

One of the most alarming trends in recent years has been the increase in nation-state cyberattacks targeting critical infrastructure. In 2022, the number of such attacks increased by 20 per cent worldwide, driven in large part by the ongoing conflict between Russia and Ukraine. As geopolitical tensions continue to rise in hotspots around the globe, this trend is likely to continue throughout 2024 and beyond, raising security risks even further.

The education sector in particular is increasingly being targeted by cybercriminals drawn to the sensitive data these institutions hold. From student records and financial information to cutting-edge research and intellectual property, educational institutions are a goldmine for those seeking to exploit this data for criminal purposes. In 2023, the hacking group Vice Society made headlines by leaking child passport scans, staff pay scales, contract details, and other sensitive information from Pate’s Grammar School in England. This incident was just one of many affecting educational institutions across Europe, with hackers infiltrating internal networks and IT infrastructures at universities in France and Germany and even launching a DDoS attack on an online examination platform belonging to a high school in Greece.

Public administrations are also struggling to fend off a growing number of cyberattacks. The July 2023 incident in which Kenya’s eCitizen portal was crippled by a malicious assault is a stark reminder of the far-reaching impact of such attacks. The portal, a digital gateway to over 5,000 government services, was rendered inaccessible, disrupting everything from passport applications and visitor visas to driver’s licences and health records. The consequences were felt far and wide, affecting mobile banking and transportation services and highlighting the interconnectedness and vulnerability of modern systems.

In the healthcare sector, where the stakes are literally a matter of life and death, the consequences of cyberattacks can be particularly devastating. According to the ENISA Threat Landscape: Health Sector report, nearly half of all ransomware attacks on public healthcare organisations result in data breaches or leaks. The March 2023 ransomware attack on Spain’s Hospital Clínic de Barcelona, which led to the cancellation of 150 non-emergency surgeries and approximately 3,000 patient check-ups, is just one example of the damage these attacks can cause.

Regrettably, the public sector’s ability to defend against these threats is often hampered by inadequate budgets, outdated systems, and a lack of skilled personnel. The ENISA report reveals the full scale of the problem: only 27 per cent of healthcare organisations have a dedicated ransomware defence programme, and some 40 per cent lack even a basic security awareness programme for non-IT staff.

It’s critical that public sector entities take a proactive, multi-layered approach to cybersecurity to address these growing risks. This will require significant investment in technology and human capital, as well as a fundamental shift in organisational culture. Implementing robust security frameworks, such as Zero Trust Architecture, and conducting regular, comprehensive security audits will be critical to identifying and mitigating vulnerabilities before they can be exploited. It will be equally important to build a solid culture of cybersecurity awareness throughout the organisation, with a dedicated effort to educate and train employees at all levels, from the boardroom to the mailroom.
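To make the zero-trust idea concrete, here is a minimal, illustrative sketch. Every name in it (Request, is_authorised, the rule thresholds) is hypothetical rather than any specific framework’s API; the point is simply that each request is evaluated on identity, device state, and context, with access denied by default and no implicit trust granted by network location.

```python
# Illustrative zero-trust access check: deny by default, grant only
# when identity, device posture, and least-privilege checks all pass.
# All names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    mfa_verified: bool         # strong authentication completed?
    device_compliant: bool     # patched, managed endpoint?
    resource_sensitivity: int  # 1 = public ... 3 = restricted
    clearance_level: int       # caller's vetted access level

def is_authorised(req: Request) -> bool:
    """Deny by default; grant only when every check passes."""
    if not req.mfa_verified:
        return False
    if not req.device_compliant:
        return False
    # Least privilege: clearance must cover the resource's sensitivity.
    return req.clearance_level >= req.resource_sensitivity

# A compliant, MFA-verified user reading a restricted record is allowed;
# the same user on an unmanaged device is refused, even inside the network.
print(is_authorised(Request("analyst-7", True, True, 3, 3)))   # True
print(is_authorised(Request("analyst-7", True, False, 3, 3)))  # False
```

In a real deployment these checks sit in an identity-aware proxy or policy engine and draw on live telemetry, but the design principle is the same: no request is trusted until it has been verified, every time.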

Closing thoughts

From the rise of AI-powered disinformation campaigns that undermine the very foundations of democracy to the growing threat of social engineering attacks that prey on the vulnerabilities of the human psyche, the challenges we face are both multifaceted and deeply concerning. Yet, even in the face of these daunting obstacles, there is reason for hope. By embracing cutting-edge technologies like AI and machine learning and by fostering a culture of cybersecurity awareness and vigilance, we can begin to turn the tide against the forces of digital wrongdoing. It will require a combined effort from all stakeholders — individuals, businesses, and governments — to prioritise cybersecurity as a core imperative and to invest in the tools, training, and talent necessary to mount an effective defence. Going forward, we must ask ourselves: in an age where the very notion of truth itself seems to be under attack, how can we use AI and other emerging technologies to combat the scourge of disinformation and protect the integrity of our democratic institutions?
