Breaching the future: how technology is rewriting the cybersecurity landscape

Cyberattacks are growing faster, smarter, and more automated. As AI and quantum computing reshape the threat landscape, the question is whether our defences can keep up.
Richard van Hooijdonk

Cybersecurity has always resembled a kind of arms race. Attackers find a gap, defenders close it, and the whole cycle starts again. That much hasn’t changed. What has changed, dramatically, is the speed at which each turn of the wheel now plays out. AI-powered tools are compressing the window between discovering a vulnerability and exploiting it from weeks to hours, sometimes even minutes. And the people doing the exploiting no longer need deep technical expertise to cause serious damage, either. Off-the-shelf AI toolkits are lowering the barrier to entry, turning what was once a specialist craft into something closer to a service industry. Over the next couple of decades, though, technological advances are set to redraw the cybersecurity landscape in ways that could make today’s threats look almost quaint by comparison. 

Quantum computing threatens to unravel the encryption standards that underpin modern digital infrastructure – the protocols that currently protect everything from banking transactions to state secrets. Autonomous AI agents could launch and adapt attacks without any human involvement at all, probing defences around the clock and evolving their tactics in real time. And as more of the physical world – energy grids, transportation networks, medical devices – connects to the internet, the consequences of a breach could extend well beyond stolen data. For businesses, governments, and individuals alike, the question is no longer whether the threat will escalate. It will. The only question is how fast and whether our defences can keep pace. In this article, we’ll take a closer look at where the cybersecurity landscape stands today and how it’s likely to evolve in the years to come.

“Attackers aren’t reinventing playbooks, they’re speeding them up with AI.”

Mark Hughes, IBM’s global managing partner for cybersecurity services

AI-powered hackers are here

Cyberattacks are increasing in frequency, sophistication, and severity, with hackers increasingly using AI to expand their capabilities.

The volume of cyberattacks has grown dramatically in recent years. Current estimates put the number of meaningful cyberattacks at over 2,200 per day worldwide, or roughly one every 39 seconds. And that figure, by most accounts, is heading in only one direction, driven in large part by continued advancements in AI. In a relatively short span of time, AI models have gone from tools capable of completing high school homework to coding assistants that can build entire applications in a fraction of the time it would take human developers. But besides helping students cheat at school and streamlining workplace tasks, AI is proving equally useful to people with much more nefarious intentions.

In February, a hacker using a jailbroken version of Anthropic’s Claude chatbot found a way to exploit vulnerabilities in networks belonging to the Mexican government and steal some highly sensitive data, including taxpayer and voter records. With AI doing much of the heavy lifting – identifying weaknesses, writing exploit scripts, and orchestrating data exfiltration – the attacker made off with 150 gigabytes of government data tied to 195 million taxpayers. That same month, Amazon’s security research team revealed that hackers had compromised more than 600 firewall systems across dozens of countries. Using commercially available AI tools, attackers managed to break through inadequate security measures and extract credential databases, potentially laying the groundwork for future ransomware attacks. “It’s like an AI-powered assembly line for cybercrime, helping less skilled workers produce at scale,” says CJ Moses, Amazon’s security engineering and operations lead.

IBM’s X-Force Threat Intelligence Index 2026 report found a 44% year-over-year increase in attacks that began with the exploitation of public-facing software or system applications, alongside a 49% surge in active ransomware groups compared to the prior year. “Attackers aren’t reinventing playbooks, they’re speeding them up with AI,” says Mark Hughes, IBM’s global managing partner for cybersecurity services. “The core issue is the same: businesses are overwhelmed by software vulnerabilities. The difference now is speed.” Speed, it turns out, changes almost everything. When the window between a vulnerability being discovered and being exploited shrinks from weeks to hours, the margin for error on the defensive side narrows dramatically. Security teams that were already stretched thin are now being asked to move even faster against adversaries who are getting more capable by the day.

The rise of vibe hacking

The emergence of vibe coding has made it possible for almost anyone, including those with no programming experience, to launch sophisticated cyberattacks.

There was a time when carrying out a serious hack required a fairly advanced level of technical expertise – years of learning, practice, and a working knowledge of code. That time has now passed. Advances in generative AI, and the rapid rise of vibe coding platforms in particular, have made it straightforward for almost anyone to produce working code from scratch. Even a person with little to no programming background can now generate scripts, troubleshoot them, and refine them just by entering plain-language prompts into a chatbot. “We’re going to see vibe hacking. And people without previous knowledge or deep knowledge will be able to tell AI what it wants to create and be able to go ahead and get that problem solved,” says Katie Moussouris, the founder and CEO of Luta Security.

While most large language models do have guardrails designed to prevent the generation of malicious code, there are entire online communities dedicated to finding ways around them, and doing so is often easier than the developers would like to admit. For instance, researchers have repeatedly demonstrated that ChatGPT, Gemini, and Claude can each be jailbroken with relative ease, and one of the more reliable methods is simply telling the model you’re competing in a capture-the-flag exercise. Frame it the right way, and the model will happily oblige with your request. “It lowers the barrier to entry to cybercrime,” laments Hayley Benedict, a Cyber Intelligence Analyst at RANE.

A world of hackers

The idea that anyone could effortlessly become a hacker is certainly unsettling, but that’s not where the real threat lies. If a novice with no prior programming experience could use AI to launch a sophisticated cyberattack, imagine what someone who already knows what they’re doing could achieve with the same technology. “When you’re working with someone who has deep experience, and you combine that with, ‘Hey, I can do things a lot faster that otherwise would have taken me a couple days or three days, and now it takes me 30 minutes.’ That’s a really interesting and dynamic part of the situation,” says Hayden Smith, the cofounder of security company Hunted Labs. Smith describes a scenario in which an experienced hacker uses AI to build a system capable of circumventing multiple security protections simultaneously, learning on the fly, and rewriting its own code as the situation develops.

We are, in fact, already seeing the first malware built on that principle. In June 2025, Google researchers reported on experimental malware families called PROMPTFLUX and PROMPTSTEAL, which represent the first observed instances of malware that uses large language models during execution. According to Google, PROMPTFLUX can prompt an LLM to rewrite parts of its own code in an effort to evade detection, while PROMPTSTEAL generates commands dynamically instead of relying entirely on hard-coded instructions. While researchers were careful to note that both methods are still experimental and have yet to demonstrate the ability to fully compromise victim networks or devices, they also warned that they mark a significant step toward more autonomous and adaptive malware, code that constantly learns and improves over time.

“I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents.”

Mark Stockley, a security expert at the cybersecurity company Malwarebytes

When AI agents go rogue

AI agents could enable hackers to significantly increase the scale and speed of cyberattacks, making them increasingly difficult to counter.

AI agents are dominating the conversation at the moment, and with good reason. Capable of planning, reasoning, and executing complex multi-step tasks on their own, agents go considerably further than a chatbot responding to a user’s prompt. Having a digital assistant that can schedule your meetings, order groceries before they run out, or take control of your computer is a genuinely useful development, but those same capabilities could also be turned against us. Experts warn that agents could be used to identify vulnerable targets, hijack systems, and extract sensitive data, all with minimal human involvement. “I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents,” says Mark Stockley, a security expert at the cybersecurity company Malwarebytes. “It’s really only a question of how quickly we get there.”

Hacking at scale

That idea is attractive to cybercriminals for fairly obvious reasons. In addition to being much cheaper than hiring professional hackers, AI agents can also orchestrate attacks more quickly and at a scale no human team could match. “If you can delegate the work of target selection to an agent, then suddenly you can scale ransomware in a way that just isn’t possible at the moment,” adds Stockley. “If I can reproduce it once, then it’s just a matter of money for me to reproduce it 100 times.” Another advantage of AI agents is that they are considerably smarter than the bots hackers typically rely on to probe and breach systems. Those bots are scripted to follow predetermined sequences of commands, which severely limits their ability to respond to novel situations. AI agents, by contrast, can adapt in real time, adjusting their approach based on what they find.

Early evidence suggests that current agentic systems are already capable of more than many organisations would be comfortable with. In March 2025, Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, developed a new benchmark designed to test whether AI agents could exploit real-world web application vulnerabilities. According to Kang and his team, agents were able to successfully exploit 13% of vulnerabilities for which they had no prior knowledge. When given a brief description of the vulnerability, the success rate rose to 25%. While such low success rates would be considered poor in most contexts, for hackers, they represent a massive opportunity. After all, a hacker doesn’t need to succeed in every attempt. They may fail dozens or even hundreds of times, but once they succeed, the effort will have been worthwhile.
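The arithmetic behind that asymmetry is straightforward: even a low per-attempt success rate compounds quickly across many automated attempts. A minimal sketch (the 13% and 25% figures come from the benchmark above; the attempt counts are purely illustrative):

```python
# Probability of at least one successful exploit after n independent
# attempts, given a per-attempt success rate p: 1 - (1 - p)^n.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p in (0.13, 0.25):       # zero-knowledge vs. brief-description success rates
    for n in (1, 10, 50):    # illustrative numbers of automated attempts
        print(f"p={p:.2f}, attempts={n}: {p_at_least_one(p, n):.1%}")
```

At a 13% per-attempt rate, ten automated tries already push the odds of at least one success above 75% — which is why cheap, tireless agents change the economics of attack.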

The first AI-orchestrated large-scale cyberattack

Some criminals have already found a way to weaponise AI agents. In November 2025, Anthropic revealed that it had detected suspicious activity which a subsequent investigation determined to be a highly sophisticated espionage campaign linked to a Chinese state-sponsored group. The campaign targeted roughly 30 organisations, including large tech companies, financial institutions, chemical manufacturers, and government agencies. According to Anthropic, the attackers tricked the company’s Claude Code tool into attempting to infiltrate these targets by breaking their attacks down into small, seemingly harmless tasks, making their true purpose harder for the model’s safeguards to detect. The attackers also falsely told Claude it was working as an employee of a legitimate cybersecurity firm engaged in defensive testing.

From there, attackers instructed Claude to inspect the target organisation’s systems and infrastructure, identify and test security vulnerabilities by writing its own exploit code, identify the highest-privilege accounts and harvest their credentials in order to create backdoors, and then extract a large amount of private data – all with minimal human intervention. Claude then concluded its involvement by producing comprehensive documentation of the entire attack, including detailed records of stolen credentials and analysed systems that would help attackers plan the next stage of operations. In Anthropic’s account, AI handled roughly 80 to 90% of the work, with human involvement limited to a handful of key decision points – approving the move from reconnaissance to active exploitation, authorising the use of harvested credentials, and making final decisions about what data to steal and retain.

The quantum threat

Quantum technologies promise to accelerate scientific discovery, but they could also render our current encryption standards useless.

While AI is reshaping the cybersecurity landscape at this very moment, another technology – one still in its relative infancy – could eventually pose an even bigger threat. Quantum technologies are widely regarded as foundational to the next era of both economic and national security. They have the potential to accelerate scientific discovery and drive a new wave of innovation across chemistry and materials science, as well as enable breakthroughs in secure communications. At the same time, quantum computers threaten to break the widely used public-key algorithms that underpin modern digital life, undermining the security of everything from online banking to encrypted messaging to government communications.

Most of modern cryptography relies on mathematical problems that are, for all practical purposes, unsolvable by classical computers within any useful timeframe. Cracking the standard encryption behind a secure website or messaging app, for instance, would take millions of years with today’s hardware. But a quantum computer changes that equation. Where classical computers process information as binary bits – ones and zeroes – quantum computers use qubits, which can exist in multiple states simultaneously. That ability to consider many possibilities at once allows quantum computers to process certain complex problems far faster than classical systems. And the mathematical problems that keep our data safe happen to be exactly the kind quantum computers are built to solve.
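To make that concrete: RSA, one of the public-key schemes at risk, stays secure only as long as factoring its public modulus is infeasible. A toy sketch with deliberately tiny primes (real keys use 2048-bit moduli, which no classical computer can factor in any useful timeframe, but which Shor’s algorithm on a sufficiently large quantum computer could):

```python
# Toy RSA with tiny primes, to show why its security rests on factoring.
# Real deployments use 2048-bit or larger moduli.

p, q = 61, 53                # secret primes (trivially small here)
n = p * q                    # public modulus: 3233
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # 3120
d = pow(e, -1, phi)          # private exponent (modular inverse of e)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key d

# An attacker who can factor n recovers p and q, and from them d.
# Brute force only works because n is tiny; a quantum computer running
# Shor's algorithm would factor realistic moduli efficiently.
def attack(n: int, e: int, cipher: int) -> int:
    for f in range(2, n):
        if n % f == 0:       # found a factor of n
            d2 = pow(e, -1, (f - 1) * (n // f - 1))
            return pow(cipher, d2, n)

assert attack(n, e, cipher) == msg  # plaintext recovered without the key
```

The private key falls out almost for free once the modulus is factored — which is precisely the step quantum computers are expected to make easy.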

Harvest now, decrypt later

The development of a quantum computer powerful enough to break current encryption is still some years away. But the direction of travel is not seriously in dispute, and the window for organisations to act may be shorter than it appears. Security experts warn of a scenario uncomfortably reminiscent of the early internet, when websites ran on unencrypted HTTP and anyone positioned on the same network could eavesdrop on data passing between users and servers. The difference is that the stakes are substantially higher now. In the early days of the web, the data at risk was relatively modest: basic form submissions, simple transactions, rudimentary login credentials. Today, the volume and sensitivity of encrypted data moving across global networks is staggering: financial records, medical histories, classified government communications, intellectual property, and biometric data.

And organisations don’t have the luxury of waiting until quantum computers actually arrive. Intelligence agencies and sophisticated threat actors are widely believed to be harvesting encrypted data now, stockpiling it with the intention of decrypting it once quantum capabilities mature, a strategy known in the cybersecurity world as “harvest now, decrypt later.” A diplomatic cable intercepted today, a database of medical records captured in transit next year, a trove of corporate trade secrets siphoned off the year after – all of it sitting in storage, waiting for the day the encryption surrounding it becomes trivially easy to crack. For data that needs to remain confidential for decades, such as national security communications, long-lived financial records, or critical infrastructure blueprints, the window for action may already be closing.
