The new face of deception: how AI is being misused

Richard van Hooijdonk
AI-powered scams are evolving at a frightening pace. As criminals find increasingly sophisticated ways to exploit artificial intelligence, will we spot these deceptions before it's too late?

Executive summary

AI is rapidly becoming a powerful tool for deception and fraud. A growing number of cases have emerged that demonstrate the increasingly sophisticated methods by which this technology can be exploited. From financial scams and property fraud to attacks on celebrities and journalists, the misuse of AI has become a pressing concern that affects individuals, businesses, and institutions alike.

  • Corporate fraud has evolved to include deepfake video calls and voice impersonations of executives, causing significant financial loss.
  • AI voice cloning is being used to create convincing distress calls, targeting parents with fake recordings of their children in danger.
  • In Germany, AI-generated audio was used to undermine public trust in journalism, with protesters playing fake confessions supposedly from news presenters.
  • The technology required for AI fraud is becoming increasingly accessible, lowering the barrier to entry for potential bad actors.
  • Traditional methods of verification are becoming less reliable as AI-generated content becomes more sophisticated.

As AI technology continues to advance, authentic and artificial content will become increasingly difficult to tell apart. This points toward a future in which our ability to verify the authenticity of digital communications becomes crucial. At the same time, it raises concerns about how pervasive deception might affect our willingness to trust even genuine calls for help.

AI has become an integral part of our daily lives, and it’s easy to see why. Whether it’s professionals streamlining their workflows or artists pushing the boundaries of creativity, more and more people are discovering the incredible potential of generative AI tools. In a relatively short amount of time, this innovative technology has transformed the way we work, create, and solve problems, opening up possibilities we could only dream of just a few years ago.

But here’s the catch – while most of us are using AI to increase our productivity or express ourselves, not everyone has such innocent intentions. You know how they say any tool is only as good as the person using it? Well, AI is no different. Like any other powerful technology, its capabilities can be twisted to serve malicious ends, and we’re already seeing the consequences.

The growing sophistication of deepfake technology is a perfect example of how AI can be perverted from its intended purpose. What began as an impressive demonstration of AI’s capabilities has evolved into a tool for deception and harm. From sophisticated financial scams and election interference to targeted harassment and non-consensual pornographic content, the misuse of AI is becoming more widespread and damaging. In this article, we’ll examine some of the most notable recent incidents that highlight just how serious this emerging threat has become.

Deepfakes trick a finance worker into paying out US$25 million

A finance worker was tricked into transferring millions of dollars to fraudsters after attending a video call with senior executives who turned out to be deepfakes.

Perhaps the most (in)famous case of financial fraud involving the use of AI took place in Hong Kong in January 2024, when a finance worker at engineering company Arup was tricked into transferring millions of dollars of the company’s money into a fraudster’s bank account. The scheme began innocently enough – the employee received what appeared to be a message from the company’s chief financial officer, requesting his presence in a confidential video conference to discuss some important transactions.

The employee was initially wary, but his suspicions melted away when he joined the call and saw what looked like the familiar faces of the company’s senior leadership team. Convinced that the people in the call were who they said they were, he proceeded to make 15 separate transactions, transferring a total of US$25.6 million to five different local bank accounts. It wasn’t until he later spoke to the company’s head office that he realised he’d been scammed and contacted the police. Unfortunately, it was too late – the money was already gone. It turned out that the people he spoke to in the call weren’t actually senior officers – they were all AI-generated deepfakes.

AI becomes a conspirator in streaming fraud

A man used AI to boost the streaming numbers of his AI-generated songs, stealing millions of dollars in royalties in the process.

The rise of music streaming platforms has opened up incredible opportunities for artists to share their work with the world. At the same time, generative AI has made it easier than ever to create and produce music. But one enterprising fraudster saw this as more than just a creative tool – he spotted an opportunity to trick the system. In what may be the first case of its kind ever recorded, a US musician was recently charged with multiple counts of fraud and conspiracy after it was discovered that he had used AI to artificially inflate streaming numbers for his songs and steal millions of dollars in royalties.

With the help of the chief executive of an AI music company, the man first mass-produced hundreds of thousands of songs and uploaded them to various streaming platforms. He then deployed an army of automated bot accounts – at one point, he operated as many as 10,000 – to stream his AI-generated songs billions of times. To avoid drawing attention, he programmed the bots to spread their activity across his massive catalogue of fake songs, rather than playing the same tracks on repeat.
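To see why that spreading strategy worked, it helps to run the numbers. The short sketch below uses illustrative round figures consistent with the reporting – the exact totals are assumptions for the sake of the arithmetic, not case data.

```python
# Back-of-the-envelope arithmetic: why spreading bot streams across a huge
# catalogue keeps each track's numbers looking unremarkable.
# All figures below are illustrative assumptions, not exact case data.
total_streams = 4_000_000_000  # "billions" of streams (assumed round number)
catalogue_size = 600_000       # "hundreds of thousands" of songs (assumed)
bots = 10_000                  # peak number of bot accounts (reported)

streams_per_track = total_streams / catalogue_size
plays_per_bot_per_track = streams_per_track / bots

print(f"~{streams_per_track:,.0f} lifetime streams per track")
print(f"~{plays_per_bot_per_track:.2f} plays per bot per track")
# Roughly 6,700 streams per track, and well under one play per bot per
# track - numbers that resemble ordinary long-tail listening rather than
# a suspicious viral hit that would attract scrutiny.
```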

Furthermore, to make the songs look like the real deal, he even generated titles and artist names for each one. The scheme was so well-crafted that it managed to fly under the radar for years, racking up more than US$10 million in royalty payments before finally being exposed. But now, it’s time to face the music. If convicted, the mastermind behind this digital deception could spend the next 20 years of his life in prison.

A man uses AI to exact vengeance on his boss

An athletic director in a US high school used AI to create fake audio recordings of the school’s principal saying disparaging things about students and other teachers.

Then, there are people who are using AI to settle personal scores. Frustrated by previous conflicts with the school’s principal, who had called him out for financial irregularities and unauthorised staff dismissals, the athletic director of a Maryland high school devised a malicious plan. Using AI technology, he created convincing audio recordings that appeared to capture the principal making bigoted remarks about students and fellow teachers. He then released these fabricated clips on social media, knowing full well the damage they would cause.

The fallout was immediate and severe. The principal was placed on temporary leave while the school investigated the allegations, but the damage didn’t stop there. The school was bombarded with threatening messages, and even the principal’s family became targets of online harassment. Fortunately, law enforcement’s investigation revealed the truth. Police analysis confirmed the recordings were AI-generated deepfakes, and the trail led straight back to the athletic director, who was promptly arrested and charged with stalking, theft, disruption of school operations, and retaliation against a witness.

“What’s so particularly poignant here is that this is a Baltimore school principal. This is not Taylor Swift. It’s not Joe Biden. It’s not Elon Musk. It’s just some guy trying to get through his day,” says Hany Farid, a professor at the University of California, Berkeley, and a digital forensics expert who helped the police analyse the recordings. “It shows you the vulnerability. How anybody can create this stuff, and they can weaponise it against anybody.”

Celebrities fall victim to AI scams

A BBC presenter fell victim to an AI scam when criminals assumed her identity to trick a company into paying for a fake endorsement.

Celebrities have also been on the receiving end of scams involving AI-generated deepfakes. Take the case of BBC wildlife presenter Liz Bonnin, who was shocked to discover her face plastered across posters advertising insect repellent spray – something she had never agreed to. When she contacted Incognito, the company behind the advert, its chief executive, Howard Carter, explained that he had received a series of voice messages from a person claiming to be Bonnin, expressing willingness to appear in the posters. The messages later turned out to be AI-generated.

The scammer cleverly played on Carter’s previous acquaintance with Bonnin, suggesting the deal be handled directly rather than through her management agency as a personal favour. The deception was so convincing that Carter, who had met Bonnin several times before and thought he recognised her voice, transferred the payment without hesitation.

This wasn’t an isolated incident, either. Hollywood star Jennifer Lopez experienced a similar violation when AI-manipulated images of her face were used without permission in skincare product advertisements. The fraudsters had digitally aged her appearance, adding wrinkles to create before-and-after shots promoting anti-ageing products she had never endorsed.

A terrifying scam mimics the voices of loved ones in distress

A man in Los Angeles was scammed out of US$25,000 after receiving a phone call that sounded like his son but was actually a deepfake.

Some criminals have devised a particularly cruel way to scam people – preying on their deepest fears about family members in danger. One noteworthy example occurred in Los Angeles, where a father fell victim to an elaborate scheme that played out like every parent’s worst nightmare. It began with a phone call that seemed to come from his son – the voice was convincing because fraudsters had used AI to clone it to perfection. The fake son claimed he’d been in an accident, having hit a pregnant woman who was now in the hospital.

The father then received a second call from a supposed lawyer, who said he needed to post US$9,200 in bail to keep his son from spending 45 days in jail. The scammers even arranged for an Uber driver to collect the money. But they weren’t done – soon after, another “lawyer” called with devastating news: the pregnant woman had died, and the bail had been raised to US$25,000.

In his desperate state of mind, the father raised and handed over the additional money to a second Uber driver. Everything happened so quickly, creating such an overwhelming sense of urgency and emotion, that the victim had no time to step back and question what was happening. By the time the dust settled and reality set in, the scammers and the money were long gone.

Swatting-as-a-service

A Telegram channel called Torswats uses AI-generated voices to submit false emergency reports and trigger a police response.

Among the many ways criminals have found to abuse AI technology, perhaps none is more alarming than the emergence of “swatting-as-a-service”. One particularly notorious operation, run through a Telegram channel called Torswats, transformed this dangerous form of pranking into a full-blown business, using AI to make its attacks sound more convincing and harder to trace.

The service involved using AI-generated voices to phone in false emergency reports – such as bomb and mass shooting threats against high schools and other locations – to law enforcement, often resulting in heavily armed police units being dispatched to an unsuspecting victim’s location. For a mere US$50, anyone could purchase what they marketed as “extreme swatting”, an incident that would see victims being handcuffed while armed officers ransacked their homes. School closures were priced at US$75.

Torswats even offered customer loyalty perks, providing discounts to repeat customers and special pricing for high-profile targets like Twitch streamers and celebrities. All transactions were conducted in cryptocurrency, ensuring anonymity for both the perpetrators and their customers. In November 2024, the police finally caught up with the person behind the channel. It turned out to be an 18-year-old from California, who eventually pleaded guilty to making more than 375 swatting calls across the country. He is now facing up to 20 years in prison.

AI-powered deed fraud

A man used a deepfake video in an attempt to illegally claim ownership of a vacant property in Florida.

While property fraud isn’t new, AI now enables criminals to create increasingly convincing forgeries and impersonations, making these scams harder to detect than ever before. In September 2024, the FBI issued a warning about a troubling rise in deed fraud, often involving the use of sophisticated AI technology.

The scheme typically follows a familiar pattern: fraudsters file forged deeds with county clerks to claim ownership of vacant properties that don’t actually belong to them. Once in possession of these fraudulent documents, they move quickly to profit. In some cases, they simply sell the property outright. At other times, they may take out home equity loans, rent out the property to unsuspecting tenants, or refinance the mortgage.

Lauren Albrecht, president of Florida Title & Trust, a title company responsible for verifying the legality of real estate transactions, recounts one particularly notable example of this new type of fraud from her own experience. During what seemed like a routine vacant land sale, several peculiarities caught her attention. The first red flag popped up when the supposed owner presented a West Virginia ID, but his bank account was in the Bahamas. Following the company’s standard procedures, Albrecht then requested a proof-of-life video call.

Someone matching the provided ID did appear on the video call, but there was something off about their behaviour – they made small, repetitive movements that seemed unnatural. It turned out to be an AI-generated deepfake, likely created using images stolen from a California missing person’s flyer – a discovery made through a simple reverse image search.
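That last detective step is easy to picture in code. The sketch below shows the core idea behind reverse image search – perceptual hashing, where near-identical images produce hashes that differ in only a few bits. It assumes the Python Pillow and imagehash packages; the file names and the distance threshold are hypothetical.

```python
# A minimal sketch of perceptual-hash matching, the idea underlying reverse
# image search: visually similar images produce similar hashes.
# Assumes `pip install pillow imagehash`; file names are hypothetical.
from PIL import Image
import imagehash

frame_hash = imagehash.phash(Image.open("video_call_frame.png"))
flyer_hash = imagehash.phash(Image.open("missing_person_flyer.png"))

# Subtracting two ImageHash objects gives the Hamming distance between them.
distance = frame_hash - flyer_hash
print(f"Hamming distance: {distance}")

# A small distance (the threshold is a rule of thumb, not a standard)
# suggests both images derive from the same underlying photo.
if distance <= 8:
    print("Likely match - the 'live' caller may be built from a stolen image.")
```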

German journalists targeted with AI-generated audio clips

Demonstrators in Germany played fake audio clips that featured confessions about deliberate deception from well-known news presenters.

The use of AI-generated audio deepfakes also represents a growing threat to the freedom of the press. In February 2024, during one of the regular Monday Demonstrations in the city of Dresden – protests that have become a platform for expressing dissatisfaction with government policies – demonstrators played what appeared to be broadcasts from the respected Tagesschau news programme. The audio clips, accompanied by the programme’s official jingle, featured what seemed to be familiar news presenters making shocking confessions about deliberate efforts to deceive the public.

“We have been brazenly lying to your face for over three years,” declared one voice. Another offered apologies for “one-sided reporting and conscious manipulation.” The voices, which mimicked Tagesschau presenters Susanne Daubner and Jens Riewa, turned out to be entirely AI-generated. Tagesschau quickly denounced the recordings as fraudulent, with editor-in-chief Marcus Bornheim pointing out the bitter irony: those who regularly denounce the media as “lying press” were themselves deliberately spreading false information.

Learnings

The cases we’ve explored in this article paint a rather disturbing picture of how criminals are misusing AI technology to create more convincing, more profitable, and potentially more dangerous scams. What makes this trend particularly worrying is how easy it’s becoming to pull off these schemes. The tools needed to create convincing deepfakes or clone voices are now readily available to anyone with an internet connection and a laptop.

The impact goes far beyond individual victims. When fraudsters use AI to forge property deeds or create fake news broadcasts, they’re not just scamming people – they’re eroding the trust that holds our society together. Each successful scam makes it harder for us to believe what we see and hear, especially when it comes from sources we once trusted without question. As these scams continue to become more sophisticated and widespread, are we looking at a future where scepticism becomes our default response to everything – even genuine calls for help?
