Blurring the truth: how deepfakes are rewriting reality

Richard van Hooijdonk
From celebrity scams to political manipulation, deepfakes are rewriting the rules of reality itself. When anything can be faked, how do we know what's real anymore?

Executive summary

What started as harmless fun – swapping faces in videos and creating celebrity memes – has morphed into something far more unsettling. As deepfakes become indistinguishable from authentic media, we’re witnessing the emergence of a world where our most basic assumption – that seeing is believing – no longer holds true, forcing society to grapple with unprecedented challenges to truth and trust.

  • According to Onfido’s 2024 Identity Fraud Report, the number of deepfakes online increased 31 times from 2021 to 2023.
  • Resemble AI’s Q1 2025 Deepfake Incident Report found that deepfake-enabled fraud resulted in over US$200 million in financial losses in the first quarter of 2025 alone.
  • The same report found that public figures, such as politicians and celebrities, accounted for 41% of victims, while another 34% of targets were private citizens.
  • According to a recent study by University College London, 27% of people cannot differentiate between real and deepfake audio recordings.
  • “What happens when we enter a world where we can’t believe anything?” asks Dr. Hany Farid, Associate Dean of the UC Berkeley School of Information.

As creation technology continues to outpace detection capabilities, humanity faces the prospect of navigating a world where nothing can be taken at face value. The solution won’t come from technology alone but will require a fundamental reimagining of how we establish trust, verify truth, and maintain authentic human connections in an era of infinitely malleable reality.

There was a time when we could rely on our senses to tell us what was real. When someone showed you a video or played you an audio recording, you could be reasonably confident that what you were seeing or hearing actually happened. Those days are quickly becoming a thing of the past. Deepfake technology has shattered our understanding of reality, enabling anyone to create synthetic content so convincing that even forensic experts can struggle to spot the fakes.

To the average person, these technologies seemed innocent enough at first – amusing face-swap videos and celebrity impersonations that went viral on social media. But the technology has quickly evolved from a harmless gimmick to a serious threat. We’re now well into territory where malicious actors can fabricate evidence, create non-consensual deepfake pornography, or spread political propaganda with tools that are becoming cheaper and easier to use by the day. What used to take Hollywood studios millions of dollars and months of work can now be whipped up on a semi-decent computer in someone’s basement.

The really unsettling part is that we’re just scratching the surface. Today’s deepfakes might fool most people most of the time, but tomorrow’s? They’ll be virtually indistinguishable from reality. Soon, we’ll inhabit a world where every video comes with an asterisk, where every piece of evidence needs “evidence” of its own authenticity. Our kids will grow up never quite trusting their eyes or ears, treating reality like a Wikipedia entry that anyone can just show up and edit. So the question is: in a world where anything can be faked, how do we even know what’s real?

“Deep fake technology has evolved to become so refined, and so inexpensive, as to allow anyone the ability to produce images and voices indistinguishable from reality.”

Craig Holman, the government affairs lobbyist for Public Citizen

Deepfakes: a double-edged sword

While deepfakes offer beneficial applications in entertainment, marketing, and healthcare, the technology is increasingly weaponised for fraud, manipulation, and harassment.

Here’s the thing about technology – it’s never inherently bad or evil. It’s all about how we choose to use it. And that’s exactly the case with deepfakes. While you’ve probably heard plenty about the concerning uses of this technology, there’s actually a whole world of beneficial applications that deserve the spotlight. Take Hollywood, for instance. The entertainment industry has embraced deepfakes to bring beloved deceased actors back to life on screen, allowing filmmakers to complete unfinished projects or create new stories featuring iconic performers. It’s a way to honour legacies while giving audiences one more chance to see their favourite stars in action.

The advertising world has also jumped on board with some fairly impressive results. For example, the German online retailer Zalando recently made waves with their innovative #whereveryouare campaign, which used deepfakes of supermodel Cara Delevingne to create an astounding 290,000 localised ads for towns and villages across Europe. Instead of filming thousands of separate commercials – basically an impossible task – they could personalise each message while maintaining the star power of their celebrity endorser.

Most exciting of all is that deepfakes could help us break down language barriers. Emergency call centres across the US are already using AI to create synthetic translations of non-English speech, ensuring that first responders can actually understand callers regardless of the language they speak. And it’s not just about spoken language. Deepfake technology could also be used to generate videos of people communicating through sign language. In the future, we could use this technology to create sign language interpreters for live events in real time. Suddenly, concerts, conferences, and broadcasts that were once audio-only experiences would also become accessible to the deaf and hard-of-hearing communities.

The technology also offers hope for people who have lost their voice due to injury or illness. A company named Acapela Group has developed a product called My Own Voice, which uses recordings from a person (or even from a close family member with a similar voice) to synthetically recreate a personalised voice that sounds just like they used to. In July 2024, former Congresswoman Jennifer Wexton made history by giving the first speech with an AI-generated voice on the House floor, demonstrating how this technology can restore not just the ability to communicate but also give back dignity and independence to those who need it most.

The dark side of deepfakes

Let’s be honest here, though. The use cases outlined above can indeed improve people’s lives, but we all know – from lived experience – that deepfake technology is far more likely to be used for nefarious purposes. Countless celebrities have already fallen victim to highly public instances of deepfake fraud, with scammers using synthetic videos and imagery depicting famous people to promote everything under the sun. One day, it’s Emma Watson supposedly raving about revolutionary kitchenware, the next it’s Robert Downey Jr. pushing some sketchy crypto scheme that promises to make you rich (spoiler: it won’t). Diet pills, miracle cures, investment opportunities – if there’s a scam to run, there’s probably a deepfake celebrity pushing it somewhere online right now.

It’s not just famous faces that are being targeted, though. The technology is also increasingly being used to infiltrate the corporate world, with fraudsters using deepfakes to impersonate senior executives in video calls to trick employees into processing fraudulent payments. A growing number of businesses are also experiencing applicant fraud, where phoney applicants use deepfake technology during interviews and throughout the hiring process to infiltrate the organisation, which is typically done either for financial gain or industrial espionage. In fact, Gartner predicts that fake, AI-generated profiles could account for 25% of all job candidates globally by 2028.

Politics is not immune to this troubling trend either. Just a few years into the deepfake era, we’ve already witnessed a slew of deepfakes depicting political figures in compromising situations or making statements they never uttered. This goes beyond mere misinformation – it’s actively eroding public trust in democratic institutions and could easily be used to manipulate electoral outcomes. Perhaps most viscerally disturbing is the booming underground market where individuals can commission AI-generated explicit content of unsuspecting victims. This nonconsensual deepfake pornography has become alarmingly accessible, causing devastating personal and professional consequences for those targeted.

Indistinguishable from reality

Research shows that deepfakes are becoming not only more prevalent but also more sophisticated. According to Onfido’s 2024 Identity Fraud Report, the number of deepfakes online increased 31 times from 2021 to 2023. This drastic increase can largely be attributed to the proliferation of cheap and easy-to-use AI tools. “Deep fake technology has evolved to become so refined, and so inexpensive, as to allow anyone the ability to produce images and voices indistinguishable from reality,” says Craig Holman, the government affairs lobbyist for consumer advocacy organisation Public Citizen. “It has become a tool for inflicting violence and intimidation, largely against women, by depicting the targeted persons in completely fabricated intimate situations.”

Equally alarming is our inability to distinguish them from genuine content. According to a recent study by University College London, as many as 27% of people cannot differentiate between real and deepfake audio recordings. As the technology continues to advance, this figure is only expected to grow – and the consequences could be devastating. Resemble AI’s Q1 2025 Deepfake Incident Report found that deepfake-enabled fraud resulted in over US$200 million in financial losses in the first quarter of 2025 alone. The report further found that public figures, such as politicians and celebrities, accounted for 41% of victims, while another 34% of targets were private citizens, demonstrating that no one is safe from this growing threat.

Your eyes may deceive you

From celebrity impersonations to fake job applicants, criminals are finding increasingly creative uses for deepfake technology.

Let’s zoom in and examine some of the most notorious cases. Earlier this year, a woman in France lost nearly a million euros to an elaborate scam orchestrated by someone pretending to be Brad Pitt. Using AI-generated deepfakes and weaving in real details from news reports about Pitt’s divorce, the fraudster managed to convince the woman she was in an actual relationship with the Hollywood star.

Over in the UK, Nottingham gallery owner Simone Simms found herself caught up in a months-long deception involving a deepfake of Pierce Brosnan. Believing she was communicating directly with the James Bond actor, she organised an exhibition of his artworks, sold £20,000 worth of tickets, and even sent the scammers £3,000 in “shipping fees” for the artwork. When the real Brosnan caught wind of this and denied any involvement, Simms had to cancel everything and issue refunds. Despite being a victim herself, she suffered such severe reputational and financial damage that she was forced to close the gallery in August 2024.

One of the most popular scam formats lately involves using deepfakes of respected public figures to promote fake investment opportunities. Dig around for five minutes on X and you will no doubt find a fake Elon Musk account promising to let you in on his amazing new crypto opportunity. But it’s not just Musk – David Kostin, chief US equity strategist at Goldman Sachs, recently became the unwilling star of an AI-generated video that circulated on social media. In the fake video, “Kostin” invited viewers to join an investment group promising “explosive growth”, suggesting people could double their money in just a few days. While this particular scam was quickly exposed, not every target has been so lucky. One particularly poignant example involves an elderly victim in their 80s who lost a staggering US$700,000 after watching what they thought was – you guessed it – Elon Musk enthusiastically promoting a can’t-miss investment opportunity.

Security researchers at Palo Alto Networks’ Unit 42 have uncovered a troubling trend involving North Korean IT workers who use deepfake technology to create entirely synthetic identities for remote job applications. These state-sponsored operatives are systematically targeting organisations across the globe, seeking to infiltrate companies for espionage and sabotage purposes. The security firm KnowBe4 experienced this threat firsthand when they unknowingly hired one of these actors, only realising their mistake after the individual had already installed malware on company systems. This isn’t an isolated incident – organisations of all sizes, from massive Fortune 500 corporations to small businesses with fewer than ten employees, have fallen victim to these sophisticated infiltration attempts.

“What happens when we enter a world where we can’t believe anything?”

Dr. Hany Farid, Associate Dean of the UC Berkeley School of Information

What does the future bring?

As deepfake technology continues to advance and become increasingly sophisticated, could we lose the ability to determine what’s real?

By now, you’re probably well aware of just how harmful deepfakes can be. We’ve all seen the headlines about manipulated images and videos causing personal disasters, ruining reputations, and spreading misinformation like wildfire. But here’s the thing – we’re only just getting started on our journey into the future of deepfakes. The technology is evolving at breakneck speed, and every few months we see another leap forward in quality and sophistication. As AI algorithms get smarter and more nuanced, they’re learning to replicate the subtle quirks that make us human – the way someone’s eyes crinkle when they smile, the specific rhythm of their speech… even their unconscious gestures. Pretty soon, we’ll reach a point where the line between what’s real and what’s manufactured becomes practically invisible.

The liar’s dividend

The political implications of deepfakes are downright terrifying. Imagine the chaos that could unfold during election seasons when convincing fake videos of candidates surface at critical moments, showing political leaders making inflammatory statements, engaging in compromising behaviour, or revealing sensitive information. All completely fabricated but impossible to immediately – or entirely – disprove. The damage to democratic processes could be irreversible, especially when you consider how quickly misinformation spreads on social media. But deepfakes don’t just threaten us through the fake content they create. They’re also providing cover for real misconduct – a concept known as the ‘liar’s dividend’. When genuine evidence of wrongdoing surfaces – actual recordings of corruption, abuse, or criminal behaviour – the perpetrators can now dismiss it by simply claiming it might be a deepfake.

The proliferation of deepfakes has sparked a technological “arms race” between deepfake creators and detection systems. As deepfakes become more sophisticated, detection methods scramble to keep up. New algorithms emerge to spot fakes, only for deepfake technology to evolve and bypass them. Many experts believe this will ultimately be a losing race – that detection technology will always be playing catch-up to creation technology. “I fear there will be an end to the deepfake race not too far in the future,” says Peter Eisert, chair of visual computing at Humboldt University. “Personally, I think deepfakes will get so good that they’ll be hard to detect unless we focus more on technology that proves something hasn’t been altered, rather than detecting if something is fake.”
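To make that last idea concrete, here’s a minimal sketch of the “prove it hasn’t been altered” approach: a publisher signs a cryptographic fingerprint of a media file when it’s released, and anyone holding the key can later confirm the bytes are unchanged. This is an illustrative toy built on Python’s standard library – the file name, the key, and the sign_media/verify_media helpers are all hypothetical – while real provenance standards such as C2PA use public-key certificates and signed metadata embedded in the file rather than a shared secret.

```python
import hashlib
import hmac

# Toy provenance check: sign the SHA-256 digest of a media file with a
# secret key at publication time; any later change to the file makes
# verification fail. Real systems (e.g. C2PA) use public-key signatures,
# so verifiers don't need the signing secret.

def sign_media(path: str, key: bytes) -> str:
    """Return a hex HMAC-SHA256 tag over the file's contents."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(path: str, key: bytes, tag: str) -> bool:
    """True only if the file still matches the tag it was published with."""
    return hmac.compare_digest(sign_media(path, key), tag)

if __name__ == "__main__":
    KEY = b"publisher-secret"          # hypothetical key, for illustration only
    tag = sign_media("clip.mp4", KEY)  # hypothetical file name
    print("unaltered:", verify_media("clip.mp4", KEY, tag))
```

The design choice here is exactly Eisert’s point: instead of asking “is this fake?”, which gets harder every year, verification asks “is this the exact file the publisher vouched for?” – a question that stays cheap to answer no matter how good the generators get.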

A post-truth society

The business world should be particularly worried about what’s coming. Companies have invested heavily in facial recognition, voice authentication, and other biometric markers as secure methods for verifying identity. These systems power everything from smartphone unlocks to bank transfers to building access controls. But as deepfakes become more sophisticated, they threaten to render these security measures obsolete. Voice cloning can now replicate someone’s speech patterns from minimal audio samples, while facial synthesis can create convincing video of anyone saying anything. The entire security infrastructure that modern businesses – and consumers – depend on may need to be reimagined from the ground up.

Perhaps the most chilling prospect is what this means for the fabric of society itself. We’re heading toward what many call a “post-truth” society, where no media can be trusted, where video evidence means nothing, where seeing is no longer believing. “What happens when we enter a world where we can’t believe anything?” asks Dr. Hany Farid, Associate Dean of the UC Berkeley School of Information. “Anything can be faked. The news story, the image, the audio, the video. In that world, nothing has to be real. Everybody has plausible deniability. This is a new type of security problem, which is a sort of information security. How do we trust the information that we are seeing, reading, and listening to on a daily basis?”

Learnings

So, what’s the big takeaway here? History shows us that whenever a new communication technology emerges – from the printing press to social media – we go through a painful adjustment period where the worst actors figure out how to exploit it before society develops proper guardrails. With deepfakes, that adjustment period might be more devastating than anything we’ve experienced before. When trust itself becomes a casualty, the damage ripples through every aspect of human connection.

And yet, there may be a silver lining to the predicament we find ourselves in. In forcing us to question everything we see and hear, deepfakes might actually push us toward a more thoughtful, more sceptical society. Maybe we needed this shock to the system – a technological slap in the face that reminds us not to believe everything we encounter online. Perhaps our children, growing up in this hall of mirrors, will develop sharper critical thinking skills than any generation before them. And maybe we’ll finally stop taking truth for granted and start treating it like the precious resource it’s always been.
