Generative AI in the years ahead: ‘reality’ will never be the same again

The artfulness of AI has advanced to such a degree that we are nearing an era where distinguishing fact from machine-made fiction is no longer a straightforward task. Whether it's text, audio, or video content, the authenticity of what we perceive is about to be challenged like never before.
  • The rise and rise of generative AI
  • Risks abound
  • What could possibly go wrong?
  • How to protect yourself

As we move through the first quarter of the twenty-first century, we stand on the precipice of a technological revolution. The past few years have witnessed unprecedented progress in the field of artificial intelligence, with particular emphasis on the development of generative AI. This cutting-edge technology is pushing the boundaries of what we perceive as reality, reshaping our understanding of the world in ways we have only begun to grasp. The artfulness of AI has advanced to such a degree that we are nearing an era where distinguishing fact from machine-made fiction may no longer be a straightforward task. Whether it's in the form of written text, audio, or even video content, the authenticity of what we perceive is about to be challenged like never before. Imagine a world where the creations of AI are so intricate and so realistic that even the most sophisticated forensic tools fail to discern them from human-made content. This is not a dystopian science fiction narrative, but a plausible reality unfolding before our very eyes.

The consensus among the world's leading AI experts is that this reality is not as far off as we might think. In fact, most agree that it is more than likely to materialise within the next decade. Some more pessimistic scenarios suggest that, given the unprecedented speed of progress in generative AI, we could lose the ability to differentiate between human- and machine-made realities as early as the end of this year. The implications of this development would be profound, and its potential applications are both exciting and daunting. As we move into this new era, it is vital to navigate the ethical, social, and legal implications with care and consideration. Reality as we know it is about to change, and generative AI's role in this transformation will be pivotal.


The rise and rise of generative AI

It didn't take long for the world to become enamoured with generative AI. By September 2023, ChatGPT had exceeded 180 million users. Text-to-image models like DALL-E, Midjourney, and Stable Diffusion likewise achieved widespread adoption in a very short amount of time. However, while text and image generation — and chatbots, of course — remain the most popular applications of this groundbreaking technology, they merely scratch the surface of what generative AI is truly capable of. In fact, it is widely believed that generative AI will have a profound impact on business and society at large, extending its reach into realms previously thought to be exclusively human. From generating original content and answering intricate questions to unearthing hidden patterns in data and optimising complex processes, the applications of generative AI are vast and varied, leaving no industry untouched. Just imagine AI tools that can manipulate physical objects, create unique designs, produce compelling media, and even generate stunning works of art.

While the business world is just starting to tap into its potential, the effects are already profound. Today, we're witnessing generative AI's initial impact in areas like marketing content generation and customer support. But this is just the beginning. As the technology evolves, we'll see even more sophisticated applications, such as in financial decision-making and eventually in highly integrated sectors like industrial process automation. With this in mind, it's no surprise that the generative AI market is predicted to reach anywhere between $75 billion and $130 billion by 2030. However, these predictions should be taken with a pinch of salt, as the speed of technological development — especially when it involves a technology as new as generative AI — makes it difficult for industry analysts to be precise.

Risks abound

There is no doubt that generative AI brings numerous benefits, allowing us to significantly improve productivity and efficiency across many different fields, from services and communication to industrial operations. However, as with any transformative technology that came before it, there are also certain risks that need to be taken into account. By now, you've probably heard one or two wild proclamations about an impending AI apocalypse. Although these proclamations have been fuelled in large part by unprecedented hype and a substantial amount of misunderstanding surrounding the technology — and have thus far proven unfounded — that's not to say that the risks are non-existent. In fact, some of them are very real. As is often the case with new technologies, generative AI is still hampered by flaws that can significantly affect the quality of its output. There is also the risk that bad actors will exploit the technology for nefarious purposes, using it to launch more sophisticated cyberattacks or spread disinformation.

Like all other technologies that rely on machine learning, generative AI is only as good — or as flawed — as the data it's trained on. If the training data carries biases, generative AI will mirror them, potentially perpetuating stereotypes and unfair representation. It's like teaching a child to understand the world using a history book that only tells one side of the story. This can lead to an overrepresentation of dominant worldviews, while minority or oppressed groups might find their stories under- or misrepresented — or worse, omitted entirely. Without careful supervision, generative AI can therefore make biased decisions and produce biased outputs.

This brings us to the next major flaw in generative AI models: hallucinations. Even if the training set contains only correct information, there is always a possibility that the model will produce incorrect outputs. Two types of hallucinations have been identified in generative AI models: knowledge-based, where the model provides incorrect information, and arithmetic, where it makes incorrect calculations. Recent studies have shown that hallucination rates in certain generative AI models can exceed a staggering 50 per cent on some tasks, such as answering professional exam questions. Finally, there is the issue of shallowness: AI's inability to handle more complex requests, which leads it to produce nonsensical images or text.
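
To make the distinction between the two types of hallucinations more concrete, here is a small, hypothetical sketch of how one might automatically flag the arithmetic kind: scanning a model's output for simple arithmetic claims and recomputing them. The pattern and helper function are invented for this example; detecting knowledge-based hallucinations is far harder and remains an open research problem.

```python
# Hypothetical sketch: flag arithmetic hallucinations by recomputing
# simple "a <op> b = c" claims found in model output. Illustrative only.
import re

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "x": lambda a, b: a * b}

def find_arithmetic_hallucinations(text: str) -> list[str]:
    """Return every arithmetic claim in the text that does not check out."""
    pattern = r"(\d+)\s*([+\-*x])\s*(\d+)\s*=\s*(\d+)"
    bad = []
    for a, op, b, claimed in re.findall(pattern, text):
        if OPS[op](int(a), int(b)) != int(claimed):
            bad.append(f"{a} {op} {b} = {claimed}")
    return bad

output = "The total is 17 x 23 = 401, while 12 + 30 = 42 as expected."
print(find_arithmetic_hallucinations(output))  # ['17 x 23 = 401'] (17 x 23 is 391)
```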

In the wrong hands, generative AI can be used to destabilise societies, influence public opinion, and launch sophisticated cyberattacks. The technology has made it possible for just about anyone to create convincing content of all kinds, ranging from text and images to speech and video. However, this ease of use and wide accessibility have also paved the way for unethical exploitation. One such manifestation is the rise of deepfakes — highly realistic but manipulated media that is almost indistinguishable from the real thing to the untrained eye. Between 2020 and 2021, the number of deepfake videos posted online surged by a staggering 900 per cent. What's even more troubling is that these deepfakes have become increasingly realistic: a recent study showed that humans could correctly identify an AI-generated face only about 50 per cent of the time, which on a two-way judgement is no better than flipping a coin. As the technology continues to improve, this will only become more difficult.

This growing uncertainty around the authenticity of online content has given rise to a new phenomenon — the so-called liar's dividend, which refers to the increasing ability of public figures to cast doubt on real events by simply claiming they are deepfakes. The result? A major blow to political accountability, a rise in conspiracy thinking, and an overall erosion of public confidence in web content.

When combined with social engineering techniques, AI-generated content can also be used to launch phishing attacks and gain access to sensitive information, endangering the stability of entire organisations. Indeed, the number of phishing attacks doubled between 2021 and 2022, which coincided with the advent of ChatGPT. So, even if it is not the direct existential threat to humanity that some have proclaimed it to be, generative AI has inadvertently made it easier for unscrupulous individuals to initiate cyberattacks, spread fake news, and damage the reputation of innocent individuals, organisations, or even whole countries.

“In one or two years, you won’t be able to tell what is true and what is false”. 

Gil Perry, the CEO and co-founder of the Israeli AI company D-ID

What could possibly go wrong?

Synthesia is a London-based generative AI company that develops AI avatars for a wide range of business purposes, including training and development, customer service, and marketing. Trained on video footage filmed by professional actors, these digital humans allow companies to create realistic-looking videos in a matter of minutes, delivered in more than 120 languages and accents. All you have to do is enter the text you want the AI avatar to say, click generate, and watch the magic come to life before your eyes. There are currently more than 140 avatars available on the platform, representing a wide range of genders, skin tones, and clothing styles. While the company explicitly forbids the use of its technology to produce political, sexual, personal, criminal, or discriminatory content, bad actors have repeatedly found their way around the existing safeguards, using the avatars to spread misinformation or even commit elaborate crypto scams. In addition to taking down these videos, Synthesia has taken further steps to prevent the abuse of its platform by restricting the creation of news content to enterprise accounts and quadrupling the number of content moderators. "Content moderation has traditionally been done at the point of distribution. Microsoft Office has never held you back from creating a PowerPoint about horrible things or writing up terrible manifestos in Microsoft Word", says Victor Riparbelli, the CEO of Synthesia. "But because these technologies are so powerful, what we're seeing now is moderation is increasingly moving to the point of creation, which is also what we're doing".
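
To illustrate just how low the barrier to entry has become, a text-to-avatar workflow like the one described above can typically be driven with a few lines of code. The sketch below is hypothetical: the endpoint URL, request fields, avatar identifier, and response shape are placeholders invented for this example, not Synthesia's documented API.

```python
# Hypothetical sketch of a text-to-avatar-video request; the endpoint,
# fields, and avatar id are placeholders, not a real vendor's API.
import requests

payload = {
    "script": "Hello! Here is this week's product update.",  # text the avatar will speak
    "avatar": "anna",       # placeholder id for one of the platform's stock avatars
    "language": "en-GB",    # such platforms advertise 120+ languages and accents
}

response = requests.post(
    "https://api.example-avatar-vendor.com/v1/videos",       # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},        # placeholder credential
    timeout=30,
)
response.raise_for_status()
print("Video generation job queued:", response.json().get("id"))
```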

While the technology still has certain limitations that make it relatively easy to spot an artificial creation — Synthesia's avatars, for example, cannot move their arms — it's only a matter of time before it becomes impossible to distinguish between what is real and what is fake. "In one or two years, you won't be able to tell what is true and what is false", says Gil Perry, the CEO and co-founder of the Israeli AI company D-ID. This could result in a massive increase in online abuse and bullying, as criminals turn to generative AI to produce fake images of their victims in compromising situations and use them for blackmail or simply to humiliate them. Identity theft and fraud could also increase significantly, fuelled by further advances in voice cloning technology, which will be capable of convincingly replicating anyone's voice from just a short audio recording. The same goes for AI-generated video content, which will become so realistic that we won't be able to believe our eyes anymore. Or our ears, for that matter. In response to this growing threat, a group of prominent tech figures, including Steve Wozniak and Elon Musk, published an open letter calling for an immediate pause on all large-scale AI experiments for at least six months. "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?", reads the letter. When the very people who are building AI are so afraid of it, shouldn't we pay serious attention?

“Generative AI can create content that looks and feels real, and regular people’s avatars can be inserted into content by third parties without their consent. This is not right, and we should never lose control over our identity, privacy or biometric data”.

Thomas Graham, co-founder and CEO of Metaphysic

How to protect yourself

The advent of generative AI can be likened to opening Pandora's box; once the contents are out, they'll be impossible to put back. However, we still possess the power to shape its deployment. The challenge lies in doing so thoughtfully and responsibly, ensuring that we leverage its benefits while mitigating potential harms. It would be exceedingly naive to trust AI companies to regulate themselves, though, as they have repeatedly demonstrated that they are driven primarily by profit rather than the common good. So, what are we to do? The first step is to create cryptographic certification standards that enable us to authenticate digital content and determine whether it's real. Some companies have already taken concrete steps in this direction, as evidenced by the establishment of the Coalition for Content Provenance and Authenticity (C2PA), which is working on standards for the authentication of images, videos, text, and audio. Numerous prominent companies have already joined the organisation, including tech giants like Adobe, Microsoft, and Intel. Next, we need to raise public awareness of the potential dangers associated with the use of generative AI. Last but not least, governments need to get involved and draft laws that safeguard individual rights and punish those who use generative AI for malicious purposes. Laws regulating data protection and privacy already exist in 157 countries worldwide, with the EU's GDPR and China's Cybersecurity Law among the most prominent examples. Unfortunately, regulation of generative AI still lags behind the technology's pace of development. All eyes are now on the EU's AI Act, which represents one of the most comprehensive attempts to regulate AI thus far, and whose final form could significantly influence global norms around AI regulation.
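
To make the idea of cryptographic certification more concrete, here is a minimal sketch of how content provenance works in principle: a creator signs a manifest containing a hash of the media file, and anyone holding the matching public key can later verify that the content has not been altered since it was signed. This is a simplified illustration of the concept behind standards like C2PA, not the standard itself; the manifest fields are invented for the example, and the real specification embeds signed metadata directly in the file.

```python
# Minimal hash-and-sign provenance sketch (illustrative only; real
# standards like C2PA define their own manifest format and embedding).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(media_bytes: bytes, key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
    """Build a provenance manifest for the media and sign it."""
    manifest = json.dumps({
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # fingerprint of the content
        "creator": "example-studio",                        # invented field, for illustration
    }, sort_keys=True).encode()
    return manifest, key.sign(manifest)

def verify_content(media_bytes: bytes, manifest: bytes,
                   signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Check the signature, then check the media still matches the manifest hash."""
    try:
        public_key.verify(signature, manifest)  # raises InvalidSignature if tampered
    except InvalidSignature:
        return False
    claimed = json.loads(manifest)["sha256"]
    return claimed == hashlib.sha256(media_bytes).hexdigest()

key = Ed25519PrivateKey.generate()
media = b"...video bytes..."
manifest, sig = sign_content(media, key)
print(verify_content(media, manifest, sig, key.public_key()))        # True
print(verify_content(b"tampered", manifest, sig, key.public_key()))  # False
```

In practice, the hard problems are key distribution and trust: a valid signature only proves who signed the content and that it is unchanged, not that what it depicts is truthful.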

Rather than waiting for governments to protect their citizens, some companies have decided to take matters into their own hands and provide people with the means to safeguard their identities. For example, the AI software company Metaphysic, which garnered worldwide attention by releasing an unbelievably realistic deepfake of Tom Cruise back in 2021, recently launched the world's first digital likeness protection platform, which allows people to securely create, store, and protect their personal biometric data and control its subsequent use by third parties. "Generative AI can create content that looks and feels real, and regular people's avatars can be inserted into content by third parties without their consent. This is not right, and we should never lose control over our identity, privacy or biometric data", said Thomas Graham, co-founder and CEO of Metaphysic, before becoming the first person to submit his AI likeness for copyright registration with the US Copyright Office. To create their AI likeness, a person first needs to record a three-minute video of themselves, after which Metaphysic uses its proprietary AI tools to create a hyper-realistic avatar. Several high-profile individuals have already signed up for the platform, including actors Tom Hanks, Rita Wilson, and Anne Hathaway, former tennis player Maria Sharapova, and Paris Hilton. "Whether you are an actor, performer, sportsperson or just a concerned citizen, it is critical that everyone takes active steps to protect their personal data that can be used to create a perfect AI version of your likeness or performance", adds Graham. That being said, it remains to be seen whether registering a copyright for an AI likeness will actually provide individuals with rights and protections against third-party infringements — and to what extent. Can AI-generated works even be copyrighted? If so, who owns the copyright? These questions, among others, highlight the need for clear, comprehensive regulation in this area. Policymakers, legal experts, and AI developers must come together to navigate these complex issues, ensuring that the evolution of copyright law reflects the realities of AI advancements.

Closing thoughts

The emergence of generative AI marks a pivotal juncture in human history, propelling us towards a future where the boundaries between reality and artificiality continue to blur. As we stand on the cusp of this transformative era, generative AI seems destined to permeate every facet of our lives, reshaping industries and societal norms in ways that are both awe-inspiring and perilous. Undoubtedly, the potential applications of generative AI are vast and promising, offering unprecedented avenues for innovation across various sectors, from enhancing business operations to fostering creative expression. However, this promise doesn't come without significant risks. The ethical, social, and security implications of this burgeoning technology are multifaceted and demand meticulous consideration.

The pervasiveness of generative AI brings forth an array of concerns, ranging from the perpetuation of biases ingrained in training data to the alarming rise of deceptive deepfakes. The ease with which sophisticated AI-generated content can be created and manipulated poses a clear and present danger, amplifying the threat of cyberattacks, the spread of misinformation, and the erosion of trust in digital content. Amidst these challenges, a proactive approach is imperative. Collaboration between technology companies, governments, and regulatory bodies is essential to establish robust frameworks that protect against malicious exploitation while preserving individual rights and privacy. Only through collaborative and conscientious efforts can we harness the full potential of this groundbreaking technology for the betterment of society while safeguarding against its darker implications.
