The dark side of AI: 10 threats we can’t afford to ignore

Richard van Hooijdonk
As artificial intelligence continues to reshape our world, one question looms large: are we ushering in an era of unparalleled progress or paving the way towards our own extinction?
  • A disinformation offensive
  • The costly consequences of AI hallucinations
  • Falling for the machine
  • The loss of free will
  • Data-driven discrimination
  • The death knell for human labour
  • The end of democracy
  • Evolution takes a wrong turn
  • A world at risk
  • An existential threat to humanity

Executive summary:

Artificial intelligence seems to be everywhere these days. It’s powering our smart homes, driving us to work, and even helping us do our jobs. But in our desire to make our lives easier and more convenient, are we perhaps overlooking something? Could there be unintended consequences of our increased reliance on AI?

  • AI could make it easier to spread disinformation, allowing anyone to create realistic images, video, and audio content with a simple click of a button.
  • As AI becomes increasingly human-like, it could lead people to develop unhealthy emotional attachments to it and neglect human interaction.
  • Our growing reliance on AI could result in changes in our cognitive, social, and physical capabilities, potentially shaping the course of human evolution.
  • The lack of diversity in AI training data could exacerbate existing inequalities and lead to discriminatory outcomes against certain groups of people.
  • Further advancements in AI may give rise to new security threats and may even put humanity’s survival at risk.
  • AI could significantly alter the balance of power between nations, which could have a profound impact on the future of democracy.

The only thing we can say for sure right now is that AI is only going to get smarter and more powerful. The decisions we make today about how to handle this technology are going to have a massive impact on the world we leave behind for future generations. If we do it right, AI could help us create a brighter, more incredible future than we ever dreamed possible. But if we drop the ball, well…you probably have a pretty good idea how that will go.

Remember when the scariest thing about technology was the fear of your computer crashing right before you saved that important document? Oh, how times have changed! Nowadays, we are faced with far more unsettling tech-related risks, many of which stem from our growing reliance on one specific technology – artificial intelligence (AI).

Once capable of executing only the simplest of tasks, AI has advanced considerably over the years, becoming increasingly sophisticated and integrating itself into various aspects of our daily lives. From the virtual assistants in our smartphones to the algorithms that decide what we see on our social media feeds, AI is almost everywhere. But here’s the thing – such ubiquitous presence also carries with it some inherent risks, which have only intensified once generative AI burst onto the scene and became accessible to the general public.

For years, experts have warned us about the possibility that the technology could be used to cause harm – perhaps inadvertently so – or even weaponised by those with less-than-noble intentions. Some have gone so far as to call AI an existential threat to the human race. Now, that’s an alarming thought, isn’t it? Don’t get us wrong, though. Our intention is not to spread fear or get you to toss your gadgets out the window – not yet. We just want to show you both sides of the coin so that you can make a better-informed decision about the extent to which you’ll let AI into your life.

10. A disinformation offensive

AI’s ability to generate increasingly realistic images, video, and audio may soon make us doubt our own eyes and ears. How will we know what’s real then?

One of the biggest concerns associated with the proliferation of AI is that it could make it easier to spread disinformation. As AI-generated images, video, and audio content become increasingly realistic, they could diminish our ability to differentiate fact from fiction. Pretty soon, every piece of information might come with an invisible question mark, making us doubt even the most seemingly reliable sources. After all, if you can’t trust your own eyes and ears, what can you trust?

By now, you’ve probably heard about how AI has been used to influence election outcomes across the world, most notably in the US and France. That’s just the tip of the iceberg: generative AI, in particular, could also cause some serious harm in the business world. For example, the technology could be used to produce a convincing video of a company’s chief executive making inappropriate comments or to generate false financial reports or product recalls, sending the company’s stock prices plummeting. Such deceptions could spread like wildfire across social media platforms, causing irreparable damage to a company’s image within hours.

The technology could also play into the hands of sceptics and conspiracy theorists, enabling them to plausibly call into question real events by claiming that the footage was fabricated by AI tools. However, we can’t lay the blame for this particular problem entirely at AI’s doorstep. After all, there are still people who doubt the moon landing took place, and that happened long before AI was capable of generating a convincing image. Ultimately, some people will simply believe whatever they want to believe.

So, what can be done to address this issue? First, we’ll need to develop more robust AI detection tools that can keep pace with the rapidly evolving capabilities of generative AI. We’ll also need to invest more in media literacy education, starting from early years and continuing long into adulthood. Thoughtful regulation will also play an important role. This could involve mandating clear labelling of AI-generated content and creating legal frameworks to hold bad actors accountable for the malicious use of AI.
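
To make the labelling idea a little more concrete, here’s a minimal Python sketch of how a platform might attach a tamper-evident ‘AI-generated’ tag to content. The function names, label format, and signing key are all made up for illustration – real provenance standards such as C2PA take a far more rigorous approach – but the principle is the same: if the label is stripped or altered, verification fails.

```python
import hmac
import hashlib

# Illustrative only: a platform signs AI-generated content with a provenance
# label so downstream services can tell whether the tag was stripped or forged.
SECRET_KEY = b"platform-signing-key"  # in practice: a securely managed key

def label_ai_content(content: str) -> dict:
    """Attach an 'ai-generated' label plus a tamper-evident signature."""
    payload = f"ai-generated:{content}".encode("utf-8")
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "label": "ai-generated", "signature": signature}

def verify_label(record: dict) -> bool:
    """Check that the label and content still match the original signature."""
    payload = f"{record['label']}:{record['content']}".encode("utf-8")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = label_ai_content("A photorealistic image caption produced by a model.")
print(verify_label(record))   # True
record["label"] = "human-made"  # tampering breaks verification
print(verify_label(record))   # False
```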

  • Generative AI could make it easier to spread disinformation.
  • The technology has already been used to influence election outcomes around the world.
  • In the business world, malicious actors could use generative AI to generate false financial reports, for example, causing serious harm to companies.
  • Developing more robust AI detection tools, investing in media literacy education, and passing thoughtful regulation are some of the steps we can take to minimise this risk.

9. The costly consequences of AI hallucinations

AI’s propensity to produce factually incorrect responses doesn’t just make it difficult to trust AI-generated content but can also have serious real-world repercussions.

Everyone who has used generative AI will surely have noticed that it occasionally produces responses that simply aren’t true. These factual inaccuracies are typically referred to as ‘hallucinations’, and they pose a significant challenge to our ability to trust AI-generated content. Despite these concerns, many companies are rushing to put out their own chatbots, eager to capitalise on the hype surrounding generative AI. But in their haste to bring these products to market, they are often failing to properly address the hallucination problem. In some cases, the outcome can be rather amusing. But in others, the consequences can be markedly more serious.

Take, for example, the January 2024 DPD chatbot mishap. When a customer tried to inquire about a missing parcel and failed to get a satisfactory response, he set out to expose the chatbot’s failings and have some fun in the process. Following a series of carefully worded prompts, the customer was eventually able to convince the chatbot to swear at him, call itself ‘useless’, and even criticise its own company. While DPD quickly apologised for the malfunction and took the chatbot offline to resolve the issue, the incident highlighted AI’s propensity to generate inappropriate or offensive content when not properly constrained.

However, not all AI hallucinations are as harmless or amusing, as illustrated by a case involving Air Canada’s chatbot that was decided in February 2024. In this case, a customer inquiring about the airline’s bereavement discount after losing a family member was erroneously informed by the chatbot that he could buy a ticket at the regular price and then apply for the discount within 90 days of purchase. In reality, the airline’s policy required such requests to be submitted before the flight. When Air Canada refused to honour the refund claim, the customer took the matter to British Columbia’s Civil Resolution Tribunal, which ultimately ordered the airline to pay him CA$812.02 in damages.

Perhaps the most notorious example is Google Bard’s blunder about the James Webb Space Telescope. In a February 2023 promotional demo, Bard confidently stated that the telescope had taken the very first pictures of a planet outside our solar system. The only problem? That milestone was actually achieved nearly two decades before the Webb telescope launched. As news of the error spread, investors grew concerned about the potential impact on Google’s reputation and competitiveness in the AI market, triggering a sharp drop in the company’s stock price that wiped roughly US$100bn off the market value of its parent company, Alphabet – a costly hallucination indeed.

To minimise this risk, we need to incorporate more robust fact-checking and verification mechanisms into AI systems. If we could cross-reference generated content against reliable sources and flag potential inaccuracies, we may be able to help catch and correct hallucinations before they cause harm. We also need to be more transparent about the limitations and uncertainties of AI-generated content. Rather than presenting chatbot responses as definitive truth, companies could include disclaimers or confidence scores to indicate the level of certainty associated with each output.
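
To illustrate what such a mechanism might look like at its very simplest, here’s a toy Python sketch that scores a chatbot’s answer against a handful of trusted reference snippets and attaches a disclaimer when support is weak. The sources, threshold, and scoring method are all stand-ins – production systems would use retrieval and trained verification models rather than crude word overlap.

```python
# Toy sketch of 'flag potential inaccuracies': score an answer against
# trusted reference snippets and add a disclaimer when support is weak.
TRUSTED_SOURCES = [
    "Refund requests must be submitted before the scheduled flight.",
    "Bereavement fares require supporting documentation at booking time.",
]

def support_score(answer: str, sources: list[str]) -> float:
    """Crude lexical overlap between an answer and the trusted sources."""
    answer_words = set(answer.lower().split())
    best = 0.0
    for source in sources:
        source_words = set(source.lower().split())
        overlap = len(answer_words & source_words) / max(len(answer_words), 1)
        best = max(best, overlap)
    return best

def present_answer(answer: str) -> str:
    score = support_score(answer, TRUSTED_SOURCES)
    if score < 0.5:  # arbitrary threshold for the sketch
        return f"{answer}\n[Low confidence ({score:.0%}): please verify with an agent.]"
    return answer

print(present_answer("You can apply for the discount within 90 days of purchase."))
```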

  • The biggest limitation of generative AI is its tendency to ‘hallucinate’ or produce false or misleading information.
  • Generating factually incorrect information and presenting it as the truth could have serious real-world implications for companies.
  • In February 2023, Google’s generative AI tool Bard incorrectly claimed that the James Webb Space Telescope had taken the first pictures of a planet outside our solar system – an error that wiped roughly US$100bn off the market value of Google’s parent company, Alphabet.
  • To identify and correct hallucinations before they can cause harm, we need to equip AI systems with robust fact-checking and verification mechanisms.

8. Falling for the machine

People worldwide are becoming increasingly attached to their AI assistants, with potentially detrimental effects on their social skills and mental health.

As AI becomes increasingly human-like, people could start attributing human qualities to these artificial entities or even becoming emotionally dependent on them. This isn’t some distant possibility; we’ve already seen a raft of stories about people developing deep emotional bonds with their chatbots, or even falling in love with them.

Of course, this isn’t entirely new territory. Remember the Tamagotchi craze of the 90s? Those little egg-shaped digital pets captivated millions of people worldwide, with many becoming deeply invested in their virtual creature’s wellbeing. It’s easy to imagine AI taking this to a whole new level. Take, for instance, the Loverse app, which is gaining serious traction in Japan. The app features AI avatars tailored specifically to each user’s profile and idiosyncrasies. Some users have become so enamoured with their digital companions that they prefer the idea of marrying their avatars over real-life partners. Just imagine the look on your mother’s face when you break the news.

While this might seem harmless or even amusing at first glance, it raises some serious concerns. As people invest more time and emotion into these artificial relationships, they risk cutting themselves off from real human interaction. That kind of isolation could have profound effects on mental health and wellbeing. We’re social creatures, after all, and genuine human connections are crucial for our psychological health. Without them, we may see a decline in empathy and social skills. It’s hard to practise reading social cues or handling conflicts when your primary relationship is with an entity programmed to please you, right?

The risks don’t stop there. As we grow more accustomed to these hyper-responsive AI companions, we may start to overestimate AI’s capabilities in other areas of life. This could lead to a false sense of trust, which unscrupulous companies might exploit to manipulate or deceive us. To mitigate these risks, we’ll need to set clear guidelines on how AI should interact with users, especially in emotionally sensitive areas. This might involve mandatory disclosures reminding users that they’re interacting with AI, not a real person. Most importantly, we’ll need to teach people, especially younger generations, about the significance of maintaining real-world relationships – not just digital ones.

  • There is a genuine risk that, as AI becomes increasingly realistic and human-like, people might grow emotionally dependent on it.
  • In Japan, a growing number of people are becoming enamoured with AI avatars, with some even marrying their digital companions.
  • Prioritising artificial relationships over real human interaction could have a negative impact on people’s social skills, mental health, and wellbeing.
  • It’s imperative to set clear guidelines related to human-AI interactions, especially when emotionally sensitive areas are concerned.

7. The loss of free will

As we delegate more and more of our tasks to AI, could we eventually sacrifice our free will as well?

While the potential for AI to redefine our personal relationships may be creepy, its influence on our lives extends far beyond them. AI is increasingly shaping our entire decision-making process, from what we watch on streaming platforms to what we buy online. While you may think having these personalised recommendations is rather convenient, there’s a genuine risk that overreliance on AI could lead to a gradual erosion of our critical thinking and problem-solving skills.

Think about it: if we constantly delegate our choices to algorithms, we may become less adept at weighing options, considering consequences, and making informed decisions for ourselves. Over time, as we outsource more and more of our decision-making to AI, we could even find our free will compromised. Today, it’s “What should I watch?”, but tomorrow it could be “What career should I pursue?” or “Who should I date?”. Before we know it, we might wake up to find our lives scripted by algorithms rather than shaped by our own choices.

Now, this isn’t to say that AI shouldn’t play any role in the decision-making process. In many areas, such as complex data analysis or medical diagnosis, AI’s superior processing power and pattern recognition can be incredibly valuable. The key is to find a balance between AI-assisted decision-making and human input, ensuring that we harness the strengths of both while mitigating the risks.

One way to strike this balance is to always keep a human in the loop. Rather than fully automating decisions, we envision a future where AI presents us with a range of options, carefully curated based on our needs and preferences. But ultimately, the final choice would rest with us – the human users.
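
In software terms, that human-in-the-loop principle boils down to a simple control-flow rule: the algorithm may rank and suggest, but it never commits. The Python sketch below illustrates the idea; the scoring logic is a deliberately trivial stand-in for a real recommender, and all names are hypothetical.

```python
# Minimal human-in-the-loop pattern: the 'AI' ranks options,
# but the final decision always comes from the user.
def rank_options(options: list[str], preferences: set[str]) -> list[str]:
    """Order options by how many preference keywords they match."""
    def score(option: str) -> int:
        return sum(1 for word in preferences if word in option.lower())
    return sorted(options, key=score, reverse=True)

def choose(options: list[str], preferences: set[str]) -> str:
    ranked = rank_options(options, preferences)
    print("Suggested options (best match first):")
    for i, option in enumerate(ranked, start=1):
        print(f"  {i}. {option}")
    pick = int(input("Your choice (number): "))  # the human decides
    return ranked[pick - 1]

# Example: the algorithm curates, the person commits.
# choose(["Sci-fi thriller", "Romantic comedy", "Nature documentary"],
#        {"sci-fi", "documentary"})
```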

  • AI now has a major influence on our choices in many aspects of our lives.
  • If we become overly dependent on AI, we risk diminishing our critical thinking and problem-solving skills.
  • There’s even a possibility that we may eventually sacrifice our free will.
  • To prevent this from happening, we need to find the right balance between AI and human input in the decision-making process, ensuring that there is always a human in the loop.

6. Data-driven discrimination

The data used to train AI systems often fails to accurately reflect the diversity of the entire population, leading to biased or discriminatory outcomes.

At the end of the day, AI systems are only as good as the data they’re fed. If that data isn’t representative of the entire population, the AI’s decisions could end up being biased or discriminatory, largely at the expense of those already marginalised or underprivileged. Consider the alarming finding that the pedestrian-detection systems used in driverless cars were significantly worse at recognising dark-skinned pedestrians, making collisions more likely. It’s not because the AI had a vendetta against people of colour, but because it wasn’t trained on a diverse enough dataset of pedestrians. Similarly, facial recognition algorithms have been found to misidentify people of colour far more frequently than their white counterparts, leading to false accusations and even wrongful arrests.

But it’s not just about race. The Dutch childcare benefit scandal is another sobering example of how AI bias can ruin lives. In this case, an algorithm used by the Dutch tax authority to detect fraud in childcare benefit claims led to the wrongful accusation of thousands of families. The victims were disproportionately low-income families. As a result, many families were forced to repay large sums of money, driving some into financial ruin and even resulting in the removal of children from their homes in some cases.

So, what can we do about this? First and foremost, we need to invest heavily in developing unbiased algorithms and diverse training datasets. We also need more robust regulation. Just as we have safety standards for physical products, we need strict guidelines for AI systems, especially those used in critical areas like law enforcement, healthcare, and social services. It might also be a good idea for AI companies to involve ethicists at every stage of the design process. Having someone constantly asking “But is this fair?” could help catch potential biases before they become baked into the system.
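
One concrete check that auditors already apply is the so-called ‘four-fifths rule’ from disparate-impact analysis: if any group’s approval rate falls below 80% of the best-off group’s, the system gets flagged for review. Here’s a minimal Python sketch of that test; the data and group names are made up for the example.

```python
# Sketch of a disparate-impact check (the 'four-fifths rule'):
# flag any group whose approval rate is below 80% of the best group's.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions: list[tuple[str, bool]]) -> list[str]:
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * best]

sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 50 + [("group_b", False)] * 50)
print(disparate_impact_flags(sample))  # ['group_b']: rate 0.5 < 0.8 * 0.8
```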

  • An AI trained on data that isn’t representative of the entire population will often make biased or discriminatory decisions.
  • In 2019, researchers found that the pedestrian-detection systems used in self-driving cars were significantly worse at recognising dark-skinned pedestrians – a direct consequence of the AI not being trained on diverse enough datasets.
  • In the Netherlands, an algorithm created by the Dutch tax authority wrongfully accused thousands of families, primarily those from low-income backgrounds, of committing childcare benefit fraud.
  • To eradicate bias, we need to invest in creating more diverse training datasets, as well as consider involving ethicists in AI’s design process.

5. The death knell for human labour

A growing number of jobs are at risk of automation. But what will happen if AI takes over all of our jobs? How will we earn a living then?

Earlier in this article, we discussed how we’re now delegating more and more tasks to AI. As this trend continues to pick up pace in the years to come, there are growing concerns that AI will inevitably start to displace human workers. From a purely economic perspective, why would companies continue to hire humans when AI can perform the same tasks faster, cheaper, and often with greater accuracy?

This isn’t mere speculation. A 2023 report by Goldman Sachs predicts that AI could replace a staggering 300 million full-time jobs worldwide. This trend is likely to disproportionately impact low-skilled blue-collar and white-collar workers, many of whom are already struggling with stagnant wages and job insecurity. As AI takes over routine tasks like data entry, assembly line work, and basic customer service, these workers may find themselves increasingly out of work, exacerbating existing inequalities and fuelling social unrest.

Is it time to start panicking? Well, not really, because we’ve been here before. From the invention of the printing press to the rise of the automobile, history is replete with examples of workers having to switch professions in response to new technologies. While these transitions can be painful in the short term, they’ve often led to greater prosperity and innovation in the long run. So while AI will undoubtedly take over many jobs, it’s also likely to create new ones – possibly even more than it eliminates.

As the nature of work evolves, our educational systems will need to adapt, placing greater emphasis on skills like critical thinking, creativity, and emotional intelligence – the very qualities that distinguish us from machines. We may also need to consider more radical policy interventions, such as a universal basic income (UBI), which would provide a basic standard of living for all citizens regardless of their employment status. And while AI taking over all jobs remains an unlikely scenario, UBI could serve as a safeguard in case it ever comes to pass. It’s like an insurance policy for the entire workforce: hopefully we won’t need it, but it’s good to have in place.

  • As we continue to delegate more and more tasks to AI, many human workers may soon find themselves without a job.
  • Goldman Sachs predicts that AI could replace as many as 300 million full-time jobs worldwide.
  • Some experts believe that AI will also create many new jobs, which will require human workers to acquire new skills.
  • While AI is unlikely to take over all jobs, instituting a universal basic income could serve as a safety net in case it does happen.

4. The end of democracy

Generative AI has the potential to tip the balance of power in favour of countries that prioritise the development of this technology.

As generative AI continues to advance at an unprecedented pace, it’s becoming increasingly clear that this technology could have profound implications for the global balance of power. Countries that fail to invest in the development of generative AI risk falling behind those that do, potentially leading to significant shifts in economic, military, and cultural influence. China, in particular, has emerged as a dominant force in several key areas of AI research and development. The country has already achieved major breakthroughs in fields like facial recognition, natural language processing, and autonomous vehicles – thanks in large part to vast troves of data, strong government support, and a thriving tech ecosystem.

Now, China is turning its attention to generative AI, aiming to achieve a similar level of dominance. But what exactly would China becoming a leader in generative AI mean for the world? For one, it could give Chinese companies a major advantage in a wide range of industries. The country could also use the technology to create compelling content in multiple languages, enabling it to shape global narratives in ways that align with Chinese interests.

Perhaps most concerning, however, is the possibility that China could use its leadership in generative AI to export its model of digital authoritarianism to other countries. Given access to China’s AI tools and platforms, other governments might follow its lead and use them to monitor and control their populations, stifle dissent, and shape public discourse. This could have profound implications for privacy, individual freedoms, and the future of democracy worldwide.

To mitigate these risks and ensure that the benefits of generative AI are more evenly distributed, it’s essential that other countries take proactive steps to develop their own capabilities in this field. This will require significant investments in research and development, as well as efforts to build collaborative partnerships across government, industry, and academia. Countries with shared values should collaborate on AI development, pooling their resources and expertise to create a counterweight to any single nation’s dominance of the technology.

  • Generative AI could have a profound impact on the global balance of power, leading to significant shifts in economic, military, and cultural influence.
  • Already a dominant force in many key areas of AI research and development, China is now investing heavily in the development of generative AI.
  • Should China become a world leader in generative AI, it could use its position to export its model of digital authoritarianism to other countries, which would have profound implications for the future of democracy.
  • The only way to ensure a more even distribution of generative AI’s benefits is for other countries to increase their investment in the technology, pooling their resources and expertise if necessary.

3. Evolution takes a wrong turn

Our growing dependence on AI could alter the course of human evolution – and not in ways that benefit us.

While AI undoubtedly has the potential to augment and enhance human capabilities in countless ways, it may also lead to unintended consequences that could alter the course of human evolution. We’ve already discussed how our growing dependence on AI could result in a dramatic decline in certain cognitive abilities. However, it’s not just our minds that could be altered – the same thing could happen to our bodies too.

For instance, if AI systems take over the majority of manual labour tasks, humans may become more sedentary and less physically active, leading to a range of health problems, such as increased rates of obesity, cardiovascular disease, and musculoskeletal disorders. We’ve already seen this to an extent with the proliferation of white-collar desk jobs. Over generations, these changes could even alter our physical appearance and capabilities. For example, if humans spend most of their time interacting with screens and virtual interfaces, we may see changes in our posture, vision, and dexterity.

Beyond the biological implications, AI could also have a profound impact on human culture and society. As AI systems increasingly control the information that we see online, we could see a homogenisation of cultures on a global scale. This cultural flattening could lead to a loss of diversity that has traditionally been a wellspring of human innovation and resilience. Local traditions, languages, and ways of thinking could gradually disappear as AI systems – often developed with particular cultural biases – become the primary mediators of our cultural experiences.

Fortunately, there are steps we can take to steer this process in a direction that enhances, rather than diminishes, our humanity. Most importantly, we need to develop AI systems that complement human abilities rather than replace them. To prevent the loss of cultural diversity, it’s essential that we prioritise the inclusion of diverse perspectives and experiences in the development of AI systems. This could involve initiatives to promote diversity and equity in the tech industry, as well as efforts to ensure that AI is trained on culturally diverse data sets that reflect the richness and complexity of the human experience.

  • Some experts are concerned that our growing reliance on AI could alter the course of human evolution in unexpected ways.
  • In addition to a decline in cognitive abilities, delegating the majority of our manual labour tasks to AI could make us more sedentary and lead to a range of health problems, potentially even affecting our physical appearance.
  • Ceding control of our experiences to AI could also result in a loss of cultural diversity and a gradual disappearance of local traditions and languages.
  • To preserve our cultural diversity, we need to make sure that AI training data accurately reflects the full breadth of the human experience.

2. A world at risk

AI could facilitate the emergence of new security threats, such as more sophisticated cyberattacks or autonomous weapons.

Think the idea of AI taking over your job is scary? What would you say if we told you that it’s not just your livelihood that may be at stake? Further developments in AI could also allow for the emergence of new security threats we may not be ready for. As AI becomes more sophisticated, so too will the tools available to hackers and other malicious actors. We can expect to see AI-powered cyberattacks that are faster, more adaptive, and harder to detect than anything we’ve seen before. Imagine a virus that can learn and evolve in real time, bypassing security measures as quickly as they’re put in place.

But the risks posed by AI extend far beyond cyberspace. The development of autonomous weapons, for example, raises serious questions about the future of warfare. While proponents argue that such weapons could reduce war casualties, critics warn that they could lead to a new arms race and lower the threshold for armed conflict. Ask yourself this, too: what happens if we lose control of these systems, or they fall into the wrong hands? The consequences could be catastrophic.

The first step towards minimising this risk is to invest in the development of AI-powered defence systems capable of detecting and counteracting AI-driven threats – a task that is rapidly outstripping unaided human capabilities. We also need to promote ethical AI development, encouraging companies to embed ethical considerations into their AI-powered products from the ground up, thus mitigating the risks of misuse. Finally, we need to maintain meaningful human control over critical systems at all times – the decision to take a life should never be delegated entirely to a machine.

  • Generative AI could lead to the emergence of new security threats, both in cyberspace and in the real world.
  • Hackers could use the technology to launch more sophisticated cyberattacks that would be more difficult to detect and counteract.
  • Many are worried that the development of autonomous weapons will result in a new global arms race and possibly lower the threshold for armed conflict.
  • To mitigate these threats, we need to promote ethical AI development, develop AI-powered defence systems, and ensure that humans retain control over AI systems at all times.

1. An existential threat to humanity

Right now, AI is still in our service. But what happens if it develops goals that clash with human interests or starts to perceive humanity as a threat?

Let’s escalate things a little further. As AI becomes more intelligent and sophisticated, it may develop goals and priorities that are not aligned with human interests. What would happen if an AI system came to perceive humanity as a threat to its own existence or objectives? And who can guarantee that it won’t take steps to neutralise that threat?

Throughout history, we’ve seen countless examples of species being driven to extinction by a predator that was smarter, stronger, or simply better adapted to their environment. In many cases, humans themselves have been the culprits, wiping out entire species through hunting, habitat destruction, or the introduction of invasive species. As unlikely as it may seem now, we cannot rule out the possibility that the same fate could befall us at the hands of AI. If we’re not careful in how we develop and deploy these technologies, we could inadvertently create a superintelligent AI that sees us as a threat to be eliminated.

Now, before you start stocking up for the robot apocalypse, it’s important to note that these are extreme scenarios. They’re possibilities we need to consider and prepare for, sure – but they are not inevitabilities. To ensure they don’t become a reality, we need to develop AI systems with built-in constraints that prevent them from harming humans or acting against human interests. We also need to invest in research on AI safety and control, ensuring that we maintain the ability to shut down or redirect AI systems if they start to act in unexpected or dangerous ways.
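
What might such constraints look like in practice? At their very simplest, they’re a layer of control flow that the system cannot route around: every proposed action passes through a hard-coded safety check, and a human-controlled stop flag halts everything regardless of what the model ‘wants’. The Python sketch below is purely illustrative – real AI-safety mechanisms are vastly more involved, and all the names here are hypothetical – but it captures the basic idea.

```python
# Toy sketch of 'built-in constraints' plus a human-controlled off switch.
# Every proposed action passes a hard-coded safety check, and a stop flag
# set by the operator halts the loop regardless of what the agent proposes.
import threading

FORBIDDEN = {"disable_oversight", "harm_human", "self_replicate"}
stop_requested = threading.Event()  # the operator's off switch

def execute(action: str) -> None:
    if stop_requested.is_set():
        raise SystemExit("Operator shutdown: halting all actions.")
    if action in FORBIDDEN:
        print(f"Blocked: '{action}' violates a built-in constraint.")
        return
    print(f"Executing: {action}")

for proposed in ["summarise_report", "harm_human", "schedule_meeting"]:
    execute(proposed)
# At any point, a human can call stop_requested.set() to halt the agent.
```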

  • There’s a possibility, however unlikely, that AI will eventually develop goals and priorities that clash with human interests.
  • If AI starts to see humanity as a threat, it might decide to eliminate us.
  • Developing AI systems with built-in constraints that prevent them from harming humans could help protect humanity against this threat.
  • If something does go wrong, we need to retain the ability to shut the AI down.

Learnings

So, what’s the big takeaway here? Is AI a friend or foe? The answer to this question is not as straightforward as you might think. On one hand, AI could help us solve some of the world’s biggest challenges and open up possibilities we can barely imagine today. On the other hand, it could also be used in ways that are harmful, or that benefit some groups way more than others.

Throughout this article, we’ve explored how AI is changing the nature of work, potentially rendering many jobs obsolete. We’ve grappled with the ethical implications of AI-driven decision-making and the challenges posed by AI bias. And we’ve contemplated how AI might influence human evolution itself, reshaping not only our cognitive abilities but also our physical appearance and cultural norms.

The good thing is that it’s not too late to act. We are still in control, and we get to decide how this plays out. Are we going to shape AI’s development, or are we going to let it shape ours? The answer to this question may very well be the difference between humanity thriving for centuries to come, or going the way of countless other species that encountered a superior adversary.
