Should we be worried about AI taking over the world, or are we freaking out for no reason?

  • Stephen Hawking believed that artificial intelligence could bring an end to the human race
  • Elon Musk is calling for proactive regulation in AI development
  • Others believe there’s no danger of computers ever becoming conscious

The idea of a superintelligent AI taking over the world and wiping out the human race in the process has long occupied humanity’s imagination. And so far, imagination is where the whole concept has remained: doomsday scenarios have never actually come to pass, and we’re still firmly in control. But that hasn’t stopped people from sounding the alarm about the dangers of AI. Is there any truth to what they’re saying? Are we really close to developing an AI capable of taking over the world?

Stephen Hawking believed that artificial intelligence could bring an end to the human race

The renowned physicist Stephen Hawking was one of the biggest proponents of the theory that AI poses a major threat to the future of mankind. While he acknowledged that most forms of artificial intelligence we’ve developed so far have been quite beneficial, he also expressed concerns that we would eventually develop an AI that would surpass our own abilities and possibly even replace us altogether. “The development of full artificial intelligence could spell the end of the human race,” he said. “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

To prevent this from happening, according to Hawking, we need to closely monitor the development of AI technology and find a way to identify potential threats before it’s too late. “We cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it,” said Hawking. “AI could be the worst invention of the history of our civilisation, that brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us. In short, the rise of powerful AI will be either the best, or the worst thing ever to happen to humanity.”

Elon Musk is calling for proactive regulation in AI development

With Hawking’s passing, the role of resident AI fearmonger fell to Elon Musk, who took to it with real enthusiasm. And just like Hawking, Musk believes that we need more regulation in the field of AI development. “I think we should be really concerned about AI and I think we should… AI’s a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late,” he says. “AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that.” Without such regulation, Musk warns, we could end up creating an immortal digital dictator that could take over the world and possibly even destroy humanity if we got in its way.

Others believe there’s no danger of computers ever becoming conscious

However, not everyone shares these concerns. Some argue that today’s AI systems have a very narrow range of abilities and can usually only perform the single task they’ve been explicitly trained for. While they can perform that task much better and faster than humans, they often struggle with tasks that would pose no difficulty to a small child. The datasets they’ve been trained on are usually highly specialised, and the knowledge an AI obtains from them can’t be applied to other, unrelated problems.
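To make that narrowness concrete, here’s a minimal sketch (our illustration, not anything built by the researchers quoted here) using Python and the scikit-learn library. A small neural network trained to recognise handwritten digits becomes very good at exactly that task, and its learned “knowledge” applies to nothing else:

```python
# A minimal sketch of how narrow today's AI systems are: a model trained
# on 8x8 images of handwritten digits (scikit-learn's built-in dataset)
# becomes highly accurate at that one task -- and only that task.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Excellent at the single task it was trained for...
print(f"Digit accuracy: {model.score(X_test, y_test):.1%}")

# ...but the model has no concept of anything else. Its weights encode
# nothing about letters, faces or language, and it can't even accept an
# input that isn't a 64-pixel digit image -- exactly the narrowness
# described above.
```

The model typically scores well above 95% on the digits it was trained to recognise, yet there’s no sense in which it “understands” what a digit is, let alone anything outside its training data.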

If AI is ever to surpass human abilities, we first need to develop a system capable of successfully performing every intellectual task a human being can, a concept referred to in the AI community as artificial general intelligence (AGI). However, there are serious doubts about whether this will ever be possible. There’s still a lot about the human brain we don’t know, such as what consciousness really is or how it’s formed. And while it’s theoretically possible that we will learn everything there is to know about the human brain one day and find a way to replicate it in a machine, it’s such a remote possibility that it’s probably not worth losing sleep over. “We’re a long way from building machines that match human brains,” says AI expert Toby Walsh. “We can build machines that do narrow focused tasks – and they can do those tasks often at super-human level – but it’s yet to be (maybe 50 or 100 years, or ever) before we can build machines that match the full capabilities of humans. And we certainly don’t build machines that have any consciousness, sentience, or desires of their own.”

Today’s computers are fundamentally limited in what they can do. They don’t really understand the tasks they’ve been assigned, nor the outcomes they produce; they’re just executing a pre-programmed algorithm, and that’s all they can do. All of our work on building a computer model of the human brain rests on the assumption that every process in the brain is likewise algorithmic in nature, and therefore computable. But what if it’s not? Dr. Stuart Hameroff and Sir Roger Penrose have proposed a quantum mechanical model of the human brain that’s fundamentally non-algorithmic, which would make it impossible to simulate, at least on the computers we have today.

However, that doesn’t mean we should completely ignore the possibility that artificial intelligence could become a threat. An AI doesn’t need to be conscious to be dangerous. The truth is, we can’t know for sure what the future might bring, so we need to proceed with caution. “I think the key thing is being clear-eyed about what the risks actually are, and not necessarily being driven by the entertaining yet science fiction-type narratives on these things – or projecting or going to extremes, assuming far more than where we actually are in the technology,” explains Tim Persons, the chief scientist at the Government Accountability Office (GAO). Every powerful technology brings certain risks, but that shouldn’t prevent us from working on something that could make our planet a better place to live.
