- Terrorists may soon have a new weapon in their arsenal: generative AI
- The role of generative AI in the Israel-Hamas conflict
- How to counter the emerging threat
- Are fears about the effects of generative AI justified?
Generative AI has been widely praised for its ability to streamline a wide range of tasks and make our lives — and our jobs — much easier. However, as is often the case with transformative technologies, beneath the shiny surface lies a slew of hidden dangers that threaten to dampen the enthusiasm surrounding generative AI. These dangers lie not in the technology itself, though, but in its potential for misuse. As generative AI tools have become more widely available, they have inevitably caught the attention of individuals and organisations with malicious intent, who have been exploring ways to use them for a wide range of nefarious purposes, including cyberstalking, fraud, impersonation, and the dissemination of false information. Perhaps the most concerning development is the growing adoption of generative AI by extremist groups, which have recognised that its ability to produce vast amounts of text, image, audio, and video content could allow them to flood digital platforms with an endless stream of propaganda.
“Hundreds of millions of people across the world could soon be chatting to these artificial companions for hours at a time, in all the languages of the world”.
Jonathan Hall, independent reviewer of terrorism legislation in the United Kingdom.
Terrorists may soon have a new weapon in their arsenal: generative AI
Terrorists are no strangers to cutting-edge technology. Over the years, they have repeatedly demonstrated a willingness to adopt technological innovations to bolster their capabilities, enabling them to launch devastating attacks on civil infrastructure, propagate their harmful ideologies, and attract new members. Cryptocurrencies, encryption, chatbots, AI-powered drones — these are just some of the technologies terrorists have employed in recent years, and it’s only a matter of time before the same happens with generative AI. Industry experts are increasingly voicing concerns that generative AI models like ChatGPT could be manipulated by extremist groups to spread terror messages and propaganda on a previously unimaginable scale. As generative AI grows more sophisticated and capable of producing ever more convincing output, detecting and combating such content will become progressively harder, putting vulnerable people at even greater risk. “Hundreds of millions of people across the world could soon be chatting to these artificial companions for hours at a time, in all the languages of the world”, says Jonathan Hall, independent reviewer of terrorism legislation in the United Kingdom.
The role of generative AI in the Israel-Hamas conflict
While experts warn about the potential dangers associated with the use of generative AI, the technology has already been spotted at the heart of a real-life conflict. The ongoing war between Israel and Hamas has seen a staggering amount of misinformation and disinformation posted online, ranging from misleading content to manipulated photos and videos taken from other, unrelated events across the globe. There have even been attempts to pass off video game footage as evidence of atrocities committed by the opposing side. Both sides are also increasingly leveraging the power of generative AI, either to solicit support for their cause or to create the illusion of such support. For example, one Israeli account frequently shares AI-generated images of crowds cheering for the Israel Defense Forces (IDF). Similarly, researchers have identified numerous fake images that purportedly show innocent victims of Israel’s indiscriminate attacks on Gaza. This not only distorts the reality of the situation but also manipulates public perception and sentiment.
Some of the AI-generated images of the Israel-Hamas war have even found their way into online articles published by smaller news outlets, often without any indication that the images are fake. The images in question come from Adobe Stock, an online marketplace that allows people to sell images produced with generative AI tools. In Adobe’s defence, the website clearly labels these images as AI-generated, but nothing stops those who obtain them from presenting them as real. To combat this issue, Adobe has joined forces with several other tech and media organisations, including Microsoft, the BBC, and the New York Times, to launch the Content Authenticity Initiative, which promotes the adoption of Content Credentials, a new kind of metadata that reveals details about an image’s origins. “Content Credentials allows people to see vital context about how a piece of digital content was captured, created, or edited, including whether AI tools were used in the creation or editing of the digital content”, said an Adobe spokesperson. Solutions like these will become increasingly important as AI-generated content grows more realistic and harder to identify.
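To illustrate the idea behind provenance metadata, here is a minimal Python sketch of the core check such systems perform: comparing an image’s current bytes against the content hash recorded in its manifest. The manifest format and file names here are simplified assumptions for illustration only; real Content Credentials follow the C2PA standard and also carry a cryptographic signature that must be validated.

```python
import hashlib
import json

def verify_provenance(image_path: str, manifest_path: str) -> bool:
    """Check that an image's bytes still match the hash recorded in its
    provenance manifest (a simplified, hypothetical JSON format)."""
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    with open(manifest_path) as f:
        # e.g. {"sha256": "...", "tool": "generative-ai", "edits": [...]}
        manifest = json.load(f)

    # A real Content Credentials check also validates the manifest's
    # cryptographic signature; here we only compare the content hash.
    return digest == manifest.get("sha256")

if __name__ == "__main__":
    if verify_provenance("photo.jpg", "photo.manifest.json"):
        print("Image matches its recorded provenance data")
    else:
        print("Image was altered, or carries no valid provenance record")
```

If the image is re-edited or re-encoded after the manifest was created, the hashes no longer match and the provenance claim fails, which is precisely the signal that tells a viewer the “vital context” has been broken.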
“The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences”.
Brad Smith, vice chair and president at Microsoft.
How to counter the emerging threat
Until recently, hashing databases represented a very potent weapon in the fight against online extremism. Essentially collections of digital fingerprints (‘hashes’) of previously flagged violent and extremist content, hashing databases enable online platforms to automatically identify when another user shares the same content and take it offline. However, the recent proliferation of AI-generated content threatens to limit or completely negate their usefulness. A recent report by Tech Against Terrorism uncovered over 5,000 instances of AI-generated content circulating among terrorist and extremist groups. These include a guide by a pro-Islamic State tech group instructing IS supporters on how to use ChatGPT without compromising operational and personal security, a channel on a popular messaging app dedicated to sharing racist, antisemitic, and pro-Nazi imagery, and Al-Qaeda propaganda posters incorporating AI-generated images. “Our biggest concern is that if terrorists start using gen AI to manipulate imagery at scale, this could well destroy hash-sharing as a solution. This is a massive risk”, says Adam Hadley, the executive director of Tech Against Terrorism.
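The fragility Hadley describes is easy to demonstrate. The minimal Python sketch below uses an exact cryptographic hash for the database lookup; the ‘database’ contents are a stand-in for illustration. Production systems typically use perceptual hashes such as PhotoDNA or PDQ, which tolerate small edits, but generative AI can cheaply produce endless visually distinct variants that defeat fingerprint matching of either kind.

```python
import hashlib

# A stand-in for a shared hash database: it stores fingerprints of
# previously flagged content, never the content itself.
flagged_hashes = {
    hashlib.sha256(b"known extremist propaganda file").hexdigest(),
}

def is_flagged(content: bytes) -> bool:
    """Exact-match lookup against the shared hash database."""
    return hashlib.sha256(content).hexdigest() in flagged_hashes

original = b"known extremist propaganda file"
variant = original + b"."  # a single added byte

print(is_flagged(original))  # True  -> caught by the database
print(is_flagged(variant))   # False -> a trivial change evades an exact hash
```

One changed byte produces a completely different fingerprint, so a group that can regenerate its imagery at scale never needs to share the same file twice, which is why hash-sharing alone cannot keep up.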
Thankfully, generative AI could offer a solution to the very problem it has helped create. Tech Against Terrorism recently joined forces with tech giant Microsoft to develop an AI-powered tool that can automatically detect problematic content. “The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences”, says Brad Smith, vice chair and president at Microsoft. “By combining Tech Against Terrorism’s capabilities with AI, we hope to help create a safer world both online and off”. The tool will be particularly useful for smaller platforms, which usually lack the resources and expertise that allow big tech players to handle the flood of AI-generated extremist content on their networks. In fact, it could be argued that the success of large social media platforms in battling extremist content has directly, if inadvertently, contributed to the increase of such content on smaller platforms, as it forced terrorists to abandon the likes of Facebook and X in search of more fertile ground. “AI systems, designed and deployed with rigorous safeguards for reliability and trustworthiness, could power a leap forward in detecting harmful content — including terrorist content created by generative AI — in a nuanced, globally scalable way, enabling more effective human review of such content”, adds Hadley.
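Neither organisation has published the internals of their tool, so the sketch below only illustrates the general approach such classifiers take: score each piece of content and route borderline cases to the human reviewers Hadley mentions. The toy training data and labels are assumptions for illustration; a production system would be trained on a far larger, expert-curated corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for a labelled moderation dataset.
texts = [
    "join our cause and take up arms",       # flagged
    "martyrdom propaganda and recruitment",  # flagged
    "weekend photos from the local market",  # benign
    "recipe for lentil soup with spices",    # benign
]
labels = [1, 1, 0, 0]  # 1 = extremist, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new content; items near the decision boundary would be
# escalated for human review rather than auto-removed.
for post in ["take up arms for our cause", "soup recipe ideas"]:
    score = model.predict_proba([post])[0][1]
    print(f"extremist probability {score:.2f}: {post}")
```

The design point is that the model prioritises, rather than replaces, human judgment: high-confidence matches can be actioned quickly, while ambiguous content gets the nuanced review Hadley argues for.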
Tech Against Terrorism also formed a similar partnership with Google’s subsidiary Jigsaw to develop the free tool ‘Altitude’, which is designed specifically to help smaller platforms spot and remove terrorist content from their networks. “Islamic State and other terrorist groups didn’t give up on the internet just because they no longer had the megaphone of their social media platforms. They went elsewhere”, says Yasmin Green, the CEO of Jigsaw. “They found this opportunity to host content on file-hosting sites or other websites, small and medium platforms. Those platforms were not welcoming terrorist content, but they still were hosting it — and actually, quite a lot of it”. Altitude enables companies to compare any piece of content found on their networks against the terror-tracking NGO’s Terrorist Content Analytics Platform, improving their ability to detect terrorist material. Beyond providing context about the terrorist groups linked to a particular piece of content, the tool also surfaces examples of similar content, information on how other platforms have handled it, and even the local or regional laws that apply to it. “We are not here to tell platforms what to do but rather to furnish them with all the information that they need to make the moderation decision”, says Hadley. “We want to improve the quality of response. This isn’t about the volume of material removed but ensuring that the very worst material is removed in a way that is supporting the rule of law”.
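The Altitude integration itself is not publicly documented, so the following Python sketch only illustrates the workflow the article describes, using an entirely hypothetical endpoint and response shape: fingerprint a piece of content, look it up, and return the context a moderator would need.

```python
import hashlib
import requests

# Hypothetical lookup endpoint; the real Altitude/TCAP integration
# is not publicly documented.
TCAP_LOOKUP_URL = "https://tcap.example.org/api/lookup"

def check_content(content: bytes) -> None:
    """Look up a content fingerprint and print moderation context."""
    fingerprint = hashlib.sha256(content).hexdigest()
    resp = requests.post(TCAP_LOOKUP_URL, json={"hash": fingerprint}, timeout=10)
    resp.raise_for_status()
    match = resp.json()  # assumed response fields below

    if match.get("found"):
        print("Linked group:    ", match["group"])
        print("Similar content: ", match["similar_content"])
        print("Peer decisions:  ", match["platform_actions"])
        print("Applicable laws: ", match["jurisdictions"])
    else:
        print("No match; the moderation decision stays with the platform")
```

Note how the tool’s output is context rather than a verdict, matching Hadley’s point that the goal is to inform the platform’s own moderation decision, not to make it.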
Are fears about the effects of generative AI justified?
Still, there are those who argue that concerns about generative AI’s impact have been blown out of proportion. One of them is Layla Mashkoor, an associate editor at the Atlantic Council’s Digital Forensic Research Lab, who believes that AI-generated images are unlikely to have much of an effect on public perception, given the sheer amount of misinformation and disinformation already out there. A recent paper published in the Harvard Kennedy School’s Misinformation Review echoes this view, arguing that conspiracy theory websites and forums like 4chan already supply so much content of this kind that there is simply no need, nor demand, for a new source. “Given the creativity humans have showcased throughout history to make up (false) stories and the freedom that humans already have to create and spread misinformation across the world, it is unlikely that a large part of the population is looking for misinformation they cannot find online or offline”, reads the paper. While the authors acknowledge generative AI’s ability to produce highly realistic content, they argue that Photoshop or video editing software can achieve similar, if not more convincing, results. “There’s a lot of ways to manipulate the conversation or manipulate the online information space”, Mashkoor concurs. “And there are things that are sometimes a lower lift or easier to do that might not require access to a specific technology, even though AI-generating software is easy to access at the moment, there are definitely easier ways to manipulate something if you’re looking for it”.
Closing thoughts
It’s clear that generative AI tools like ChatGPT – while undeniably beneficial – also entail certain risks, particularly in the hands of extremist groups. The spectrum of potential threats is vast, spanning from the widespread distribution of propaganda to the subtle manipulation of public sentiment. With AI tools generating increasingly realistic content, we face novel challenges in detecting and countering such deceptive material. Yet, paradoxically, these same technologies may offer solutions to the problems they helped create. Projects like Tech Against Terrorism’s collaborations with Microsoft and Google’s Jigsaw exemplify AI’s potential to combat the spread of extremist content. However, the debate rages on, with some experts arguing that fears surrounding generative AI’s impact are exaggerated, pointing to the already rampant spread of misinformation.
Regardless of where the truth lies, it’s indisputable that we face a significant challenge in this digital age: one that necessitates not only advanced technological solutions but also a well-informed, critical, and discerning public. It becomes increasingly crucial to question, verify, and understand the narratives we encounter. We must recognise the profound influence of AI on our information ecosystem and strive to promote media literacy, critical thinking, and an unwavering commitment to truth. This, perhaps, is the most potent weapon we hold against the misuse of generative AI: harnessing the power of our collective wisdom to discern fact from fabrication and to stand firm against the spread of digitally enabled extremism.