Since Hamas launched its surprise attack on Israel on October 7, the resulting conflict has spawned an unparalleled deluge of disinformation. Generative AI has been used to create an ‘algorithmically driven fog of war’ that has confounded major news outlets and left social media platforms struggling to discern fact from fiction.
However, amid the deluge of misleading images and videos circulating on social media, content created by AI tools has played a relatively peripheral role. Some observers had expected the Israel–Hamas conflict to mark the first instance of generative AI spreading false content at scale, but that has not come to pass.
How Is Generative AI Misleading Viewers?
Layla Mashkoor, an associate editor at the Atlantic Council’s Digital Forensic Research Lab, said, “There are certainly AI-generated images in circulation, but they haven’t assumed a central role in information dissemination.” AI-generated disinformation, she says, is primarily harnessed by activists to garner support for a particular side or to create the illusion of widespread support. Examples include an AI-crafted billboard in Tel Aviv championing the Israel Defense Forces, an Israeli account disseminating fabricated images of people rallying for the IDF, an Israeli influencer employing AI to fabricate condemnations of Hamas, and AI-generated images depicting victims of Israel’s bombardment of Gaza.
One key consideration is the sheer volume of misinformation already in circulation, which makes it difficult for AI-generated images to shape the narrative. As Mashkoor puts it, “The information space is already saturated with genuine images and footage.”
What Is the Perspective of Researchers?
This perspective is mirrored in a recent publication from the Harvard Kennedy School, which examines the potential role of generative AI in the global dissemination of false information. Its authors question concerns about the technology’s negative impact. Generative AI theoretically allows for the rapid proliferation of misinformation, but those who seek out such disinformation are often individuals with ‘limited trust in institutions’ or strong partisan beliefs. They already have an ample supply of familiar nonsense to pursue, ranging from conspiracy theory websites to 4chan forums; the demand for more is nonexistent.
“Given the creativity humans have displayed throughout history in fabricating false narratives, and the freedom they already possess to create and spread misinformation worldwide, it is improbable that many people are actively searching for information they cannot find offline or online,” the publication concludes. Furthermore, misinformation gains influence only when people encounter it.
As for images that may find their way into mainstream feeds, the authors note that while generative AI can theoretically produce highly personalized and realistic content, so can Photoshop or video editing software. Manipulating the date on a low-quality mobile video can be equally effective. Journalists and fact-checkers wrestle less with deepfakes than with images taken out of context or crudely altered to misrepresent reality, such as video game footage presented as a Hamas attack.
In this sense, an excessive fixation on flashy new technology often proves to be a distraction. Sacha Altay, a co-author of the publication and a postdoctoral research fellow at the University of Zurich’s Digital Democracy Lab whose current focus is misinformation, trust, and social media, adds, “Being realistic isn’t always what garners attention or achieves virality on the internet.”