By Kanza Sohail

    The recent conflict involving the US, Israel, and Iran has been characterized by a surge of propaganda, disinformation, AI-enhanced narratives, and fake videos disseminated through social media.


    The ongoing crisis in the Middle East has witnessed a spike in fake content generated by AI. Iran’s attacks on Gulf states are a response to a combined US-Israeli offensive that targeted Iran’s nuclear facilities, military assets, and leadership. Iran has launched hundreds of missiles and drones at Israel and at Gulf countries that host US military personnel. But many of the videos and images of air strikes appear to have been deliberately crafted, either with AI or by recycling older footage, to exploit the current conflict and spread panic and falsehoods about the true extent of destruction in the Gulf states, Israel, and Iran.

    AI’s Role in Disinformation Warfare

    AI represents a new approach to information warfare: it gives military forces the ability to automate responses, process data in quantities unattainable by humans, and act with a speed and efficiency that far exceed human capabilities. This efficiency has made it essential for carrying out intricate operations across all warfare domains. Through automated disinformation and psychological warfare, AI technologies can manipulate public perceptions and shape societal narratives to serve the strategic goals of specific actors. AI enables the production of automated disinformation narratives, in which machine-generated content is disseminated on social media to sway public opinion. Algorithms can generate messages that imitate human communication, producing large volumes of posts to promote a particular narrative, undermine adversaries, or spread propaganda.

    Moreover, AI-powered bots and algorithms disseminate this content broadly, targeting specific individuals or groups according to their online habits and interests. Another significant AI-related advancement is deepfake technology. Deepfakes, also known as “synthetic media,” involve manipulated digital content, such as hyperrealistic synthetic video, audio, images, or text, produced using sophisticated AI techniques that can undermine targeted decision-making processes. The main epistemic threat is that deepfakes can easily lead people to acquire false beliefs: the technology can make fabricated information indistinguishable from the real thing. This affects operations that seek to shape public opinion, social groups, political discourse, and both personal and national security.
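    To illustrate how such bot-driven amplification can be detected rather than produced, the sketch below implements one simple heuristic from coordinated-inauthentic-behavior research: flagging clusters of near-identical posts published by several distinct accounts within a short time window. The post records, field names, and thresholds are invented for the example and are not drawn from any real platform API.

```python
# Hedged sketch: flag likely coordinated amplification when many distinct
# accounts post near-identical text within a short window. All data below
# is hypothetical example data.
from collections import defaultdict

def normalize(text):
    """Lowercase and drop punctuation so trivial edits still match."""
    return ''.join(ch for ch in text.lower() if ch.isalnum() or ch == ' ').strip()

def coordinated_clusters(posts, window_seconds=300, min_accounts=3):
    """Group posts by normalized text; return texts posted by at least
    `min_accounts` distinct accounts within `window_seconds`."""
    groups = defaultdict(list)
    for p in posts:
        groups[normalize(p['text'])].append(p)
    flagged = []
    for text, group in groups.items():
        accounts = {p['account'] for p in group}
        times = sorted(p['time'] for p in group)
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window_seconds:
            flagged.append(text)
    return flagged

posts = [
    {'account': 'a1', 'time': 0,    'text': 'Base destroyed!! Total chaos'},
    {'account': 'a2', 'time': 60,   'text': 'base destroyed total chaos'},
    {'account': 'a3', 'time': 120,  'text': 'Base destroyed, total chaos.'},
    {'account': 'a4', 'time': 9000, 'text': 'Unrelated weather update'},
]
print(coordinated_clusters(posts))  # ['base destroyed total chaos']
```

Real detection pipelines combine many such signals (posting cadence, account age, shared media fingerprints); text clustering alone is only a first-pass filter.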

    Brief Overview of the Iran War

    The Iran war stemmed from Operation Epic Fury, a joint US-Israeli offensive that began on February 28, 2026 and targeted Iran’s nuclear sites, military infrastructure, and leadership, killing Supreme Leader Ayatollah Ali Khamenei. Iran retaliated by firing dozens of missiles and drones at Israel. Gulf states that host US military facilities, including the UAE, Kuwait, Qatar, Saudi Arabia, Iraq, and Bahrain, are also being targeted by Iran because of their US air and naval bases. Since the attacks began, over 1,230 people had been killed in Iran as of March 6. Ali Larijani, Secretary of Iran’s Supreme National Security Council, warned that Iranian forces are “waiting” for a potential US ground invasion and threatened to kill and capture thousands of US soldiers.

    AI and Disinformation in the US-Israel Conflict with Iran

    AI deepfakes, video game footage misrepresented as actual warfare, and chatbot-generated falsehoods are distorting the US-Israel-Iran conflict, escalating a narrative war on social media. The information warfare taking place alongside the ground battle, precipitated by joint US-Israeli attacks on Iran’s facilities and leaders, highlights a digital crisis in an era when rapidly developing AI tools have blurred the line between fact and fabrication. Because major internet platforms have weakened safeguards by reducing their reliance on human fact-checkers and scaling back content moderation, the rise in wartime misinformation has revealed an urgent need for stronger detection technologies, according to experts. BBC Verify has seen manipulated images of explosions and fake videos being shared online.

    The information war has intensified, flooding the internet with misleading content: AI-manipulated videos, deepfakes, and recycled footage. Since the first reports of military action surfaced, dramatic videos and images claiming to show explosions, missile strikes, collapsing buildings, and emergency responses have circulated widely online. Many of these posts garnered millions of views before verification. The fake videos in this ongoing conflict received a great deal of attention on X, Facebook, and Instagram. As a result, X has changed its misinformation policies in response to the growing number of viral posts, and the likelihood that more would appear, as users monetized the viral lies. On March 4, X stated that users who upload AI-generated content about armed conflict without identifying it as such will be suspended from its Creator Revenue Sharing program.

    AI-Fabricated Military Air Strikes

    During the ongoing US-Israel war with Iran, generative AI has played a significant role in the spread of disinformation across social media platforms. Numerous AI-generated images and videos have circulated online claiming to show missile and drone attacks destroying cities, bases, and ports in Israel, Iran, and the Gulf states, even when no such attacks occurred. Some of these fabricated visuals depict massive explosions, destroyed infrastructure, and downed military aircraft, while others falsely portray the dead bodies of leaders to create the impression that key figures were killed in airstrikes. Notable examples of AI-fabricated attacks and prominent false claims examined and debunked by credible sources include:

    • US Military Base in Iraq

    The BBC verified that AI was used to create a deepfake image purporting to depict a massive explosion at a US military base in Iraq. The fake appears to be based on a real image, initially posted online, showing a cloud of smoke billowing above Irbil’s international airport. A US installation close to the airport, in the capital of Iraq’s semi-autonomous Kurdistan region, has been the target of drones. Using Google’s SynthID watermark detection, the BBC examined the image and found that it had been created or altered with Google AI. The fake image also contains visual inconsistencies, such as an artificial-looking fireball and oddly shaped structures on the right side that do not appear in recent photos of the site.

    • US Radar in Qatar

    A deepfake image claiming to depict damage to a US naval base in Qatar as seen from space is an AI fake based on an old satellite image of a different location, according to the BBC. The photograph, which was widely circulated online and posted on X by Iran’s state-affiliated Tehran Times, claims to show destroyed US radar equipment in Qatar. The fake appears to be based on real satellite imagery of a US base in Bahrain from February 2025, which is publicly available on Google Maps and Google Earth. The before-and-after image was generated or edited with Google AI, according to Google’s SynthID watermark detector. Although the pictures were supposedly taken a year apart, the roof of the building at the bottom of the so-called “after” photo differs from the real satellite photo, while three vehicles appear parked in exactly the same spots in both.

    • Burj Khalifa on Fire

    An AI-generated video posted by a Facebook user, claiming to show an Iranian strike on the Burj Khalifa in Dubai, received 2,000 reactions, 215 comments, and 679 shares. The false clip showed the Burj Khalifa collapsing, people panicking, and military trucks being attacked by missiles. In reality, the Burj Khalifa has not collapsed, though it was evacuated as a precaution. Although the skyscraper was not struck, smoke could be seen in the vicinity because of interceptions taking place in the downtown area. Falling debris from these interceptions caused minor damage in other parts of the city, including a small fire on the Burj Al Arab Hotel’s façade and rubble landing in residential areas. The footage, which misrepresents the ongoing conflict and tensions between Iran and the UAE, was rated 99.8% likely AI-generated by the Hive Moderation AI detector.

    • US Naval Base in Bahrain 

    A fake clip posted by a user on X purported to show the real-life moment Iran destroyed the US Navy fleet in Bahrain on March 4. It was presented as if filmed by US soldiers themselves while fleeing the area shouting “OMG!” The video displayed a watermark bearing the name of another account, @paralelverse_net, which had released the identical video two days earlier, on March 2. That account had acknowledged the content was AI-generated: at the end of the video’s description, the post stated, “This video is created with AI and is intended for entertainment purposes only.”

    • Attack on Iranian Bases

    A post on X shared a compilation of four clips purporting to show the moments when several Iranian military bases were attacked. The video was originally shared on social media in December 2025, when it was described as showing Iran’s 12-day war with Israel in June. Full Fact reported in December that three of the four clips showed evidence of having been made with AI, including distorted body parts and door frames, unrealistic background displays, and strange reactions from people and objects in the room to the explosions. Only one of the clips, which appeared on Iranian state TV after an Israeli strike in June 2025, was authentic, according to Full Fact. PolitiFact found that clip on YouTube as well.

    • Image of Khamenei’s Dead Body Under Rubble

    A deepfake image showing emergency workers finding the dead body of Iran’s Supreme Leader Ayatollah Ali Khamenei was uploaded online as though it were an authentic photograph. Reuters reported that it was made with AI, according to Google’s AI detection tool. A pro-Trump account with a blue check mark also posted images claiming to show before-and-after pictures of Khamenei’s palace, which was targeted during missile attacks. The “before” image actually shows the Mausoleum of Ruhollah Khomeini, on the opposite side of Tehran, while the “after” image appears to show the palace following the attack. The post has been viewed 365,000 times.

    Use of Recycled or Unrelated Footage

    In addition to purely AI-generated material, many campaigns have used older, unrelated, or, in some cases, real footage of past conflicts, misrepresenting it as recent events from the ongoing war. For example, footage from previous conflicts in the Middle East, or of explosions from unrelated incidents, has circulated on social media with misleading captions claiming to show strikes on Israel, US military bases and headquarters in the Gulf states, and the sinking of US warships. This recycled material often appears convincing because it contains real scenes of destruction and chaos, which makes it difficult for ordinary users to verify. As a result, such misleading content fuels confusion about the actual situation on the ground while spreading fear among local populations. The following are prominent examples of older or unrelated footage manipulated in the recent US-Israel-Iran conflict:
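    One reason recycled footage is traceable at all is that fact-checkers can match frames against archives of known older videos using perceptual hashing, which survives re-encoding and minor edits. Below is a minimal pure-Python sketch of the “average hash” technique; the tiny pixel grids are stand-ins for resized grayscale video frames, and real pipelines would use libraries such as imagehash rather than this hand-rolled version.

```python
# Minimal "average hash" sketch: hash a frame by whether each pixel is
# above the frame's mean brightness, then compare hashes by Hamming
# distance. A small distance suggests the clips share the same frame.

def average_hash(frame):
    """Return a bit string: '1' where a pixel exceeds the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A frame from an archived older clip, and a re-encoded copy circulating
# today: re-encoding shifts pixel values slightly, not the bright/dark pattern.
archived = [[10, 20, 200, 210],
            [12, 18, 198, 205],
            [11, 22, 201, 209],
            [13, 19, 197, 211]]
recirculated = [[14, 24, 196, 206],
                [10, 21, 202, 200],
                [15, 25, 205, 212],
                [12, 17, 195, 208]]

d = hamming_distance(average_hash(archived), average_hash(recirculated))
print(d)  # 0: identical hashes despite pixel-level noise
```

Reverse-image-search services apply the same idea at scale, which is how debunkers can trace a viral “strike” video back to, say, a 2015 explosion filmed elsewhere.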

    • Iranian Attack on Haifa

    According to Reuters, a 2025 video of Israeli jets bombing Syria’s Defense Ministry has been falsely described online as Iran hitting Haifa, a port city in Israel, in retaliation for US and Israeli airstrikes. Satellite imagery of Umayyad Square in Syria’s capital, Damascus, matches the landscape and architectural features visible in the footage. Reuters also covered the original incident in 2025, when Israel promised to protect Syria’s Druze from a violent outburst that pitted the community against Bedouin tribesmen and Syrian government forces.

    • Iranian Strike on Tel Aviv

    On March 2, a 2015 video of huge explosions in the Chinese port city of Tianjin was shared on Instagram as visuals of Iran’s attack on Tel Aviv in Israel. In the video, a large flame and two explosions illuminate a nighttime city skyline. The footage, shot on August 12, 2015, actually depicts explosions in Tianjin’s industrial area. On August 14, 2015, the BBC posted the same video on its website and YouTube as part of its coverage of that tragedy. The clip had previously been misrepresented as showing an Iranian attack on Haifa during aerial combat between the two nations in June 2025 and an Iranian attack on the headquarters of Israel’s Mossad intelligence agency in Tel Aviv in October 2024.

    • Attack on US Air Base in Saudi Arabia

    A 2024 video of Israel’s attack on a Yemeni port was inaccurately linked to Iran’s attack on a US air base in Riyadh, Saudi Arabia, in a post published by the X user “@Iraq_staff” on February 28. The clip was shared with a caption alleging that “Saudi Arabia is burning” and that Iraqi drones were participating in strikes on American bases in the Gulf. The post garnered 4.6 million views. But an investigation by the PTI Fact Check Desk revealed that the video was taken on July 20, 2024, at the Hudaydah port in Yemen. The same video was also posted on AIC TV’s official YouTube page.

    • Sinking of USS Abraham Lincoln

    Fabricated photos emerged purporting to show the USS Abraham Lincoln sinking or damaged following an Iranian ballistic missile attack. US Central Command confirmed in an X post that the warship, one of two aircraft carriers the US military has deployed to the region, “was not hit” and that “the missiles didn’t even come close.” Numerous photos purporting to depict the aftermath date back years. For instance, a picture of a ship plunging into the sea while a helicopter hovers overhead has circulated online since at least 2021, and a June 2025 Facebook post featured a video of a ship engulfed in fire and billowing smoke.

    • Attack on Israel’s Nuclear Power Plant

    NewsGuard, a company that tracks fake narratives on the internet, discovered accounts spreading a video of a massive explosion and cloud of smoke, attributing it to an Iranian strike on a nuclear site in southern Israel. In reality, the clip is from a 2017 fire at an ammunition depot in Ukraine’s Kharkiv region, close to the Russian border. On March 23, 2017, it was uploaded to YouTube with a Russian-language description of the fire. Identifiable landmarks, such as a large tower, can be seen in both the original footage and the recirculated clip.

    • Attack on CIA Headquarters in Dubai

    According to the Reuters fact-check team, a 2015 residential building fire in Sharjah was misrepresented as showing damage to the US Central Intelligence Agency (CIA) headquarters in Dubai. Social media users shared a video of smoke rising from a burning high-rise that had nothing to do with the ongoing conflict, with the caption: “Footage of CIA headquarters in Dubai targeted this morning by Iran, has emerged. UAE is arresting those releasing the footage. But they can’t hide this.”

    Manipulated Video Game Footage

    As in previous military conflicts, some accounts have also tried to pass off video game footage as real news clips. Reuters reports that a video of a US fighter jet dodging an Iranian missile has been mislabeled and falsely presented online as aerial combat between the US and Iran in the ongoing conflict. The video can be traced back to a December 24, 2025 Instagram post by an account with the handle “CreativeComparison,” which shares flight-simulation videos. The text written on the clip shared on Facebook on March 1 read: “When you are trained by the best! US fighter jet pilot escapes Iranian missiles.”

    Deepfake’s Impact

    One of the most dangerous effects of the rise of generative AI deepfakes is the so-called liar’s dividend, a term coined by legal scholars Danielle Citron and Robert Chesney. The concept describes the advantage dishonest actors gain by exploiting public awareness of deepfakes to deny the veracity of real evidence. In practical terms, anyone confronted with incriminating footage can simply say, “It is a deepfake,” even when the evidence is completely real, because audiences know that audio, video, and photos can be convincingly manufactured.

    In the context of the US-Israel-Iran conflict, this erodes public confidence and makes it harder to establish responsibility, since genuine proof of civilian casualties or military blunders may be dismissed as fake. Public opinion may not change even after forensic analysis exposes an AI-generated fake; the mere introduction of doubt can neutralize the impact of solid evidence. This dynamic will only become more troublesome as generative AI technologies grow more advanced and widely available, creating a deeply cynical information environment in which truth itself becomes disputed and elusive.

     Conclusion

    In conclusion, recent conflicts unfold not just in the physical domain but also in the digital realm, where narratives can be massively distorted by AI-generated misinformation, synthetic media, and automated bot amplification. In the US-Israel-Iran war, digital ecosystems are overflowing with conflicting claims, some of them machine-generated, making it difficult to distinguish truth from fabrication. Cognition itself is now part of the battleground. Deepfakes are wielded as weapons to mold perceptions, obscure facts, and produce epistemic ambiguity, and the danger of the liar’s dividend was demonstrated during this conflict. Even more advanced generative AI disinformation is likely to be deployed in future conflicts, which could seriously jeopardize escalation management and public trust.

    Author: Kanza Sohail – MPhil scholar in Political Science at Kinnaird College for Women University, Lahore, Pakistan. Her research interests include global environmental politics, regional geopolitics, maritime politics and resource wars, with a particular focus on the Asia-Pacific region, Arctic geopolitics and the evolving dynamics of US-China rivalry.

    (The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of World Geostrategic Insights).
