Epic Fury: The online battle as AI disinformation spreads across the internet
Could you tell the difference between real and AI footage?
We are used to seeing dramatic content from conflict zones online, but now there is a new challenge: some of that footage isn't real.
AI-generated images and videos about the current Middle East conflict are spreading widely on social media.
Why are disinformation videos created?
BFBS Forces News spoke to Professor Peter Lee, Professor of Applied Ethics at the University of Portsmouth, about the dissemination of AI-generated disinformation amidst the ongoing war.
"There's a couple of really important reasons for being better at propaganda and misinformation than your opponent," Prof Lee told BFBS Forces News.
"So, for example, if you want to confuse your enemy into thinking that they're not doing as well as they are doing, and you want to be pumping out information that says you know you're really you're not hitting as many things as you like.
"So that's what Iran is saying. The United States is publishing more missile and bomb strikes than I have ever seen."

One example (above): an image of the US aircraft carrier USS Abraham Lincoln on fire, shared by the account IranMilitaryIR_page, which even has a verification badge.
Many people would not question whether it is real, and that is exactly how misinformation grows.
Who is making this content?

Prof Lee said that material which looks professionally made is probably created by government-backed entities, and that the major producers of this content will be the two main adversaries in the conflict: the US and Iran.
He also said that, given that the United States is home to many of the big technology companies, including Meta (Facebook), Apple, Amazon, Netflix, Google (Alphabet) and Elon Musk's X, Washington will want to use its social media muscle in the conflict.
Speaking about states producing their own AI-generated content, Prof Lee explained: "There will be [a] large production of news stories through social media and the US Department of [War] is blending original news footage with older footage and some that has been AI-generated as a matter of policy, but at least that's very obvious and open.
"Then there'll be China and Russia, who have an interest in this war and who will want to disadvantage the United States.
"Russia is famous for its bot farms, so it will literally farm them out to other countries, [meaning] they will not be directly traceable back to Russia."
The truth is, fake content is not that hard to make, like this AI-generated image of the Burj Khalifa burning (below).

AI tools can generate convincing images and increasingly realistic videos.
There are even cases of gameplay footage being passed off as real combat, such as a video widely shared on social media that gained more than one million likes on Instagram.
However, it was actually footage from the PC game War Thunder or a similar title.
Why should we be interested in the sharing of AI disinformation?

Is it just a lot of social media noise, or does it have serious implications for democracy?
Prof Lee said that AI-generated posts and information could be utilised to persuade people to support an unpopular action taken by a government or to suggest that the state is doing better in the conflict than it is in reality.
"I think ethically it's a grey area because it is state sanction[ed] dishonesty," Prof Lee cautioned.
"So, on one hand, people don't expect politicians to be completely honest, but we don't expect politicians to blatantly lie."