On the Monday, March 30, 2026, episode of The Excerpt podcast: The US-led war in Iran is the first global conflict where AI is playing a major role, both on the literal battlefield and on social media, where the battle for hearts and minds is playing out. Are we entering a dangerous new evolution of warfare with AI? Aalok Mehta, director of the Wadhwani AI Center at CSIS, and Mahsa Alimardani, associate director at WITNESS, join The Excerpt to share their insights.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
One of the earliest headlines in the US-led war in Iran involved the bombing of a girls' primary school in Minab. Between 175 and 180 people were killed in the attack, most of them young girls. Meanwhile, adjacent to the school was a military compound of the Iranian Revolutionary Guard. Was AI to blame?
Hello, and welcome to USA TODAY's The Excerpt. I'm Dana Taylor. Today is Monday, March 30th, 2026. The US-led war in Iran is the first global conflict where AI is playing a major role, both on the literal battlefield and on social media, where the battle for hearts and minds is playing out. Are we entering a dangerous new evolution of warfare with AI? We're going to dig into all of it with two experts today. Joining me to discuss the nascent use of AI on the battlefields of Operation Epic Fury is Aalok Mehta, director of the Wadhwani AI Center at CSIS, the Center for Strategic and International Studies.
Thanks so much for joining me on The Excerpt, Aalok.
Aalok Mehta:
Thanks for having me. I'm happy to be here.
Dana Taylor:
Start us out, if you would, with a 30,000-foot view of how AI is being deployed in the US-led war in Iran. What makes this conflict so different with regards to AI?
Aalok Mehta:
I think over time we've seen an evolution in the use of AI and autonomy on the battlefield. If you look at what is happening between Ukraine and Russia, we've seen lots of pioneering uses of AI in that war. And now, in Iran, we're seeing more of an evolution. In terms of what the US is capable of, the best examples we have are the previews we've seen from Palantir of how the Maven Smart System might work. We're seeing that they've incorporated new generative AI technology into that system, and so the operations we're seeing in Iran by the US military are incorporating generative AI tools, the first instance of that happening for the US military in an actual hot battlefield situation.
Dana Taylor:
There's an old term that I want to bring in here, and that's the kill chain: a chain of events that starts with identifying a target and ends with an attack. At what strategic point in the kill chain is AI being used?
Aalok Mehta:
AI is primarily being used, to the best of our understanding, as a tool that helps with the integration of various types of information streams. You can think of AI tools as bringing together and synthesizing lots of data from things like satellite imagery and troop telemetry into an interface where operators are able to query the system. It helps with things like conducting intelligence operations, finding gaps in intelligence, finding operational strategies to remedy those gaps, brainstorming operational plans, and coming up with strategic options for dealing with various battlefield situations.
You are then able to use AI to task drones, with humans involved in making decisions about where those attacks will happen and what kinds of targets are being struck. But right now, most of the use of AI is really in helping people in the military manage the enormous amount of information that's coming across their desks and letting them interface with that information in a more natural way.
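To make the decision-support idea concrete, here is a minimal, purely illustrative sketch of what fusing heterogeneous feeds into one queryable store can look like. Every feed name, field, and record below is invented for illustration and has no connection to any real military system, including Maven.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str      # which feed produced this, e.g., "satellite" or "telemetry"
    location: str    # a shared location key so feeds can be joined
    summary: str     # human-readable description of the observation
    timestamp: str   # ISO 8601, so reports sort chronologically

# Invented sample records standing in for fused intelligence feeds.
reports = [
    Report("satellite", "GRID-18S", "vehicle cluster near depot", "2026-03-29T06:10Z"),
    Report("telemetry", "GRID-18S", "friendly unit 4 km northeast", "2026-03-29T06:12Z"),
    Report("satellite", "GRID-22N", "no activity observed", "2026-03-28T22:40Z"),
]

def query(location: str) -> list[Report]:
    """Return every fused report touching one location, newest first,
    so an operator sees all sources side by side."""
    hits = [r for r in reports if r.location == location]
    return sorted(hits, key=lambda r: r.timestamp, reverse=True)

for r in query("GRID-18S"):
    print(f"[{r.timestamp}] {r.source}: {r.summary}")
```

The point of the sketch is the shape of the system: many feeds normalized into one schema that a human can interrogate, with the human, not the store, deciding what to do next.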
Dana Taylor:
Look, what guardrails are there for ensuring that a target is a legitimate military one and not, say, a girls' school, as happened on day one of the conflict?
Aalok Mehta:
The US military has a directive that provides guidance on how it's able to use autonomy in military systems. This directive, DoD Directive 3000.09, lays out the ways in which it is and is not appropriate for the military to use autonomous systems. The key text here is that it requires an appropriate level of human judgment in decisions, especially decisions that have high consequences, like the use of lethal force on the battlefield.
What is happening is that the military almost certainly has human oversight over the selection of targets and the actual execution of military strikes. To the best of our understanding, this particular situation with the school was an issue not with the use of AI in this instance, but with the underlying data. Essentially, the latest intelligence we have is that there were errors in the database: the site had previously been a military installation, but its use had changed and it had been turned into a school. Our systems, our various data feeds, had not fully incorporated that. That persistent error in the data continued to make its way through the system, and it ultimately led to the circumstance in which the school was targeted.
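The failure mode described here, a stale facility record propagating through downstream systems, can be illustrated in miniature. The sketch below is a hypothetical data-currency check; the field names, records, and 180-day threshold are all invented, not drawn from any real targeting database.

```python
from datetime import datetime, timezone

# Invented records; "verified" is when the classification was last confirmed.
facilities = {
    "site-041": {"category": "military installation", "verified": "2023-05-01"},
    "site-112": {"category": "barracks", "verified": "2026-03-01"},
}

MAX_AGE_DAYS = 180  # hypothetical freshness threshold for high-consequence use

def requires_reverification(site_id: str, now: datetime) -> bool:
    """Flag any record whose classification has not been re-confirmed
    recently enough to support a high-consequence decision."""
    verified = datetime.fromisoformat(
        facilities[site_id]["verified"]
    ).replace(tzinfo=timezone.utc)
    return (now - verified).days > MAX_AGE_DAYS

now = datetime(2026, 3, 30, tzinfo=timezone.utc)
for site_id in facilities:
    if requires_reverification(site_id, now):
        print(f"{site_id}: classification stale, re-verify before use")
```

A check like this does not fix wrong data, but it forces a human to revisit old classifications before they feed a lethal decision, which is exactly the gap described above.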
Dana Taylor:
Developers of AI technology, including the makers of Claude and Maven, two of the most widely used AI tools, have voiced their concerns about the use of AI in warfare, specifically with regards to autonomous weapon systems versus decision support systems. Can you talk me through that distinction and how it plays out on the battlefield?
Aalok Mehta:
Yeah. The distinction here is the difference between incorporating a bunch of information, selecting a target, and then telling a drone to attack that target. That would involve low-level autonomy on the drone. You can tell it, "I want you to go here. I want you to drop your munitions in this location." The drone will then use low-level autonomy to make sure that, as it's flying, it's able to navigate to the location, navigate around obstacles, and make low-level decisions to continue on its flight path. That is a big difference from the type of autonomy in which you provide much more high-level or general guidance to a drone, say, "I want you to attack a strategic target," or, "Attack enemy troop formations." The drone then flies away, uses its own sensors, looks at the battlefield, makes decisions about what it thinks is an enemy formation, and then engages that formation without further human intervention.
Now you have a lot more things that you want the drone to do, the requirements are much more demanding, and it has to make decisions that are a lot harder. This is the kind of distinction the companies you're talking about are worried about. As you give orders to drones at higher and higher levels of abstraction, the drone is required to make more decisions. And our current AI tools, while very good, cannot yet make those kinds of decisions on high-stakes battlefields with the reliability that we really want when engaging in military operations.
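One way to picture the line Mehta is drawing is as a gate in code: low-level autonomy is permitted freely, while lethal engagement requires an explicit human decision. This is a purely illustrative sketch; the action names and the policy are invented and do not reflect the actual text of DoD Directive 3000.09 or any fielded system.

```python
from enum import Enum, auto

class Action(Enum):
    NAVIGATE = auto()       # route around obstacles, hold a flight path
    SURVEIL = auto()        # collect and report sensor data
    ENGAGE_TARGET = auto()  # release munitions on a selected target

# Hypothetical policy: only lethal engagement needs a human in the loop.
REQUIRES_HUMAN = {Action.ENGAGE_TARGET}

def authorize(action: Action, human_approved: bool) -> bool:
    """Permit low-level autonomy freely; gate lethal actions behind
    an explicit human decision."""
    if action in REQUIRES_HUMAN:
        return human_approved
    return True

assert authorize(Action.NAVIGATE, human_approved=False)           # drone flies itself
assert not authorize(Action.ENGAGE_TARGET, human_approved=False)  # strike blocked
assert authorize(Action.ENGAGE_TARGET, human_approved=True)       # strike needs sign-off
```

The worry voiced above maps onto the gate directly: the more abstract the order, the more of what happens between NAVIGATE and ENGAGE_TARGET the machine decides on its own.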
Dana Taylor:
Is there an ethical line when it comes to using AI in warfare?
Aalok Mehta:
I think there are almost certainly going to be developments in how we think about the appropriate use of AI in warfare. Some of that will develop as we figure out what these AI tools are capable of, and as we figure out various issues around the implementation of AI in actual systems. It's one thing to think about AI in the abstract. It turns out that when you put AI in physical systems, there are all sorts of issues you don't anticipate as you try to integrate AI with various physical components.
And I think we're going to be engaged in a learning process in which we understand more about the capabilities of AI systems, learn more about those capabilities when the systems are integrated into something bigger, like an actual drone, and get some experience with how they work in the real world. Then I do hope we have discussions within our government, and between the government and lawmakers, about the appropriate use of AI in military technology, and where we might want to put guardrails to make sure we're protecting our troops, protecting our reputation as a country, and ensuring that people trust AI technology as a whole.
Dana Taylor:
Look, when you think about the key role AI is playing in this war in Iran, what do you worry about most?
Aalok Mehta:
What I worry about is that we might start to deploy technology more quickly than we can absorb the lessons from using it. We see, time and time again, that AI technology, ever since ChatGPT came out, has been evolving at a really rapid pace. It's overwhelming a lot of the systems and institutions we've developed to deal with the impacts of technology on society. And I'm worried that we'll see something similar happen in the military space: we'll engage in operations using AI, but we won't have the systems in place to learn lessons from them, integrate lessons from different units using AI in different ways, and roll that back up into higher-level guidance or policy on appropriate ways to use AI and appropriate guardrails. I do hope we take the time to learn lessons, synthesize information, and really use that to inform how we think about AI development and integration in military operations.
Dana Taylor:
Really appreciate your time, Aalok. Thank you so much for hopping on The Excerpt.
Dana Taylor:
As I mentioned earlier, the other battle playing out in the war in Iran is the one for hearts and minds. It's taking place largely on social media. Today, I'm joined by USA TODAY producer and host, Zulekha Nathoo, who's breaking down for us how generative AI is having an impact on the war's narrative.
Zulekha Nathoo:
That's right, Dana. Social media is where generative AI is being used to a much greater extent than in previous conflicts, creating fake images and video at a rate that makes it nearly impossible to counter in real time. To talk more about that, I'm joined now by Mahsa Alimardani, who leads the Technology Threats and Opportunities Program at the human rights organization WITNESS. Thanks for joining me, Mahsa.
Mahsa Alimardani:
Thanks for having me.
Zulekha Nathoo:
Generative AI has been used extensively to create fabricated images and video that have flooded social media, and even some state-run media, since the beginning of this war. What are one or two examples of what you think have been the most widely spread or even dangerous fake images or videos of the war so far?
Mahsa Alimardani:
What's interesting is that we really are experiencing an unprecedented level of AI-generated content from all the conflict actors. What we have been seeing is typical war propaganda now being mobilized in deceptive ways, by Israel and by the Islamic Republic of Iran, to promote their narratives. We've seen a lot of examples of the Iranian state and the affiliated social media accounts of state broadcasters showing deceptive AI-generated images of attacks on US bases in the region that were fake or wildly exaggerated. We've also seen content come from Israeli sources showing things that don't exist, like AI-generated images of military personnel and military equipment inside schools.
The most corrosive thing, however, has not been the sheer volume of content, though that is of course a massive load on fact-checkers. The most corrosive aspect we're seeing is the doubt that has been proliferating: the fact that it's very hard for people to trust what they see and what they believe. We have an information environment in Iran that is probably a laboratory for some of the worst excesses of information pollution, largely created by decades of information controls and media controls by the regime.
And of course you have the Iranian opposition and the Iranian diaspora, and all of these different actors have been contributing different sorts of biases and perceptions. What we have really been seeing with this issue of doubt is evidence being undermined: the inability of a lot of Iranians inside the country to even believe that the civilian casualties being documented by the state are real. You have a regime that created an information environment that has really accelerated the liar's dividend, the term for this AI environment we're in, where bad actors can use the accusation of AI to deny the truth when it's inconvenient to them.
Zulekha Nathoo:
Then how does the use of AI-generated imagery blur the line between psychological warfare and traditional battlefield reporting in the Iran war?
Mahsa Alimardani:
Well, we've always had war propaganda. What this really does is blur the ability to know what is real and what is not. We have seen a massive acceleration in the capabilities of generative AI models, and the different types of deception we've been seeing have really been unprecedented. My colleague, Shireen, and I actually did an analysis of people using fake forensic analysis to call real content AI. And some of that fake forensic analysis was itself AI-generated, used to lob those accusations.
It is a situation where the trust signals are really failing. Tech companies created a lot of these capabilities without fully assessing what kind of guardrails there need to be in order not to have this kind of epistemic fracture, this not knowing what to see and believe. It is a situation where you really do need the tech companies and the powers that be to step up and invest more in creating the trust signals we need. That's something my team at WITNESS has been working on for a very long time: what kind of labeling and what kind of provenance and authenticity standards need to be embedded within these technologies and within the models being developed, how the platforms need to communicate this transparency, and the kinds of investments they need to make in human reviewers and fact-checkers.
Because, of course, if you've been following the trust and safety teams, the investments in fact-checking across these platforms have been decimated over the past few years, making our information ecosystem especially vulnerable just as this technology is accelerating.
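The provenance and authenticity standards Alimardani describes are exemplified by C2PA Content Credentials, which embed a signed manifest inside the media file itself. As a rough illustration of what "embedded provenance" means at the file level, here is a stdlib-only sketch that walks a JPEG's marker segments and reports whether an APP11 segment carrying a C2PA label is present. It is a presence heuristic only, not a signature verification, and the file name is hypothetical.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Walk JPEG marker segments looking for APP11 (0xFFEB), the segment
    type in which C2PA Content Credentials are embedded as JUMBF boxes.
    Detects only that a manifest exists; it does not validate signatures."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":       # SOI marker: not a JPEG at all
            return False
        while True:
            byte = f.read(1)
            if byte != b"\xff":            # EOF or lost sync: give up
                return False
            code = f.read(1)
            while code == b"\xff":         # skip fill bytes between markers
                code = f.read(1)
            if not code or code in (b"\xd9", b"\xda"):
                return False               # EOI or start of scan: no manifest
            (length,) = struct.unpack(">H", f.read(2))
            payload = f.read(length - 2)
            if code == b"\xeb" and b"c2pa" in payload:
                return True

print(has_c2pa_manifest("example.jpg"))    # hypothetical file path
```

Platform labeling then becomes a question of surfacing this embedded signal to viewers, and of what to show when, as with most AI-generated war content, no credential is present at all.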
Zulekha Nathoo:
Well, when altered or completely artificial images of destruction circulate faster than footage can be verified, who ultimately controls the war narrative?
Mahsa Alimardani:
Whoever really has the most resources and the most appeal. We are seeing a situation where both sides have ways to appeal to certain sectors and populations. The Islamic Republic of Iran has long tried to present itself as representing the oppressed, representing the global majority. The irony, of course, is that they are the biggest repressors of their own people, but this is the identity their propaganda likes to present to the world. And they've never had more raw material, given that they are under bombardment by the US and Israel, who have been longtime boogeyman figures within their ideological and propaganda frameworks for over four decades. They are really playing to this advantage. It is a very toxic information environment, where things like this are very quick to spread.
And especially, I work on this professionally, but also personally: I myself am Iranian, in the diaspora, and I see how this spread of AI-generated content, and even the doubt about AI-generated content, has created this sense of uncertainty. You can really see the real-life impact of not having good, clear information on the ground in the ways people are making decisions about their safety and about how to evacuate.
Zulekha Nathoo:
What about the effects of repeated exposure to convincing AI-generated images? To what extent does being inundated with these fake images reshape international public perception? And then, does that influence wane if imagery is debunked? Do fact-checkers help quickly enough to be able to change people's minds about especially popular AI images out there?
Mahsa Alimardani:
Yeah. I think this has long existed, even before AI: the lie travels much faster than the truth. Even when well-known things have been debunked, I've seen them reshared in my own networks. One really good example, again, goes back to the first day of the war. The day before, there was an AI-generated image of military tanks in a schoolyard, creating the narrative that civilian locations such as schools can be justifiable targets for bombings. And of course, the very next day, the first day of the war, you had a school bombed. Even though that AI-generated image was easily debunked within the first 24 hours, you could see the Google Gemini watermark on the photo, people were still referencing that photo in replies to the news of the Minab school bombing.
Zulekha Nathoo:
Thank you so much for being with us, Mahsa.
Mahsa Alimardani:
Yes, thanks so much for having me.
Zulekha Nathoo:
Mahsa Alimardani is the associate director of the Technology Threats and Opportunities Program at the human rights organization WITNESS.
Dana Taylor:
And thank you, Zulekha, for joining us as well. Zulekha Nathoo is a USA TODAY producer and host with our special projects team.
Zulekha Nathoo:
Thanks for having me, Dana.
Dana Taylor:
Thanks to Kaely Monahan, Zulekha Nathoo, and Lamar Salter for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I'm Dana Taylor. I'll be back tomorrow morning with another episode of USA TODAY's The Excerpt.
This article originally appeared on USA TODAY: Does AI targeting open the door to lethal errors in the kill chain? | The Excerpt