Artificial intelligence can analyze vast amounts of information and use it to identify targets, classify threats, and suggest priorities.
However, experts warn that human oversight could be eroded and caution about the potential dangers, Sky News writes in an analysis.
Israel has used artificial intelligence systems in Gaza to identify potential targets and assist in prioritizing operations.
The United States military appears to have used Claude, the artificial intelligence tool made by the company Anthropic, during its operation to capture Nicolas Maduro in Venezuela.
Even after Anthropic clashed with the U.S. administration over how AI should be used in warfare, the U.S. military apparently continued to use Claude in its attack on Iran.
Experts say it is very likely that the missiles flying over Tehran today are guided to their targets by AI-powered systems.
“AI is changing the nature of modern warfare in the 21st century. It is difficult to overstate the impact it has and will have,” says Craig Jones, senior lecturer at Newcastle University. “It is a potentially terrifying scenario,” he added.
There is no turning back
Whether terrifying or not, it seems there is no turning back. If you want to get an idea of the importance the U.S. military places on artificial intelligence, a good starting point is a memo sent by Defense Secretary Pete Hegseth to all top military leaders earlier this year.
“I am directing the Department of War to accelerate America's military dominance in AI, becoming an AI-first fighting force, from the front lines to the rear,” Hegseth wrote.
This is not an experiment but an order: adopt AI quickly and on a large scale.
Autonomy is increasing in some areas. In Ukraine, for example, there are drones capable of continuing a mission even after losing contact with a human operator.
But we are not yet at the stage where autonomous killer robots roam the battlefield. “We are not in the Terminator era yet,” says David Leslie, professor of ethics, technology, and society at Queen Mary University of London.
AI-integrated systems - known as “decision support systems” in military jargon - are advisers that identify targets, classify threats, and suggest priorities.
AI systems can gather satellite images, intercepted communications, logistical data, and social media streams - thousands, even hundreds of thousands of inputs - much faster than any human team.
The idea is that they help commanders focus resources where they matter most, while being more precise than tired, overwhelmed, and stressed soldiers.
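To make the idea concrete, the sketch below shows, in purely illustrative terms, what such a fusion-and-ranking step might look like: mock confidence signals from several sources are combined into a single priority score. Every name, weight, and value here is invented for illustration; no real system is depicted.

```python
from dataclasses import dataclass, field

# Toy illustration only: a made-up "decision support" ranker.
# All field names, weights, and data are invented for this sketch;
# no real military system is depicted.

@dataclass
class Candidate:
    name: str
    signals: dict = field(default_factory=dict)  # source -> confidence 0..1

# Hypothetical per-source weights a system designer might tune.
SOURCE_WEIGHTS = {
    "satellite_imagery": 0.4,
    "intercepted_comms": 0.3,
    "logistics_data": 0.2,
    "social_media": 0.1,
}

def score(c: Candidate) -> float:
    """Weighted fusion of per-source confidences into one priority score."""
    return sum(SOURCE_WEIGHTS.get(src, 0.0) * conf
               for src, conf in c.signals.items())

candidates = [
    Candidate("site_A", {"satellite_imagery": 0.9, "social_media": 0.2}),
    Candidate("site_B", {"intercepted_comms": 0.8, "logistics_data": 0.7}),
]

# The system only *ranks* and presents; people decide what to do next.
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: priority={score(c):.2f}")
```

The point of the sketch is the division of labor: the software only sorts and presents, and what is done with the ranking is left to people.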
That means they are not just a tool, says Dr. Jones, but a new way of making decisions. “AI, as we see in our own lives, is more like infrastructure. It is embedded in the system. We have this ability to collect that surveillance data and have been doing it for a few years,” he noted.
"A very convincing tool"
Professor Leslie agrees that the new systems are extremely capable militarily.
An important feature of decision support systems is that AI does not press the button. A human does. This has been the main assurance in discussions about military AI. There is always “a human in the loop.”
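As a sketch of that arrangement - hypothetical, and in no way a depiction of any real military software - the gate below lets an AI component recommend while reserving execution for explicit human sign-off. The recommend() function and its values are invented for the example.

```python
# Minimal sketch of a "human in the loop" gate, as the article describes:
# the model only recommends; nothing proceeds without explicit human approval.
# recommend() is a stand-in for any AI component; its output is invented.

def recommend() -> dict:
    # Placeholder for an AI recommendation; values invented for illustration.
    return {"action": "flag_for_review", "target": "site_B", "confidence": 0.83}

def human_approves(rec: dict) -> bool:
    """The decisive step: a person, not the model, presses the button."""
    answer = input(f"Approve {rec['action']} on {rec['target']} "
                   f"(model confidence {rec['confidence']:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

rec = recommend()
if human_approves(rec):
    print("Action authorized by human operator.")
else:
    print("Recommendation rejected; nothing executed.")
```

The structure guarantees that a human presses the button; it cannot guarantee that the human has time to question the recommendation.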
As OpenAI, the company behind ChatGPT, stated after announcing a partnership to provide AI to the Pentagon: “We will have authorized OpenAI engineers assisting the government, with security-cleared researchers.”
OpenAI also emphasized that it reached an agreement with the Pentagon that its technology will not be used in ways that cross three “red lines”: mass domestic surveillance, direct control of autonomous weapon systems, and high-stakes automated decisions.
But even with a human in the loop, a question remains. When waging war, can a human truly verify every decision made by AI? When time is very short and information is incomplete, what does “human oversight” really mean?
“People are technically in the loop,” says Dr. Jones. “That doesn’t mean, in my view, that they are adequately prepared to have effective decision-making power and oversight of what exactly happened. AI is a very persuasive tool for decision-makers,” he pointed out.
Or, as Professor Leslie puts it: “We are truly facing a potential risk of rubber-stamp approval at scale, where, because of the speed involved, there is no active human engagement critical to evaluating the recommendations issued by these systems.”
And then there is the issue of AI's own reliability. In tests conducted by Sky News, neither Claude nor ChatGPT could correctly say how many feet a chicken has - a question that evidently fell outside the patterns the models expected.
Furthermore, the artificial intelligence insisted it was right even when its answer was clearly wrong.
The reason is that artificial intelligence guesses the most likely answer based on the data it has seen before; it does not check that answer against reality.
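A toy sketch makes that mechanism visible: a language model turns internal scores into probabilities and emits the most likely continuation, with no built-in check against the truth. The vocabulary and scores below are made up for illustration.

```python
import math

# Toy illustration of why a language model can be confidently wrong:
# it converts scores (logits) into probabilities and emits the most likely
# continuation, with no built-in check against reality.
# The vocabulary and logits below are invented for this sketch.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Continuations for the prompt "A chicken has ... feet", with made-up scores.
vocab = ["two", "four", "three"]
logits = [2.1, 1.9, -1.0]  # if training data were skewed, "four" could win

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])
print(f"Model answer: {vocab[best]} (p={probs[best]:.2f})")
# Nudge the logits slightly and the model will assert a different
# answer just as fluently.
```

In other words, the printed probability measures statistical fit to past data, not correctness - which is exactly the gap the Sky News tests exposed.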
