The Pentagon says AI is speeding up its ‘kill chain’


Leading AI developers, such as OpenAI and Anthropic, are threading a needle to sell software to the US military: make the Pentagon more efficient, without letting their AI kill people.

Today, their tools aren’t being used as weapons, but AI is giving the Department of Defense a “significant advantage” in threat detection, tracking and assessment, the Pentagon’s chief digital and AI officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

“We’re obviously increasing the ways that we can speed up the execution of the kill chain so that our commanders can respond at the right time to protect our forces,” Plumb said.

“Kill chain” refers to the military process of detecting, tracking and eliminating threats involving a complex system of sensors, platforms and weapons. According to Plumb, generative AI is proving helpful in the planning and tactical stages of the kill chain.

The relationship between the Pentagon and AI developers is relatively new. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let US intelligence and defense agencies use their AI systems. However, they still don’t allow their AI to harm humans.

“We’ve been really clear about what we will and won’t use their technologies for,” Plumb said when asked how the Pentagon works with AI model suppliers.

Nevertheless, it has kicked off a speed-dating round between AI companies and defense contractors.

Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.

As generative AI proves its usefulness in the Pentagon, it could push Silicon Valley to loosen its AI usage policies and allow more military applications.

“Playing through different scenarios is something that generative AI can help with,” Plumb said. “It allows you to take advantage of the full range of tools available to our commanders, but also to think creatively about different response options and potential trade-offs in an environment where there is a potential threat, or series of threats, that needs to be prosecuted.”

It is unclear whose technology the Pentagon is using for this task; using generative AI in the kill chain (even in the early planning stages) seems to violate several leading model developers’ usage policies. Anthropic’s policy, for example, prohibits using its models to produce or modify “systems designed to harm or damage human life.”

In response to our questions, Anthropic pointed TechCrunch to a recent Financial Times interview with its CEO, Dario Amodei, in which he defended the company’s military work:

The position that we should never use AI in defense and intelligence settings is incomprehensible to me. The position that we should go gangbusters and use it to create whatever we want, up to and including doomsday weapons, is clearly insane. We are trying to find a middle ground and act responsibly.

OpenAI, Meta, and Cohere did not respond to TechCrunch’s requests for comment.

Life and death, and AI weapons

In recent months, a debate has been swirling in defense tech circles over whether AI weapons should really be allowed to make life-and-death decisions. Some argue the US military already has weapons that do.

Anduril CEO Palmer Luckey recently noted on X that the US military has a long history of purchasing and using autonomous weapons systems such as the CIWS turret.

“DOD has been purchasing and using autonomous weapons systems for decades now. Their use (and export!) is well understood, tightly defined, and clearly regulated by rules that are not at all voluntary,” said Luckey.

But when TechCrunch asked if the Pentagon could buy and operate fully autonomous weapons — ones with no humans in the loop — Plumb rejected the idea on principle.

“No, short answer,” said Plumb. “As a matter of both reliability and ethics, we will always involve people in force deployment decisions, and that includes our weapons systems.”

The term “autonomy” is somewhat ambiguous, and automated systems — such as AI coding agents, self-driving cars, or self-firing weapons — have sparked debate across the tech industry over when they become truly autonomous.

Plumb said the idea of automated systems independently making life-and-death decisions was “too binary,” and the reality less like “science fiction” than people imagine. Rather, she suggested that the Pentagon’s use of AI systems is really a collaboration between humans and machines, with senior leaders making active decisions throughout the process.

“People tend to think of it like there are robots somewhere, and then the gonculator [a fictional autonomous machine] spits out a sheet of paper, and the human just checks a box,” Plumb said. “That’s not how human-machine teaming works, and it’s not an effective way to use AI systems like this.”

AI safety at the Pentagon

Military partnerships haven’t always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google workers were fired or arrested after protesting their companies’ military contracts with Israel, cloud deals that fall under the codename “Project Nimbus.”

In comparison, there has been a fairly muted response from the AI community. Some AI researchers, such as Anthropic’s Evan Hubinger, say the use of AI in the military is inevitable, and that it’s important to work directly with the military to get it right.

“If you take the catastrophic risks from AI seriously, the US government is a very important actor to engage with, and trying to prevent the US government from using AI is not an effective strategy,” Hubinger said in a November post on the online forum LessWrong. “It’s not enough to just focus on catastrophic risk; you also have to prevent any way the government could misuse your models.”
