Pentagon Enhances Decision-making Speed with AI, Avoids Direct Lethality

Advanced AI developers, including OpenAI and Anthropic, are walking a fine line in their dealings with the U.S. military. Their objective: make the Pentagon more efficient without letting their AI be used in lethal engagements.

The U.S. Department of Defense (DoD) is gaining a “significant advantage” from AI in identifying and tracking threats, according to Dr. Radha Plumb, the Pentagon’s Chief Digital and AI Officer. However, these tools have not been used as weapons.

Plumb said AI is speeding up the “kill chain,” the military process of detecting, tracking, and neutralizing threats, with its contribution being most notable during the planning and strategizing stages.

Growing Ties Between Pentagon and AI Developers

The relationship between the Pentagon and AI developers is relatively recent. Leading developers including OpenAI, Anthropic, and Meta revised their usage policies in 2024 to allow U.S. intelligence and defense agencies to use their AI systems, albeit with safeguards keeping humans in the loop.

This move spurred a rapid series of pairings between AI companies and defense contractors: Meta with Lockheed Martin, Anthropic with Palantir, and OpenAI with Anduril. Cohere has also quietly worked with Palantir.

This growing reliance on generative AI could push Silicon Valley to further loosen its AI usage policies, opening the door to more military applications.

Debates About AI and Lethal Decisions

Recently, the conversation has shifted toward the ethics of AI in autonomous weapons systems. Some have claimed that such systems already exist within the U.S. military.

Plumb firmly rejected claims that the Pentagon operates fully autonomous weapons, stressing that humans will always be integral to the decision to use force.

Clarifying the term “autonomous,” Plumb described the Pentagon’s use of AI as human-machine collaboration in which senior officials make the critical decisions at every step of the process.

AI Safety and Military Partnerships

Military contracts with tech companies have previously drawn resistance from Silicon Valley employees. By comparison, the AI community’s response has been markedly softer. Some, like Anthropic’s Evan Hubinger, argue for proactive engagement with the military as a way to prevent the misuse of AI technology.

Original source: Read the full article on TechCrunch