AI companies such as OpenAI and Anthropic are walking a delicate line in providing software services to the US military. The objective? Boosting the Pentagon’s operational efficiency without crossing the ethical line of using AI to inflict harm.
The tools currently provided are not weaponized, but they give the Department of Defense an edge in detecting, tracking, and evaluating threats, as Dr. Radha Plumb, the Pentagon’s Chief Digital and AI Officer, explained in a recent phone interview with TechCrunch.
AI also helps accelerate the execution of the ‘kill chain,’ the military procedure for identifying, tracking, and eliminating threats. Generative AI plays a particularly significant role during the early strategizing and planning phases.
In 2024, AI providers such as Meta, OpenAI, and Anthropic adjusted their usage policies to allow US defense and intelligence agencies to use their AI systems, provided the technology is not used to harm humans.
This has set the stage for closer collaboration between AI developers and defense contractors. Meta has partnered with Lockheed Martin and Booz Allen, Anthropic with Palantir, and OpenAI with Anduril to deliver AI-powered defense solutions.
As generative AI proves its usefulness within the Pentagon, some anticipate that Silicon Valley will further relax its AI usage policies to accommodate more military applications.
Despite AI’s evident usefulness in defense, its use still largely adheres to the ethical boundaries set by the technology companies themselves. Anthropic’s usage policy, for instance, strictly forbids using its models to cause harm or loss of human life.
The debate over fully autonomous AI weapons remains contentious. Asserting that the Pentagon would not acquire fully autonomous weapons, Plumb reinforced the imperative of human involvement in the decision-making process.
Contrary to the popular perception of AI as a standalone entity making independent decisions, these systems work in collaboration with humans. The final call always rests with human leaders, making the process less sci-fi and more a strategic partnership.
While the Pentagon’s partnerships with technology companies have previously fueled employee protests, the AI community has been comparatively quiet. Some see engagement with governments and militaries as inevitable, particularly if the goal is to ensure AI models are deployed safely and ethically.
Original source: Read the full article on TechCrunch