
A Tech Group Suggests Limits for the Pentagon’s Use of AI


The Pentagon says artificial intelligence will help the US military become still more powerful. On Thursday, an advisory group including executives from Google, Microsoft, and Facebook proposed ethical guidelines to prevent military AI from going off the rails.

The advice came from the Defense Innovation Board, created under the Obama administration to help the Pentagon tap tech industry expertise, and chaired by Eric Schmidt, Google’s former CEO and chairman. Last year, the department asked the group to develop ethical principles for its AI projects. On Thursday, the group released a set of proposed principles in a report that praises the power of military AI while also warning about unintended harms or conflict.

“Now is the time,” the board’s report says, “to hold serious discussions about norms of AI development and use in a military context—long before there has been an incident.” A section musing on potential problems from AI cites “unintended engagements leading to international instability,” or, put more plainly, war.

The Pentagon has declared it a national priority to rapidly expand the military’s use of AI everywhere from the battlefield to the back office. An updated National Defense Strategy released last year says AI is needed to stay ahead of rivals such as China and Russia that are leaning on new technologies to compete with US power. A new Joint AI Center aims to accelerate projects built on commercial AI technology, expanding on a strategy tested under Project Maven, which tapped Google and others to apply machine learning to drone surveillance footage.

The Defense Innovation Board’s report lays out five ethical principles it says should govern such projects.

The first is that humans should remain responsible for the development, use, and outcomes of the department’s AI systems. It echoes an existing policy introduced in 2012 that states there should be a “human in the loop” when deploying lethal force.

Other principles on the list describe practices that one might hope are already standard for any Pentagon technology project. One states that AI systems should be tested for reliability, while another says that experts building AI systems should understand and document what they’ve made.


The remaining principles say the department should take steps to avoid bias in AI systems that could inadvertently harm people, and that Pentagon AI should be able to detect unintended harm and automatically disengage if it occurs, or allow deactivation by a human.

The recommendations highlight how AI is now seen as central to the future of warfare and other Pentagon operations—but also how the technology still relies on human judgment and restraint. Recent excitement about AI is largely driven by progress in machine learning. But as the slower-than-promised progress on autonomous driving shows, AI is best at narrowly defined and controlled tasks, and rich real-world situations can be challenging.

“There’s a legitimate need for these kinds of principles predominantly because a lot of the AI and machine learning technology today has a lot of limitations,” says Paul Scharre, director of the technology and national security program at the Center for a New American Security. “There are some unique challenges in a military context because it’s an adversarial environment and we don’t know the environment you will have to fight in.”


Although the Pentagon asked the Defense Innovation Board to develop AI principles, it is not committed to adopting them. Top military brass sounded encouraging, however. Lieutenant General Jack Shanahan, director of the Joint Artificial Intelligence Center, said in a statement that the recommendations would “help enhance the DoD’s commitment to upholding the highest ethical standards as outlined in the DoD AI strategy, while embracing the US military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

If accepted, the guidelines could spur more collaboration between the tech industry and the US military. Relations have been strained by employee protests over Pentagon work at companies including Google and Microsoft. Google decided not to renew its Maven contract and released its own AI principles after thousands of employees protested the project.

Pentagon AI ethics principles might help executives sell potentially controversial projects internally. Microsoft and Google have both made clear they intend to remain engaged with the US military, and both have executives on the Defense Innovation Board. Google’s AI principles specifically allow military work. Microsoft was named Friday as the surprise winner of a $10 billion Pentagon cloud contract known as JEDI, intended to power a broad modernization of military technology, including AI.
