
AI Could Enable 'Swarm Warfare' for Tomorrow's Fighter Jets


The dogfight hardly seemed fair.

Two F-16s engaged with an opposing F-16 at an altitude of 16,000 feet above rocky desert terrain. As the aircraft converged from opposite directions, the paired F-16s suddenly spun away from one another, forcing their foe to choose one to pursue. The F-16 that had been left alone then quickly changed course, maneuvering behind the enemy with textbook precision. A few seconds later, it launched a missile that destroyed the opposing jet before it could react.

The battle took place last month in a computer simulator. Here’s what made it special: All three aircraft were controlled by artificial intelligence algorithms. Those algorithms had learned how to react and perform aerial maneuvers partly through a state-of-the-art AI technique that involves testing different approaches thousands of times and seeing which work best.

The three-way battle offers a glimpse of how AI may control future fighter jets—and the likely challenges of deploying the technology.

The Pentagon is interested. Last March, its Defense Advanced Research Projects Agency (Darpa) invited teams to develop AI systems capable of controlling fighter jets in aerial combat in ways that exceed human abilities, including scenarios involving several aircraft. AI could allow multiple aircraft to “swarm” together, changing the dynamics of air combat.

“One of the things that really stands out is the ability to enable what you would call swarm warfare—rapidly overwhelming opponents,” says Chris Gentile, an ex–Air Force fighter pilot who is a program manager at EpiSci, a military contractor that is developing the technology for the contest, dubbed Air Combat Evolution. He says pilots may someday tell an AI program to scan an area or take care of one adversary while the pilot engages with another. The instructions would be the equivalent of “cover me, basically,” Gentile says.


AI has already shown its chops in the simulated sky. Last year, a single AI-controlled fighter plane easily defeated a human pilot in a Darpa demonstration. That program was trained using an AI technique that has produced breakthroughs in video games and robotics. Reinforcement learning, as the technique is known, can train machines to perform tasks such as playing subtle board games with superhuman skill. The learning process involves a large simulated neural network honing its behavior in response to feedback, such as the score in a game. EpiSci is also using reinforcement learning for Air Combat Evolution.
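
To make the idea concrete, here is a minimal sketch of tabular Q-learning on a toy one-dimensional pursuit task. Everything in it (the environment, rewards, and hyperparameters) is invented for illustration; it shows the trial-and-feedback loop described above, not Darpa's or EpiSci's actual code.

```python
import random

N = 11                 # positions 0..10 on a line
ACTIONS = [-1, 0, 1]   # move left, hold position, move right

def step(pos, target, action):
    """Apply an action; the feedback signal rewards closing the distance."""
    new_pos = max(0, min(N - 1, pos + action))
    reward = -abs(new_pos - target)
    return new_pos, reward, new_pos == target

# Q-table: estimated future reward for each (position, action) pair.
Q = {(p, a): 0.0 for p in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(5000):                   # "thousands of times"
    pos, target = random.randrange(N), N - 1  # start anywhere, chase the far end
    for _ in range(50):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(pos, x)])
        new_pos, r, done = step(pos, target, a)
        best_next = max(Q[(new_pos, x)] for x in ACTIONS)
        # Nudge the estimate toward the observed feedback.
        Q[(pos, a)] += alpha * (r + gamma * best_next - Q[(pos, a)])
        pos = new_pos
        if done:
            break

# The learned policy: at every position, the best action moves toward the target.
print([max(ACTIONS, key=lambda x: Q[(p, x)]) for p in range(N)])
```

No behavior was programmed directly; the pursuit strategy emerges from the reward signal alone, which is the property that makes the technique both powerful and hard to audit.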

The workings of modern AI raise questions about how feasible it would be to deploy the technology, however. Pilots would be required to put their trust, and their lives, in the hands of algorithms whose behavior even their designers cannot fully explain. Trainee fighter pilots spend months learning the correct protocols for flying and fighting in tandem. With several jets wheeling around one another at high speed, the slightest error or miscommunication could prove catastrophic.

Military leaders say that a human will always be involved in decisions about using deadly force. “Human-AI teaming is a certainty,” says Dan Javorsek, the Darpa program manager who oversees Air Combat Evolution. Javorsek says this is partly because of the risk that AI might fail, but also for “legal, moral, ethical” reasons. He notes that although aerial dogfights are extremely rare, they provide a well-understood arena for training AI programs to collaborate with human pilots. The plan is to test the best algorithms on real aircraft in late 2023.

But Missy Cummings, a professor at Duke University and former fighter pilot who studies automated systems, says the speed at which decisions must be made on fast-moving jets means any AI system will be largely autonomous.

She’s skeptical that advanced AI is really needed for dogfights, where planes could be guided by a simpler set of hand-coded rules. She is also wary of the Pentagon’s rush to adopt AI, saying errors could erode faith in the technology. “The more the DOD fields bad AI, the less pilots, or anyone associated with these systems, will trust them,” she says.
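
Her alternative is easy to picture: a rules-only controller whose every behavior is an explicit, auditable condition. The sketch below is hypothetical, with invented state fields and thresholds, but it suggests what a simpler set of hand-coded rules might look like in practice.

```python
import math

def pursuit_command(own, foe):
    """Hypothetical hand-coded pursuit rules; every threshold is invented.

    own, foe: dicts with 'x', 'y' (meters), 'heading' (radians), 'speed' (m/s).
    Returns a (turn_rate, throttle) command.
    """
    dx, dy = foe["x"] - own["x"], foe["y"] - own["y"]
    bearing = math.atan2(dy, dx)
    # Smallest signed angle between our heading and the line to the foe.
    off = (bearing - own["heading"] + math.pi) % (2 * math.pi) - math.pi
    distance = math.hypot(dx, dy)

    # Rule 1: if the foe is distant, turn toward it at full throttle.
    if distance > 5000:
        return 0.5 * off, 1.0
    # Rule 2: in close with a large angle off, cap the turn rate.
    if abs(off) > math.radians(60):
        return math.copysign(0.3, off), 0.8
    # Rule 3: otherwise make small corrections and match the foe's speed.
    return 0.1 * off, min(1.0, foe["speed"] / max(own["speed"], 1.0))
```

Unlike a trained network, every decision here can be traced to a specific line, which is exactly the auditability Cummings is pointing to.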

AI-controlled fighter planes might eventually carry out parts of a mission, such as surveying an area autonomously. For now, EpiSci’s algorithms are learning to follow the same protocols as human pilots and to fly like another member of the squadron. Gentile has been flying simulated test flights where the AI takes all responsibility for avoiding midair collisions.
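
One way to picture that responsibility is a closest-point-of-approach check: given two aircraft's positions and velocities, predict when and how close they will pass. The simplified 2-D version below is illustrative, not EpiSci's flight code.

```python
def time_to_closest_approach(p1, v1, p2, v2):
    """2-D closest-point-of-approach test between two aircraft.

    p1, v1: (x, y) position (m) and velocity (m/s) of aircraft 1; likewise p2, v2.
    Returns (time_s, miss_distance_m). A simplified illustration, not flight code.
    """
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]    # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]    # relative velocity
    vv = vx * vx + vy * vy
    if vv == 0:                              # same velocity: the gap never changes
        return 0.0, (rx * rx + ry * ry) ** 0.5
    t = max(0.0, -(rx * vx + ry * vy) / vv)  # time that minimizes separation
    cx, cy = rx + vx * t, ry + vy * t
    return t, (cx * cx + cy * cy) ** 0.5

# Head-on geometry: an autopilot would trigger an avoidance maneuver when the
# predicted miss distance falls below a safety threshold, say 150 meters.
t, miss = time_to_closest_approach((0, 0), (200, 0), (5000, 300), (-200, 0))
print(f"closest approach in {t:.1f}s at {miss:.0f} m")
```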


Military adoption of AI only seems to be accelerating. The Pentagon believes that AI will prove critical for future warfighting and is testing the technology for everything from logistics and mission planning to reconnaissance and combat.

AI has begun creeping into some aircraft. In December, the Air Force used an AI program to control the radar system aboard a U-2 spy plane. Although not as challenging as controlling a fighter jet, the job carries life-or-death responsibility, since missing a missile system on the ground could leave the plane exposed to attack.

The algorithm used, inspired by one developed by the Alphabet subsidiary DeepMind, learned through thousands of simulated missions how to direct the radar in order to identify enemy missile systems on the ground, a task that would be critical to defense in a real mission.
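
The details of the U-2 algorithm have not been published. As a loose illustration of how thousands of simulated missions can teach a system where to point a sensor, the task is sometimes framed as a multi-armed bandit problem; the toy below, sectors and hit rates included, is entirely invented.

```python
import random

# Toy bandit: over many simulated "missions," learn which radar search
# sector most often reveals a missile site. Invented illustration only.
sectors = ["north", "east", "south", "west"]
hit_rate = {"north": 0.1, "east": 0.6, "south": 0.2, "west": 0.05}  # hidden truth
value = {s: 0.0 for s in sectors}   # learned estimate of each sector's payoff
count = {s: 0 for s in sectors}

for mission in range(2000):
    # Epsilon-greedy: usually scan the best-known sector, sometimes explore.
    s = random.choice(sectors) if random.random() < 0.1 else max(sectors, key=value.get)
    found = random.random() < hit_rate[s]        # simulated detection
    count[s] += 1
    value[s] += (found - value[s]) / count[s]    # running-average update

print(max(sectors, key=value.get))  # converges on "east", the richest sector
```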


Will Roper, who stepped down as the assistant secretary of the Air Force in January, says the demonstration was partly about showing that it is possible to fast-track the deployment of new code on older military hardware. “We didn't give the pilot override buttons, because we wanted to say, ‘We need to get ready to operate this way where AI is truly in control of mission,’” he says.

But Roper says it will be important to ensure these systems work properly and that they are not themselves vulnerable. “I do worry about us over-relying on AI,” he says.

The DOD may already have some trust issues around the use of AI. A report last month from Georgetown University’s Center for Security and Emerging Technology found that few military contracts involving AI made any mention of designing systems to be trustworthy.

Margarita Konaev, a research fellow at the center, says the Pentagon seems conscious of the issue but that it's complicated, because different people tend to trust AI differently.

Part of the challenge comes from how modern AI algorithms work. With reinforcement learning, an AI program does not follow explicit programming, and it can sometimes learn to behave in unexpected ways.
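
The contrast with conventional software is visible in code: in a learned system the behavior lives in numbers produced by training, not in rules a reviewer can read. In the contrived snippet below, the same three-line policy turns one way or the other depending only on what training left in the table.

```python
# In a learned system, the behavior lives in the numbers, not the code.
def act(state, Q, actions=(-1, 0, 1)):
    return max(actions, key=lambda a: Q[(state, a)])

# Two hypothetical training runs leave different values behind...
Q_run_a = {("threat_ahead", -1): 0.2, ("threat_ahead", 0): 0.1, ("threat_ahead", 1): 0.9}
Q_run_b = {("threat_ahead", -1): 0.9, ("threat_ahead", 0): 0.1, ("threat_ahead", 1): 0.2}

# ...so identical code produces opposite maneuvers. Nothing in the source
# reveals which behavior was learned, or why.
print(act("threat_ahead", Q_run_a))  # 1  (turn one way)
print(act("threat_ahead", Q_run_b))  # -1 (turn the other)
```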

Bo Ryu, CEO of EpiSci, says his company’s algorithms are being designed in line with the military’s plan for use of AI, with a human operator responsible for deploying deadly force and able to take control at any time. The company is also developing a software platform called Swarm Sense to enable teams of civilian drones to map or inspect an area collaboratively.
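
Swarm Sense's internals are not public, but a naive version of collaborative coverage is easy to sketch: split the survey area into one lane per drone. The function below is invented purely for illustration.

```python
def split_survey_area(width_m, height_m, n_drones):
    """Divide a rectangular survey area into equal north-south lanes,
    one per drone. A naive coverage scheme, invented for illustration;
    it is not Swarm Sense's actual method.
    Returns one (x_min, y_min, x_max, y_max) box per drone."""
    lane = width_m / n_drones
    return [(i * lane, 0.0, (i + 1) * lane, height_m) for i in range(n_drones)]

# Four drones each take a 250 m-wide lane of a 1 km x 2 km area.
for box in split_survey_area(1000, 2000, 4):
    print(box)
```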

He says EpiSci’s system does not rely only on reinforcement learning but also has handwritten rules built in. “Neural nets certainly hold a lot of benefits and gains, no doubt about it,” Ryu says. “But I think that the essence of our research, the value, is finding out where you should put and should not put one.”
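
Ryu doesn't spell out the architecture, but one common pattern consistent with his description is to let handwritten rules gate or override whatever the neural network proposes. A hypothetical sketch, with invented state fields and thresholds:

```python
def hybrid_policy(state, learned_policy):
    """Hypothetical hybrid controller: handwritten rules gate a learned policy.

    The state fields and thresholds are invented for illustration; this is a
    common pattern, not EpiSci's actual design.
    """
    # Hard safety rules run first and can override the network outright.
    if state["altitude_m"] < 300:
        return "pull_up"                    # terrain-avoidance rule
    if state["predicted_miss_m"] < 150:
        return "break_away"                 # midair-deconfliction rule
    # Deadly force always defers to the human operator, per the DOD plan.
    action = learned_policy(state)
    if action == "fire" and not state["human_consent"]:
        return "hold_fire"
    return action                           # otherwise trust the network
```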
