US Military testing whether human pilots can trust robot wingmen in a dogfight

B-1B Lancer bombers flanked by USMC F-35 Lightning II and JASDF F-2 fighters execute a bilateral mission over the Pacific Ocean, demonstrating the United States' ironclad commitment to our allies in the face of aggressive and unlawful North Korean missile tests. (Courtesy photo by Japan Air Self-Defense Force/Released)

A U.S. military research program is advancing the study of humans and machines working together by testing how well pilots and artificially intelligent entities trust each other in one of the most challenging tasks: aerial combat, or dogfighting.

The idea behind DARPA’s Air Combat Evolution, or ACE, program is that human fighter pilots will soon be flying alongside increasingly capable drones — dubbed “Loyal Wingmen” — that will help them evade other fighters and air defenses. Military leaders often describe the F-35 Joint Strike Fighter as a kind of flying command center, with the human operator working less like a traditional pilot and more like a team captain. The craft is loaded with AI features that pilots say make it easier to fly than traditional fighters, freeing the pilot to digest and put to use the immense amount of data the F-35 pulls in.

So human pilots will be flying alongside, and surrounded by, increasingly intelligent and lethal weapons. This raises big concerns about what many in the arms control community call lethal autonomy — basically, killer robots. But military leaders are fond of reminding people that their 2012 doctrine prefers a human on the loop in lethal decisions. That’s why the question of measuring trust between humans and AI assistants is critical. It’s one thing to measure how well you trust your smartphone’s driving directions; it’s much harder to measure how well highly trained fighter pilots trust experimental AI software when death is on the line.

“ACE will implement methods to measure, calibrate, increase, and predict human trust in combat autonomy performance,” notes the program announcement that came out Monday.

After testing how well humans and artificially intelligent aides do in dogfights, the program will test, in a simulation environment, how well human-machine teams work when the pilot must command multiple drones while facing adversary drones and other threats. This will lay “the groundwork for future live, campaign-level experimentation,” according to the notice.

In many ways, the program represents a key step forward for the “Third Offset Strategy” that has guided Pentagon weapons development since November 2014. The military went all-in on a strategy of autonomy and artificial intelligence for dominance, coupled with human oversight. They called this human-machine teaming. Its biggest champion was former Deputy Defense Secretary Bob Work.

Work believes that Russia and China will keep fielding ever-more-autonomous weapons. “We know that China is already investing heavily in robotics and autonomy,” Work said at a 2015 Center for a New American Security event. “I will make a hypothesis: that authoritarian regimes who believe people are weaknesses…that they can not be trusted, they will naturally gravitate toward totally automated solutions…We believe that the advantage we have, as we start this competition, is our people.”

The philosophical thrust of the Third Offset Strategy was that humans paired with machines will be able to outshoot, outmaneuver, and outfight machines themselves. It’s a gamble, but there is evidence for it.

Consider chess, the classic human vs. machine scare fable we’ve been living under for 20 years. In 1997, IBM’s Deep Blue effectively proved that a machine could best a world chess champion. The theme of human inadequacy at the hands of AI was reinforced when Watson defeated two all-time champions on the quiz show Jeopardy and, again just last year, when Google DeepMind’s AlphaGo beat a human champion at the incredibly nuanced game of Go. All three seemed to suggest that relying on humans against machines was no strategy for success — even, or perhaps particularly, on the battlefield or in war.

Theoretical computer scientist Kenneth Regan’s work redeems the value of human decision-making somewhat. He found that human chess players who were allowed to use computers during games outplayed humans or computers playing alone — and that even mediocre human-machine teams beat the best machines and the best humans competing solo. Regan called this “freestyle chess” and dubbed the player-computer teams “centaurs.”

Freestyle chess, in essence, is the basis for human-machine teaming, the military’s strategy for the next world war. (Bob Work is said to love Tyler Cowen’s Average Is Over, which leans heavily on Regan’s work.) In seeking to answer the question of how well humans and machines trust each other, the ACE program shows how important the Pentagon considers human decision-making, even in an era when machines decide and act faster. There’s still a lot that’s unknown about how well the partnership between humans and AI will succeed. Freestyle chess provides a reassuring clue. The ACE program will show how well they play chess while being shot at.

———

Copyright © 2019 by National Journal Group, Inc. All rights reserved.

Distributed by Tribune Content Agency, LLC.