
The US military’s AI can’t find targets on its own — yet, top USAF general says

A Lattice Modular Heli-Drone is displayed during a test run of the Lattice Platform Security System at the Red Beach training area, Marine Corps Base Camp Pendleton, California, Nov. 8, 2018. The Lattice Modular Heli-Drone was being tested to demonstrate its capabilities and potential for increasing security. (U.S. Marine Corps photo by Cpl. Dylan Chagnon)

Nearly two years after the Pentagon started bringing artificial intelligence to the battlefield, the algorithms still need human help, a top U.S. Air Force general said Tuesday.

But Gen. Mike Holmes said the technology is getting better at identifying people, cars, and other objects in drone video. He also sees promise in other AI applications, like predicting when parts on planes will break.
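
Predictive maintenance is a more conventional machine-learning task: learn from past maintenance records which conditions tend to precede a part failure, then flag parts in that regime for inspection. The sketch below is purely illustrative; the synthetic features (flight hours, vibration, cycles), the made-up failure rule, and the stock classifier are all assumptions for demonstration, not the Air Force’s system or data.

```python
# Illustrative predictive-maintenance sketch: flag parts likely to fail soon.
# All features and failure labels here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic maintenance records: [flight_hours, vibration_rms, cycles].
n = 500
X = np.column_stack([
    rng.uniform(0, 2000, n),    # flight hours since overhaul
    rng.uniform(0.1, 3.0, n),   # vibration RMS reading
    rng.integers(0, 800, n),    # pressurization cycles
])
# Made-up rule: failures correlate with high hours plus high vibration.
y = ((X[:, 0] > 1500) & (X[:, 1] > 2.0)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score one in-service part against the learned failure pattern.
part = np.array([[1700.0, 2.4, 650]])
p_fail = model.predict_proba(part)[0, 1]
print(f"Estimated failure risk: {p_fail:.0%} -> "
      f"{'schedule inspection' if p_fail > 0.5 else 'no action'}")
```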

“[W]e’re still in the process of teaching the algorithms to be able to predict what’s there from the data and be as reliable as we would like it to be or as reliable as our teams of people [who] are doing that,” the Air Combat Command leader said Tuesday at a Defense Writers Group breakfast.

“Those tools are there. We’re starting to use them and experiment with them,” he said. “I don’t think, in general, they’re at the point yet where we’re confident in them operating without having a person following through on it, but I absolutely think that’s where we’re going.”

While Air Combat Command is best known for its high-performance fighter jets, Holmes also oversees the Air Force’s drone program and the stateside intelligence centers that process the video and other data collected from high above the battlefield.

Two years ago, the Pentagon stood up Project Maven, a small cell tasked with putting algorithms inside the computers that receive video captured by drones above the battlefield. Maven deployed its first AI-powered tools in 2017, and Pentagon officials soon declared the initial experiments a success. But the deployment also sparked an ethical debate about using decision-making machines on the battlefield. A group of Google employees objected to the company’s work on Project Maven.

In June 2018, Holmes called artificial intelligence “a big part of our future and you’ll continue to see that expanded.”

But the general’s comments Tuesday show that’s still a ways off. Holmes compared Project Maven to “teaching your three-year-old with the iPad” to pick out objects that are a certain color.

“I would watch my previous aide de camp’s three-year-old and he’d pick out all the green things,” Holmes said. “Green, green, green, not green. That’s what we’re doing with Maven. It’s car, car, car, not car.”

“You have to teach it and it learns and it’s learning, but it hasn’t learned yet to the point where you still don’t have to go back and have mom or dad looking over the shoulder of the three-year-old to say, ‘Yeah, those really are cars.’ Or ‘those really are green.’”
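
The loop Holmes describes can be sketched in a few lines of code: a model scores each detection, and anything below a confidence threshold goes back to a person, the “mom or dad looking over the shoulder.” The example below is a minimal illustration with synthetic feature vectors, a stock logistic-regression model, and an assumed 0.90 review threshold; none of it reflects Maven’s actual pipeline.

```python
# Illustrative human-in-the-loop "car / not car" labeling sketch.
# Features, model, and threshold are stand-in assumptions, not Maven's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for image feature vectors from drone video frames.
X_cars = rng.normal(loc=1.0, scale=1.0, size=(200, 8))
X_other = rng.normal(loc=-1.0, scale=1.0, size=(200, 8))
X = np.vstack([X_cars, X_other])
y = np.array([1] * 200 + [0] * 200)  # 1 = car, 0 = not car

model = LogisticRegression().fit(X, y)

REVIEW_THRESHOLD = 0.90  # below this confidence, a person checks the call

def classify(frame_features: np.ndarray) -> str:
    """Label a frame, deferring low-confidence calls to a human analyst."""
    p_car = model.predict_proba(frame_features.reshape(1, -1))[0, 1]
    confidence = max(p_car, 1.0 - p_car)
    label = "car" if p_car >= 0.5 else "not car"
    if confidence < REVIEW_THRESHOLD:
        # The "mom or dad" step: route the call to a human for confirmation.
        return f"{label} ({confidence:.2f}) -> queued for human review"
    return f"{label} ({confidence:.2f}) -> auto-accepted"

for features in rng.normal(size=(3, 8)):
    print(classify(features))
```

The “it learns” part of Holmes’s analogy would be the retraining step: labels corrected by the human reviewers are folded back into the training set, so over time fewer calls fall below the review threshold.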

___

© 2018 by National Journal Group, Inc.

Distributed by Tribune Content Agency, LLC.