
The US Army wants to reinvent tank warfare with AI

U.S. Soldiers in an M1A2 Abrams tank. (Sgt. Terry Rajsombath/U.S. Army)
October 28, 2019

Tank warfare isn’t as easy to predict as hulking machines lumbering across open spaces would suggest. In July 1943, for instance, German military planners believed that their advance on the Russian city of Kursk would be over in ten days. In fact, that attempt lasted nearly two months and ultimately failed. Even the 2003 Battle of Baghdad, in which U.S. forces had air superiority, took a week. For the wars of the future, that’s too slow. The U.S. Army has launched a new effort, dubbed Project Quarterback, to accelerate tank warfare by synchronizing battlefield data with the aid of artificial intelligence.

The project, about a month old, aims for an AI assistant that can look out across the battlefield, taking in all the relevant data from drones, radar, ground robots, satellites, cameras mounted in soldier goggles, etc., and then output the best strategy for taking out the enemy with whatever weapons are available. Quarterback, in other words, would help commanders do two things better and faster: understand exactly what’s on the battlefield, and then select the most appropriate strategy based on the assets available and other factors.
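To make that aggregation step concrete, here is a minimal sketch of multi-sensor fusion in Python. Everything in it, the field names, the 50-meter merge radius, the confidence-weighted averaging, is an illustrative assumption, not anything disclosed about Quarterback: detections of the same target reported by different sensors are merged into a single fused threat track.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        sensor: str        # e.g. "drone", "radar", "goggle_camera" (hypothetical labels)
        x_m: float         # east-west position in meters on a local grid
        y_m: float         # north-south position in meters
        confidence: float  # the sensor's own confidence, 0..1

    def fuse(detections, radius_m=50.0):
        """Greedy fusion: a detection within radius_m of an existing track is
        merged into it; otherwise it starts a new track. Each track keeps a
        confidence-weighted average position and the set of sensors that saw it."""
        tracks = []
        for d in detections:
            for t in tracks:
                if ((t["x"] - d.x_m) ** 2 + (t["y"] - d.y_m) ** 2) ** 0.5 <= radius_m:
                    w = t["weight"] + d.confidence
                    t["x"] = (t["x"] * t["weight"] + d.x_m * d.confidence) / w
                    t["y"] = (t["y"] * t["weight"] + d.y_m * d.confidence) / w
                    t["weight"] = w
                    t["sensors"].add(d.sensor)
                    break
            else:
                tracks.append({"x": d.x_m, "y": d.y_m,
                               "weight": d.confidence, "sensors": {d.sensor}})
        return tracks

    # Two sensors reporting the same target about 20 meters apart fuse into one track:
    dets = [Detection("drone", 100.0, 200.0, 0.9),
            Detection("radar", 115.0, 210.0, 0.6)]
    print(fuse(dets))  # one track, seen by both sensors

A production system would use proper tracking filters rather than this greedy pass, but the shape of the problem, many noisy reports in, one shared picture out, is the same.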

Just the first part of that challenge is huge. The amount of potentially usable battlefield data is rapidly expanding, and it takes a long time to synchronize it.

“Simple map displays require 96 hours to synchronize a brigade or division targeting cycle,” Kevin McEnery, the deputy director of the Army’s Next Generation Combat Vehicle Cross Functional Team, said on Thursday at an event at the National Robotics Engineering Center. One goal is to bring that down to “96 seconds, with the assistance of AI,” he said.

“All the vast array of current and future military sensors, aviation assets, electronic warfare assets, cyber assets, unmanned aerial, unmanned ground systems, next generation manned vehicles and dismounted soldiers will detect and geolocate an enemy on our battlefield. We need an AI system to help identify that threat, aggregate [data on the threat] with other sensors and threat data, distribute it across our command and control systems and recommend to our commanders at echelon the best firing platform for the best effects, be it an F-35, an [extended-range cannon] or a [remote-controlled vehicle],” McEnery said.

Ultimately, the Army is looking for a lot more than a data visualizer. They want AI to help with battle strategy, said Lt. Col. Jay Wisham, one of the program leaders. “How do you want to make decisions based on [battlefield data]? How do you want to select the most efficient way to engage a target, based on probability of hit, probability of kill? Do you have indirect fire assets available to you that you can request? Do you have [aerial] assets that you can request? Can I… send you my wingman… or, does the computer then recommend, ‘Red One, our wingman should take that target instead of you for x, y reasons?’ That goes back to that concept of how you make a more informed decision, faster. And who is making that decision could be a tank commander or it could be a battalion commander,” he said.
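Wisham’s probability-of-hit, probability-of-kill framing is, at bottom, an expected-value calculation. A minimal sketch of that recommendation step follows; the platforms and all of the numbers are invented for illustration, not drawn from the program:

    # Hedged sketch: pick the firing platform with the best expected effect,
    # scored as P(hit) * P(kill). All figures below are made up.
    platforms = {
        "F-35":                  {"p_hit": 0.90, "p_kill": 0.85, "available": True},
        "extended-range cannon": {"p_hit": 0.70, "p_kill": 0.60, "available": True},
        "robotic combat vehicle": {"p_hit": 0.55, "p_kill": 0.50, "available": False},
    }

    def best_platform(platforms):
        """Return the available platform with the highest P(hit) * P(kill)."""
        scores = {
            name: p["p_hit"] * p["p_kill"]
            for name, p in platforms.items() if p["available"]
        }
        return max(scores, key=scores.get) if scores else None

    print(best_platform(platforms))  # -> "F-35" under these made-up numbers

A real decision aid would fold in ammunition state, response time, collateral-damage limits and the other factors Wisham lists, but the core of the recommendation is this kind of scored comparison across available shooters.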

The Army’s future plans rely a lot not just on AI but also on ever-more-intelligent ground robots. Right now, a single U.S. Army operator can control about two ground robots. The Army plans to get that ratio to one human for every dozen robots. That will require those future ground robots to not just collect visual data but actually perceive the world around them, designating (though primitively) objects in their field of perception. Those robots will have to make decisions with minimal human oversight as well, since the availability of high-bandwidth networking is hardly certain.

During the event, which was put on by the Army Research Lab, Carnegie Mellon researchers unveiled experiments in which ground robots demonstrated that they could collect intelligence, maneuver autonomously and even decipher what it means to move “covertly,” with minimal human commands. The robots learn and apply labels to objects in their environment after watching humans.

Relying on those sorts of robots will require a deeper dependence on small and large artificially intelligent systems that reach conclusions via opaque neural-network or deep-learning reasoning. These are sometimes referred to as black-box learning processes because, unlike simple statistical models, it’s difficult to tell how neural nets reach the decisions that they do. In other words, commanders and soldiers will have to become more comfortable with robots and software that produce outputs via processes that can’t be easily explained, even by the programmers that produced them.

The way to develop that trust, said Wisham, is the same way humans develop trust in one another: slowly and with a lot of practice. “Most humans are not as explainable as we like to think… If you demonstrate to a soldier that the tool or the system that you are trying to enable them with generally functions relatively well and adds some capability to them… they will grow trust very, very rapidly.”

But, he said, when it comes to big decision aids, “that will be much harder.”

Anthony Stentz, director of software engineering at Uber’s Advanced Technologies Group, said, “You trust something because it works, not because you understand it. The way that you show it works is you run many, many, many tests, build a statistical analysis and build trust that way. That’s true not only of deep learning systems but other systems as well that are sufficiently complex. You are not going to prove them correct. You will need to put them through a battery of tests and then convince yourself that they meet the bar.”
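Stentz’s “battery of tests” has a standard statistical form: treat the system as a black box, count successes over many trials, and report a confidence bound on its success rate rather than an explanation of its internals. A small sketch of that analysis, with illustrative trial counts:

    import math

    def trust_bound(successes, trials, z=1.96):
        """Lower end of a 95% Wilson score interval for the success rate.
        The system under test stays a black box: trust rests on outcomes,
        not on understanding the mechanism."""
        if trials == 0:
            return 0.0
        p = successes / trials
        denom = 1 + z**2 / trials
        center = p + z**2 / (2 * trials)
        margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
        return (center - margin) / denom

    # Example: 9,860 correct outcomes in 10,000 trials. We can state with
    # roughly 95% confidence that the true success rate is at least this bound.
    print(round(trust_bound(9860, 10_000), 4))

Whether that bound clears “the bar” is then a policy decision about acceptable risk, not a property of the model itself.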

The surging availability of big data and exascale computing through enterprise cloud architectures is also hastening a new generation of neural-network and deep-learning solutions, one that is potentially more transparent. “In machine learning, there’s a lot of work going on precisely in this direction,” said Dieter Fox, senior director of robotics research at NVIDIA. “Techniques are being developed [to] inspect these networks and see why these networks might come up with a certain recognition or solution or something like that.” There’s also important emerging research in fencing off neural networks and deep-learning systems while they learn, including neural networks in robots: “How we can put this physical structure or constraints into these networks so that they learn within the confines of what we think is physically okay.”
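One common way to impose that kind of physical fence, though not necessarily the specific technique Fox has in mind, is to add a penalty to the training loss whenever the network’s output violates a known physical limit. A hedged PyTorch sketch, where the speed limit and the penalty weight are invented for illustration:

    import torch

    def constrained_loss(pred, target, max_speed=5.0, weight=10.0):
        """Task loss plus a penalty for predicted robot velocities that exceed
        a physical speed limit. max_speed (m/s) and weight are illustrative
        assumptions, not values from the article."""
        task_loss = torch.nn.functional.mse_loss(pred, target)
        speed = pred.norm(dim=-1)                        # magnitude of each predicted velocity
        violation = torch.clamp(speed - max_speed, min=0.0)
        return task_loss + weight * violation.mean()

The network remains a black box internally, but the penalty steers it to learn only within the confines of what is physically plausible, which is the spirit of the constraint research Fox describes.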

___

© 2018 By National Journal Group, Inc

Distributed by Tribune Content Agency, LLC.