US military changing ‘killing machine’ robo-tank program after controversy

An M2 Bradley fighting vehicle and a group of M1 Abrams tanks. (U.S. Army photo by Spc. Hubert D. Delany III / 22nd Mobile Public Affairs Detachment)

It was a frightening and dramatic headline: “The US Army Wants to Turn Tanks Into AI-Powered Killing Machines.” The story, published this week in Quartz, details the new Advanced Targeting and Lethality Automated System, or ATLAS, which seeks to give ground combat vehicles the ability to “acquire, identify, and engage targets at least 3X faster than the current manual process.”

The reaction to the story seems to have spooked the Army, which is now changing its request for information to better emphasize that the program will follow Defense Department policy on human control of lethal robots. Officials are also drafting talking points to reinforce the new emphasis.

The robot’s ability to identify, target, and engage doesn’t mean “we’re putting the machine in a position to kill anybody,” one Army official told Defense One.

A second Army official said the changes had been “suggested” by the Office of the Secretary of Defense to the AI task force of the Army’s Futures Command. The official didn’t know whether the changes had been made, but said they’d likely be made before the program’s March 12 industry day.

A Defense Department official said the language change might be followed by other unspecified ones.

The ATLAS program shows how much has changed since 2014, when the idea of armed ground robots was anathema to the U.S. military. The idea has seen ups and downs. In 2003, the Defense Department began to experiment with a small, machine gun-armed robot called SWORDS. In 2007, it was sent to Iraq. But the military ended the program after the robot began to behave unpredictably, moving its gun chaotically.

The military abandoned research on armed ground robots for years. A half-decade later, there had been more progress on doctrine governing battlefield robots than on the machines themselves. In 2012, the Defense Department issued Directive 3000.09, which says humans must have veto power over the actions of armed robots. (There can be special, limited exceptions.) That directive remains in force.
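
What "veto power" means in practice can be pictured with a short, purely hypothetical Python sketch: a loop in which software may recommend a target, but nothing is engaged without an explicit human decision. The class names, function names, and numbers below are illustrative, not drawn from Directive 3000.09 or any actual Army system.

    from dataclasses import dataclass

    @dataclass
    class TargetRecommendation:
        """A machine-generated candidate target (hypothetical structure)."""
        target_id: str
        confidence: float  # the system's confidence in the identification, 0.0-1.0

    def request_human_authorization(rec: TargetRecommendation) -> bool:
        """Ask a human operator to approve or veto the engagement.

        Under a human-veto policy this step cannot be skipped: anything other
        than an explicit "y" blocks the engagement.
        """
        answer = input(f"Engage {rec.target_id} (confidence {rec.confidence:.0%})? [y/N] ")
        return answer.strip().lower() == "y"

    def engagement_loop(recommendations):
        """The machine proposes; only a human decision releases a weapon."""
        for rec in recommendations:
            if request_human_authorization(rec):
                print(f"ENGAGE {rec.target_id}")   # stand-in for an effector command
            else:
                print(f"VETO   {rec.target_id}")   # stand-in for an audit-trail entry

    if __name__ == "__main__":
        engagement_loop([TargetRecommendation("T-01", 0.94),
                         TargetRecommendation("T-02", 0.58)])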

In 2014, there was still “no focused, near-term dialogue on this type of topic,” said Chris Jones, then director of strategic technology for iRobot, the company behind the famous PackBot. A particular technical sticking point was the difficulty of building targeting systems for ground robots.

But technology progressed. By 2017, the military was more comfortable with the idea and had integrated armed ground robots into some training exercises.

“The controversy over ATLAS demonstrates that there are continuing technological and ethical issues surrounding the integration of autonomy into weapon systems,” said Michael C. Horowitz, associate professor of political science at the University of Pennsylvania and a senior adjunct fellow at the Center for a New American Security.

“Lack of clarity concerning what would truly constitute an autonomous weapon system, even under the existing DoD directive, means it is not entirely clear the ATLAS program would be fully autonomous.”

Horowitz said the wording change sounded like a good step. “It is critical that any revisions to the ATLAS program not only clarify the degree of autonomy and the level of human involvement in the use of force, but also ensure that any incorporation of AI occurs in a way that ensures safety and reliability,” he said.

The incident comes after a separate controversy involving the Army’s relationship with Microsoft. On Feb. 22, a group of employees sent a letter to Microsoft leaders protesting the work that the company was doing with the Army on the Integrated Visual Augmentation System, or IVAS, a helmet display technology based on Microsoft’s HoloLens augmented-reality headset.

“While the company has previously licensed tech to the U.S. Military, it has never crossed the line into weapons development,” the letter says.

Since IVAS was to be the signature product of the Army’s new Futures Command, and since it involved a major, name-brand company, the protest drew headlines.

But Microsoft CEO Satya Nadella was quick to quash speculation that the protest would affect the company’s partnership with the military.

On Tuesday, Army Undersecretary Ryan D. McCarthy noted that IVAS is a training aid, not a weapon. “If you have a system where you can pipe in synthetic training, you could wear this same piece of equipment into combat. You could train with it at home and you could also collect data. So if you’re coming in to do the room clear, what’s the individual [meaning the wearer’s] heart rate? The marksmanship of the shots they took? So you can get performance data on the individual.”
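
The kind of performance data McCarthy describes could be imagined, very roughly, as a simple per-soldier record like the hypothetical Python sketch below; the field names and numbers are invented for illustration and do not come from the IVAS program.

    from dataclasses import dataclass, field
    from statistics import mean
    from typing import List

    @dataclass
    class DrillRecord:
        """One soldier's data from a single room-clearing drill (illustrative fields)."""
        soldier_id: str
        heart_rate_bpm: List[int] = field(default_factory=list)  # samples taken during the drill
        shots_fired: int = 0
        shots_on_target: int = 0

        def marksmanship(self) -> float:
            """Fraction of shots that hit; 0.0 if no shots were fired."""
            return self.shots_on_target / self.shots_fired if self.shots_fired else 0.0

        def summary(self) -> str:
            avg_hr = mean(self.heart_rate_bpm) if self.heart_rate_bpm else 0
            return (f"{self.soldier_id}: avg heart rate {avg_hr:.0f} bpm, "
                    f"marksmanship {self.marksmanship():.0%}")

    # What a post-drill readout might look like, with fabricated numbers
    record = DrillRecord("soldier-12", heart_rate_bpm=[132, 141, 155, 149],
                         shots_fired=8, shots_on_target=6)
    print(record.summary())  # soldier-12: avg heart rate 144 bpm, marksmanship 75%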

A key area of controversy is what is sometimes called Rapid Target Acquisition, or RTA — a method of finding targets, putting little red digital boxes around them on a screen, and putting a bullet, missile, or bomb into that box. It’s an emerging capability fraught with difficult ethical considerations and complexity: Is the data that goes into the process of box-drawing correct? Is the intelligence collection behind that data good, or was it gleaned from unreliable sources? Where was human supervision during the process? It’s not clear what role RTA plays in either IVAS or ATLAS.
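
As a rough, hypothetical illustration of the box-drawing step and the human-supervision question, the Python sketch below sorts machine-drawn boxes by confidence and holds the uncertain ones for a human to review; the threshold, labels, and numbers are invented and describe neither ATLAS nor IVAS.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Detection:
        """A machine-drawn 'red box' around a candidate target (illustrative)."""
        label: str
        confidence: float                   # detector score, 0.0-1.0
        box: Tuple[int, int, int, int]      # (x_min, y_min, x_max, y_max) in pixels

    def triage(detections: List[Detection], review_threshold: float = 0.9):
        """Split detections into those displayed at once and those held for human review.

        The 0.9 threshold is invented for this sketch. How such a threshold is set,
        and how trustworthy the data behind the detector is, is exactly where the
        ethical questions above come in.
        """
        displayed, held = [], []
        for det in detections:
            (displayed if det.confidence >= review_threshold else held).append(det)
        return displayed, held

    # Fabricated detections standing in for a detector's output
    frames = [Detection("vehicle", 0.97, (40, 60, 180, 200)),
              Detection("person", 0.62, (300, 90, 340, 210))]
    displayed, held = triage(frames)
    print(f"{len(displayed)} box(es) displayed, {len(held)} held for human review")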

What these two incidents illustrate is that public concern about military use of AI is so high that it will occasionally manifest in protests or statements of objection based more on speculation about what the military is doing than on actual fact. The incident with Microsoft, in particular, shows that opinion in the mainstream tech community sometimes runs unfairly against the military community.

How leaders of companies like Microsoft and Amazon navigate that space is an open question.

On its face, the ATLAS controversy represents a public relations setback in the military’s efforts to reach out to the tech community. But the Army’s final response might also show that military leaders are sensitive to the issue and are capable of responding quickly to criticism.

“While outside groups will undoubtedly have concerns about the ATLAS program, even if the requirements are altered, the U.S. military is attempting to take the challenge of AI seriously across several dimensions,” said Horowitz.

As military adoption of AI moves from the air domain to land, from drones and fighter jets to helmets and tanks, it will also enter a foggier phase. It’s one thing to apply artificial intelligence to aerial surveillance and quite another to put it alongside troops: soldiers tasked with bursting through a door with limited understanding of what’s on the other side, especially in confusing urban-warfare scenarios.

Directive 3000.09 is a poor guide for what to do in all of those instances. The Defense Department knows this and has begun drafting its own list of ethical principles for the future use of artificial intelligence in war.

“Between the efforts of the Defense Innovation Board, the Joint Artificial Intelligence Center, the new National Security Commission on Artificial Intelligence, and others, now is the time to have these important conversations,” said Horowitz.

Artificial intelligence in the hands of ground troops has the potential to make the task of charging through the door not only safer for the soldier but potentially for the people on the other side of the M4, if, in fact, soldiers can use it to rapidly differentiate real threats from fake ones in confusing, high-energy situations. But questions about design and implementation will persist. Some will be more valid than others.

Military leaders, in explaining their perspective on arming and equipping soldiers, are fond of saying that they never want their troops to face a fair fight. Translation: achieving overmatch is not optional. But as new capabilities come online, capabilities like those outlined in the ATLAS proposal, commanders and officials will have to make hard choices about how much speed, firepower, and lethality, how much unfairness, they are willing to part with.

However much, it will likely be more than the adversaries they are facing.

___

© 2018 National Journal Group, Inc. All rights reserved.

Distributed by Tribune Content Agency, LLC.