During a recent television interview, the U.S. Defense Department's Chief Digital and Artificial Intelligence Officer, Craig Martell, explained that artificial intelligence will never make military decisions without a "responsible human."
“It’s very clear to all of us that there’s always a responsible human who makes the decision,” Martell said in his interview with CNN. “It will always be the case that somebody has decided that we are going to leverage a particular technology and it will always be the case that someone will be responsible. It will be a responsible agent. We don’t imagine a world where machines are making these sorts of decisions on their own.”
Martell noted that the government is updating its responsible artificial intelligence guidelines, which were previously outlined in a 2012 document.
“I’m not sure it’s been released yet, but if not, it’s going to be released very, very soon,” he said.
While Martell said that, from a scientific perspective, the massively popular generative artificial intelligence and large language models are "fascinating," he warned that the technology is not at a stage where people can have full confidence in it.
While Martell acknowledged that generative artificial intelligence has "moved the field decades forward," allowing the technology to reach a level not previously thought possible, he warned that artificial intelligence does not generate "factual coherent text" every time.
“When I say I’m scared to death, what I really meant is the natural proclivity to believe things that speak authoritatively, and these things speak authoritatively, so we just believe them,” he said. “And that makes me afraid, because from a science perspective, I love where we are. From a product perspective, I don’t believe we’re ready.”
On one hand, Martell indicated that artificial intelligence is "pretty close" to being able to reliably generate "novel ideas." On the other hand, he noted that the Pentagon would have to be confident that artificial intelligence could reliably "tell the truth like 99.999% of the time" before it was used in new technology deployed on the battlefield.
“We have to look at this use case by use case and tackle each of the use cases in turn,” he said.