
The Pentagon’s $2 billion gamble on artificial intelligence

Mohamed Hakim

It’s the chilling plot line to every science fiction movie about robots in the future: Once they start thinking for themselves, the machines turn on their human creators.

Think of the HAL 9000 in "2001: A Space Odyssey," or the replicants in "Blade Runner," or the hosts in "Westworld."

These days the Pentagon is doing a lot of thinking about the rapidly advancing field of artificial intelligence, and in particular its subfield of “machine learning”: developing computer algorithms that will allow cars to drive themselves, robots to perform surgery, and even weapons to kill autonomously.

The race to master artificial intelligence is the No. 1 priority of the Defense Advanced Research Projects Agency, the tiny organization of just over 200 employees that was instrumental in developing stealth technology, high-precision weapons, and the Internet.

“In reality, over about the last 50 years, DARPA and its research partners have led the way to establishing the field of artificial intelligence. We are not new to this game,” said DARPA Director Steven Walker at the agency’s 60th anniversary symposium in September.

Early artificial intelligence research produced rule-based “expert systems” and, later, statistical learning capabilities, but the next leap in AI is what’s known as the “third wave”: technologies focused on “contextual reasoning,” or what we humans call “thinking.”

To that end, at the conference outside Washington, Walker announced an initiative to invest up to $2 billion over the next five years in new research efforts aimed at advancing artificial intelligence, to explore “how machines can acquire human-like communication and reasoning capabilities.”

The potential for computer-assisted thinking to turn America’s future troops into cyborg warriors with abilities beyond normal human limitations is not lost on Gen. Paul Selva, the vice chairman of the Joint Chiefs and one of the Pentagon’s deep thinkers on how artificial intelligence could help warfighters make sense of the “mountains of data” on the battlefield.

“It should be able to let us see faster, sense faster, decide faster, and act faster,” Selva told defense reporters at a roundtable this year. “If you can’t do all those things, you’re not actually taking advantage of the technology. You’re just admiring it.”

Some of what artificial intelligence could do is fairly basic, such as speeding up mundane tasks that consume a lot of human resources, like reviewing records for background checks or analyzing hours of drone footage.

But some tech companies, including Google, are recoiling at the idea their AI tools might be assisting in the business of war.

Last week, Google pulled out of the competition for a $10 billion Pentagon cloud-computing contract because, the company said in a statement, it “couldn’t be assured that it would align with our artificial intelligence principles,” which include a pledge not to build weapons or other systems intended to cause harm.

And as weapons get smarter, the moral questions get stickier.

What ethicists worry about is the rise of LAWS, short for Lethal Autonomous Weapons Systems, such as the fictional Skynet in the Terminator movies. A more realistic example would be a fully autonomous successor to today’s killer drones, selecting and striking targets with no human pilot at the controls.

“The general idea is that a LAWS, once activated, would, with the help of sensors and computationally intense algorithms, identify, search, select, and attack targets without further human intervention,” writes Regina Surber, scientific adviser for a peace foundation based in Geneva.

In a paper published this year, Surber laid out the arguments for and against.

On the one hand, she points out that autonomous “thinking” systems would not depend on communication links, could operate at increased range for extended periods, would reduce the number of humans needed for some military operations, and could save lives on both sides.

“Their higher processing speeds would suit the increasing pace of combat; by replacing human soldiers, they will spare lives; and with the absence of emotions such as self-interest, fear or vengeance, their ‘objective’ ‘decision-making’ could lead to overall outcomes that are less harmful,” Surber wrote.

On the other hand, the more artificial intelligence substitutes for human judgment, the greater the risk it could desensitize warfighters to the value of human life.

“Further, humans may no longer be able to predict who or what is made the target of an attack, or even explain why a particular target was chosen,” Surber argued. “As a machine and not a human being ‘decides’ to kill,” she wrote, “the physical and emotional distance between the programmer or engineer and the targeted person may generate an indifference or even a ‘Gameboy Mentality’ on the side of the former.”

It’s something that Defense Secretary Jim Mattis has considered as he’s watched his Pentagon plunge headlong into the race to obtain a competitive edge in machine learning systems.

Mattis told reporters traveling with him this year that taking the human out of the loop could alter what he has always considered an immutable tenet of war, namely that while the technologies change, its fundamental nature does not.

Mattis notes that today’s drones are misnamed when they are called unmanned aerial vehicles instead of remotely piloted aircraft.

“It may not have a person in the cockpit, but there's someone flying it. There's someone over their shoulder. There's actually more people probably flying it than a manned airplane,” he said. “It’s not unmanned.”

But if sentient machines replace humans, it could change everything.

“If we ever get to the point where it's completely on automatic pilot and we're all spectators, then it's no longer serving a political purpose, and conflict is a social problem that needs social solutions, people, human solutions,” Mattis said.

“I'm certainly questioning my original premise that the fundamental nature of war will not change,” he said. “You have to question that now.”