
The Military-Industrial-Googleplex is Creating Artificial Intelligence

Scott Cleland, Contributor

What could possibly go wrong with Google creating military-grade artificial intelligence (AI)?

Stephen Hawking, a world-leading scientist, warned on the BBC that “the development of full artificial intelligence could spell the end of the human race” in part because it involves “developing weapons we cannot even understand.”

Elon Musk, co-founder of Tesla Motors and PayPal, told The Guardian: “with artificial intelligence we are summoning the demon,” and that it’s “our biggest existential threat.”

“This is not a case of crying wolf about something I don’t understand,” Musk said.

Google Chairman Eric Schmidt dismissed such fears at a Financial Times event: “These concerns are normal; they’re also to some degree misguided.” He explained something bad could happen only “while we’re not watching,” and offered the reassurance that “we had the power cord in our hands.”

What Mr. Schmidt did not say is that Google may be the only entity in the world that is purposefully assembling nearly all of the component parts necessary to create a global military-grade artificial intelligence.

Consider the evidence that Google already has the key component parts in place.

Google is arguably the world leader in artificial intelligence because of its broad leadership in machine learning.

Matt Rogers, co-founder of Google’s Nest, explains Google’s essence: “Google is about big data, machine learning, and operational efficiency,” according to Fast Company.

“Everything in the company is really driven by machine learning,” said Matthew Zeiler, the CEO of visual search startup Clarifai, who worked for Google on Google Brain – the company’s corporate machine learning effort, according to Wired.

Google co-founder Sergey Brin explained: “the brain project is really machine learning… we’ve been using it for the self-driving cars. It’s been helpful for a number of Google services. And then, there’s more general intelligence, like the DeepMind acquisition that — in theory — we hope will one day be fully reasoning AI… you should presume that someday, we will be able to make machines that can reason, think and do things better than we can,” according to a fireside chat with Khosla Ventures.

Google has quickly acquired or hired many of the world’s top artificial intelligence and machine learning experts, including the teams behind DeepMind, Dark Blue Labs and Vision Factory.

Google executive Jeff Dean boasts that Google has “probably 30 to 40 different teams at Google using our [AI] infrastructure,” according to Wired. Dean also told Wired that Google’s AI models become more accurate the more data they process, so they are scaling their AI models to process billions rather than millions of data points.

Google is also unique in the breadth, depth and global reach of its data collection, infrastructure and software applications, which are the necessary building blocks of military-grade artificial intelligence.

With a unique “mission to organize the world’s information and make it universally accessible and useful,” Google searches 60 trillion unique URLs to build its 100-million-gigabyte index of information. The world’s largest machine-readable knowledge base, containing 1.6 billion facts, resides in Google’s Knowledge Vault.

And only Google has auto-translation services for the world’s top eighty languages covering 97 percent of the world’s population.

The world’s largest fully integrated data-center system, with server points of presence in 68 percent of the world’s countries, is Google’s, according to USC research.

Five of the world’s top six billion-user web platforms (Search, Android, Maps, YouTube, and Chrome) are Google’s.

And 98 percent of the top 15 million websites in the world use Google Analytics to track their users.

Google is also a leading U.S. defense contractor.

From Salon’s exposé, “Google’s Secret NSA Alliance,” we learned Google has had a “cooperative research and development agreement” with the National Security Agency since 2010. DOD’s NSA can use the facilities that Google and the NSA build together to monitor Internet traffic, and Google gets “the exclusive patent rights to build whatever was designed.”

Google Maps has long been the dominant mapping application used by U.S. military and intelligence services. In 2010, Fox News reported that the National Geospatial-Intelligence Agency sought to grant a no-bid mapping contract to Google.

In 2012, Google hired Regina Dugan, then Director of DOD’s Defense Advanced Research Projects Agency (DARPA).

In the last 18 months, Google has bought eight robotics companies, including military robotics contractor Boston Dynamics.

In the last year Google has acquired satellite maker Skybox Imaging and high-altitude drone maker Titan Aerospace.

Google’s well-known driverless vehicles have great military value as unmanned supply vehicles, or “robautos” that could keep soldiers out of harm’s way during combat.

Last month, Google leased NASA’s Moffett Federal Airfield for 60 years to house and test its drones, planes, and robots.

Evaluating all this as a whole, one can see Google’s emerging indispensable value to the U.S. military going forward.

Google’s new robotics chief, James Kuffner, is credited with coining the term and concept “cloud robotics,” which means that robots’ “brains” actually reside in Google’s distributed data centers, according to the WSJ. The value of this robot machine learning is that when one robot learns to recognize or use a particular object, every other cloud-connected robot immediately learns it as well.

That means distributed robots, self-driving vehicles and pilotless drones can all “learn” immediately from one another once integrated into Google’s cloud.
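
To make that concept concrete, here is a minimal, purely illustrative sketch of the cloud-robotics idea: a shared, cloud-hosted store of learned skills that every connected robot can read from and write to, so something learned by one robot is instantly available to all. The class and method names (CloudBrain, publish, lookup) are hypothetical and are not Google’s actual APIs.

```python
# Illustrative sketch only: a shared "cloud brain" that connected robots
# publish learned object models to and look them up from. Names are invented.

class CloudBrain:
    """Hypothetical cloud-hosted store of learned object models."""
    def __init__(self):
        self._object_models = {}  # object name -> learned representation

    def publish(self, object_name, model):
        # One robot uploads what it has learned.
        self._object_models[object_name] = model

    def lookup(self, object_name):
        # Any other connected robot can immediately reuse that knowledge.
        return self._object_models.get(object_name)


class Robot:
    def __init__(self, robot_id, cloud):
        self.robot_id = robot_id
        self.cloud = cloud

    def learn_object(self, object_name):
        # Stand-in for on-board training (e.g., fitting a recognizer locally).
        model = f"recognizer-for-{object_name}-trained-by-{self.robot_id}"
        self.cloud.publish(object_name, model)

    def recognize(self, object_name):
        # No local training needed if another robot has already learned it.
        return self.cloud.lookup(object_name)


if __name__ == "__main__":
    cloud = CloudBrain()
    scout = Robot("scout-1", cloud)
    courier = Robot("courier-7", cloud)

    scout.learn_object("fuel canister")        # one robot learns an object...
    print(courier.recognize("fuel canister"))  # ...and every robot "knows" it
```

The point of the sketch is the architecture, not the learning itself: because the “brain” lives in the shared data center rather than on each machine, whatever one unit learns propagates to the whole fleet at once.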

Combine all this with Google’s unmatched mapping and tracking capabilities, and Google is on a path to eventually create an exceptionally unified military Command, Control, Communications, and Intelligence (C3I) capability for 21st-century warfare.

Google is on a path to produce, deploy and control a potential soldier-less army of robots, vehicles and drones (near-full warfare automation) that eventually could greatly minimize putting American personnel in harm’s way during future combat.

The open question is who truly controls what is potentially the most complex control system ever built, and how to ensure that the U.S. government, and not its contractor, is truly in command of this exceptional C3I military capability.

What could possibly go wrong with Google creating military-grade artificial intelligence?

One needs to understand why Hawking, Musk and many other AI experts are truly frightened by the potential uncontrollability of machine learning and artificial intelligence as it relates to warfare.

At the simplest level, Google’s DeepMind researchers are experimenting with what happens when programmers do not tell a computer how to accomplish a task, but only provide it with the end goal they want it to achieve.
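
The general technique being described is reinforcement learning, in which the programmer supplies only a reward for reaching the goal and the system discovers its own behavior through trial and error. The toy example below is a minimal sketch of that idea, not DeepMind’s actual code; the corridor environment, reward values and parameters are all invented for illustration.

```python
import random

# Illustrative sketch of goal-only programming: states 0..4 along a corridor,
# with the goal at state 4. The programmer specifies only the reward for
# reaching the goal, never the sequence of moves to get there.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left / step right

# Q-table: the agent's learned estimate of future reward per state-action pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(300):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0  # the only "instruction" given
        # Q-learning update: improve the estimate purely from experience.
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The agent ends up always stepping right (+1) toward the goal, a policy it
# discovered on its own; nothing in the code ever told it how to get there.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```

Nothing in the sketch tells the agent to “move right”; the behavior it ends up with is discovered entirely from the reward signal, which is exactly why such systems can be hard to predict or audit in advance.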

DeepMind required Google to set up an ethics board to oversee its research because it understood it was fundamentally pursuing “ends-justify-the-means” computer programming: enabling a computer to pursue a set goal with no ethical, moral, or societal limits.

People should be afraid of this irresponsible Google approach to artificial intelligence because programmers can neither predict what an AI program like this will ultimately do, nor can they understand how it did it after the fact. If the smartest programmers can’t understand their own AI creations, they by definition don’t control them.

It gets worse.

Google’s most important engineering values have long been speed and scale. Google’s engineering approach is to iterate rapidly, following what Google Chairman Schmidt calls Google’s “launch first, fix later” creative ethos.

Google’s official security philosophy is to crowdsource its software security, delegating to users the task of finding many of the flaws and much of the malware in Google software.

Google has also earned a reputation for disrespecting national sovereignty, rule of law, accountability, privacy and property.

What could possibly go wrong with Google creating military-grade artificial intelligence?

It doesn’t take an algorithm, just common sense, to figure out the many ways this emerging future could go horribly wrong.

Forewarned is forearmed.

Scott Cleland is President of Precursor LLC, a consultancy serving Fortune 500 clients, some of which are Google competitors. He is also author of “Search & Destroy: Why You Can’t Trust Google Inc.” Cleland has testified before both the Senate and House antitrust subcommittees on Google and also before the relevant House oversight subcommittee on Google’s privacy problems.