Opinion

The Terminator Is Coming. So What Do We Do About It?


In recent but separate interviews, three notably credible and exceptional minds have warned us of the risks of artificial intelligence (AI): Stephen Hawking, Elon Musk, and Bill Gates. Each has expressed concern that robots (more likely software than the big, strong, physical machines made famous by Arnold Schwarzenegger) pose a very serious, if not existential, risk to mankind.

Is the Terminator coming? I don’t know. Do some of the possibilities of AI frighten me? A little. But I try not to lose too much sleep over things that are still far in the future, that I can’t control, and that I can’t avoid.

But that doesn’t mean we shouldn’t be talking about AI today. Because if the Terminator is coming in 50 years, today is when we must talk about it.

When I speak on innovation, there’s a question I often ask: “What is the best way to predict the future?” Nine out of ten people answer, “By looking at the past.” I completely disagree.

In my experience, the best way to predict the future is actually to look to science fiction. We humans have amazing imaginations, but even more impressively, when we set our minds to something we usually figure it out.

Nothing speaks to the way science fiction has inspired us more than the Tricorder Project. When I ask audiences if they remember the “Beam me up, Scotty” transporter from Star Trek, almost everyone of a certain age or proclivity does. But, when I ask whether they think it can become a reality in the relatively near future, very few believe it can. Yet the Tricorder Project tells us we are on our way (pun intended).

Travel at the speed of light? Mathematically, we already know how to do it. Transport yourself from one place to another? Three years ago, Michio Kaku was explaining how we expect to be able to teleport first inorganic matter, then organic matter, and eventually people. The inanimate version is here already.

AI is certainly advancing rapidly, with a Russian-made program last June becoming the first to pass the Turing Test. The Turing Test is 65 years old and gauges whether a computer can trick a human being into thinking it is another human. Some use it as a proxy for the effectiveness and maturity of artificial intelligence. Passing it is a major milestone, like being the first to run a four-minute mile, a feat once considered impossible.

While Hawking and Gates spoke about the enormous potential dangers we face with AI, Musk goes so far as to offer a solution: government regulation.

On the risk, I certainly agree. On the solution — not at all.

Recently, the question was posed: “Is AI good or bad for humanity?” The mistake in that question is similar to the mistake that’s been made for well over a decade on global warming, namely, “Is man responsible for global warming, and if so, how much of it is manmade?” Some would like to argue “none at all.” That’s rather absurd. There’s no doubt that humankind has developed a big enough global footprint to affect the environment. How big an effect? Who knows, but more importantly, who cares? Changing the behavior of seven billion self-interested, short-term-thinking people, or dozens of even more self-interested nations, simply isn’t going to happen.

The situation with AI is very similar. Trying to regulate the development of artificial intelligence is a fool’s game. There are so many military and intelligence applications of AI that it would be like countries agreeing not to spy on each other. When has that ever worked? In fact, the “wink-wink” reality of the spy world is so ubiquitous that every country knows it’s subject to being spied on by every other, including its closest allies.

Sure, politicians may feign outrage publicly for political purposes, but behind closed doors, it’s all about the wink. Any agreement to limit the proliferation of AI for military and intelligence purposes would be like an agreement to get rid of nuclear weapons. Even after the agreement, every nuclear country would keep a few hidden, just in case (wink, wink). That’s why it will never happen.

Of course, even defining AI would be a regulatory impossibility. Are we really going to limit the kind of software and computer development that’s done in university labs? And if we did, do we believe every other country in the world would follow suit? The idea of regulation is absurd.

Instead, we have to protect ourselves in a manner closer to how we protect ourselves against cybercrime and malicious hacking. We need to stay many steps ahead (though we’re not doing so well at proving that we can), with software-based defensive systems similar to the ones being created at DARPA to counter cybercrime.

Already, certain sophisticated cyber weapons are designed to take on a life of their own after they infiltrate a target’s systems, because once inside, their operators often can’t communicate with or control them. AI will also be a tool of cyberattack. That’s why it should be dealt with in very similar ways and by very similar agencies.

Having a philosophical debate about AI, or passing token and ineffective laws, is like carrying a sign that says, “No more nukes.” It accomplishes nothing and distracts us from the really important conversation we need to have. Let’s talk instead about how innovation, not regulation, can help us harness the power of AI and protect us from its unintended consequences.