Op-Ed

NEWSFLASH, PEOPLE: Human Drivers Are Much Safer Than Soulless ‘Autonomous’ Vehicles


Mike Nalepka, CEO, VideoProtects.com

Technology is on a fast track toward the driverless or “autonomous” car and truck. Tens of billions of dollars are being spent to develop the “inevitable” brave new world of vehicles driving the roads by algorithm, programming, and artificial intelligence (AI). It is all so 21st century, forward-thinking, and enlightened. We are told we don’t have enough drivers to fill commercial demand and that the shortage will only grow worse each year. We’re told humans are prone to bad and risky driving habits — that we use smartphones and text while driving, that we drink and drive, and on and on.

Basically, the message is: WE are dangerous on the road. And the answer, we’re now told? The fully autonomous vehicle. It is coming, and we need to get over it and jump on the bandwagon. Seriously! Really?

We all need to hit the collective pause button. Let’s do a bit of critical thinking here.

Please humor me. Allow me to take you back to one of my favorite episodes of the original Star Trek series, titled “The Return of the Archons.” In the episode, the USS Enterprise is on a mission to find out why the USS Archon was reportedly lost 100 years earlier. On planet Beta III, the Enterprise crew finds an entire world controlled by a reclusive dictator known as Landru. They discover that Landru is not really alive in the human sense but that he built and programmed a computer 6,000 years earlier as a mirror image of his personality so he could reign forever. Captain Kirk and Mr. Spock finally discover there is no human Landru. When they shoot through his hidden oracle, they find a mainframe computer, and this exchange takes place:

Landru: “I am Landru! You have intruded!”

Captain Kirk: “Landru died 6,000 years ago.”

Landru: “I am Landru. I am he. All that he was, I am. His experience, his knowledge….”

Captain Kirk: “But not his wisdom. He may have programmed you, but he could not have given you a soul. You are a machine.”

The back-and-forth continues, with Landru finally having to admit, “…but I reserve creativity to ME.”

In the end, Captain Kirk outwits Landru with human, soul-inspired intuition and wisdom, and Landru ends up destroying itself. Kirk gets Landru to realize that by denying everyone else creativity, it was evil. Landru didn’t have a soul or wisdom, only programming.

And here comes my point. Computers and programming make good and efficient servants. But we should not allow them to freely control cars and trucks that are not on tracks, on monorails, or operating within concrete-barricaded special lanes. Autonomous vehicles simply cannot be allowed to mingle with normal traffic flows, human-driven vehicles, or pedestrians — ever.

The autonomous vehicle can very easily become a lethal weapon with only a minuscule programming error, a glitch, an act of sabotage, an intentional virus, or a new event or circumstance for which it has not been programmed. Programming cannot and will not ever replace human intuition, wisdom, or our souls. Those things are what make us unique and special in the universe.

We as humans are not perfect. We make mistakes. But we can be held accountable by law enforcement, the justice system, and legislation. Our laws are based on standards of human decency and morality established over thousands of years. So who, then, would be to blame when autonomous vehicles cause injuries or fatalities? Guess who? The companies that made and sold the autonomous trucks and cars, as well as all the technology associated with them. Can anyone say “astronomical liability settlements”? Or “self-driving trial law”?

What would it be like if autonomous, driverless vehicles operated freely on public roads, commingled with human-driven vehicles and pedestrians? Let’s also factor in emergency vehicles, police cars, erratic drivers, weather conditions, times of day, drunk drivers, teen drivers, elderly drivers, kids darting after errant balls, wild animals, family dogs, angry rush-hour drivers merging onto freeways, school buses, crossing guards directing traffic, broken-down cars, buses, bus stops, joggers, bikers, motorcyclists, moped enthusiasts, and skateboarders. (Don’t forget the hand signals we give other drivers, too.)

The list of hazards which human drivers deal with regularly is truly endless.

Here are just a few examples of “what would happen if________” scenarios to think about:

  • A mother and her children are walking between two cars with about two feet of space between their bumpers. One of the cars is autonomous and idling. Their lives depend on the programming of the autonomous car. Would you take that chance with your kids’ lives?
  • You are yielding the right of way, letting other drivers into traffic, and the autonomous vehicle slows down too. You speed up, it speeds up. You slow down, it slows down. This keeps repeating. Then you get rear-ended. Who is at fault?
  • An autonomously driven tractor-trailer hits an ice patch and starts jackknifing. There are a lot of people in its path on both sides of the road: two elderly women walking, four children sitting at a bus stop with their mothers, and a man walking his dog. The truck starts to regain control but can safely avoid only two of the three groups. Which one is least important to the programming? (Now the programming is making moral and ethical decisions.)
  • A runaway car was plowing over street signs and hedges on a busy street because the elderly driver had died of a heart attack. Everybody was jumping out of the way, and cars were swerving to avoid it. What if an autonomous car had a chance to put itself between the runaway car and children about to be run over? What would the programming do versus what a human driver would do? (I actually witnessed this in Florida.)
  • What about giving motorcyclists, bicyclists, pedestrians, or animals plenty of right of way and courtesy? What does the autonomous car do?
  • I am driving in moderate snow, but the roads are icing over. I see that the car next to me is autonomous, without a driver. Do I: a) not worry, or b) slow down and let the driverless car get way ahead of me? I would definitely select b). Why? Do I really trust the driverless car to handle hazardous driving? Heck no! I’m getting as far away from it as possible.
  • You are about to board a plane and find out that the airplane will be flying autonomously today, without a pilot, using the airline’s new pilotless technology. Do you board? (Count me out on that one!)

The bottom line? Who will make the moral and ethical driving decisions in vehicles equipped with autonomous technology? It certainly won’t be a person behind the wheel, because there won’t be one. It will be, of course, the programming, algorithms, and artificial intelligence in the onboard computer system. Doesn’t that sound very similar to Landru — the machine in Star Trek?

But wait. There’s more. If programming, artificial intelligence, and algorithms were so insightful, wise, and discerning, then why did Google just hire 10,000 human reviewers — that’s right, 10,000! — to reduce the amount of “problematic content” on its YouTube video platform?

If Google can’t even create programming to review videos automatically without human reviewers, why would we believe that autonomous vehicles can be programmed to be safe?

I’ll just say it. It is not possible.

We are now going through a phase (or craze) of wanting to believe that somehow, someday soon, the autonomous car will be an everyday part of driving on all roads. In the end, that will not be allowed to happen. What will happen is that we will finally land on various forms of “driver-assist” self-driving technology in which a human driver will still be in the driver’s seat, behind the steering wheel, able to take control at any time. That will allow critical life-and-death driving decisions to be made by a real human driver, not by computer programming.

As Captain Kirk said: “He may have programmed you, but he could not have given you a soul. You are a machine.”

Michael Nalepka is the CEO of VideoProtects.com and a leading industry expert in vehicle video recording, the Internet of Things and the “Connected Car.”


The views and opinions expressed in this commentary are those of the author and do not reflect the official position of The Daily Caller.