February 25, 2021

The Missing Link Between AI and AV: Fear


Scotty: Hello, computer.

Dr. Nichols: Just use the keyboard.

Scotty: Keyboard. How quaint.

— Star Trek IV: The Voyage Home

MADISON, Wis. — Whenever I’m tasked to help EE Times cover “smart” devices, I remember my first encounter with the “smart home.” It popped up at the Consumer Electronics Show at least two decades ago, in the form of a preposterous Microsoft concept called “Bob.” Needless to say, Bob died of stupidity.

“Hello, computer.”

I also recall the scene in Star Trek IV in which Scotty, engineer from the future, tries to operate a 1985 computer. He’s flummoxed when the computer, unlike the system on the starship Enterprise, doesn’t respond to his voice. That wry moment of techno-comedy is now 35 years old. Today, we’ve progressed to a point where we have rudimentary voice-activated “smart” home devices that respond to simple commands — “Play Twisted Sister for me, Alexa” — but only if you enunciate very slowly and don’t mind if the twisted sisters turn out to be Patty, Maxene and LaVerne singing “Beat Me Daddy, Eight to the Bar.”

The Enterprise computer was introduced conceptually in 1966, five years before John Blankenbaker’s crude Kenbak-1 laid claim to being the first personal computer. The element in the Enterprise computer that was beyond the Kenbak’s ken and remains out of the reach of computer scientists today — and for the foreseeable future — is that Gene Roddenberry conceptualized it to be “smart” the way people are smart. When Scotty or Captain Kirk says, “Computer,” the machine has ears to hear and a (sexy, female) voice to answer. Her mind understands complicated instructions and responds quickly with language, information and explanation that simulates human thought and conversation. She doesn’t require Scotty to completely reorganize his intellect, dumb down his communication skills and squeeze his intuitive and imaginative “analog” mind into a digital box. Scotty doesn’t need to go back to high school and study touch-typing.

Nowadays, as I follow developments in automotive technology, specifically in the aspirational realm of autonomous vehicles (AV), I go back to Scotty and that dimwit ’80s computer in Star Trek IV. Wielding breathtaking advances in artificial intelligence (AI), Tesla, Waymo, Ford and Toyota are working feverishly on sensor technologies and computer software that can instantly discern the difference between a windblown plastic bag in the road and a runaway baby carriage. Alas, they haven’t quite made the leap.

They’re trying hard, but their technology is their handicap. They suffer the same dilemma Scotty faced when the 1985 computer wouldn’t listen. No matter how swift and sophisticated the on-board computer in a $100,000 “Level 5” car, it still thinks in a straight line composed of a huge but finite number of ones and zeros.

This poses a paradox. The average driver cannot “compute” at a tiny fraction of the speed of a run-of-the-mill PC. But, remarkably and inexplicably, this average driver — even without a college education — can think faster than any computer yet conceived. He or she is equipped with a mind that accommodates threes and fours, tens and hundreds as well as ones and zeros. It even understands fractions. It doesn’t need a terabyte of input to tell the differences, in a split second, between a schnauzer, a Great Dane, a polar bear and a pickup truck. Best of all, the driver’s mind is infused with fear.

Every driver rides with fear. He or she understands that getting into a vehicle is a mortal enterprise. You climb inside alive, but you could come out dead. Worse, you could come out alive but someone else — a passenger, a stranger, a spouse, a child, the little baby in the carriage who is definitely not a plastic bag — could be killed. By your car. By you. By carelessness, inattention, hesitancy or the hot coffee in your lap.

The one feature that the computer on the starship Enterprise has in common with the venerable Kenbak-1 and the state-of-the-art sensor/camera/AI arrays in today’s sleek and glamorous AV prototypes is that they are devoid of emotion. They’re not afraid of dying because they don’t know they’re going to die. In this respect, polar bears and elephants are smarter, and your average long-haul truck driver is Albert Einstein.

We might arrive someday at a point where computers evolve into truly thinking machines that feel a sort of emotion, that understand fear and grasp the imminent finality of death — both their own and others’.

For the implications of this prospect, we need to reference another science fiction movie: 2001: A Space Odyssey.

First let me note the irony that author Arthur C. Clarke and filmmaker Stanley Kubrick envisioned the implementation of a talking, thinking, feeling computer system, the HAL 9000, on a future date (now 20 years behind us) when such a machine remained beyond the wildest dreams of any sane technologist.

At the movie’s critical moment, it becomes necessary for Dave (Keir Dullea), the human astronaut on the spaceship, to hit the kill switch on the dangerously malfunctioning HAL 9000. Dave is foiled because HAL has a feature absent even from the Enterprise system. It has been taught death and it’s afraid to die. To protect itself, it does what Dave would do if threatened with his own murder. It kills.

This brings the concept of artificial intelligence around, full circle, to a high-tech Catch-22. For a system to “self-drive” a car — or a bus, or a fleet of end-to-end eighteen-wheelers five miles long — with absolute safety, it must be flawed with human frailty. It must be a feeling machine haunted by an emotion — fear — whose direst awareness is its own obliteration.

Here’s the catch:

If a machine is aware of death, will it fear for me or will it fear more for itself?
