Greater integration of self-driving car tech for regular human drivers has the potential to increase distractions and needs to be carefully tailored. (Credit: Getty Images)  

By Lance Eliot, the AI Trends Insider  

There’s an old proverb that dates back to at least the year 1670 and declares that sauce for the goose is also sauce for the gander. 

Say what?   

A more modern and altogether familiar version is the assertion that what is good for the goose is good for the gander. That’s a saying that we all know well. In today’s world, this ostensibly suggests that something applicable in one instance is likely applicable in another (consult your favored online dictionary for further elaboration).   

I often highlight the cutting-edge technology bringing about AI-based true self-driving cars, including the foundational R&D work taking place in research labs that are focused on creating autonomous vehicles.

The thing is, a lot of the autonomous tech will end up in human-driven cars too. This might seem surprising. Many assume that the tech devised to aid AI-based autonomous driving would be used solely by autonomous vehicles. But it turns out that there are numerous handy ways in which the tech derived for self-driving cars can be leveraged for human-driven vehicles.

Not every piece of tech that goes into a self-driving car is necessarily going to find its way into a human-driven vehicle. Some contraptions will; some will not. The tech that gets crossed over into a human-driven car is bound to have other aspects included, recasting the tech and making it more readily accessible or usable by humans versus when the target “user” is an AI driving system (note that I’ve put the word “user” into quotes since I don’t want to suggest or imply that today’s AI is somehow human-like).

The overarching question is this: Can the advanced tech devised for self-driving cars be leveraged for use in human-driven cars on a crossover or crisscrossing basis?   

Well, I’ve already tipped my hand and propounded my answer, which is a resounding yes, albeit the seemingly outsized affirmation is tempered with the strident precaution to mindfully undertake any such crossovers.   

Before we get deeper into the matter, let’s make sure we are on the same page about what constitutes a self-driving car.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Tech Crossovers   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

But the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

Now that we’ve set the stage appropriately, it’s time to dive into the myriad aspects that come into play on this topic of tech crossovers.

Let’s start with the most obvious crossover candidates: the sensors. 

On a self-driving car, a bunch of specialized sensors are used to detect the driving scene: video cameras, radar, LIDAR, ultrasonic devices, thermal imaging units, and the like (these differ by automaker and self-driving tech approaches and system packages).

Generally, the sensors are used to gather data about the world surrounding the vehicle. Visual images are streamed into the onboard systems via the video cameras. Image processing software inside the car attempts to ferret out what kinds of objects and blobs are discoverable in visual imagery. The same can be said of the radar. Radar returns are examined computationally to try and find patterns and identify objects outside of the vehicle.   
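To make that flow a bit more concrete, here is a minimal Python sketch of scanning an incoming camera frame for candidate “blobs.” The brightness-threshold detector below is purely illustrative, a stand-in for the trained neural networks that production systems actually use; the point is simply the frame-in, candidate-objects-out flow.

```python
# A minimal, illustrative sketch (not any automaker's actual pipeline) of how
# a streamed camera frame might be scanned for "blobs" worth flagging.
# The detector here is a deliberately simple brightness-threshold pass;
# real systems use trained neural networks, but the flow is comparable.

import numpy as np

def detect_blobs(frame: np.ndarray, threshold: float = 0.8):
    """Return a rough bounding box around bright regions in a grayscale frame."""
    mask = frame > threshold
    if not mask.any():
        return []
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    # One coarse box around everything that crossed the threshold.
    return [(rows.min(), cols.min(), rows.max(), cols.max())]

# Simulate a 480x640 grayscale frame with one bright object in it.
frame = np.zeros((480, 640))
frame[200:240, 300:360] = 1.0  # the "object" up ahead

for box in detect_blobs(frame):
    print("candidate object at (top, left, bottom, right):", box)
```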

Computational analysis of the sensory data is just the start of the AI-based driving process. Once the data has been mathematically explored, the AI driving system is usually set up to make use of an internal virtual world model that acts as a type of internal database about the driving scene. For example, the driving scene might currently consist of a car ahead of the self-driving car, a car to the right in an adjacent lane, and some pedestrians that are attempting to cross the street.

The virtual world model would be marked to indicate where each of those objects seems to be, along with their anticipated pace and predicted future path. Based on the status of the virtual world model, the AI driving system attempts to ascertain what the self-driving car should next do. Perhaps the self-driving car should come to a stop and wait for the pedestrians to proceed. In that case, the AI driving system would issue electronic commands to the vehicular controls to indicate that a halt or stopping action is to take place.   
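Here is a hypothetical, stripped-down sketch of what such a virtual world model and decision step might look like. The object fields and the stop-for-pedestrians rule are my own illustrative assumptions rather than any automaker’s actual design; they simply mirror the flow just described.

```python
# A hypothetical, simplified "virtual world model" and decision step.
# The fields and the decide_action rule are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    kind: str                      # e.g. "car", "pedestrian"
    position_m: tuple              # (x, y) relative to our vehicle, in meters
    speed_mps: float               # current pace
    predicted_path: list = field(default_factory=list)  # future (x, y) points

def decide_action(world_model: list) -> str:
    """Return a driving command based on the current scene model."""
    for obj in world_model:
        # If any pedestrian's predicted path crosses our lane, stop and wait.
        if obj.kind == "pedestrian" and any(abs(x) < 2.0 for x, _ in obj.predicted_path):
            return "STOP"
    return "PROCEED"

scene = [
    TrackedObject("car", (0.0, 30.0), 12.0, predicted_path=[(0.0, 40.0)]),
    TrackedObject("pedestrian", (5.0, 15.0), 1.2, predicted_path=[(1.0, 15.0), (0.0, 15.0)]),
]
print(decide_action(scene))  # -> "STOP": a pedestrian is predicted to enter our lane
```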

Without those sensors, the self-driving car would be unable to properly undertake those calculations about what to do, since the external world or nearby driving scene would be completely unknown. You can somewhat liken this to how human drivers look outside the car to try to identify what the driving scene consists of. People turn their heads back and forth, they strain to see distant objects, and otherwise primarily use their vision to figure out what is around them.

Okay, so we have human drivers that use their eyes as their cornerstone sensory element, while self-driving cars make use of video cameras, along with likely other sensory equipment such as radar, LIDAR, and the rest. 

Would it be useful to consider putting video cameras onto a human-driven car, presumably for purposes of aiding the driving task (and not simply to do a GoPro-like recording for posterity’s sake, which of course is somewhat commonly done)?   

Your first thought might be that it would not seem prudent to have video cameras to aid human drivers during the driving chore. The reason for this concern would be the question of how the human driver would watch the video and do so without being distracted from the roadway. We already are all worried about distracted drivers that watch cat videos while they are supposed to be driving a car. It seems that even though a video camera aimed at the road ahead would certainly be more relevant than a cat video, it nonetheless is a potential distractor.   

On top of that notable point, the video camera would seem duplicative and therefore unnecessary.   

A person using their eyeballs does not seem to need to have a video camera. They can see the world with their own eyes. The idea of taking your gaze away from the actual roadway to glance at a video screen showcasing the imagery being captured by a video camera is a loony idea. Just keep your eyes on the darned road and don’t look elsewhere, that would be a prudent mantra, many would contend.   

Yes, those are worthwhile points.   

What those points do not account for is the potential for the video camera to “see” in ways that a human cannot.

For example, you are driving on a wide-open highway. You notice that way up ahead there is something in your lane. The distance is rather far away and your eyes cannot discern what the object is. You aren’t sure either whether the object is stationary or possibly moving around. All that you can detect for the moment is that there is definitely something out there, and it might well end up in your path.

We’ve all had those moments.   

You could switch lanes and hope that the object stays in the prior lane. Perhaps you might begin to slow down, figuring that whatever the object is, if you are going at a slower speed you will have more time to decide what to do when your car gets closer to the unknown object. Your mind is filled with ideas about what the object might be.   

Suppose the object turns out to be a tumbleweed. In that case, it is bound to float and flit around on the highway. You might not be able to readily avoid it. That could be okay since most of the time a tumbleweed will harmlessly bounce off your car. On the other hand, maybe this is a piece of furniture that dropped off the back of a truck. Hitting a sturdy piece of furniture is not a good idea. You could damage your car, and possibly lose control of your vehicle in the process of striking the solid object.   

You hopefully get the notion that if you could somehow better determine what that object is, you would have more options available and be able to sooner decide what to do. This in turn would increase your safety and the safety of other nearby cars. I point this out because if you opt to suddenly swerve at the last moment to avoid the object, any cars around you could get caught off-guard and a cascading series of crashes could ensue.   

A video camera might provide the visual imagery that your eyes cannot provide. Assume that the video camera has long-range scanning optics and can much more clearly present the object to you.   

Wouldn’t you want that capability? I dare say, most of us probably would.   

Taking a look at the video, you would be able to figure out what the object is, along with mentally considering what to do about the object. And due to the optics of the video camera, you are able to do all of this far in advance of when your human eyes alone could have discerned the details of the object. 

This seems to illustrate the value of using a video camera for a human-driven vehicle, though we still need to explore how the human driving and the video camera imagery can be dovetailed together.   

Recall that one serious and quite realistic concern is that the human might look down at a video screen to try and see what the object is, and meanwhile lose sight of the active roadway. In that split second, imagine that another car has cut in front of the driver, who was looking down when they should have been intently observing the existing traffic.

One possibility would be to make the video available as part of a HUD or heads-up display. The driver looks through the windshield and simultaneously is presented with the video imagery as showcased on the windshield. This can work, somewhat, though it has downsides too.   

Another approach involves wearing special glasses used for driving purposes. Even if you don’t need glasses, when you get into a specially equipped car, you put on glasses that will display the video onto the lenses themselves. This is a kind of augmented reality (AR) approach, somewhat similar to the famous Pokémon Go AR experience or the infamous Google Glass.

But some insist that the driver ought not to be looking at any video, no matter how pertinent it is to the driving task at hand. It is just much too risky.

Ergo, another approach involves using the same kind of AI driving system capabilities that do the visual image processing for self-driving cars. The computer system onboard a vehicle would do a computational analysis of the streaming video, doing so in real-time. Upon detecting an object in the roadway, the system might emit a tone, a bell, or some other alert that an object is up ahead.
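As a rough illustration, a minimal sketch of that alert-only loop might look like the following, assuming a simulated frame feed and detector in place of real camera hardware and the image-processing software discussed earlier.

```python
# An illustrative sketch of the alert-only approach: the onboard computer
# analyzes each incoming frame and, instead of showing the driver any video,
# raises an audible/spoken alert when something turns up in the lane ahead.
# The frame feed and detector are simulated stand-ins, not a real camera API.

def frame_feed():
    # Simulated stream: False = clear road, True = something detected in-lane.
    yield from [False, False, True, False]

def alert_driver(message: str) -> None:
    # In a real vehicle this would be a chime plus a spoken prompt,
    # keeping the driver's eyes on the road rather than on a screen.
    print(f"[ALERT] {message}")

def monitoring_loop() -> None:
    for object_in_lane in frame_feed():
        if object_in_lane:
            alert_driver("Object detected in your lane ahead.")

monitoring_loop()
```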

Going further, the system could vocally describe the object, telling the human driver what the object seems to be. Perhaps the computer would, via Natural Language Processing (NLP) akin to Alexa or Siri, indicate to you that there is an object up ahead, and that the object is in your lane. Furthermore, if the NLP was more extensive, the driver could interact with the system.

For example, while still keeping your eyes riveted to the roadway, you might ask the NLP whether the object is stationary or moving. You might ask the NLP to provide a guess of what the object is. And so on, entering into a dialogue with the NLP (a stilted dialogue, mind you, given the NLP available to date).   
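A deliberately stilted, rule-based stand-in for that kind of exchange might look like this; a real system would use speech recognition and a far more capable NLP stack, but the question-to-answer flow is the same. The report fields are illustrative assumptions.

```python
# A stilted, rule-based stand-in for the voice dialogue described above.
# Real systems would use speech recognition plus an Alexa/Siri-class assistant;
# this sketch only shows the question-to-answer flow.

object_report = {
    "label": "unknown object",
    "lane": "your lane",
    "moving": False,
    "best_guess": "possibly road debris",
}

def answer(question: str, report: dict) -> str:
    q = question.lower()
    if "moving" in q or "stationary" in q:
        return "It appears to be moving." if report["moving"] else "It appears to be stationary."
    if "what" in q:
        return f"My best guess is: {report['best_guess']}."
    return "I can tell you whether it is moving, or what it might be."

print(answer("Is the object stationary or moving?", object_report))
print(answer("What do you think it is?", object_report))
```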

This illustrates how a type of tech used in a self-driving car, one that a self-driving car could be said to require, can be reused in the circumstance of a human-driven car.

For a self-driving car, the AI driving system doesn’t need to verbally discuss the imagery analysis with the image processing capability. There are electronic messages sent back and forth, but that’s different from verbalizing the matter. The reason I point this out is that it highlights that the tech used in a self-driving car might need to be adjusted or tailored for use by humans.   

The example of the video camera would indicate that merely slapping video cameras onto human-driven cars is not a good idea. The same goes for merely tossing the image processing software and hardware into a human-driven car (not a particularly useful notion by itself). No, the wholesale transplanting of something from a self-driving car into a human-driven car is rife with problems and ought to be done with some sensibility involved.

Having covered the video camera as a type of self-driving car sensory device, we can reapply the same logic to the other sensory devices too. It could be handy to have radar units on a human-driven car, providing additional detection about the driving scene. Whatever the radar captures would need to be analyzed computationally and then seamlessly relayed or made available to the human driver, doing so without distracting the driver. Repeat this same aspect for the LIDAR, ultrasonic units, and the like. 
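To sketch how those additional sensors might be combined before the driver is ever bothered, here is a hypothetical fusion step; the sensor names and confidence numbers are invented for illustration and are not drawn from any particular system.

```python
# A hypothetical sketch of fusing detections from several sensors before
# alerting the driver. The sensor names and confidence values are made up;
# the point is that each modality feeds one shared, non-distracting alert.

def fuse_detections(detections: dict, min_confidence: float = 0.6) -> bool:
    """detections maps sensor name -> confidence that something is in-lane."""
    if not detections:
        return False
    # Simple average; real systems weight sensors by conditions (rain, night, range).
    return sum(detections.values()) / len(detections) >= min_confidence

readings = {"camera": 0.7, "radar": 0.8, "lidar": 0.5}
if fuse_detections(readings):
    print("[ALERT] Object likely in your lane ahead.")
```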

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

There’s a lot more potential crossover where that came from.   

You can dissect all the components of a self-driving car and its AI driving systems and find a realistic means to suggest that some aspect of the tech can be reused for human-driven cars. But doing so without considering the human-driving ramifications is dangerous. 

Poorly integrated tech from self-driving cars could actually undercut human driving and produce a human driving calamity of the likes that none of us would want. There is a tendency to seek grand excitement and outsized attention by jamming new tech into human-driven cars. Without taking into account the human factors of driving, and driver behaviors, the tech might be wasted and never used, or worse still, utilized and lend itself to stoking more car crashes than otherwise might have occurred.

When it comes to dealing with human foibles of driving, the cure should not be worse than the disease, as they say.   

Another realm that provides a handy possibility for crossover is V2X (vehicle-to-everything) electronic communications. For example, V2V is vehicle-to-vehicle electronic messaging that allows a self-driving car to send messages to another self-driving car, such as warning that a piece of debris is sitting in the roadway up ahead. The expectation is that self-driving cars are going to be outfitted with V2V. In addition, you can anticipate V2I too. V2I is vehicle-to-infrastructure and consists of roadway devices that would transmit aspects such as whether the traffic signal is green, whether the bridge up ahead is passable, and so on.

Many construe these capabilities as solely being done for the benefit of AI-based driving systems and self-driving cars and similar autonomous vehicles. But they are potentially advantageous to human drivers too.   

A human driver would certainly find it useful to get V2V communications, when applicable. For example, on that open highway that we discussed earlier, assume that a car ahead of your car is coming upon the object that has been sitting in your lane. That vehicle might transmit a message indicating that the object is a tumbleweed.

We would need to find a useful and prudent means of you receiving that message. If you have to look down at a screen or your smartphone, this would be a dangerous distraction from the roadway. Similar to the discussion about the sensory data being relayed to you in some more efficacious way, the same goes for any of the V2X transmissions.   
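A hypothetical sketch of that end-to-end idea, receiving a V2V hazard message and relaying it as a spoken alert rather than a screen to glance at, might look like the following. The message fields are assumptions for illustration, not any standardized V2X payload (real deployments use message sets defined by bodies such as SAE and ETSI).

```python
# A hypothetical V2V hazard message being turned into a spoken alert.
# The field names below are illustrative assumptions, not a standardized
# V2X payload.

from dataclasses import dataclass

@dataclass
class V2VHazardMessage:
    sender_id: str        # the vehicle that spotted the hazard
    hazard_type: str      # e.g. "tumbleweed", "furniture", "debris"
    lane: str             # which lane the hazard occupies
    distance_m: float     # how far ahead it was observed

def relay_to_driver(msg: V2VHazardMessage) -> None:
    # A spoken relay keeps the driver's eyes on the road instead of a screen.
    print(f"Heads up: a {msg.hazard_type} reported in {msg.lane}, "
          f"about {int(msg.distance_m)} meters ahead.")

incoming = V2VHazardMessage("vehicle-042", "tumbleweed", "your lane", 400.0)
relay_to_driver(incoming)
```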

One last comment for now. 

Some wonder whether human drivers can cope with the potentially massive amount of additional capabilities that could be arranged for their use while driving a car. Talking with an NLP about the roadway is still a form of distraction. Getting V2X transmissions, even if spoken to you, would be a form of distraction. Each additional piece of information has to be weighed as to the value it provides to safely driving a car versus the potential for undermining the driving of the car.   

That’s a tough balancing act, for sure.   

This is precisely why some vehemently say that we need to arrive at true self-driving cars.   

Rather than putting more and more tech into human-driven cars, let’s get the human driver out of the equation altogether, they implore. If you give a human driver more tech, you are misleadingly making them believe that they can be a better driver. The best possible driver is not a human one, some would assert, and instead, we need to have AI-based driving systems.   

To that degree, there is a somber worry that if there is any significant crisscrossing and crossover of tech from self-driving cars into human-driven cars, this is only sustaining something that ought to be excised anyway. Don’t allow human driving to be extended and thus prolong the agonies associated with the act of human driving (others counterclaim that the tech might improve human driving and lessen the foibles and adverse impacts arising from unaided human driving).   

We end with a parting thought about those geese.   

The goose that is self-driving and the gander that is human driving are presumably seeking the same sauce, whatever the secret sauce might be to ensure the safest driving and that the revered mobility-for-all dream can be ultimately attained. 

Copyright 2021 Dr. Lance Eliot http://ai-selfdriving-cars.libsyn.com/website 

This post was first published on: AI Trends