[Image: Child sculpts a snowman in a park]

By Lance Eliot, the AI Trends Insider    

We humans misclassify things all the time, every day, at any given moment.

You are waiting in a restaurant for a friend to join you for lunch. Your eyes scan the people entering the busy eatery. Assume it is a cold day, raining or snowing, which means that most of those coming into the restaurant are wearing heavy clothes and are generally covered up. It would be quite easy to spot someone who appears to be your friend, based perhaps on height and overall shape, yet once they remove their coat and hat, and you can finally see the person's face clearly, you realize it is not the friend you were waiting for.

No harm, no foul. But consider another example of a misclassification, though one with greater consequences.   

You are driving your car on a winding road. It is hard to see very far ahead. As you come around a sharp curve, there is something in the middle of the roadway. What is it? Your mind races to quickly assess the nature of the object. Time is a key factor. You need to decide whether to try to swerve around the object, which is going to be dangerous to perform, or plow directly into the object, another potentially dangerous act.

In a split second of available attention, your mind decides it is a tumbleweed. 

Usually, it is feasible to ram into a tumbleweed without any notably adverse results. Sure, your car's paint might get scratched, but at least you stayed in your lane and did not incur the dangers of swerving, especially on this winding road that (let's say) skirts sheer cliffs. So, you drive straight ahead, and the tumbleweed lightly smacks your car. You are thankfully still safe and sound, able to continue the driving journey unabated.

But imagine that in that brief moment of classification, you inadvertently misclassified the object.   

Turns out it was a meshed ball of steel cables that had come from a construction site and fallen off the back of a truck on this same winding road. The mesh was rolling and bobbling, just like a tumbleweed, and happened to be painted white, resembling a tumbleweed in both looks and behavior on the roadway. Yikes, your decision to proceed based on the belief that this was a tumbleweed is now quite problematic. You strike the object, and it smashes your left headlight and gets entangled with your tires. A tire blows out. The car is now difficult to control.

That's an example of how misclassification can ruin your day (let's assume, for the sake of discussion, that you fortunately survive the incident and live to tell the tale of the misclassified tumbleweed, so go ahead and let out a sigh of relief, and continue reading).

Why bring up this discussion about classifications and misclassifications? 

Besides humans making classifications, there is an expectation that AI systems will be making classifications too. Consider the use case of AI-based true self-driving cars, which routinely need to classify the roadway objects encountered during a driving journey.

The sensors of a self-driving car collect voluminous data about the world surrounding the vehicle, including data from the on-board video cameras, radar, LIDAR, ultrasonic units, and the like. As the data streams in, the AI driving system has to discern what is out there in the world and thus inspects the data mathematically. Various computational pattern-matching techniques are often utilized, including the employment of Machine Learning and Deep Learning (ML/DL).
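To give a rough flavor of what such computational pattern matching amounts to, here is a deliberately toy sketch in Python (emphatically not any automaker's actual stack): it labels a detected object by comparing a handful of made-up feature values against equally made-up class prototypes. Every name and number in it is an illustrative assumption.

```python
# Toy sketch only -- not any automaker's actual stack. It labels a
# detected object by comparing a made-up feature vector against
# made-up class prototypes (a bare-bones stand-in for ML/DL pattern
# matching). All names and numbers here are illustrative assumptions.
import numpy as np

# Hypothetical learned prototypes: [height_m, width_m, reflectivity, motion]
PROTOTYPES = {
    "pedestrian": np.array([1.7, 0.5, 0.3, 0.6]),
    "snowman":    np.array([1.5, 0.8, 0.9, 0.0]),
    "tumbleweed": np.array([0.9, 0.9, 0.4, 0.8]),
}

def classify(features: np.ndarray) -> tuple[str, float]:
    """Return the nearest prototype's label and a crude confidence score."""
    dists = {label: float(np.linalg.norm(features - proto))
             for label, proto in PROTOTYPES.items()}
    best = min(dists, key=dists.get)
    # Map distance to a rough (0, 1] confidence -- purely illustrative.
    return best, 1.0 / (1.0 + dists[best])

# A snowman-like observation: tall-ish, wide, highly reflective, motionless.
label, confidence = classify(np.array([1.6, 0.7, 0.85, 0.0]))
print(label, round(confidence, 2))  # snowman 0.87
```

A real ML/DL pipeline would operate on raw imagery and point clouds via trained neural networks, but the essence is the same: compare the observed pattern against learned patterns and pick the closest fit.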

Some people seem to think that AI is amazingly infallible and an idealistic form of mechanized perfection.   

Please toss that absurd notion out of your mind. 

I assume you agree that humans can misclassify things, and as such, you should realize and expect that AI systems can and will misclassify things too. (I'm not suggesting that humans and the AI of today are equivalent, and I do not wish to somehow anthropomorphize current AI; I'm merely pointing out that AI can misclassify, in the same general sense of misclassification that befalls humans.)

A recent social media post by Oliver Cameron, CEO of Voyage, brought up an interesting question about the classification and misclassification aspects of self-driving cars. In particular, I'm referring to a posted example of a snowman that was misclassified as a pedestrian by an AI driving system.

I'll give you a moment to ponder the ramifications of that type of misclassification. Almost as though you were playing a chess game, consider what kinds of moves and countermoves that specific misclassification portends. Is it more akin to misclassifying the bundled-up friend, or closer to the misclassification of the tumbleweed?

Before we get into the details, first let’s clarify what I mean when referring to AI-based true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

For why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

  

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

  

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant; see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

  

Self-Driving Cars And Misclassifications   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

Here’s our scenario: A self-driving car is going for a jaunt, doing so in an area that had a recent snowfall. Assume that the self-driving car is either heading to pick up a ridesharing passenger or maybe is simply roaming and awaiting a request for a lift. 

Imagine that a snowman has been assembled on a somewhat snow-covered grassy area that is adjacent to the roadway.   

This happens all the time, and we can certainly expect that once the snow season arrives, there will be lots of bustling children (and adults) who opt to craft a snowman. Perhaps this amounts to one of the most delightful aspects of living in an area that gets snow. You might carp about having to shovel snow from your driveway or complain bitterly about how treacherous the streets become when coated with snow and ice, but by gosh, you can make snowmen!

As the self-driving car comes up the road that has the snowman, the sensors of the vehicle are all doing their thing, such as visual imagery pouring in from the cameras, radar data being obtained, LIDAR data being collected, etc.

This data is assessed computationally to classify the objects in the driving environment. A properly devised AI driving system makes use of Multi-Sensor Data Fusion (MSDF), meaning that the interpretations derived from each type of sensory data are aligned and compared, aiding in the effort to discern and classify objects (think of how you might use your eyes and your ears, in combination, when trying to decide what an object is; a multi-sensory form of classification).
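As a loose illustration of what fusing interpretations might look like at the classification level, here is a minimal sketch that assumes each sensor pipeline reports a probability per candidate class and that each sensor carries an assumed reliability weight. Real MSDF is vastly richer (operating on tracks, geometry, and timing), so treat this strictly as a conceptual stand-in.

```python
# Minimal conceptual sketch of MSDF at the classification level.
# Assumes each sensor pipeline reports per-class probabilities and that
# each sensor has an assumed reliability weight; real fusion systems
# operate on far richer representations (tracks, geometry, timing).

SENSOR_WEIGHTS = {"camera": 0.5, "lidar": 0.3, "radar": 0.2}  # assumed

def fuse(per_sensor: dict[str, dict[str, float]]) -> dict[str, float]:
    """Weighted average of per-sensor class probabilities."""
    fused: dict[str, float] = {}
    for sensor, probs in per_sensor.items():
        weight = SENSOR_WEIGHTS[sensor]
        for label, p in probs.items():
            fused[label] = fused.get(label, 0.0) + weight * p
    return fused

readings = {
    "camera": {"pedestrian": 0.70, "snowman": 0.30},  # shape looks human-ish
    "lidar":  {"pedestrian": 0.55, "snowman": 0.45},  # ambiguous point cloud
    "radar":  {"pedestrian": 0.40, "snowman": 0.60},  # no motion return
}
print(fuse(readings))  # {'pedestrian': 0.595, 'snowman': 0.405}
```

Notice how the fused result can still land on "pedestrian" even when one sensor leans the other way, which is exactly the kind of outcome at the heart of the snowman story.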

Upon collecting the sensory data, the AI driving system determines that a thing consisting of puffy white balls (of snow), topped with a hat and sporting what seem to be arms (made of sticks), might be a pedestrian.

Most people do not realize that these kinds of AI-based classifications are usually assigned a probability or, if you will, an uncertainty value. Perhaps the AI classifier has computed a 90% chance that this is a pedestrian, or maybe only a 5% chance. Depending upon the threshold devised for the AI driving system, and the nature of the object as it is estimated to be, the result can be that the AI stipulates the object is a pedestrian, albeit with an assigned chance that it is and an assigned chance that it is not.
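To make that concrete, here is a hedged sketch of turning such a computed probability into a discrete label via a tuned threshold; the cutoff values and the cautious middle bucket are purely assumptions for illustration, not any production system's actual policy.

```python
# Hedged sketch: converting a classifier's probability into a discrete
# label via a tuned threshold. The cutoff values and the cautious
# middle bucket are assumptions for illustration, not any production
# system's actual policy.

PEDESTRIAN_THRESHOLD = 0.60  # assumed tuning value
CAUTION_THRESHOLD = 0.40     # assumed "when in doubt, be careful" floor

def label_object(p_pedestrian: float) -> str:
    if p_pedestrian >= PEDESTRIAN_THRESHOLD:
        return "pedestrian"
    if p_pedestrian >= CAUTION_THRESHOLD:
        return "possible-pedestrian"  # err on the side of caution
    return "static-object"

print(label_object(0.90))  # pedestrian
print(label_object(0.05))  # static-object
```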

Anyway, let’s assume that the snowman has been classified, or more worrisome, misclassified as a pedestrian. 

Your first thought is that this is funny and not at all a concern. It is seemingly a lot better to misclassify a snowman as a pedestrian than to do the opposite and misclassify a pedestrian as a snowman. If an AI-based classifier mistook a pedestrian for a snowman, and if the AI system was devised to assume that snowmen do not move and are otherwise not a matter of attention, this could lead to some rather unfortunate and possibly ugly consequences.

Hopefully, even in this reversal of the misclassification, once the pedestrian ("snowman") started to walk or move, the sensors would detect the action, and the AI classifier would reclassify the object as a pedestrian. That doesn't quite solve the issue, though. Perhaps, if the AI had correctly classified the pedestrian at the start, it would have slowed the self-driving car, since there appeared to be a pedestrian near the roadway. Now, somewhat after the fact, having reclassified, the available time to take an avoiding driving action might be diminished, thus increasing the risks in the existing driving scene.
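A sketch of that motion-triggered reclassification might look like the following, assuming a hypothetical tracker that reports how far the object has moved between frames (the noise-floor figure is an assumption for illustration):

```python
# Sketch of motion-triggered reclassification, assuming a hypothetical
# tracker that reports per-frame displacement. The noise-floor value
# is an assumption for illustration.

MOTION_NOISE_FLOOR_M = 0.1  # assumed per-frame measurement tolerance

def maybe_reclassify(label: str, displacement_m: float) -> str:
    """Promote a 'snowman' to 'pedestrian' the moment it moves."""
    if label == "snowman" and displacement_m > MOTION_NOISE_FLOOR_M:
        return "pedestrian"  # snowmen don't walk
    return label

print(maybe_reclassify("snowman", 0.0))  # snowman
print(maybe_reclassify("snowman", 0.5))  # pedestrian
```

The catch, as noted, is that by the time the movement trips this check, precious reaction time has already been spent.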

Back to the situation of misclassifying the snowman as a pedestrian. 

You are perhaps now thinking that it is “safest” to have made the misclassification in that direction rather than somehow doing the reverse misclassification. All told, this would seem to be a “get out of jail free” card, namely that it is better to misclassify (if misclassification is inevitable) toward being a human than being a non-human (i.e., a pedestrian in lieu of a snowman).   

Yes and no.   

It partially depends upon what actions the AI driving system has either previously derived via the use of ML/DL or been explicitly programmed to take when encountering a pedestrian.

Suppose the AI determines that since this does seem to be a pedestrian, the self-driving car should slow down. This seems quite prudent. The pedestrian is standing near the curb. There is a possibility that the pedestrian might opt to suddenly leap into the street or dart across the road. Jaywalking happens all the time.   

Admittedly, this pedestrian is not moving around, nor crouched as though about to lunge into the street. By all appearances, the pedestrian seems to be at a standstill and not an immediate threat to the path of the self-driving car. But better to be safe than sorry, as they say in self-driving cars.

The self-driving car slows down.   
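As a toy rendition of such a programmed "better safe than sorry" precaution, consider this sketch of a slow-down rule; the caution distance and speed figures are entirely assumed for illustration.

```python
# Toy rendition of a "better safe than sorry" rule: if anything labeled
# a pedestrian is near the travel lane, cap the speed. The distance and
# speed figures are entirely assumed for illustration.

CAUTION_DISTANCE_M = 5.0   # assumed "near the roadway" radius
CAUTION_SPEED_KPH = 25.0   # assumed reduced speed

def target_speed(label: str, distance_to_lane_m: float,
                 current_speed_kph: float) -> float:
    if label == "pedestrian" and distance_to_lane_m < CAUTION_DISTANCE_M:
        return min(current_speed_kph, CAUTION_SPEED_KPH)
    return current_speed_kph

# The misclassified "snowman," standing two meters off the lane:
print(target_speed("pedestrian", 2.0, 45.0))  # 25.0 -- the car slows
```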

Meanwhile, the sensors continue to feed data about the object (and the other myriad of objects in the scene), just in case this particular object (which is now assumed to be a pedestrian), makes any sudden moves. 

You could argue that the act of slowing down, when slowing isn't required, would be a somewhat unintended and possibly adverse consequence of this misclassification. Perhaps a human driver in a car behind the self-driving car is caught off-guard. There doesn't seem to be any reason whatsoever for the surprising slowdown. The human driver wouldn't even imagine that the snowman is the culprit in this case. Human drivers see snowmen all the time and realize right away that a snowman is a snowman, mindfully classifying it as such.

You can still assert that the slowing down is fine, and though perhaps disturbing to the human driver in the car behind, nonetheless not a big deal.   

Let me take you on a slippery slope about this. Assume that there are lots and lots of self-driving cars on the roadways. Envision that they are all using the same AI-based classifiers (at least for a given brand and model). Whenever they spot a snowman, each will slow down of its own accord. This happens across thousands upon thousands of self-driving cars, nearly none of them classifying a snowman as indeed a snowman (well, perhaps some do), and always opting to slow down under the snowman-as-pedestrian misclassification.

If the world consisted only of self-driving cars, perhaps this would be dandy. But the reality is that there will be a mix of self-driving cars and human-driven cars for quite a while, likely decades (there are about 250 million conventional cars in the U.S. alone, and they are not going to be replaced overnight by self-driving cars). These "safety first" self-driving cars are going to disrupt the human-driving population on a large scale. In theory, this could end up leading to human drivers rear-ending those self-driving cars (being caught off-guard by the slowing action) or lead to road rage against self-driving cars (we'll get in a moment to the counterargument about the nature of human drivers, hold on).

I don’t want to extend that futuristic vision very far since it does fall apart rather quickly.   

Presumably, the automakers and self-driving tech firms would get feedback about the exasperating misclassifications and take action to enhance the classifier for dealing with the "snowman apocalypse," if you will.

And, for those seeking to ultimately ban human driving, under the assumption that AI driving systems will be safer drivers (not drinking and driving, not driving distracted, etc.), they would undoubtedly use this snowman-as-pedestrian reaction by human drivers as yet more evidence that human drivers need to go (to which some human drivers insist you will only take away their driving when you pry the wheel from their cold, dead hands).

  

For more details about ODDs, see my indication here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/

On the topic of off-road self-driving cars, here are my details: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

I've urged that there must be a Chief Safety Officer at self-driving car makers; here's the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry; see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/

Conclusion   

There is a slew of other considerations regarding this rather simple but telling snowman-as-pedestrian dilemma.

Suppose someone opts to purposely build a snowman in the street as a joke or a prank on self-driving cars. That is distinctly not a good idea, and I've discussed repeatedly in my columns that people ought not to prank self-driving cars. By the way, in case you are worried that I've just let the cat out of the bag, please know that people do sometimes build snowmen in the street just for fun, not due to self-driving cars, and hence this is something that self-driving cars need to be prepared for.

What does the self-driving car do?   

Human drivers would presumably ascertain that it is a snowman and, in a civil manner, drive slowly around the obstruction. Some self-driving cars of today would do the same, while other brands or models might get logically jammed up about what to do and send an alert to the fleet operator. And so on.

One argument is that self-driving cars ought not to be on our public roadways until they have been taught or have "learned" how to deal with all these various roadway aspects. Others argue that the only viable way for the AI driving systems to be readied involves being on public roadways, rather than relying solely on simulations and special closed training tracks. This is an ongoing and at times acrimonious debate.

Here’s another twist on the snowman-as-pedestrian.   

Even if you believe that defaulting to snowman-as-pedestrian is the safer way to treat the matter, the public at large might nonetheless become concerned that self-driving cars cannot seem to differentiate between the likes of a snowman and a pedestrian.

Say what? 

This, to most humans, is a rather obvious and ordinary form of classification. If self-driving cars cannot figure this out, it is cause for some grave concern.

Furthermore, those same qualms might be extended further, leading to the trepidation that maybe there are lots of other misclassifications going on. Maybe fire hydrants are being classified as pedestrians. Maybe small trees are being classified as pedestrians. Where does this end? Indeed, maybe the AI-based classifier is classifying all objects as pedestrians. 

Those in the self-driving car industry would say that kind of thinking is misguided and outright hysteria. Maybe so, but it is useful to keep in mind that the public at large is the determiner of whether self-driving cars will be on our roadways, doing so via their elected officials and the regulations that are ultimately put in place, or as laws are adjusted based on public opinion.

There is also the ever-present specter of lawsuits that might one day be launched against those that make self-driving cars. Suppose a self-driving car gets into a car accident, one that the AI arguably ought to have avoided. An astute attorney during the trial might argue to the jury that this AI was (by implication) so bad that it couldn't even identify a snowman.

All because of a normally joyous and completely uneventful snowman. 

Despite all of that bit of an icy tale about snowmen, one might say that we ought not to take this instance and turn it into a snowball that rolls down a snowy hill and becomes a larger issue than it deserves (let's avoid making a mountain out of a molehill, one would contend).

Wait a second, here's another viewpoint: maybe tell children they can no longer make snowmen near the street. This comports with the belief by some that the real world will need to conform to what self-driving cars can do, rather than self-driving cars being sufficiently improved to handle the real world they are immersed in.

One shudders to think that kids would no longer be able to make snowmen out in front of their homes; that assuredly is not the spirit of the snowy season, and it would be an absurdly upside-down way of solving things.

As they say, snowmen aren’t forever, but their memories are.   

Copyright 2021 Dr. Lance Eliot. This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

