
By Lance Eliot, the AI Trends Insider   

You likely know the catchphrase “stubborn as a mule.” We all know people who are inarguably stubborn and exhibit mule-like behavior. Turns out, we often consider things to be stubborn too. For example, a car can seem to be just about as stubborn as a mule, perhaps being downright bull-headed. 

Here’s an example of something I witnessed first-hand the other day. A tow truck was getting ready to take a car for a tow. This was a flatbed-style tow truck, consisting of a flat, raised rear platform that serves to hold and essentially piggyback a car being transported. In that sense, the tow truck isn’t so much towing the car as giving it a ride.   

The tail end of the flatbed portion tilts at a somewhat acute angle to allow for driving a car up onto the riding platform. If needed, a car could be towed from ground level and pulled agonizingly up the ramp by a special winching apparatus. Presumably, this feature is reserved for cars that can’t be driven, such as after a car crash or when the engine has conked out and won’t start. In this instance, the car appeared to be quite drivable, and thus things were readied for the car to make its way up the ramp, ultimately aiming to become nestled and secured onto the flatbed.   

The person who owned the car got into the driver’s seat and aligned the vehicle with the skids of the tilting rampway. You could see that the driver was a bit nervous about this kind of driving predicament.   

I say predicament because whoever is driving the car will need to make sure the tires of the car perfectly make their way onto the ramp skids. Next, you need to give the car enough gas to get it to readily accelerate up the steep ramp. Of course, you also need to make sure that you come to a proper stop once you’ve made your way up the ramp and ended up on the flatbed portion.   

There is a danger that you might overshoot the acceleration needed, possibly zooming up the ramp and onto the flatbed, and subsequently find yourself zipping forward out of control and ramming into the back of the tow truck itself. This car did not yet seem to have any outward appearance of damages. It would certainly be a shame to crush the front of the car or otherwise dent this pristine-looking vehicle, simply by the act of trying to get it onto the tow truck flatbed.   

The tow truck driver made no effort to assuage the car driver’s qualms.   

I’m sure that if the tow truck driver had offered to do the driving, the driver would likely have acquiesced, gleefully so. Meanwhile, the driver looked to be the type of person that would not ever ask to have the tow truck driver do the driving. This would be a driver’s loss of face, as it were, and an admission of weakness or helplessness that this particular car driver seemed unwilling to concede. 

There you have it, a moment whereby a reluctant car driver wants to undertake a driving activity that is rarely performed in any semblance of ordinary daily driving, and a tow truck driver that probably has done this driving act a thousand times but is deferring to the ordinary driver in this case.   

What might happen?   

I felt as though I might be witnessing a car accident in the making. It seemed like going to a bullfight and awaiting the moment that the bullfighter might get gored. You assume that the matador is probably unlikely to get into any problematic circumstance, though there is a tense air of possibility and thus the whole thing is certainly worth watching. 

The driver began to inch up the ramp. 

When referring to inches, I mean inches. Time was nearly standing still, that’s how slowly the car was making its way up the ramp. The car was straining to do so. Gravity was working its magic and the weight of the vehicle was helping out. You could tell that the driver was not willing to apply a lead foot to the gas pedal and instead was hoping that the car could crawl rather than dart aboard the flatbed.   

The expression on the face of the tow truck driver was priceless. 

He was somewhat bemused at the car driver’s difficulty. Imagine that you are an artist, and you see someone flail around with a paintbrush, not knowing how to properly cast paint onto a canvas. The tow truck driver was undoubtedly imbued with car-up-the-ramp artistry after years of toil and was now witnessing the rather lame and ineffectual efforts of a layman at his same craft. 

That being said, bemusement does not translate into cash in one’s pocket. It is doubtful that the tow truck driver was getting paid by the hour. He wanted to finish this job expediently and take the car to wherever it had to go. The longer this snail-paced inching took, the more the tow truck driver was losing out on future opportunities for his valued towing services.   

The tow truck driver yelled to the car driver and inquired as to whether he needed any help. That seemed to spur the car driver to take heightened risks, and sure enough, the car finally bolted up the rest of the ramp and landed squarely onto the flatbed.   

No harm, no foul. 

It is a sure bet that the car driver went home that night and bragged about his driving prowess. You can also bet that the tow truck driver got home and carped about a car driver that took eons to get a car onto his flatbed. Two different views involving two people living in different worlds, as it were.   

Anyway, you could assert that the car was stubborn and didn’t want to go up that ramp.   

I doubt, though, that any of us would seriously contend that the car “knew” it was going up a ramp. All that happened was that physics was working against the car as it tried to climb the ramp. That’s about it.   

You might say the stubbornness was rooted in the car driver. He was unable to smoothly drive the car up the ramp. He wasn’t stubborn in the sense of not wanting to drive the car; it was more a stubborn insistence on driving in a situation he wasn’t versed in (and for which he had an immediate substitute on hand who did know how to do so). Maybe the car driver was worried that the tow truck driver would be overly callous and not gingerly drive his prized baby (car) onto the flatbed. Or perhaps the car driver wanted the thrill of trying a new and tricky maneuver. People have innumerable reasons for what they do. 

Shifting gears, the future of cars entails self-driving cars. This stubbornness element in the flatbed truck tale brings up an interesting facet about self-driving cars and one that few are giving much attention to. 

First, be aware that true self-driving cars are driven by an AI-based driving system and not by a human driver. Thus, in the case of this flatbed truck scenario, if the car had been a self-driving car, the AI driving system would have been trying to get the car up that ramp and onto the flatbed.   

Secondly, there are going to be instances wherein a human wants a self-driving car to go someplace, but the AI driving system will “refuse” to do so.   

I want to clarify that the AI is not somehow sentient since the type of AI being devised today is not in any manner whatsoever approaching sentience. Perhaps far away in the future, we will achieve that kind of AI, but that’s not in the cards right now.   

This latter point is important because the AI driving system opting to “refuse” to drive someplace is not due to the AI being a sentient being; it is merely a programmatic indication that the AI has detected a situation in which it is not programmed to drive. Please dispense with any anthropomorphizing of the AI and realize that this is merely automation providing a programmed response. 

Today’s intriguing question is this: How can you convince a self-driving car to go where you want it to go when it won’t go there?   

We’ll use the loading of the car onto the flatbed as a handy exemplar. 

Time to unpack this matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). 

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there. 

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). 

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we must all avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Going Beyond   

Envision a self-driving car that is supposed to get itself onto the flatbed of a tow truck.   

We’ll assume that the self-driving car is fully operational. This is worth mentioning because any problems associated with the autonomous driving capabilities could readily undermine the capability of the AI driving system to perform this particular driving task. 

For example, if the sensors needed for driving weren’t working properly, the car isn’t likely going to be able to drive itself up that ramp and onto the flatbed. The sensor suite, usually consisting of video cameras, radar, LIDAR, ultrasonic units, and the like, is used to detect the driving environment. Without those devices functioning properly, the AI driving system is essentially blind to the world around the vehicle. By and large, the AI driving system is programmed to not proceed with driving if the sensors aren’t working right.  
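That sensor-gating behavior can be sketched as a simple pre-drive health check. This is a minimal illustrative sketch in Python; the sensor names, the "ok" status convention, and the function names are my own assumptions, not any actual automaker's interface:

```python
# Illustrative sketch: refuse to drive unless every required sensor is healthy.
# All names and status conventions here are hypothetical.

SENSOR_SUITE = ("camera", "radar", "lidar", "ultrasonic")

def sensors_healthy(status: dict) -> bool:
    """Return True only if every required sensor reports 'ok'."""
    return all(status.get(sensor) == "ok" for sensor in SENSOR_SUITE)

def may_proceed(status: dict) -> bool:
    """Gate the driving task on sensor health; otherwise the system balks."""
    return sensors_healthy(status)

# A single faulted LIDAR unit blocks the drive outright.
degraded = {"camera": "ok", "radar": "ok", "lidar": "fault", "ultrasonic": "ok"}
healthy = {sensor: "ok" for sensor in SENSOR_SUITE}
```

The point of the sketch is the all-or-nothing gate: the system never attempts the maneuver with a degraded view of the world.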

If this was a scenario whereby the self-driving car was generally inoperable, presumably the tow truck would use the winch to drag the self-driving car up onto the flatbed. This is the same activity that would take place if a conventional car was otherwise inoperable. 

One small twist is that it could be that the normal car aspects of the self-driving car are fully operable, and only the AI and related self-driving aspects are out-of-whack. In other words, the car might be a fully drivable car, by a human being, assuming that there are still human-drivable driving controls available in the self-driving car. 

Future designs of self-driving cars will likely eliminate human driving controls altogether. Your first thought might be that doing so is counterintuitive. It seems like keeping the driving controls intact would be handy in any circumstance involving the AI driving system being unable to adequately drive the car. A human could take over. 

Well, the counterargument is that a key premise for the advent of self-driving cars is that they will dramatically reduce the annual number of car crashes and the fatalities thereof. In the United States alone, there are about 40,000 annual car crash-related fatalities and approximately 2.5 million injuries. Much of that is due to drunk drivers, distracted driving, and other foibles of human drivers. By excising the human driver from behind the wheel, the hope is that the AI driving systems won’t rack up those kinds of statistics (AI driving systems won’t drive drunk, they won’t be distracted, etc.). 

Thus, if you retain the human driving controls, you are keeping the door open to human driving, and are not going to gain the full semblance of benefits by having AI driving systems. The nightmare is that a human will decide to turn off the AI driving system, perhaps doing so after coming out of a nighttime bar and having had too much to drink, and opt to drive the vehicle. Rather than allow that temptation, some assert that the best bet is to remove the human-accessible driving controls entirely.   

We don’t yet know how this will play out. Some insist that no one will ever take away their driving privileges, at least not until you pry their cold dead hands from the steering wheel.   

Returning to the flatbed tow truck scenario, assume that the AI driving system of the self-driving car is working fine and that the car aspects of the self-driving car are working fine too. Okay, you might be tempted to assume that the AI driving system will just zip up that ramp and park that self-driving car on the flatbed, doing so like a jet plane that lands on an aircraft carrier deck.   

Indeed, perhaps the AI driving system would undertake this flatbed tow truck driving challenge a lot better than the car driver did. We wouldn’t expect the AI to be quite so timid as the human driver. By taking the nervousness out of the driving equation, we seem to have a robot-like system that will drive without emotion and therefore drives more straightforwardly.   

Going even further, it causes one to wonder whether the AI driving system might do better at this task than the tow truck driver. Since we are comparing the AI driving system to human drivers, we ought to not just look at the car driver that was a neophyte at driving up onto the flatbed. Indubitably, it makes sense to compare to the expert driver, the tow truck driver, having done this same action zillions of times.   

In theory, if the AI driving system has been programmed for this kind of driving task, one supposes that it will potentially do as good a job as the car driver did, or even a better one. The problem is that the AI driving system probably has not been programmed for this specific kind of driving task.   

Keep in mind that the automakers and self-driving tech firms are focusing their energies on getting AI driving systems to drive a car safely from point A to point B, such as going from someone’s home to the mall. This involves driving on normal streets and highways. It involves coping with bike riders and pedestrians. And so on.   

That alone is a moonshot-like load of car driving difficulties. 

Programming an AI driving system to climb up a ramp onto a flatbed is not very high on the list of top priority development efforts. At best, it is on the edge or corner cases list, meaning that it is something identified as eventually worth pursuing, but considered a low priority right now.   

Aha, you are thinking, maybe the AI doesn’t need specific programming for going up a ramp and parking on the flatbed. Seems like that type of driving is generalizable and can be devised by anyone or anything that knows how to drive. 

In other words, pretend we went and found a teenage novice driver. He barely knows how to drive a car. We present him with the situation of the flatbed tow truck. Could this newbie driver figure out what to do? The reasonable answer is that yes, he could. The teenager might not like doing so, or might have difficulty with the task, but overall, we can likely agree that he could get the job done.   

Unfortunately, there are very few AI driving systems that have that kind of generalizable capability. In fact, we would want to think twice about that kind of capacity. There is a chance that the AI driving system might attempt to drive in inherently dangerous settings. Realize that the AI has absolutely no common-sense reasoning. 

No matter what you might say about teenagers, they do have some modicum of common sense. Sure, go ahead and scoff at this point, but thinking human beings, albeit immature ones, still have a common-sense facility that no one has yet been able to imbue into an AI system. Admittedly, the teenager might “reason” themselves into a really bad situation, and for that, we would all be saddened, though nonetheless they at least had some capacity to reason about what they are doing.   

Here’s then where this all lands in terms of the flatbed truck scenario. 

Unless an AI driving system has been programmed or ostensibly trained to cope with the tow truck matter, the odds are that the AI driving system would be ill-prepared to drive up that ramp. 

In addition, the sensors would likely be detecting the tow truck and the flatbed, and not be able to discern what it all portended. In short, the tow truck and its elements would be a blob that is blocking the path forward. 

The AI driving system would likely balk at being commanded to drive ahead. There are too many obstacles and seemingly unknown objects in the path forward. Much as the AI driving system might be programmed to avoid ramming a parked car ahead of the self-driving car, by refusing to move forward into the rear end of the parked car, this same reaction is bound to be the response by the AI driving system to the flatbed truck.   

You could say that the AI driving system would “refuse” to try and drive up the ramp. Again, recall that the refusal is not by a conscious effort. This is a refusal in the same manner that you might have a garage door with a built-in obstruction detector that won’t finish closing the garage door because an object is in the way.   
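This garage-door style of refusal can be sketched as a simple path-clearance check: any unclassified object inside the forward corridor causes the system to decline the "drive forward" command. The detection format, corridor dimensions, and function names below are illustrative assumptions, not any vendor's API:

```python
# Hypothetical sketch of refusal-as-automation: decline forward motion
# whenever any detected object blocks the planned corridor, much like a
# garage door's obstruction detector halts the door.

def path_is_clear(detections, path_width_m=2.5, lookahead_m=10.0):
    """Return False (refuse) if any detection lies within the forward corridor."""
    for obj in detections:
        in_corridor = abs(obj["lateral_m"]) <= path_width_m / 2
        ahead = 0.0 <= obj["ahead_m"] <= lookahead_m
        if in_corridor and ahead:
            return False  # unknown blob in the way: balk, don't proceed
    return True

# The flatbed ramp shows up as an unclassified blob dead ahead.
flatbed_scene = [{"label": "unknown", "ahead_m": 3.0, "lateral_m": 0.2}]
open_road = []
```

Note that nothing here "decides" anything in a sentient sense; the refusal is just a geometric predicate returning False.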

How can you convince the AI driving system to get up that darned ramp? 

Setting aside the earlier point that you could winch the self-driving car up the ramp, which doesn’t count per se since the self-driving car is merely a hefty paperweight in that instance (that’s not the spirit of the problem being tackled herein), we can consider some other options. 

You could try to trick the AI driving system.   

Perhaps mask the appearance of the tow truck and its ramp, making it look more like a normal road is ahead of the self-driving car. 

This is pretty much a really bad idea.   

Some are worried that it might be overly easy to fool a self-driving car into driving in settings where it should not be driving. Various adversarial attacks have been experimented with. In any case, the idea of doing this even with good intentions (versus evildoer intentions) is also rife with problems and dangers. You might regrettably fool the AI into getting the self-driving car into a quite dicey predicament for everyone involved.   

Another approach would be to contact the fleet operator of the self-driving car. Here’s what that means. Unlike conventional cars, it is anticipated that all self-driving cars will be part of some fleet, of one kind or another. The fleet operator will be able to use OTA (Over-The-Air) electronic communications to download software updates into the onboard AI driving system. Also, the OTA can be used to collect or upload data from the self-driving car, such as the driving scene data that was derived by the sensor suite.   

It could be that the AI driving system has a special software-enabled mode or added software component that has been devised to cope with this kind of tow truck flatbed driving scenario. Maybe it hasn’t been loaded into the self-driving car as yet, or maybe it is on-board and dormant until activated by an authorized remote indication. 

In that case, in theory, the component could be downloaded and then invoked, or if already on-board then simply activated. This is a bit of a stretch in that doing so on the fly is rather sketchy, but this does count as a possibility.   
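The download-or-activate flow might look something like the following minimal sketch, in which a capability ships on-board but stays dormant until a fleet operator activates it with an authorized remote command. The module name, the credential check, and the class layout are all hypothetical, purely for illustration:

```python
# Hypothetical sketch: a dormant "ramp loading" capability activated via
# an authorized OTA command. Module names and the credential are invented.

class DrivingSystem:
    def __init__(self):
        # The capability is installed on-board but dormant by default.
        self.modules = {"ramp_loading": {"installed": True, "active": False}}

    def activate(self, module: str, auth_token: str) -> bool:
        """Activate an installed module only with a valid fleet credential."""
        if auth_token != "fleet-operator-credential":  # placeholder auth check
            return False
        mod = self.modules.get(module)
        if mod and mod["installed"]:
            mod["active"] = True
            return True
        return False

    def can_drive_ramp(self) -> bool:
        return self.modules["ramp_loading"]["active"]
```

A real OTA pipeline would of course involve signed packages and far stronger authentication; the sketch only captures the installed-versus-activated distinction made above.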

Another thought you might have would be to ask a remote operator to take over the driving controls and manually drive the self-driving car up that vaunted ramp.   

Well, now you’ve opened an entire can of worms. 

Some self-driving cars are going to be outfitted with remote accessible driving controls, and some will not. Those that are opposed to remote accessible driving controls point out that such a capacity could be used by bad actors and in insidious ways. I won’t say much more about that here. Even if such remote driving were done by an authorized person, they are reliant upon the electronic communication connection between the vehicle and the agent that is doing the driving. 

You might be somewhat okay with the electronic communication delays or disruptions that could occur, depending upon the driving setting involved. For example, when a rover on another planet is remotely controlled, you have to expect that the communications are time-delayed and fractured. Of course, here on earth, the danger is that the vehicle is potentially surrounded by humans who could get hit, and a disruption in the communications linkage could lead to dire consequences.   

In short, there is a great deal of controversy about having remote driver accessibility to self-driving cars and you cannot pin your hopes on that kind of feature existing to solve the flatbed truck conundrum that we are discussing.   

There is a notable variant of that kind of approach. The variation is that a remote agent can provide advice to the AI driving system. This is an important distinction, namely between remote driving and the alternative of providing remote guidance or advice. In the use case of remote guidance, the AI driving system is still considered in charge of the driving task. 

The remote human makes recommendations to the AI driving system. The AI driving system must use its own capabilities to then ascertain whether to proceed. This has various limitations and tradeoffs, somewhat trying to get the best of both worlds, and might be viable to cope with the flatbed tow truck situation.   
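That division of authority can be sketched roughly as follows: a human advisor proposes a maneuver, but the on-board system stays in charge and vets the advice against its own constraints before acting. The advice format, the whitelisted maneuver name, and the speed threshold are all assumptions I've made up for the sketch:

```python
# Hypothetical sketch of remote guidance (versus remote driving): the AI,
# not the human advisor, decides whether a suggestion is ever followed.

MAX_ADVISED_SPEED_MPS = 3.0  # crawl speed appropriate for a ramp climb

def accept_advice(advice: dict, path_clear: bool) -> bool:
    """Vet a remote suggestion against the AI's own constraints."""
    if not path_clear:
        return False  # the AI's own perception vetoes the human suggestion
    if advice.get("maneuver") != "ascend_ramp":
        return False  # only a known, whitelisted maneuver is considered
    return 0.0 < advice.get("speed_mps", 0.0) <= MAX_ADVISED_SPEED_MPS

good_advice = {"maneuver": "ascend_ramp", "speed_mps": 1.5}
reckless_advice = {"maneuver": "ascend_ramp", "speed_mps": 12.0}
```

The key design choice is that the advisor's input is a request, never a command: the on-board checks run regardless of who is suggesting what.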

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Conclusion   

A self-driving car is not a mule. 

That being said, you could assert that at times an AI driving system will be stubborn in terms of “refusing” to drive where you want it to go. The beauty of human driving is that a human can drive however they want, meaning they can drive in places that you wouldn’t think a car could go. The downside and ugliness of human driving involves driving in places you ought not to go.   

Perhaps someday AI systems will be astute and cunning, able to navigate all kinds of driving situations that they have not heretofore seen. Until then, it won’t make sense to get mad at the mule, and definitely don’t kick one, since it will only hurt your own foot and won’t especially bother the AI. 

  

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

This post was first published on: AI Trends