The use of edge computing and the cloud for self-driving vehicles is not a win-lose affair; instead, it ought to be considered a win-win.

By Lance Eliot, the AI Trends Insider 

What happens when you mix together cats and dogs? 

Some would say the result could be rather volatile. That is because we usually believe and tend to accept the notion of cats versus dogs. But it doesn’t always have to be that way. Instead of saying cats versus dogs, it might be better to emphasize cats and dogs. Anyone who has watched online videos about cats and dogs would certainly see that these two beloved animals can get along.

There is nothing more endearing than an excitable dog and an effervescent cat that opt to play together, share a hard-earned nap side-by-side, and otherwise jointly relish their coexistence on this planet. Yes, they can coexist and even become BFFs (best friends forever).

What tends to tear them apart in non-domesticated settings amid the wilds of nature involves the bitter fight for survival and having to battle over scarce food that they both are seeking. One can certainly understand how being pitted against each other for bare-bones survival purposes might get them into fierce duels when nourishment is on the line.   

Some distinctive animalistic behavioral differences enter into the picture too. For example, dogs delight in chasing after things, and thus they are prone to chasing after a cat that they might spy and seek to play with. Cats aren’t necessarily aware that the dog is giving chase for fun and are therefore apt to react as though the pursuit is nefarious.

Another aspect of a notable difference is that dogs tend to wag their tails when they are happy, while cats usually whisk their tails when they are upset. From a dog’s perspective, the cat’s tail wagging might seem like a friendly gesture and an indication that all is well. From a cat’s perspective, the dog’s tail whipping back-and-forth might be interpreted as a sign of an angry beast that ought to be avoided. In that sense, you could conjecture that the difficulty of having cats and dogs get along is based on everyday miscommunication and misunderstanding of each other.   

Why all this discussion about cats and dogs? 

Because there is another realm in which there is a somewhat false or at least misleading portrayal of two disparate entities that supposedly don’t get along and ergo must be unpleasant adversaries. I’m talking about edge computing and the cloud.   

Some pundits claim that it is edge computing versus the cloud. Wrong! 

The more sensible way to phrase things entails striking out the “versus” and declaring edge computing and the cloud (for those of you who prefer that the cloud get first billing, it is equally stated as the cloud and edge computing; you are welcome to choose whichever order seems most palatable to you).

The point is that they too can be BFFs. 

Let’s consider a particular context to illustrate how edge computing and the cloud can work together hand-in-hand, namely within the realm of autonomous vehicles (AVs).   

As avid readers of my column are aware, I’ve emphasized that we are on the cusp of some quite exciting days ahead for the advent of autonomous vehicles. There is a grand convergence taking place that involves high-tech advances, especially in the AI arena, along with continued miniaturization of electronics and the ongoing cost reduction of computing that is inexorably making AI-based driving systems efficacious.   

When I refer to autonomous vehicles, you can generally interchange the AV moniker with a reference to self-driving, which is the somewhat more informal and less academic-sounding way to describe these matters. There are autonomous cars, autonomous trucks, autonomous drones, autonomous submersibles, autonomous planes, autonomous ships, and so on that are gradually being crafted and put into use. You can readily recast this by saying there are self-driving cars, self-driving trucks, self-driving drones, self-driving submersibles, self-driving planes, and self-driving ships, rather than using the AV naming.

A rose by any other name is still a rose. 

For this discussion about the cloud and edge computing, it will be easiest to focus on self-driving cars, though you can easily extrapolate the remarks to apply to any of the other self-driving or autonomous vehicle types too. 

How does the cloud pertain to self-driving cars?   

Via the use of OTA (Over-The-Air) electronic communications, it is possible and extremely useful to push new software updates and patches into the onboard AI driving system of a self-driving car. This remote access capability makes quickly applying the latest software a breeze, rather than having to take the vehicle to a dealership or car shop and physically have the changes enacted.
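To make that concrete, here is a minimal sketch, in Python, of what an OTA update check might look like. The manifest URL, manifest fields, and the staging helper are all hypothetical stand-ins, not any automaker’s actual API:

```python
# A minimal, hypothetical sketch of an OTA update check; the URL, manifest
# fields, and helper names are illustrative, not any vendor's real API.
import hashlib
import json
import urllib.request

UPDATE_MANIFEST_URL = "https://example.com/fleet/updates/manifest.json"  # placeholder

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def stage_for_install(package: bytes) -> None:
    """Stand-in for the real installer: persist the package for the next safe stop."""
    with open("/tmp/ota_staged.bin", "wb") as f:
        f.write(package)

def apply_update_if_newer(installed_version: str) -> bool:
    """Pull and verify a new build only when the cloud offers a newer version."""
    with urllib.request.urlopen(UPDATE_MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)
    if parse_version(manifest["version"]) <= parse_version(installed_version):
        return False  # already current; nothing to do
    with urllib.request.urlopen(manifest["package_url"], timeout=60) as resp:
        package = resp.read()
    # Verify integrity before staging; a production system would also check a signature.
    if hashlib.sha256(package).hexdigest() != manifest["sha256"]:
        raise ValueError("OTA package failed integrity check")
    stage_for_install(package)
    return True
```

The key design point is that the vehicle pulls and verifies the update itself, and the actual install happens only when the vehicle is in a safe state, not while the AI is driving.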

OTA also provides for uploading data from the onboard systems up into the cloud. Self-driving vehicles have a slew of sensors that are used to detect the surroundings and figure out where to drive. In the case of self-driving cars, this oftentimes includes video cameras, radar, LIDAR, ultrasonic units, and the like. The data collected can be stored within the vehicle and can also be transmitted up into the cloud of the fleet operator or automaker.  
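Going the other direction, a rough illustration of batching sensor snapshots and uploading them to a fleet operator’s endpoint might look like this; the endpoint, vehicle ID, and payload fields are purely hypothetical:

```python
# A hypothetical sketch of batching onboard sensor snapshots and uploading them
# over OTA to a fleet operator's endpoint; names and fields are illustrative only.
import gzip
import json
import time
import urllib.request

UPLOAD_ENDPOINT = "https://example.com/fleet/telemetry"  # placeholder

def package_snapshots(snapshots: list) -> bytes:
    """Compress a batch of sensor snapshots (camera/radar/LIDAR summaries, etc.)."""
    payload = {"vehicle_id": "demo-vehicle-001",
               "captured_at": time.time(),
               "frames": snapshots}
    return gzip.compress(json.dumps(payload).encode("utf-8"))

def upload_batch(snapshots: list) -> int:
    """Send one compressed batch to the cloud; returns the HTTP status code."""
    body = package_snapshots(snapshots)
    req = urllib.request.Request(
        UPLOAD_ENDPOINT,
        data=body,
        headers={"Content-Encoding": "gzip", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```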

You hopefully now have a quick gist of how the cloud pertains to self-driving cars.

Next, consider the nature of edge computing and how it applies to self-driving cars.   

Edge computing refers to the use of computer-based systems that are placed at the “edge” or near to the point at which a computing capability is potentially needed. For roadway infrastructure, there is interest in putting edge computing devices along our major highways. The notion is that this computing facility would keep track of the nearby roadway status and electronically broadcast that status.

Imagine that you are driving along on a long and winding road (hey, that’s something worthy of making a song about). You are dutifully keeping your eyes on the highway and are trying to drive with abundant care and attention. Unbeknownst to you though is that there is some debris about a mile up ahead, sitting smack dab in the middle of your lane. 

Without getting any kind of precautionary alert, you are bound to unexpectedly come upon the debris and react impulsively. Perhaps you swerve to avoid the debris, though this veering action might cause you to lose control of the vehicle, or maybe you slam head-on into traffic coming in the other direction. Had you been tipped beforehand about the debris you could have prepared to cope with the situation.   

Assume that an edge computing device has been placed along that stretch of road. The edge computer has been getting info about the roadway and accordingly taking action. Upon getting notified about the roadway debris, the edge computer has contacted the local authorities and requested that a roadway service provider come out and remove the debris. Meanwhile, this edge computing device is also acting as a kind of lighthouse beacon, sending out an electronic message to alert any upcoming traffic about the debris. 
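Here is a minimal sketch of how that roadside edge device’s logic might be organized. The helper names (notify_road_service, broadcast_beacon) are illustrative stand-ins, not an actual roadside-unit API:

```python
# A hypothetical sketch of the roadside edge device's logic: log a debris report,
# notify a road-service endpoint, and keep beaconing a warning to passing traffic.
import json
import time

active_hazards = []  # hazards this edge node is currently beaconing

def notify_road_service(hazard: dict) -> None:
    """Stand-in for contacting local authorities / a roadway service provider."""
    print("dispatch requested:", json.dumps(hazard))

def broadcast_beacon(message: bytes) -> None:
    """Stand-in for the short-range radio broadcast to approaching vehicles."""
    print("beacon:", message.decode())

def handle_debris_report(lane: int, mile_marker: float) -> None:
    """Record a reported hazard and ask for it to be cleared."""
    hazard = {"type": "debris", "lane": lane, "mile_marker": mile_marker,
              "reported_at": time.time()}
    active_hazards.append(hazard)
    notify_road_service(hazard)

def beacon_loop(cycles: int = 3, interval_s: float = 1.0) -> None:
    """Periodically re-broadcast every active hazard so upcoming traffic is warned."""
    for _ in range(cycles):
        for hazard in active_hazards:
            broadcast_beacon(json.dumps(hazard).encode("utf-8"))
        time.sleep(interval_s)
```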

A car equipped with a receiver that can read the signals emitted by the edge computer could let a human driver know that there is debris up ahead. In the case of a self-driving car, the AI driving system would receive the signal and plan the driving task to deal with the soon-to-be-reached debris.

There are major efforts underway to develop and deploy V2I (vehicle-to-infrastructure) capabilities that would undertake the kind of activities that I’ve just depicted. We will eventually have traffic signals that are more than simply light-emitting red-yellow-green lanterns. You can expect that traffic signals will be packed with computing capabilities and be able to perform a host of traffic control tasks. The same can be said for nearly all types of roadway signs and control features. The speed limit can be conveyed electronically, in addition to being shown on a signboard.   
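As a purely illustrative sketch (not an actual V2I standard), the kind of message a “smart” traffic signal or electronic speed-limit sign might broadcast could be structured like this:

```python
# A hypothetical sketch of a V2I broadcast from a roadside unit such as a
# traffic signal or speed-limit sign; the field names are illustrative, not a standard.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class V2IMessage:
    source_id: str  # which roadside unit sent this
    kind: str       # e.g. "signal_phase" or "speed_limit"
    payload: dict   # kind-specific details
    sent_at: float  # epoch seconds, so receivers can judge staleness

def encode(msg: V2IMessage) -> bytes:
    """Serialize a V2I message for broadcast."""
    return json.dumps(asdict(msg)).encode("utf-8")

# Example: a signal announcing its phase, plus an electronically conveyed speed limit.
phase_msg = V2IMessage("intersection-42", "signal_phase",
                       {"state": "red", "seconds_to_green": 12}, time.time())
limit_msg = V2IMessage("sign-17", "speed_limit",
                       {"mph": 55, "reason": "work_zone"}, time.time())
print(encode(phase_msg), encode(limit_msg))
```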

Since we are discussing V2I, it is worthwhile to also mention V2V (vehicle-to-vehicle) electronic communications. 

Cars will be equipped to send messages to other nearby cars via V2V. Returning to the debris scenario, suppose a car came upon the debris and no one else had yet encountered the obstruction. This first car to do so could transmit an electronic message to alert other nearby cars to be wary of the debris. Other cars that are within the vicinity would presumably pick up the electronic message and then warn the driver of the vehicle accordingly. 
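For illustration only, here is a hypothetical sketch of that first car’s V2V broadcast, sent as a simple UDP datagram to nearby listeners; the port and message fields are made up and are not the actual DSRC or C-V2X message formats:

```python
# A hypothetical sketch of a car broadcasting a debris alert to nearby vehicles.
# The port, address, and message fields are illustrative, not a real V2V standard.
import json
import socket
import time

V2V_PORT = 47000  # placeholder port

def broadcast_hazard(lat: float, lon: float) -> None:
    """Send a one-shot hazard alert on the local broadcast address."""
    msg = {"kind": "hazard", "subtype": "debris", "lat": lat, "lon": lon,
           "sender": "car-ab12", "sent_at": time.time()}
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(msg).encode("utf-8"), ("255.255.255.255", V2V_PORT))

broadcast_hazard(34.0522, -118.2437)
```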

One aspect of V2V that comes into question is the longevity of such messages. If there is a bunch of car traffic, the vehicles would all be sharing word about the debris among themselves. On the other hand, if the first car to encounter the debris sends out a message but there isn’t any other nearby traffic, the debris warning won’t be hanging around and able to forewarn others. A car that comes along an hour later on this somewhat deserted highway will not be within range of that first car and therefore won’t get the helpful warning.

This is a key point in favor of edge computing as an augmentation to V2V (or, in lieu of V2V if not otherwise available).   

An edge computing device could be stationed along a roadway and be scanning the V2V messaging.   

By examining the V2V crosstalk, the edge device opts to start beaconing that there is debris on the road up ahead. This now allows for greater longevity of the messaging. Even after that first car is long gone and much further away, the edge computer can continue to make any additional traffic aware of the situation. Note that it is also possible that the car finding the debris might have sent a direct V2I message to the edge device, in which case that’s another means for the edge computer to discover the status of the roadway.
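A minimal sketch of how the edge device could provide that longevity, assuming an illustrative two-hour expiry, is to cache overheard V2V hazard messages and keep re-beaconing them until they age out:

```python
# A hypothetical sketch of a roadside edge device solving the longevity problem:
# cache overheard V2V hazard messages and keep re-beaconing them until they expire,
# long after the reporting car has driven out of range. Thresholds are illustrative.
import json
import time

HAZARD_TTL_S = 2 * 60 * 60  # keep warning traffic for two hours (illustrative)

cached_hazards: dict[str, dict] = {}

def on_v2v_overheard(raw: bytes) -> None:
    """Record any hazard message overheard on the V2V channel."""
    msg = json.loads(raw)
    if msg.get("kind") == "hazard":
        key = f'{msg["subtype"]}@{msg["lat"]:.4f},{msg["lon"]:.4f}'
        cached_hazards[key] = msg

def hazards_to_rebroadcast(now: float) -> list:
    """Drop expired entries, then return everything still worth beaconing."""
    expired = [k for k, m in cached_hazards.items()
               if now - m["sent_at"] > HAZARD_TTL_S]
    for k in expired:
        del cached_hazards[k]
    return list(cached_hazards.values())
```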

Time for a twist in the tale. 

I mentioned earlier that some are suggesting that edge computing and the cloud are at loggerheads with each other. You might be puzzled as to how cloud computing and edge computing are rivals when it comes to the self-driving car setting that I’ve described (they aren’t, but some are claiming that they are).   

Here’s the (vacuous) assertion. 

Those pundits are claiming that the time lag of the cloud versus edge computing means that the cloud is unsuitable for self-driving cars, while edge computing is suitable since it offers (by-and-large) lower latency for electronically communicating with those in-motion self-driving vehicles.

We can unpack that contention and reveal that it is invalid overall. 

First, it will be useful to clarify the difference between autonomous vehicles and semi-autonomous vehicles.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). 

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Delving Into Edge Computing And The Cloud 

Returning to the point made about the claimed slowness of cloud access in contrast to edge computing access, you’ll see in a moment that this is a generally legitimate distinction, but one that is being misapplied in a misleading manner.

As an aside, there are obviously instances whereby the access to a conventional cloud could be slower than access to an edge device (all else being equal, we might expect this), but there are also instances whereby the cloud access might be faster (though, likely rarer, depending upon numerous technological assumptions).   

Anyway, do not be distracted by the ploy about the access timing. It is like one of those infamous card tricks or hat tricks, getting you to look elsewhere rather than keeping your eye on the ball. The trickery involves an allusion to the idea that an autonomous car is going to be taking active driving instructions from either the cloud or edge computing. To this, I say hogwash. Admittedly, some are pursuing such an approach, but I’ve previously and extensively argued this is a dubious avenue.

Here’s what I mean. 

Consider for a moment the role of a human driver when approaching the earlier depicted scenario about debris being in the roadway. A human driver might receive a message, whether by text message, phone call, or some other means, letting them know that there is debris up ahead. The human driver then decides to perhaps slow down, getting ready to potentially come to a stop. Upon reaching the debris, the human driver opts to veer into the emergency lane to the right of the roadway, deftly driving around the roadway debris.

Notice that the driving actions were entirely performed by the human driver. Even if a text message might have said to slow down and get ready to aim to the right of the debris, the final choice of how to drive the car was nonetheless on the shoulders of the driver. They merely received hints, tips, suggestions, or whatever you want to call them. In the end, the driver is the driver.

The reason for covering that seemingly apparent aspect of the driver being the driver is that (in my view) the AI driving system has to be “the driver being the driver” and not be driven via some remote outside-the-car entity.   

If messages are coming from the edge device about what to do, the AI driving system is still on its own, as it were, needing to ascertain what to have the driving controls undertake. The same thing applies to any communications with the cloud. The AI driving system, despite whatever the cloud might be informing the vehicle, should still be “the driver” and undertake the driving task.   

I think you can see why latency would be a crucial matter if the AI driving system were dependent upon an external entity to actually work the controls of the vehicle. Just imagine that a self-driving car is moving along at, say, 75 miles per hour, and there is an external entity that is controlling the driving (such as a human remote operator). All it takes is a split-second delay or disruption in the communications, and a calamity could readily result.
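To put rough numbers on it, at 75 miles per hour a vehicle covers about 110 feet (roughly 34 meters) every second, so even a 300-millisecond hiccup in a remote link means the car has traveled more than 30 feet before any corrective command could possibly arrive.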

Okay, so if the AI driving system is the driver, this also implies that the latency of the edge computing or the cloud should not make a demonstrable difference per se. Just as a human driver cannot assume that something external to the car is always available and always reliable, the driving aspects have to be dealt with by the onboard AI driving system, and it must do so regardless of whatever externally augmented info happens to be available.

In the roadway debris example, suppose that there is an edge computing device nearby that logged an indication about the debris, and accordingly is beaconing out an electronic warning. A car is coming along. In a perfect world, the beacon signal is detected and the driver is forewarned. 

In the real world, perhaps the beacon is faltering that day and not sending out a solid signal. Maybe the edge device is broken and not working. Or the device on the car that picks up the signal might be faulty. And so on.

As long as the AI driving system considers such connections as supplemental, there is not a glaring issue per se, since the AI is presumably going to cope with the debris upon directly detecting the matter. Sure, we would prefer a heads-up, but the point is that the heads-up is not essential to the driving task.   
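To make the “supplemental, not essential” idea concrete, here is a minimal hypothetical sketch of how a planner might fold in an external hint without ever ceding the driving decision to it; the thresholds and function names are illustrative only, not any actual driving stack:

```python
# A hypothetical sketch of treating edge/cloud alerts as supplemental hints:
# the onboard planner always drives from its own perception, and an external
# hint merely adds caution (e.g., easing off speed) if it is fresh and nearby.
import time

HINT_MAX_AGE_S = 60.0      # ignore stale warnings (illustrative threshold)
HINT_MAX_RANGE_M = 2000.0  # ignore warnings about far-away locations

def choose_speed(onboard_clear_ahead_m: float, base_speed_mps: float,
                 hint: dict, now: float, distance_to_hint_m: float) -> float:
    """Pick a target speed from onboard perception; a valid hint only adds caution."""
    # Onboard perception is authoritative: slow down if the sensors see trouble.
    speed = base_speed_mps if onboard_clear_ahead_m > 150.0 else base_speed_mps * 0.5
    # A hint that is fresh and nearby earns extra caution, but never replaces perception.
    if hint and (now - hint["sent_at"]) < HINT_MAX_AGE_S \
            and distance_to_hint_m < HINT_MAX_RANGE_M:
        speed = min(speed, base_speed_mps * 0.7)
    return speed

# Example: a debris hint was received, yet the car still decides from what it senses.
hint = {"kind": "hazard", "subtype": "debris", "sent_at": time.time() - 5}
print(choose_speed(onboard_clear_ahead_m=300.0, base_speed_mps=33.5,
                   hint=hint, now=time.time(), distance_to_hint_m=800.0))
```

Notice that if the hint never arrives, arrives late, or turns out to be bogus, the function still produces a safe driving decision from the onboard perception alone, which is precisely the point.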

Some might misinterpret this point as though I am suggesting that there should not be any such external information being provided, which is not at all what I am saying. Generally, the more the merrier in terms of providing relevant and timely info to a driver. The key is that the driver, even without such info, must still be able to drive the car. 

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Conclusion 

The use of edge computing and the cloud for self-driving vehicles is decidedly not a win-lose affair; instead, it ought to be considered a win-win synergy. Unfortunately, it seems that some feel compelled to pit the advent of edge computing and the advent of the cloud against each other, as though these two have to be acrimonious enemies. Use the edge, don’t use the cloud, because of the claimed latency aspects, these pundits exclaim.

They are making a mishmash that doesn’t hold water in this context.   

One might (generously) excuse their misguided viewpoint as being similar to misunderstanding the wagging of the tail of a dog and the whisking of the tail of a cat. In any case, trying to rile up a sensible and peaceful coexistence into a seemingly adverse battle or struggle of one over the other is not particularly productive.   

A last thought for the moment on this topic.   

The remaining and beguiling question is whether the somewhat analogous example entailing the dogs and cats means that the cloud is the dog and the edge computing is the cat, or perhaps the dog is the edge computing and the cat is the cloud. I’ll ask my beloved pet dog and cat what they say, and maybe let them duke it out to decide.   

Well, then again, I know that they will likely take things in stride, gently nudging each other as they mull over this thorny question, and they are likely to arrive at an answer that they both find delightful. That’s just how they are.

Copyright 2021 Dr. Lance Eliot  http://ai-selfdriving-cars.libsyn.com/website 

This post was first published on: AI Trends