The Famous Tullock Spike Thought Experiment Pertains To AI Autonomous Cars
The Tullock spike, a thought experiment to promote safer driving, might be outmoded inside a self-driving car—or perhaps not. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider   

One of the most fascinating thought experiments about the safety of how we all drive our cars is the legendary “Tullock spike” idea. 

Here’s how it goes.   


Imagine that on the steering wheel of every car there was a steel spike protruding toward the driver. Upon sitting down in the driver’s seat, you would be within a fraction of an inch of the endpoint of the spike. While driving such an equipped car, you would be continually under the threat of piercing your own chest by any driving action that caused you to lurch forward in the driver’s seat.   

This is a crafty and yet quite simple device that would get your attention, and presumably remind you to drive safely. 

That is the underlying crux of the thought experiment dreamed up by Professor Gordon Tullock of George Mason University sometime in the early 1960s. It is mentioned in his 1962 book The Calculus of Consent, co-authored with James Buchanan, though others attribute the idea to Armen Alchian of UCLA. In any case, the popular steel spike notion has come to carry Tullock's name. 

Would you be a safer driver if you had a steel dagger threatening your existence? It seems patently obvious that you would be.   

All of us would drive as though our lives depended upon it. Indeed, the beauty, or perhaps the ugliness, of the spike concept is that you would be more conscious of the dangers involved in driving a car. The act of driving a car carries grave risks all the time, yet we enormously downplay those risks. 

Driving recklessly is easy to do.   

There is a huge mental gap between thinking about how to safely drive and the potential result of driving poorly. Those drivers who zip along on the freeway, weaving in and out of lanes, do not make a mental connection between their speed and their chaotic driving actions. While inside the bubble of a car, the outside world at times appears to be a simulation, as though you are playing at driving while inside a video game or maybe inside The Matrix. 

The spike would reset that way of thinking.   

Acting as a front-and-center reminder of the dangers of driving, the abstract elements of driving safely would become exceedingly tangible. Tapping the brakes with any sudden movement would likely cause your chest to take a sharp nick from the tip of the dagger. Fortunately, not enough of a bleeder to do full harm. Nonetheless, those occasional cuts and pokes would add to the reminder of what happens when you aren’t driving safely.   

On the surface, this steel spike seems quite telling.   

Though if you try to carry this thought experiment to further mental reaches, the whole thing begins to somewhat unravel. Suppose you are a really safe driver and another car careens out of nowhere and rams into your car. There was nothing you could do to avoid the collision. No matter how attentive you were, the other vehicle mercilessly swung out of traffic and managed to steer into your car. The sad result is that you are fatally gored on the spike, albeit due to no particular cause of your own doing.   

Perhaps you shouldn’t have used a car at all. The safest way to avoid a car accident is to entirely avoid using cars. That doesn’t seem like a satisfactory way to look at things. 

Perhaps you are supposed to only drive on side streets and drive slowly. Even in that context, there are still opportunities for you to get caught on the spike. A dog runs out from a wooded area onto the street. Even though you were going only 15 miles per hour, you hit the brakes and once again the dagger claims a driver.   

Setting aside those gruesome thoughts, there is still something useful about the steel spike concept. Here’s the deeper meaning.   

This thought experiment arose during the time that seat belts were still coming into being. For those of you around during those times, you might recall the heated debates about wearing a seat belt. Some insisted they would never wear one. One argument was that it constrained your ability to drive and thus would make you get into car accidents, rather than aiding in avoiding them. Another argument was that they were unsightly and marred the joyful experience of driving.   

People came up with some real doozies of reasons to keep from putting on a seat belt. Today, we all seem to agree that wearing seat belts makes sense. The advent of seat belt reminder indicators served ably as a means to readily convert people into seat belt wearers. Still, there were people then, and likely still some today, who insist on subverting the requirement by buckling the seat belt and then sitting atop it, or via other outlandish workarounds.   

As they say, where there’s a will, there’s always a way to mess things up. 

The topic of seat belts is not solely rosy. 

Researchers pointed out that there was an oddball or ironic adverse consequence that undercut the safety-gaining basis of seat belts. The rub was that people would believe themselves to be safer, and therefore they would drive less safely. 

Work by Sam Peltzman derived an economic theory of risk compensation, and it was seemingly evident in the case of seat belts. He contended that attempts to increase safety measures will inexorably lead to heightened risk-taking behaviors. This led to numerous studies that tried to figure out whether the use of seat belts by drivers was producing more car crashes and fatalities, or reducing those numbers. The results of those studies have been mixed, with one study making a claim and a different study undermining that stated result.   

We would all seem to agree that a seat belt is an innocent device and poses no outsized threat. In theory, its primary purpose is to keep you at the wheel and in control of the car. Also, if you did get struck by another vehicle, it keeps you from flying around inside the vehicle or being ejected out onto the street.   

The qualm about this added safety is that it might inspire you to drive recklessly. In your mind, you know that the seat belt allows you to stretch things to the dire edge. Without a seat belt, you would be unable to take such chances.   

In short, something intended to make us safer can inadvertently spark us to be riskier in our behavior. 

A one-for-one correspondence would imply that we are equally safer and equally riskier, and perhaps the net result is a balance that means the added safety measure made no difference. That would be a darned shame given the cost to implement the safety measure. Worse still, the riskiness might go off the charts, far exceeding the added safety, and ergo we become more deadly in our driving. 
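This offsetting arithmetic can be made concrete with a toy calculation. The percentages below are invented purely for illustration, not drawn from any study:

```python
# Hypothetical illustration of risk compensation offsetting a safety gain.
# All numbers are made up solely for the sake of the arithmetic.

baseline_harm_rate = 1.00   # harm per million miles with no safety device
safety_reduction = 0.30     # device cuts harm per crash by 30%
risk_increase = 0.30        # riskier driving raises crash frequency by 30%

# Net harm with the device: frequency goes up, severity per crash goes down.
net_rate = baseline_harm_rate * (1 + risk_increase) * (1 - safety_reduction)
print(f"net harm rate: {net_rate:.2f}")  # 1.30 * 0.70 = 0.91

# Note that a "one-for-one" offset in this multiplicative model does not
# land exactly back at the baseline; whether the device helps on net
# depends entirely on which effect dominates.
```

The same toy model also shows the worse case the text warns about: set the risk increase above the safety reduction and the net harm exceeds the baseline, meaning the safety measure backfired.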

That’s quite a dose of irony when including a new safety contraption. 

If a safety measure leads to worse results, presumably the right step involves removing the added safety apparatus. Things would then go back to normal. Well, not necessarily. It could be that the genie was let out of the bottle. People got used to driving like maniacs, and they do not accordingly readjust their behavior when the safety measure is removed. You could argue that the behavior would gradually readjust, though that has some debatable contentions too. 

Economist Sanford Ikeda noted in a presumed tongue-in-cheek retort that you could replace some steel spikes with identical-looking rubber ones. This would be randomly undertaken. When you got into a car, you (in theory) would not have any means to ascertain whether the vehicle had the relatively harmless rubber spike, or whether it had the life-usurping steel spike. People would potentially be spared when they had the rubber spike, though all would presumably drive more safely since they had to assume that the steel spike was in their car.  
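Ikeda's randomized-spike twist is at bottom an expected-value argument: since no driver can observe which spike they have, every driver must behave as though the spike were lethal, yet only a fraction of crashes are actually fatal. A toy simulation, with entirely invented probabilities, might look like this:

```python
import random

random.seed(42)

STEEL_FRACTION = 0.5   # hypothetical: half the fleet gets the real spike
CRASH_PROB = 0.01      # hypothetical: universal caution makes crashes rare
TRIALS = 100_000

fatal = 0
for _ in range(TRIALS):
    # Spike type is assigned randomly and is invisible to the driver.
    has_steel = random.random() < STEEL_FRACTION
    # All drivers drive cautiously regardless, so the crash rate is uniform.
    crash = random.random() < CRASH_PROB
    if crash and has_steel:
        fatal += 1

# In expectation the fatality rate is STEEL_FRACTION * CRASH_PROB = 0.005,
# half that of an all-steel fleet, with no loss of the deterrent effect.
print(f"fatal outcomes per trial: {fatal / TRIALS:.4f}")
```

The design choice is the point: the deterrent comes from the uncertainty, not the steel, so diluting the steel dilutes the harm without diluting the caution.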

Another twist that has been floated involves removing the brakes on cars. Yes, if brakes are a form of safety, and if safety leads to riskier driving, might as well remove any and all mechanisms of safety, including the brakes. In that case, the driving behavior would no longer be as risky, or so that absurd extreme would imply.   

Shifting gears, as it were, we can ponder what the future of cars and car driving will likely consist of.   

The emergence of self-driving cars would appear to eliminate the need for a steel spike at the steering wheel, for the obvious reason that there will no longer be steering wheels at all. The eventual path of self-driving cars entails removing all the driving controls from the interior and thus preventing any human attempts to drive the vehicle. 

Does this imply that we put to rest the Tullock steel spike and call it a day? 

Turns out that there are still some handy insights that arise by considering the vaunted steel spike, even for the revered arrival of AI-based true self-driving cars.  

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/  

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).   

There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Safety 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

As mentioned earlier, the belief is that all the human accessible driving controls will ultimately be removed from true self-driving cars. Today, there are many efforts underway that have supplemented a conventional car with the self-driving suite of capabilities. In those instances, the steering wheel and pedals remain, though they are generally AI-controlled to avert a double-driver occurrence (some believe we will always keep the driving controls, while others insist that they have to be removed). 

Assume for the moment that the preponderance of true self-driving cars will not have any steering wheels in them. This means there is also no steel spike either.   

The underlying crux of the spike concept was that it got us to think about safety and human behavior. This was prompted too by the supposition that as a safety measure is added, there is a potential consequent reaction that drivers would become riskier in their driving.   

Let’s put that same overall notion into reuse.   

Imagine that you are a passenger inside a self-driving car. You know that the AI is a highly safe driver, especially in comparison to a human driver (this has yet to be shown, and we might end up with AI driving systems that are only as good as human drivers in terms of safety, perhaps worse).   

Would your actions as a passenger inside a true self-driving car be less risky, equally risky, or riskier than if you were riding in a human-driven car?   

Well, per the theory about safety and risk, the odds are that you would behave in a riskier manner.   

Your first comment might be that it doesn't matter since you are not driving the car. That is indeed the case, namely, you aren't driving the vehicle, but you are nonetheless inside an automobile in which you can still get injured or killed. Suppose the self-driving car is zipping along on the freeway and all of a sudden has to jam on the brakes. If you are not wearing a seatbelt, due to the belief that there was no need to do so, you might go flying and end up getting badly hurt.   
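Rough physics shows why the unbelted passenger fares so badly; the figures here are back-of-the-envelope illustrations, not crash-test data:

```python
# Back-of-the-envelope physics of a hard-braking event for an unbelted
# passenger. Numbers are illustrative, not drawn from any crash study.

G = 9.81                  # gravitational acceleration, m/s^2
passenger_mass_kg = 75.0  # hypothetical passenger
braking_decel_g = 1.0     # strong braking on dry pavement is roughly 1 g

# Force needed to decelerate the passenger along with the car.
force_n = passenger_mass_kg * braking_decel_g * G
print(f"restraining force required: {force_n:.0f} N")  # 736 N

# That is roughly the passenger's entire body weight, far more than arms
# braced against a seat can reliably supply; without a belt, the passenger
# keeps moving at the pre-braking speed until something stops them.
```

The seat belt exists precisely to supply that restraining force, which is why skipping it inside a "safe" self-driving car is still a risky bet.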

Take another example. 

You opt to reach outside the self-driving car and wave at pedestrians on the sidewalk. That might be the full extent of your attempts at outreach, but since you are comfortably nestled inside a true self-driving car, you extend further out the window. About half of your body is now clinging to the outside of the car. I trust you can see where this is going. Another car comes along and perhaps sideswipes the self-driving car, crushing you, or maybe you entirely fall out of the self-driving car, doing so while the vehicle is underway.   

The point is that there is a potential danger that passengers inside self-driving cars will believe themselves to be in a safer posture than when inside a conventional human-driven car, thus leading to untoward behaviors with sad and severe consequences.   

Do we need to add a kind of veritable “steel spike” to the innards of a self-driving car to remind riders that they are still inside a rolling canister that can at any time cause tremendous physical forces to adversely impact them? 

Maybe.   

There is another angle to the human behavior conundrum. 

We have to expect that there will be both human-driven cars and self-driving cars on our roadways, both mixing together, and doing so for likely many decades to come. Keep in mind that today there are about 250 million conventional cars in the United States alone, and they will not simply disappear overnight due to the advent of self-driving cars. 

I’ve written extensively that we are already beginning to witness human drivers that opt to play dirty tricks on self-driving cars. These roadway bullies like to force self-driving cars to make a sudden braking action, sometimes just for fun and sometimes because they (the human bully) are driving erratically. In more subtle ways, you can get ahead in traffic by rapidly switching lanes and cutting in front of a self-driving car. The AI is usually programmed to let you in, plus the AI will slow down the self-driving car to try and regain the proper driving distance between vehicles.   
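The cut-in scenario boils down to a simple headway-keeping rule: the AI notices the gap to the lead vehicle has shrunk below a target time headway and slows until the gap is restored. A minimal sketch follows; the two-second headway target and the proportional gain are illustrative assumptions, not any automaker's actual control law:

```python
def adjust_speed(ego_speed_mps: float, gap_m: float,
                 target_headway_s: float = 2.0, gain: float = 0.5) -> float:
    """Return a reduced ego speed that works toward the target time headway.

    When another car cuts in, gap_m shrinks; the desired gap is
    ego_speed * target_headway, and we slow proportionally to the deficit.
    """
    desired_gap_m = ego_speed_mps * target_headway_s
    deficit_m = max(0.0, desired_gap_m - gap_m)
    # Proportional slowdown, clamped so speed never goes negative.
    return max(0.0, ego_speed_mps - gain * deficit_m / target_headway_s)

# A car cutting in 20 m ahead while we travel 30 m/s (desired gap 60 m)
# triggers a slowdown; with a full gap, the speed is left unchanged.
print(adjust_speed(30.0, 20.0))  # 30 - 0.5 * 40 / 2 = 20.0
print(adjust_speed(30.0, 60.0))  # gap satisfied: 30.0
```

This yielding behavior is exactly what the roadway bully exploits: the controller always backs off rather than contesting the gap.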

Why do people believe they can get away with this kind of risky driving behavior?   

Because they assume that the AI self-driving car is a safe driver. Were the AI to be a rogue or rude driver, which is akin to what humans do, those human daredevils would be less likely to pull those kinds of stunts (so one would assume).   

Again, the safety and risk equation rears its head. We can anticipate that as more and more self-driving cars enter into the roadway mix, and assuming they are sufficiently safe at driving, the human drivers might increase their riskiness of driving practice under the belief that it is okay to do so, similar to the logic used about wearing seat belts.   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

We can add one additional example to this plateful of safety and risk heightened behavioral stew.   

AI self-driving cars are ostensibly being programmed to be safe drivers. Part of this safety element involves being able to contend with human drivers. Some assert that perhaps self-driving cars should not be so gentle and therefore by being less safe (or appearing to be so), would curtail the risk increase by human drivers. 

Assume that eventually there are hardly any human-driven cars and the mainstay of traffic is entirely self-driving cars. 

Could self-driving cars be amped up to take more extreme driving measures?   

Sure, it seems logical, given that the other surrounding cars are all safer than before (we are so assuming), and thus the risk of driving at the edge is presumably lessened. An example would be that we could lift the existing speed limits and let the self-driving cars go as fast as they so deem. Part of the basis for speed limits was to reduce the severity of car crashes, but if there are going to be very few such crashes, maybe we can do away with speed limits.   

This doesn’t fully pencil out, so don’t get overly excited. Pedestrians can still step into the street and ruin everyone’s day. In any case, on freeways and open highways, there is some logic to letting those self-driving cars run with the wind.   

You can imagine how elated the AI would be, asking you as a passenger whether it is okay to put the pedal to the metal. Might as well let the AI have some fun, and meanwhile, you can get to your destination in record time.   

Just hope the AI keeps its steely eyes peeled on the road ahead and there aren't any steel spikes lying on the pavement.  

Copyright 2021 Dr. Lance Eliot   

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] 


This post was first published on: AI Trends
