Lol. Lmao even.
I’ve at times thought myself too much a pessimist, but self-driving cars are one technology where I feel several companies and their hype machines are knowingly barking up the wrong tree. Self-driving cars aren’t a technological problem; they are truly a political and legal problem. Let me explain.
We have had the technology to make a fully autonomous car, using sensors and automatic feedback controls, for many years now, and it only took a few years of Google engineering to produce a program that could drive with greater fidelity than almost any human. Fidelity here means the ability to get there and back in a reasonable amount of time while adhering to road safety. Obviously a car doesn’t have an ego, so it can be programmed not to speed, to drive defensively, to obey traffic laws, and so on. And the split-second reaction times required when zooming down the freeway are more easily handled by a computer than by a human anyway. But in my view that isn’t the barrier to self-driving cars; the barrier is what happens when things go wrong.
If a self-driving car is responsible for a crash, who is held responsible? In the real world, responsibility in crashes is assigned in order to pay restitution and prevent future harm. Someone has to pay the victim’s hospital bills, and it might be necessary to prevent future harm by prohibiting unsafe drivers from driving. In almost every circumstance, the driver is presumed solely at fault if their car causes a crash; only in a few specific cases, where the driver acted correctly and the car did not respond to their inputs, can the manufacturer or even the last person to work on the car be held at fault instead.
But who is at fault if a self-driving Google Lexus crashes? Let’s cut to the chase: Lexus will not be at fault in any sense, and in Google’s visionary world the self-driving car would have no pedals or steering wheel, so there would be no “driver” as such. The only remaining answer is that Google itself must be at fault as the writer of the self-driving algorithm. This isn’t an open question; someone must be at fault to pay restitution, and there is very little possibility that the passenger of a car they have no way to influence could be held liable. But is Google, or any company for that matter, willing to take on the burden of fault for every possible crash their cars could get into? Google has handily sidestepped this problem by pointing out that so far their cars have never been in an at-fault crash, but that really isn’t an answer. All software fails eventually; that is an iron law of nature, no matter what the programmers say. There will always be a bug in the code, an unexpected edge case, or an update pushed out without proper oversight. So eventually a Google car will cause a crash, and someone must be held responsible. This isn’t just one person’s hospital bills, either: if a Google car with no pedals or steering wheel causes a crash, Google would be responsible for the harm to the people in both cars. I surmise that Google is unwilling to take on that responsibility.
So this truly is a question that cannot be sidestepped, and I think that is why, even though the tech is “there” for self-driving cars, none have come to the mass market. You can make a car that navigates 99.99% of all driving problems with ease, but no one is willing to be responsible for the 0.01% of the time it will fail. Humans might only navigate 99% of driving problems with ease, so self-driving cars are arguably already “better” than us; but when we fail, we take on the burden of responsibility ourselves, as defined by law and legally mandated insurance. In exchange for that burden we get the privilege of getting from place to place much faster than we otherwise would. Google would only get the privilege of our money in exchange for taking on that burden, and I suspect the economics of the exchange don’t yet work for them.