Remember when Monkeypox was the next COVID?

Oh, the halcyon days of three months ago. I was younger, the air was warmer, and Twitter was aflutter with hysterical reports on how Monkeypox was rampaging through the population and would run wild through schools as soon as the fall semester began.

Tweet from August 2nd, 2022

Today Monkeypox is in the rear-view mirror. The vaccine rollout was stymied by some impressively bad government bureaucracy, but the vaccines worked, the virus was only as contagious as the experts said, and it spread almost solely through the populations the experts said it would. It was never airborne, and it never ran rampant the way doomers thought (hoped) it would.

Speaking of the distant past, remember Credit Suisse?

Absolutely no hint of irony

A few weeks ago, Credit Suisse was supposed to have a “Lehman Brothers” moment: their debt was so extraordinary that they were destined to collapse, taking the global banking system down with them. Instead, they’re up 20% in the last month and show no signs of default.

Remember Shanna Swan? She made headlines claiming that the human race could go extinct due to chemicals in our environment destroying male fertility. Personally, I knew this claim was bunk from the moment I read it, because biologically speaking, human reproduction is basically identical to all mammalian reproduction. If human fertility really were plummeting due to the chemicals in our environment, then other mammals (the cows we ranch, the dogs and cats we live with, even the rats that infest our subways) should also have seen plummeting male fertility, thanks to their bad luck of sharing the planet with us. Yet somehow no overall drop in mammalian fertility was recorded; this catastrophe only affected humans. No ranchers complained of an inability to fertilize their cows, and no reduction in stray dogs and cats was reported due to a drop in male fertility; this was somehow the one biological process that affected humans and no other mammals. It turns out there was a good reason for that: her whole doom prediction was junk and rested entirely on flawed assumptions.

I’ve grown pretty tired of the endless predictions of collapse doled out by social media. Every week, it seems, there’s another new thing that will destroy us all, but when life carries on as normal, none of the prediction-mongers ever admit they were wrong. There are more than enough actual bad things out there without social media taking misunderstood factoids and extrapolating the complete worst-case scenario out of them. I’d like some more accountability for prediction-mongers, but social media makes that impossible: by the time one coming catastrophe can be conclusively proven false, a new one has been conjured up in its stead. Repeat ad nauseam, giving constant predictions of collapse and using any downturn of any kind as evidence of your accuracy. It’s just tiring.

Google will have a fully self-driving car on the road by 2020

In 2015, Google claimed they would have a fully self-driving car within 5 years, completely removing humans from the equation.

Lol. Lmao even.

I’ve at times thought myself too much a pessimist, but self-driving cars are a technology where I feel several companies and hype machines are knowingly barking up the wrong tree. Self-driving cars aren’t a technological problem; they are truly a political and legal problem. Let me explain.

We have had for many years the technology capable of making a fully autonomous car using sensors and automatic feedback for controls, and it only took a few years of Google engineering before they were able to make a program which could drive with greater fidelity than almost any human. Fidelity in this case means the ability to get there and back in a reasonable amount of time while adhering to road safety. Obviously a car doesn’t have an ego, so it can be programmed not to speed, to drive defensively, to obey traffic laws, etc. And the split-second reaction times required when zooming down the freeway are more easily handled by a computer than a human anyway. But that isn’t the barrier to self-driving cars in my view; the barrier is what happens when things go wrong.
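To make the “sensors and automatic feedback for controls” point concrete, here is a minimal sketch of the kind of feedback loop these systems build on: a PID controller nudging the steering to keep the car centered in its lane, many times a second. Everything here (the gains, the sensor readings, the 100 Hz loop rate) is a made-up illustration, not any real company’s control stack.

```python
# Minimal sketch of sensor-plus-feedback control: a PID controller that
# corrects the car's lane position each tick. All gains and sensor values
# below are illustrative assumptions, not taken from any real vehicle.

class PIDController:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None  # no derivative term on the first tick

    def update(self, error: float, dt: float) -> float:
        """Return a control output from the current error and timestep."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PIDController(kp=0.8, ki=0.05, kd=0.2)
dt = 0.01  # a 100 Hz control loop -- orders of magnitude faster than a human

# Each tick: read how far the car sits from lane center (hypothetical sensor
# readings, in meters), then steer against that error.
for lane_offset_m in (0.30, 0.28, 0.25):
    correction = controller.update(error=-lane_offset_m, dt=dt)
    print(f"offset {lane_offset_m:+.2f} m -> steer {correction:+.3f} rad")
```

The point isn’t this particular controller; it’s that reacting a hundred times a second to a sensor error is exactly the kind of problem computers beat humans at.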

If a self-driving car is responsible for a crash, who is held responsible? In the real world, responsibility in crashes is assigned in order to pay restitution and prevent future harm. Someone has to pay for the victim’s hospital bills, and it might be necessary to prevent future harm by prohibiting unsafe drivers from driving. Under pretty much every imaginable circumstance, the driver of the car is presumed solely at fault if their car is responsible for a crash, but under a few specific circumstances the manufacturer of the car or even the person who last worked on it can be held at fault if the driver acted correctly and the car did not respond to their inputs.

But who is at fault if a self-driving Google Lexus crashes? Let’s cut to the chase: Lexus will not be at fault in any sense, and in Google’s visionary world there would be no pedals or steering wheel in the self-driving Google car, so no “driver” as such. The only answer, then, is that Google itself must be at fault as the writer of the self-driving algorithm. This isn’t an open question; someone must be at fault to pay restitution, and there is very little possibility that the passenger of a car, with no way to influence it, could be held liable. But is Google, or any company for that matter, willing to take on the burden of fault for every possible crash their cars could get into? Google has handily sidestepped this problem by pointing out that so far their cars have never been in an at-fault crash, but that really isn’t an answer. All software fails eventually; that is an iron law of nature, no matter what the programmers say. There will always be a bug in the code, an unexpected edge case, or an update pushed out without proper oversight. And so eventually Google’s car will cause a crash, and someone must be held responsible. This isn’t just one person’s hospital bills either: if Google’s car causes a crash and there are no pedals or steering wheel, they would be responsible for the harm to people in both cars. I surmise that Google is unwilling to take on that responsibility.

So this truly is a question that cannot be sidestepped, and I think that is why, even though the tech is “there” for self-driving cars, none have come to the mass market. You can make a car navigate 99.99% of all driving problems with ease, but no one is willing to be responsible for the 0.01% of the time their car will fail. So even though humans might only navigate 99% of driving problems with ease, and thus even though self-driving cars are already “better” than us, we take on the burden of responsibility when we fail, as defined by laws and legally mandated insurance. In exchange for this burden, we get the privilege of going place to place much faster than we otherwise would. Google would only get the privilege of our money in exchange for taking on that burden, and I suspect the economics of the exchange don’t yet work for them.
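To put rough numbers on that 99.99% (these figures are my own illustrative assumptions, not anyone’s measured failure rates): even a tiny per-situation failure probability compounds ruthlessly across the millions of situations a fleet encounters, which is why “it hasn’t crashed yet” isn’t an answer.

```python
# Illustrative arithmetic only: if a car handles each driving situation
# independently with probability 0.9999, the chance of at least one failure
# over n situations is 1 - 0.9999**n. The 0.01% rate is this essay's
# hypothetical, not a measured number.
p_fail = 0.0001

for n in (1_000, 10_000, 100_000, 1_000_000):
    p_at_least_one = 1 - (1 - p_fail) ** n
    print(f"{n:>9,} situations -> P(at least one failure) = {p_at_least_one:.4f}")
```

By ten thousand situations the odds of at least one failure are already about 63%, and by a million they are effectively 100%. Whoever fields the car is signing up for those failures, and someone has to be on the hook for them.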