-
Understanding the lack of free lunch in the student loans debate
I’d guess my opinion matches that of most Americans on the Supreme Court’s student loan decision. The online response has been rather mental, though. A number of people have been hyping up “obvious” solutions that have very obvious problems they don’t want to confront. So I’d like to talk about some of those.
Student loans should be dischargeable in bankruptcy. The entire reason Joe Biden supported making student loans non-dischargeable was so that poor students with no assets would be able to get the loans in the first place. The only reason someone will give you a loan is that they want their money back with interest. If they don’t think you’ll pay them back, they either won’t give you the loan or will demand exorbitant interest rates. The people who can get loans are either rich enough that they can obviously afford to repay, or are using the loan to buy an asset which can be repossessed if they refuse to pay. Poor students fall into neither category, so for a long time they were locked out of student loans.
If student loans become dischargeable, then banks will fear that certain students will rack up loans and then immediately declare bankruptcy upon graduation. The graduate has no assets to repossess, so the bank is SOL. Banks would then only give loans to individuals with enough assets to repossess, or who are already wealthy enough to pay the loan back easily. Making loans dischargeable would reduce the number of poor people who can go to college, and thereby increase income inequality. If you think that’s a good trade-off, then we can have that debate, but this isn’t a consequence-free solution.
Loans should have no interest. This is the same as saying there should not be any student loans. Again, the reason people give out loans is that they expect to make back their money with interest. Without interest, there is no reason to ever write a student loan. And so, again, college becomes unreachable for those not already rich.
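To make the lender’s math concrete, here’s a minimal sketch in Python (the numbers and the one-period setup are illustrative assumptions, not real market data) of why an unsecured loan needs an interest rate high enough to cover expected defaults, and why a zero-interest loan to a risky borrower is a guaranteed expected loss.

```python
# Minimal sketch: a lender's expected payoff on a one-period unsecured loan.
# All numbers are made up for illustration; real loan pricing is far more involved.

def expected_payoff(principal, interest_rate, default_prob, recovery_rate=0.0):
    """Expected amount the lender gets back on a one-period loan."""
    repaid = principal * (1 + interest_rate)   # owed if the borrower pays
    recovered = principal * recovery_rate      # recouped if the borrower defaults
    return (1 - default_prob) * repaid + default_prob * recovered

principal = 10_000
for default_prob in (0.05, 0.20, 0.40):
    # Interest rate at which the lender merely breaks even (expected payoff == principal).
    breakeven_rate = default_prob / (1 - default_prob)
    loss_at_zero_interest = principal - expected_payoff(principal, 0.0, default_prob)
    print(f"default risk {default_prob:.0%}: break-even rate {breakeven_rate:.1%}, "
          f"expected loss at 0% interest ${loss_at_zero_interest:,.0f}")
```

If dischargeability pushes the lender’s assumed default probability up, the break-even rate climbs quickly into “exorbitant” territory, and past some point the lender simply declines to write the loan at all.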
College should be free, provided by the government. This is a defensible policy, but too many people imagine a world without trade-offs, and the trade-offs must be considered. People who think college should be free often point to Europe without realizing that European college is not, in fact, always free. In Germany, where college is free, only about 1/3 of adults have a post-secondary degree or certificate, compared to around 1/2 of Americans. The exact numbers vary depending on how you count, but there is a clear divide: higher education is more rationed in Germany than it is in America.
The German system means that not everyone is even allowed to try to go to university, and those who do go usually face fewer teachers per student (meaning they have to teach themselves more) and less assistance overall. I’m sure a lot of people would immediately fire back that this is a good thing, that not everyone needs to go to university. But they’re usually talking about other people. They would never be able to look themselves in the mirror and admit that they would be among the 2/3 of people who just aren’t good enough to be accepted into a German university. Because higher education is not as rationed in America, more people can go.
Then there are the European universities that actually do charge fees. On paper these are lower than American fees; in practice, many Americans qualify for financial aid that makes college free, or at least cheaper than European college. In Belgium, tuition fees are about 1,000 euros a year. I paid less than that in my undergrad (around 1,000 dollars a year) because I received a good scholarship. And I was not an exceptional student at an exceptional university; if I had been, I might have gotten a full ride.
There are many reasons to despise the ever-increasing cost of American universities. Much of the money goes to administrative bloat, while almost nothing is spent to improve student life or raise professors’ pay. Even the most “socially conscious” universities will still pay millions of dollars to water their perfect lawns rather than pay their staff or grad students a living wage. But the student loan debate comes with trade-offs, and we must confront them if we want to change the system. Cakeism will get us nowhere.
-
If the weavers get replaced by machines, who will buy the clothes?
I’ve seen way too many doom-and-gloom articles claiming that AI will “replace millions of jobs” and that this will lead to societal destruction as the now-jobless replacees have no more money to spend. The common refrain is “when AIs replace the workers, who will buy the products?”
This is just another fundamental misunderstanding of AI and technology. AI is a multiplier of human effort: what once took 10 men now takes 1. That doesn’t mean 9 men will be homeless on the street because their jobs were “replaced.” The productivity gains are reinvested back into the economy and new jobs are created.
When the power loom replaced handloom weavers, those weavers did lose their trade. But they could eventually find new jobs in the factories that produced looms, and in the other factories that were springing up. When computers replaced human calculators, many of those calculators found jobs programming and producing computers.
For centuries now, millennia even, technology has multiplied human effort. It used to take dozens of people to move a single rock, until several thousand years ago someone had the bright idea of using ropes, pulleys, and wheels. Suddenly rocks could be moved easily. But that in turn meant the demand for moving rocks shot up to meet this newer, cheaper equilibrium, and great wonders like the Pyramids and Stonehenge could be built.
The same will be true of AI. AI will produce as many new jobs as it replaces. There will be people to produce the AI, people to train the AI, people to ensure the AI has guardrails and doesn’t do something that gets the company trending on Twitter. And there will be ever more people to use the AI, because demand is not fixed: demand for products will rise to meet the increase in supply generated by the AI. People will want more and more stuff, and that will lead to more and more people using AI to produce it.
This is something people get hung up on: they think that demand is fixed. So when something that multiplies human effort gets created, they assume that since the same amount of product can be produced with less effort, everyone will get fired. Except that demand is not fixed; people have infinite wants and finite amounts of money.
Technological progress creates higher-paying jobs: subsistence farmers become factory workers, factory workers become skilled workers, skilled workers enter the knowledge economy of R&D. These new higher-paying jobs give people the money to buy the extra stuff they always wanted. That in turn increases demand, leading to more people being employed in an industry even as jobs are being “replaced” by machines.
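As a toy illustration of that point (made-up numbers, with a simple constant-elasticity demand curve as the assumption), here is how a 10x productivity multiplier can leave employment flat or even growing once falling prices pull in more demand:

```python
# Toy model: productivity rises, prices fall, and demand expands in response.
# The numbers and the constant-elasticity demand curve are illustrative assumptions.

def workers_needed(productivity, demand_elasticity, base_price=100.0, base_quantity=1_000.0):
    price = base_price / productivity                                        # price tracks unit cost
    quantity = base_quantity * (price / base_price) ** (-demand_elasticity)  # demand responds to price
    return quantity / productivity                                           # output demanded / output per worker

for elasticity in (0.5, 1.0, 2.0):
    before = workers_needed(productivity=1.0, demand_elasticity=elasticity)
    after = workers_needed(productivity=10.0, demand_elasticity=elasticity)
    print(f"demand elasticity {elasticity}: workers go from {before:.0f} to {after:.0f}")
```

When demand barely responds (the 0.5 case) the trade does shed workers, but when demand is elastic the industry ends up employing more people than before; and in the shrinking case, the displaced workers and the freed-up spending flow into other industries.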
To bring it all back to weavers: more people work in the textile industry now than at any point in human history, even though we replaced handloom weavers with power looms long ago.
AI will certainly upend some jobs. Some people will be unable or unwilling to find new ones, and governments should support them with unemployment insurance and retraining programs. But AI will create a great many new jobs as well. People aren’t satisfied with how many video games they can buy right now, how often they can go out to restaurants, how much housing they can afford, and so on. People always want more, and as they move into higher-paying jobs which use AI, they will demand more. That in turn will create demand for the jobs producing those things, or for training the AIs that produce those things.
It has all happened before and it will happen again. Every generation thinks that theirs is the most important time in the universe, that their problems are unique and that nothing will ever be the same. Three years ago we had people declaring that “nothing will ever be the same” because of COVID, and yet in just three short years we’ve seen life mostly go back to normal. A few changes on the margins, a little more work from home and a little more consciousness about staying home when sick, but life continued despite the once-a-century upheaval.
Life will also continue after AI. AI will one day be studied alongside the plow, the loom, and the computer. A labor-saving device that is an integral part of the economy, but didn’t lead to its downfall.
-
Corporate Greed is over, now comes corporate generosity
If you’ve been to the grocery store recently, you have probably seen an incredible sight. Eggs are now selling for less than they did in 2022. Walmart says they’ll sell me eggs for $1.19 a dozen, and Target will sell them for $0.99 with a special discount. Considering that at the beginning of 2023 eggs were selling for as much as $5 a dozen, this comedown is remarkable.
It gets to the heart of a debate about the origins of inflation, though. The classic explanation of inflation is too much money chasing too few goods: when either the money supply increases or there is a shortage of goods, we should expect to see inflation. This thesis does seem to have played out in 2021-2023. The money supply was increased enormously in 2020 and 2021, while COVID restrictions meant the supply of goods was constrained and could not rise quickly to meet it.
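The “too much money chasing too few goods” line is just the textbook quantity-of-money identity, MV = PQ. A quick sketch with made-up round numbers (velocity held fixed, which is the big simplifying assumption) shows why a jump in M against a constrained Q implies a higher price level P:

```python
# Quantity-of-money identity MV = PQ, rearranged for the price level.
# The inputs are made-up round numbers purely for illustration.

def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

baseline = price_level(money_supply=100, velocity=1.0, real_output=100)  # P = 1.00
# Money supply up 40%, real output stuck 5% below baseline, velocity unchanged:
squeezed = price_level(money_supply=140, velocity=1.0, real_output=95)   # P ~= 1.47

print(f"implied rise in the price level: {squeezed / baseline - 1:.0%}")
```

In reality velocity moves around too, so this is an accounting identity rather than a forecast, but it makes clear why more money plus fewer goods points toward higher prices without greed needing to change at all.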
But that isn’t the explanation that has been gaining traction. Recently, folks have pointed to corporate greed as the primary driver of inflation. Under this thesis, inflation is not driven by the money supply or the goods supply, but by corporate greed in and of itself. If corporations weren’t greedy, they wouldn’t raise prices. But if prices go up because corporations are greedy, doesn’t that mean they go down because corporations are generous?

I’d like to see someone like Bernie Sanders explain the fall in egg prices. Why aren’t Walmart and Target just being greedy like all the other companies? If it’s so easy to raise egg prices by being greedy, then what mechanism could possibly make prices fall? What possible reason could there be for a fundamentally greedy company to willingly lower prices and take in less money?
For that matter, why is ExxonMobil being so damn generous? Over the past year, crude oil prices have gone from $100 to just $70. ExxonMobil was public enemy number 1 when gas prices were high, and was blamed for being too greedy. Has it now become generous instead? Have all the oil companies become generous? Why are the oil companies so much more generous than all the other companies?
It gets to the heart of the problem: inflation isn’t driven by corporate greed. Corporate greed is a constant; I’d go so far as to say human greed is a constant. Corporations (on average) demand the highest possible price for their goods that the market will bear. Laborers (again, on average) also demand the highest possible price for their labor that the market will bear. No one ever willingly takes a pay cut without good reason, and good reason usually means they have no other choice.
If corporations want to raise their prices above what the market will accept, then they’re like me walking up to my boss and demanding a million-dollar salary. They won’t get what they want no matter how hard they try. If Walmart raises the price of eggs, then Target can steal its business by keeping its egg prices low. People stop buying eggs at Walmart; they instead buy eggs from Target, or from one of the hundreds of small and independent retailers that still dot America. Grocery stores are not a monopoly in our country; they do not have the power to set prices on their own. They are always in competition with each other, and prices reflect that competition.
By the same token, if I demand a million-dollar salary, my boss just won’t pay it. If I say I’ll quit if I don’t get it, he’ll show me the door. I am competing with hundreds of other workers in my field, so I cannot raise the price of my labor above what others are charging or else I’ll be replaced. It is a fact that many people ignore, but there is a market for labor just as there is a market for any other good. And the labor market has sellers (workers) and buyers (employers) just like any other. So when trying to answer questions about, say, the egg market, it’s useful to first think about how the same question plays out in the labor market. We are probably all more familiar with the labor market, since if you’re reading this blog you’ve likely worked at some point in your life.
So, in the labor market, can the sellers of labor (the workers) raise their prices just by being greedy? No, of course not. Without some decrease in supply or increase in demand, the price (salary) of labor doesn’t go up, and workers who refuse to work for the market rate simply won’t receive job offers. It’s the same with corporations, and it’s the same with goods inflation. Prices of goods aren’t driven by greed. They’re driven by supply shortages and a glut of money, both of which are in part exacerbated by government policies.
The current administration has continued Trump’s protectionist trade policies, which shield American companies from having to compete with overseas companies. And both congressional spending and the Federal Reserve’s balance sheet have expanded considerably, bringing more and more money into the money supply. Too much money chasing too few goods: that is what causes inflation.
-
The AI pause letter seems really dumb
I’m late to the party again, but a few months ago a letter began circulating requesting that AI development “pause” for at least 6 months. Separately, AI developers like Sam Altman have called for regulation of their own industry. These things are supposedly happening because of fears that AI development could get out of control and harm us, or even kill us all in the words of professional insanocrat Eliezer Yudkowsky, who went so far as to suggest we should bomb data centers to prevent the creation of a rogue AI.
To get my thoughts out there: this is nothing more than moat-building and fear-mongering. Computers certainly opened up new avenues for crime and harm, but banning them or pausing the development of semiconductors in the 80s would have been stupid and harmful. Lives were genuinely saved because computers made it possible for us to discover new drugs and cure diseases. The harm computers caused was overwhelmed by the good they brought, and I have yet to see any genuine argument that AI will be different. Will it be easier to spread misinformation and steal identities? Maybe, but that was true of computers too. On the other hand, the insane ramblings about how robots will kill us all seem to mostly amount to sci-fi nerds having watched a lot of Terminator and The Matrix and being unable to separate reality from fiction.
Instead, these pushes for regulation look like moat-building of the highest order. The easiest way to maintain a monopoly or oligopoly is to build giant regulatory walls that ensure no one else can enter your market. I think it’s obvious Sam Altman doesn’t actually want any regulation that would threaten his own business; he threatened to pull out of the EU over new regulation. Instead he wants the kind of regulation that is expensive to comply with but doesn’t actually prevent his company from doing anything it wants to do. He wants to create huge barriers to entry so he can continue developing his company without competition from new startups.
The letter to “pause” development also seems nakedly self-serving: one of the signatories was Elon Musk, and immediately after calling for said pause he turned around and bought thousands of graphics cards to improve Twitter’s AI. It seems the pause in research should only apply to other people, so that Elon Musk has the chance to catch up. And I suspect that’s the case with most of the famous signatories of the pause letter: people who realize they’ve been blindsided and are scrambling to catch up.
Finally we have the “bomb the data centers” crazies, who are worried the Terminator, the Paperclip Maximizer, or Roko’s Basilisk will come to kill them. This viewpoint involves a lot of magical thinking, as it is never explained just how an AI would recursively improve itself to the point that it can escape the confinement of its server farm and kill us all. In fact, at times these folks have explicitly rejected any such speculation about how an AI could escape, insisting that it just will escape and that speculating on the mechanism is meaningless. This is in contrast to more grounded end-of-the-world scenarios like climate change or nuclear proliferation, where there is a very clear through-line as to how these things could end humanity.
Like I said, I take this viewpoint the least seriously, but I want to end with my own speculation about Yudkowsky himself. Other members of his caucus have indeed demanded that AI research be halted, but I think Yudkowsky skipped straight to the “bomb the data centers” position both because he’s desperate for attention and because he wants to shift the Overton Window.
Yudkowsky has spent much of his adult life railing about the dangers of AI and how it will kill us all, and in this one moment where the rest of the world is at least amenable to fears of AI harm, people aren’t listening to him; they are instead listening (quite reasonably) to the actual experts in the field, like Sam Altman and other AI researchers. Yudkowsky wants to keep the limelight, and the best way to do that is often to make the most over-the-top, dramatic pronouncements in the hopes of being picked up and spread by detractors, supporters, and people who just think he’s crazy.
Secondarily, he would probably be happy with ordinary AI regulation, but he doesn’t want that to be his public platform because he thinks it’s too reasonable. If some people are pushing for regulating AI and some people are against it, then the compromise from politicians trying to seem “reasonable” would be a bit of light regulation, which for him wouldn’t go far enough. Yudkowsky instead wants to make his platform something far outside the bounds of reasonableness, so that in order to “compromise” with him, you have to meet him in the middle at a point that includes much more onerous AI regulation. He’s taking an extreme position so he has something to negotiate away and can still claim victory.
Personally? I don’t want any AI regulation. I can go to the store right now and buy any computer I want. I can go to a cafe and use the internet without giving away any of my real personal information. And I can download and install any program I want, as long as I have the money and/or bandwidth. And that’s a good thing. Sure, I could buy a computer and use it to commit crimes, but that’s no reason to regulate who can buy computers or what type they can get, which is exactly what the AI regulators want to happen with AI. Computers are a net positive to society, and the crimes you can commit with them, like fraud and theft, were already crimes people committed before computers existed. Computers allow some people to be better criminals, so we prosecute those people when they commit crimes. But computers also allow people to cure cancer, so we don’t restrict who can have one or how powerful it can be. The same is true of AI. It’s a tool like any other, so let’s treat it like one.
-
A possible cure for Duchenne Muscular Dystrophy
Sarepta Therapeutics may have a cure out for Duchenne Muscular Dystrophy (DMD). It’s called SRP-9001, and while I hesitate to say it’s a Dragon Ball Z reference, I’m not sure why else it has that number. Either way, it’s an interesting piece of work, and I thought I’d write up what I know about it.
DMD is caused by mutations in the gene for dystrophin, a protein which is vital for keeping muscle fibers sturdy and sound. Our muscles move because muscle fibers contract, which shortens them along one axis and therefore pulls together anything they are attached to. A muscle cell pulling on itself generates an incredible amount of force, and dystrophin is necessary to make sure that force doesn’t damage the muscle cell itself. When dystrophin is mutated in DMD, the muscle cells pulling on themselves begin to deform and destroy themselves, which leads to the characteristic muscle wasting of DMD sufferers. The life expectancy of someone with DMD is only around 20-30 years.
Dystrophin is a massive protein; fully 0.1% of the human genome is made up of just the dystrophin gene. However, a number of the mutations which cause DMD are point mutations, changes to a single DNA nucleotide. If just that one nucleotide could be fixed, in theory the disease could be cured. For a long time, genetic engineering approaches like CRISPR/Cas9 have targeted DMD with treatments based on this idea of fixing that one nucleotide.
However, Sarepta seems to be working on an entirely different idea: deliver a whole new gene to the patient which can replace the functionality of the non-functional dystrophin. The replacement is called micro-dystrophin, and it is less than half the length of true dystrophin, yet it still contains some of the essential domains of dystrophin, like the actin-binding domain. This matters because of how genetic engineering in humans actually works (these days). How do you get a new gene into a human? Normally, you must use a virus. But the viral vectors of choice (like AAV) are so small that the complete dystrophin gene simply would not fit inside them. Micro-dystrophin, being much smaller, is needed in order to fit the treatment into a virus.
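To put rough numbers on the size problem (ballpark figures from memory, so treat them as approximations rather than exact specs): an AAV capsid only packages about 4.7 kb of DNA, the full dystrophin coding sequence alone runs on the order of 11 kb (the genomic gene is about 2.4 Mb), and micro-dystrophin constructs come in around 4 kb, which is what makes them deliverable at all.

```python
# Back-of-the-envelope check of what fits inside an AAV vector.
# All sizes are approximate, from memory, and meant only as illustration.

AAV_CAPACITY_KB = 4.7  # rough packaging limit of an AAV capsid

constructs_kb = {
    "full dystrophin gene (genomic)": 2_400.0,   # ~2.4 Mb, the largest human gene
    "full dystrophin coding sequence": 11.0,     # ~11 kb of protein-coding DNA
    "micro-dystrophin construct": 4.0,           # engineered, shortened replacement
}

for name, size_kb in constructs_kb.items():
    verdict = "fits" if size_kb <= AAV_CAPACITY_KB else "does not fit"
    print(f"{name}: ~{size_kb:g} kb -> {verdict} in a ~{AAV_CAPACITY_KB} kb AAV payload")
```

Even the coding sequence alone is more than double what the virus can carry, which is why Sarepta had to engineer a shortened gene rather than deliver the real thing.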
So the idea is that DMD patients cannot produce working dystrophin, but SRP-9001 would give them the gene to make micro-dystrophin for themselves. Once their muscles begin producing this micro-dystrophin, it would spread throughout the muscle cell and take up the job of strengthening the cell just as normal dystrophin does. In this way the decay of their muscles would slow, and hopefully they’d live much, much longer.
SRP-9001’s road to FDA approval is not yet complete. Sarepta has run some nice clinical trials showing that the gene therapy does successfully deliver the micro-dystrophin gene into patients, and that the patients then use that gene to produce the micro-dystrophin protein. However, as of right now they are still running Phase 3 trials and awaiting an expedited approval decision from the FDA. That decision won’t come until June 22nd at the earliest, but I believe approval would make SRP-9001 the first FDA-approved gene therapy for DMD.
-
So what’s going on with Amyloid Beta and Alzheimer’s disease?
This will be a very #streamsofconsciousness post where I ramble a bit about my work.
As I’ve said before, I study Amyloid Beta in Alzheimer’s disease. I am very new to this field, so much of what is surprising to me might be old hat to the experts. But I’m quite flummoxed about what exactly Amyloid Beta is doing in both diseased and healthy brains. When I started this job, I read papers indicating that Amyloid Beta (henceforth AB) forms these large filaments, and that like a bull in a china shop those large filaments knock around and cause damage. Damaging the brain in that way is obviously a hazard, and would lead to exactly the type of neurodegeneration that is a hallmark of Alzheimer’s disease.
So because of this, it’s my job to extract these large AB filaments and take pictures of them. That way we can see exactly what they look like and why it is that they do so much damage. But then this simple picture changed. AB filaments are made up of thousands of individual peptides, and I read papers saying these individual peptides might actually be what causes the disease, by disrupting the neurons and causing them to die. But if that’s the case, then what are the filaments doing? Are they still causing damage by being so large, or are they entirely benign and a red herring? If they are benign, then my studying them and taking pictures of them might be leading us down a dead end.
And now I’ve found that AB is also necessary for the development of a healthy brain. This in itself is not too out there; any medicine can turn into poison if the dose is wrong. So this could easily be too much of a good thing, or a good thing in the wrong place: while AB normally helps a brain, in Alzheimer’s disease something has gone wrong that causes AB to kill nerve cells. But still, it’s surprising.
The paper I read indicates that AB is necessary for the process of synaptic plasticity. No time to get into all the details, but synaptic plasticity underlies the formation of memories in the brain. Mice without AB have a harder time forming memories and completing tasks than mice with AB. So now I’m at the point where AB is actually necessary for the formation of memories in a healthy brain, but then sOmEtHiNg happens and it causes Alzheimer’s disease, which is characterized by deficits in memory. So what is happening?!?!?
I… don’t know. I don’t know if anyone knows. But I wish I had the tools to study this further. The difficulty is that I’m not sure I do. My setup is geared towards looking at those giant AB filaments I talked about earlier. Filaments have a big, rigid form, and you can do structural analysis on them to get what is essentially a 3D model. But all these papers talking about the role of AB in healthy brains are talking about its small monomeric form. Small monomers don’t form rigid structures in quite the same way; they are more akin to a floppy noodle, with no rigid form to hang your hat on, so no clean 3D model can be made of them. So maybe I’m using the wrong tool for the job. Or maybe it really IS the filaments that are doing the damage. I’m just not sure at this point, and I’m racking my brain trying to figure out where I should go next.
-
This is a bit hard to post
I’m not going to share this post on any of my social media, but I wonder if it would be cathartic to put this out in writing.
I’ve been feeling a little jealous of how many of my friends seem to be succeeding in their jobs and their research while I’m not. I’m not getting the data I want so I can publish papers, I’m struggling at writing as much and as well as I would like, and since I don’t work in industry I’m not making as much money or getting the promotional opportunities I want.
I’m just feeling a lot of jealousy right now, and that’s making it hard for me to sometimes talk about my own trajectory and the trajectory of others.
-
AI art killed art like video killed the radio star
Everyone knows the song “Video Killed the Radio Star” by the Buggles; it was one of the earliest big hits on MTV (back when it was still called Music Television). The song is pretty good, but it also speaks to a genuine fear and wonder about our world: that changing technology upends our social fabric and destroys our livelihoods. The radio star who just wasn’t pretty enough for video, or couldn’t compete with the big production values of music videos, or just didn’t like dancing and being seen at all. That radio star is the Dickensian protagonist of the modern age, tossed aside and replaced when new technology comes along.
This Luddite fear has persisted throughout history. The loom-smashing followers of Ned Ludd are only the most famous examples, but there were also silent-film actors who never made it in talkies. There were photo-realistic painters who could never compete with a camera. John Henry died trying to beat a steam drill. In each case, an argument could be made that the new technology removed some important human element. The painters could claim that photography wasn’t “true art.” And the loom smashers too probably believed that their handcrafts were more “real” and more deserving of respect than the soulless cloth that replaced them.
So why is AI art any different? Why should we care about the modern Luddites who want to ban it or restrict it? I say we shouldn’t.
AI art steals from other artists to make its images
No more than any artist “steals” when they learn from the old masters. It is a grievous misunderstanding of how AI works to claim that it cuts and pastes from other images, and an AI training on a dataset of art is no different from an art student doing the same, whether in university or on their own. The counter-argument I’ve heard is “why are you ascribing rights to an AI that should only belong to humans! Yes, humans can learn from other art, but AI shouldn’t have the right to!” I’m not ascribing anything to the AI; the person who coded the AI and the person who used the AI have the right to use any images they can find, just as an artist does. And just as the output of an artist learning from old masters is itself new art, so too is the output of coding or using an AI that has been trained on old works.
AI art is soulless
As soulless as loom-made fabric is compared to hand-made. Or as soulless as a photograph is compared to a hand-painted picture. Being made with a machine doesn’t detract from something for me, and I think only bias causes it to detract from others.
AI art takes money out of artists’ pockets, it should be banned to protect the workers’ paychecks
Why is the money of the workers more important than the money of the consumers? Loom-made fabric competes with hand-spun fabric; should we smash looms to keep the tailors’ wages up? Are we okay with having everything cost more because it would hurt someone’s business if they had to compete against a machine? The counter-argument I’ve seen to this is that the old jobs replaced by automation were all terrible drudgery and it’s good that they were replaced, whereas art is the highest of human expressions and should never be replaced. Again, I think this is presentism and a misreading of history. I’m sure there were tailors and seamstresses who thought sewing and making fabric was the absolute bomb, who loved their job and thought that their clothes had so much heart and soul that they were works of art in and of themselves. And I know there are artists in the modern day for whom most of their work is dull drudgery.
Thinking that your job and only your job is the highest form of human expression and should never be replaced shows, to me, a clear lack of empathy towards everyone else on earth. No one’s job is safe from automation, but all of society reaps the benefits of automation. We can all now afford far more food, more clothing, more everything, since we started automating manual labor. Labor saving creates jobs, it doesn’t destroy them; it frees people to put their efforts towards other tasks. We need to make sure that the people who lose their jobs to automation are still cared for by society, but we should not halt technological progress just to protect them. AI art lets creators and consumers have far more art available than they otherwise would: game designers can whip up art far more quickly, role-players can get a character portrait without having to pay, and so on. In the same way, the loom let us have far more clothing available than we otherwise would.
AI art is always terrible
I find it funny that this often comes paired in internet discourse with “I’m constantly paranoid and wondering if the picture I’m looking at was made by AI or not.” There’s a very Umberto Eco-esque argument going on in anti-AI spaces: AI art is both terrible and easily spotted, but also insidious and you never quite know if what you’re seeing is AI, and also everyone is now using AI art instead of “real” art.
If real art is better than AI art, wouldn’t there still be a market for it? There’s still a market for good food even though McDonald’s exists. If AI art is terrible and soulless, then it isn’t really a threat to anyone who can make good art themselves. And if AI art is always terrible, then why are so many people worried about whether the picture they’re seeing is AI-made or not? Shouldn’t it always be obvious?
This is very obviously an emotional argument. If you can convince someone that a picture was not made with AI, they’ll defend it. If you convince them it was made with AI, they’ll attack it.
This was a vague, disconnected rant, but I’ve become sort of jaded about the AI arguments I’ve seen going on. I had thought that modern society had somewhat grown out of Luddism. And to be frank, many of the people I see making anti-AI arguments are supposedly pro-science and pro-rationalism. But it seems that ideology only holds so long as their “tribe” doesn’t get threatened.
-
So, what exactly was the metaverse?
This may just prove that I’m an out-of-touch old fogey, but I never cared for the metaverse hype and am not surprised it failed. Yes, Meta, the company which renamed itself for the metaverse, hasn’t yet admitted defeat, but at this point I’m willing to say it failed. The metaverse was never explained to me in a way that made it seem both feasible and viable. “Imagine you could train surgeons in the metaverse, they wouldn’t need to train on cadavers and patients!” Yes, imagine the quantum leap in technology that would be required to allow for that kind of haptic feedback. It isn’t enough to know where everything is and what it looks like; knowing how much resistance the body gives as you cut into it is also very important, and you don’t get that playing VR Surgery. “Imagine you could go to the office in the metaverse!” Why would I want to do my work with a VR headset on my head?
I know I’m more than a year late to the party, but I never understood just what the metaverse was supposed to be or accomplish. To some people it was a sci-fi future like The Matrix (impossible). To others, it was clearly just a solution in search of a problem. But the most audacious thing is that for a while, it seemed every company wanted to be a metaverse company. I was recently pointed to a hilarious ETF themed around the metaverse. They’ve got Meta in there; that’s fine. They’ve got AMD and Nvidia; yeah, I guess graphics cards would be needed. Then they have Coinbase. Why the hell is Coinbase a metaverse company? I looked it up, and some people were trying to tie “Web3” to the metaverse, with crypto as the currency of the metaverse. Crypto cannot even reliably operate as a currency of any kind, so it sure isn’t taking over the metaverse.
Then it seems every gaming company of any size was also a metaverse company. EA, Take-Two, Nintendo? Yeah, Nintendo made the Virtual Boy, so I guess they know what a shitshow VR headsets can be. But if the best anyone could think of for the metaverse was VR gaming, then that says a lot about how little thought was ever put into the concept.
Now, Web3 and crypto in general are already their own solution in search of a problem, but nothing ever dies with a bang; it just fades away. And I think we’ll have a long time yet before crypto and “the Metaverse” finally fade. Even after Facebook realizes how terrible its new name is, some other company will probably take up the banner to scam investors. But I cannot ever see myself replacing my gaming PC, or any human interaction, with a VR headset.
-
It’s official, we’re now being taxed to pay back Peter Thiel
I posted a while ago about how the Biden administration was bailing out SVB without calling it a bailout. Basically, Silicon Valley billionaires and hedge fund managers (like Peter Thiel) put their money in a bank well in excess of the $250,000 FDIC insurance limit. That limit is a known risk. If the bank you use goes bankrupt, and if you exceed that limit, the FDIC is only obligated to give you back $250,000. It doesn’t matter if you had $250,001 or $999,999,999,999; the FDIC is only obligated to give you $250,000.
But that would be unfair to the billionaires. After all, why should they ever suffer the consequences of their actions? So instead the administration promised that every single depositor would be made fully whole. This was spun as protecting the little guy, but the little guy was already covered by the $250,000 insurance. I don’t have more than that in the bank, and neither does anyone else I know. If my bank goes bankrupt, I will be fully paid back because my deposit is far less than $250,000. If you have more than that amount, then you are solidly rich and do not need a government bailout.
But the bailout came anyway. The FDIC handed out money to cover the billionaires and hedge funds. Now that money has to come from somewhere. Biden promised it wouldn’t come from the taxpayers of course, but it still is coming from the little guy. It’s coming from our bank accounts.
Every person who owns a bank account is paying a small amount of tax into the FDIC insurance program. It won’t show up as a line item on your bank statement, but it’s there all the same. For every dollar of deposits a bank holds, it has to pay a little bit into the FDIC. That cost naturally gets passed on to the holder of the bank account, just like every other tax. When the tax on cigarettes rises, the price of cigarettes rises. So too with bank accounts. You won’t see the tax as money rushing out of your account, but you will see it as less money going in. The bank will pay you less interest on your deposits because it has to take some off the top to pay for the FDIC insurance. If there were no FDIC insurance, you’d get more interest.
You can see this exact same scenario if you look at big bank accounts. There are some banks with accounts which hold millions, even billions of dollars. The FDIC is only obligated to pay back $250,000 if the bank fails, but a responsible billionaire who does not need a government bailout will pay for deposit insurance which covers more than the $250,000 FDIC limit. That deposit insurance will decrease the amount of interest paid on the deposit, or even eat the interest entirely. If you have to pay for insurance, you get less interest.
Everyone with a bank account has to pay for FDIC insurance; we don’t even get a choice. And now we need to pay for even more insurance to refill the FDIC’s fund, since it was drained to bail out Peter Thiel.
The FDIC plans to hit big banks with a tax to refill its fund. This is being spun as a progressive redistribution from the rich to the poor. It’s the opposite. If a tax is levied on Walmart, Walmart just raises its prices, and Walmart’s customers pay that tax themselves. The vast majority of Americans have their money in a big bank like Bank of America. So the big banks are going to pass this new tax on to their depositors, just as they pass the ordinary FDIC insurance tax on to us. You and I will be receiving less interest on our deposits, because the FDIC spent its money on Peter Thiel and co. Take from the poor to give to the rich, socialize losses and privatize profits. It’s 2008 all over again.
I know the amount is small for any one person; it will probably show up as no more than a few dollars of lost interest in my account. But small per-person costs multiplied across the 100 million or so Americans who bank with big banks add up to a meaningful share of the billions needed to bail out Peter Thiel and co. And it shouldn’t be this way; we should not be paying for their mistake.
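For a rough sense of scale, here’s the back-of-the-envelope multiplication with hypothetical round numbers (the real assessment is levied on banks’ uninsured deposits over a couple of years, and businesses’ accounts bear part of it too):

```python
# Back-of-the-envelope: small per-person costs add up across many depositors.
# Both figures below are hypothetical round numbers, not official FDIC data.

big_bank_customers = 100_000_000     # rough count of Americans banking with big banks

for lost_per_person in (2, 10, 50):  # hypothetical dollars of forgone interest / added fees
    total = lost_per_person * big_bank_customers
    print(f"${lost_per_person} each x {big_bank_customers:,} people = ${total / 1e9:.1f} billion")
```

The exact per-person hit depends on how much of the assessment the big banks pass through, but the direction is the point: the cost lands on ordinary depositors, not on the people who got bailed out.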
And I know I keep harping on Peter Thiel, but it’s because a bunch of so-called “progressives” are refusing to even contemplate that this is a bailout taking money from the poor. If you ignore the context, you can see SVB and its depositors as “the little guys” and Bank of America as “the rich,” so taking money from Bank of America to give to SVB depositors looks redistributive. But it isn’t so. SVB was the bank of billionaires and hedge funds; Bank of America is overwhelmingly the bank of America’s poor and middle class. Taking from Bank of America to pay back SVB’s depositors is taking from the poor and middle class to pay back the billionaires. And reminding those “progressives” of exactly who is being paid back is just something I feel I should do.