If the weavers get replaced by machines, who will buy the clothes?

I’ve seen far too many doom-and-gloom articles claiming that AI will “replace millions of jobs” and that this will lead to societal collapse as the newly jobless have no more money to spend. The common refrain is “when AIs replace the workers, who will buy the products?”

This is just another fundamental misunderstanding of AI and technology. AI is a multiplier of human effort: what once took ten men now takes one. That doesn’t mean nine men will be left homeless on the street because their jobs were “replaced.” The productivity gains are reinvested back into the economy, and new jobs are created.

When the power loom replaced hand-loom weavers, those weavers were displaced. But they could eventually find new jobs in the factories that produced looms, and in the other factories springing up around them. When computers replaced human calculators, those calculators could find new jobs programming and producing computers.

For centuries now, millennia even, technology has multiplied human effort. It used to take dozens of people to move a single rock, until several thousand years ago someone had the bright idea of using ropes, pulleys, and wheels. Suddenly rocks could be moved easily. But that in turn meant demand for moving rocks shot up to meet the new, cheaper equilibrium, and great wonders like the Pyramids and Stonehenge could suddenly be built.

The same will be true of AI. AI will produce as many new jobs as it replaces. There will be people to produce the AI, people to train the AI, people to ensure the AI has guardrails and doesn’t do something that gets the company trending on Twitter. And there will be ever more people to use the AI, because demand is not stable: demand for products will rise to meet the increase in supply generated by the AI. People will want more and more stuff, and that will lead to more and more people using AI to produce it.

This is where people get hung up: they assume demand is stable. So when something that multiplies human effort comes along, they conclude that since the same amount of product can now be made with less effort, everyone will get fired. Except demand is not stable; people have infinite wants and finite money.
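
To make that concrete, here’s a toy model with entirely made-up numbers: what happens to employment when a technology multiplies output per worker, assuming prices fall with costs and demand follows a constant-elasticity curve.

```python
# Toy model, invented numbers: employment when productivity multiplies.
# Assumes price falls in proportion to productivity and demand follows a
# constant-elasticity curve Q = base * price^(-elasticity).
def workers_needed(productivity, elasticity=1.5, base_demand=1000):
    price = 1.0 / productivity          # 10x productivity -> price falls 10x
    quantity = base_demand * price ** -elasticity
    return quantity / productivity      # workers = output / output-per-worker

print(workers_needed(1))   # 1000.0 workers before the multiplier
print(workers_needed(10))  # ~3162 workers after a 10x multiplier
# With elastic demand (elasticity > 1), cheaper goods mean MORE workers;
# only if demand were inelastic would employment actually fall.
```

Textiles look like the elastic case: cheaper cloth meant vastly more cloth bought, which is why the industry grew instead of shrinking.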

Technological progress creates higher-paying jobs: subsistence farmers become factory workers, factory workers become skilled tradesmen, skilled tradesmen enter the knowledge economy of R&D. People in these new, higher-paying jobs still want more stuff, because people always want more stuff, and now they have the money to pay for it. This in turn increases demand, so more people end up employed in the industry even as jobs are “replaced” by machines.

To bring it all back to weavers, more people are working in the textile industry now than at any point in human history, even though we replaced weavers with looms long ago.

AI will certainly upend some jobs. Some people will be unable or unwilling to find new ones, and governments should support them with unemployment insurance and retraining programs. But AI will create many new jobs as well. People aren’t satisfied with how many video games they can buy right now, how often they can go out to restaurants, how much housing they can afford, and so on. People always want more, and as they move into higher-paying jobs which use AI, they will demand more. That in turn will create demand for the jobs producing those things, or training the AIs that produce those things.

It has all happened before and it will happen again. Every generation thinks that theirs is the most important time in the universe, that their problems are unique and that nothing will ever be the same. Less than three years ago we had people thinking that “nothing will ever be the same” due to COVID, and yet in just three short years we’ve seen life mostly go back to normal. A few changes on the margins, a little more work from home and a little more consciousness about staying home when sick, but life continued despite the once-a-century upheaval.

Life will also continue after AI. AI will one day be studied alongside the plow, the loom, and the computer: a labor-saving device that became an integral part of the economy but didn’t lead to its downfall.

AI art killed art like video killed the radio star

Everyone knows the song “Video Killed the Radio Star” by the Buggles; it was the first music video ever aired on MTV (back when that still stood for Music Television). The song is pretty good, but it also speaks to a genuine fear and wonder about our world: that changing technology upends our social fabric and destroys our livelihoods. The radio star who just wasn’t pretty enough for video, or couldn’t compete with the big production values of music videos, or just didn’t like dancing and being seen at all. That radio star is the Dickensian protagonist of the modern age, tossed aside and replaced when new technology comes along.

This Luddite fear has recurred throughout history. The loom-smashing followers of Ned Ludd are only the most famous example, but there were also silent-film actors who never made it in talkies, and photo-realistic painters who could never compete with a camera. John Henry died trying to beat a steam drill. In each case, an argument could be made that the new technology removed some important human element. The painters could claim that photography wasn’t “true art.” And the loom smashers too probably believed that their handcrafts were more “real” and more deserving of respect than the soulless cloth that replaced them.

So why is AI art any different? Why should we care about the modern Luddites who want to ban it or restrict it? I say we shouldn’t.

AI art steals from other artists to make its images

common argument

No more than any artist “steals” when they learn from the old masters. It is a grievous misunderstanding of how AI works to claim that it cuts and pastes from other images; an AI training on a dataset of art is no different from an art student doing the same, whether at a university or on their own. The counter-argument I’ve heard is “why are you ascribing rights to an AI that should only belong to humans! Yes, humans can learn from other art, but AI shouldn’t have the right to!” I’m not ascribing anything to the AI; the person who coded it and the person who uses it have the right to learn from any images they can find, just as an artist does. And just as the output of an artist learning from the old masters is itself new art, so too is the output of coding or using an AI trained on old works.

AI art is soulless

common argument

As soulless as loom-made fabric is compared to hand-made, or as a photograph is compared to a hand-painted picture. Being made with a machine doesn’t detract from a work for me, and I think only bias makes it detract for others.

AI art takes money out of artists’ pockets; it should be banned to protect the workers’ paychecks

common argument

Why is the money of the workers more important than the money of the consumers? Loom-made fabric competes with hand-woven fabric; should we smash looms to keep weavers’ wages up? Are we ok with having everything cost more because someone’s business would be hurt if they had to compete against a machine? The counter-argument I’ve seen to this is that the old jobs replaced by machines were all terrible drudgery and it’s good that they were replaced, whereas art is the highest of human expressions and should never be replaced. Again I think this is presentism and a misreading of history. I’m sure there were tailors and seamstresses who thought sewing and making fabric was the absolute bomb, who loved their job and thought that their clothes had so much heart and soul that they were works of art in and of themselves. And I know there are artists in the modern day for whom most of their work is dull drudgery.

Thinking that your job and only your job is the highest form of human expression and should never be replaced, well, to me that just shows a clear lack of empathy towards everyone else on earth. No one’s job is safe from automation, but all of society reaps the benefits of automation. We can all now afford far more food, more clothing, more everything, since we started automating manual labor. Labor saving creates jobs, it doesn’t destroy them; it frees people to put their efforts towards other tasks. We need to make sure that the people who lose their jobs to automation are still cared for by society, but we should not halt technological progress just to protect them. AI art lets creators and consumers have far more art available than they otherwise would: game designers can whip up art far more quickly, and role-players can get a character portrait without having to pay. In the same way, the loom let us have far more clothing than we otherwise would.

AI art is always terrible

common argument

I find it funny that this often comes paired in internet discourse with “I’m constantly paranoid, wondering if the picture I’m looking at was made by AI or not.” There’s a very Umberto Eco-esque contradiction going on in anti-AI spaces, the enemy at once too strong and too weak: AI is terrible and easily spotted, but also insidious and you never quite know if what you’re seeing is AI, and also everyone is now using AI art instead of “real” art.

If real art is better than AI art, wouldn’t there still be a market for it? There’s still a market for good food even though McDonald’s exists; if AI art is terrible and soulless, then it isn’t really a danger to anyone who can make good art themselves. And if AI art is always terrible, why are so many people worried about whether the picture they’re seeing is AI-made or not? Shouldn’t it always be obvious?

This is very obviously an emotional argument. If you can convince someone that a picture was not made with AI, they’ll defend it. If you convince them it was made with AI, they’ll attack it.

This was a vague, disconnected rant, but I’ve become somewhat jaded about the AI arguments I’ve seen going on. I had thought that modern society had mostly grown out of Luddism. And to be frank, many of the people I see making anti-AI arguments are supposedly pro-science and pro-rationalism. But it seems that ideology only holds so long as their “tribe” never gets threatened.

So, what exactly was the metaverse?

This may just prove that I’m an out-of-touch old fogey, but I never cared for the metaverse hype and am not surprised it failed. Yes, Meta, the company which renamed itself for the metaverse, hasn’t yet admitted defeat, but at this point I’m willing to say it failed. The metaverse was never explained to me in a way that made it seem both feasible and viable. “Imagine you could train surgeons in the metaverse; they wouldn’t need to train on cadavers and patients!” Yes, imagine the quantum leap in technology that would be required to allow for that kind of haptic feedback. It isn’t enough to know where everything is and what it looks like; knowing how much resistance the body gives as you cut into it is also very important, and you don’t get that playing VR surgery. “Imagine you could go to the office in the metaverse!” Why would I want to do my work with a VR headset on my head?

I know I’m more than a year late to the party, but I never understood just what the metaverse was supposed to be or accomplish. To some people it was a sci-fi future like The Matrix (impossible). And to others, it was clearly just a solution in search of a problem. But the most audacious thing is that for a while, it seemed every company wanted to be a metaverse company. I was recently pointed to a hilarious metaverse-themed ETF. They’ve got Meta in there, that’s fine. They’ve got AMD and Nvidia; yeah, I guess graphics cards would be needed. Then they have Coinbase. Why the hell is Coinbase a metaverse company? I looked it up, and some people were trying to tie “Web3” to the metaverse, claiming crypto would be the currency of the metaverse. Crypto cannot even reliably operate as a currency of any kind, so it sure isn’t taking over the metaverse.

Then it seems that every gaming company of any size was a metaverse company. EA, Take-Two, Nintendo? Yeah, Nintendo made the Virtual Boy, so I guess they know what a shitshow VR headsets can be. But if the best people could think of for the metaverse was VR gaming, then that says a lot about how little thought was ever put into the concept.

Now, Web3 and crypto in general are already their own solution in search of a problem, but nothing ever dies with a bang; it just fades away. And I think we’ll have a long time yet before crypto and “The Metaverse” finally fade. Even after Facebook realizes how terrible its new name is, some other company will probably take up the banner to scam investors. But I cannot ever see myself replacing my gaming PC or any human interaction with a VR headset.

Socialism Betrayed: Racist Great Man theory of history strikes again

There was some mid historian who once said: “The history of modern Europe can be defined by 3 men: Napoleon, Lenin, and Hitler.” This pithy remark sums up much about the “great man” theory of history.

For those who don’t know, the great man theory holds that history is moved not by economic or societal or any other large-scale forces, but by the actions of individuals, the “great men” (almost never women). This theory opines that it was Napoleon, whose conquests spread republicanism throughout Europe and whose terrorizing of European monarchs led to the Concert of Europe, who defined the course of the 19th century. And in just the same way, Lenin and Hitler in their own ways defined the course of the 20th century, pulling Europe in their directions of communism or fascism, remaking the modern world through their lives and deaths. NATO and the Warsaw Pact, whose presence defined Europe for half a century, came about because of Hitler. And Leninist communism, which defined the ideological struggle between East and West, came about obviously due to Lenin.

This great man theory has been attacked by much better historians than I, but I want to focus right now on how it completely invalidates the role of any individual in society except the Great Man himself. Napoleon without an army to command and a state to lead is nothing, and yet his soldiers, his bureaucrats, and the entire nation he inherited are meaningless in the great man theory of history. And the revolutions which toppled the monarchy and allowed Napoleon to begin his rise were not the actions of solitary great men, but a great mass movement of the French people as a whole. It is likely that even if Napoleon had never existed, the conflict between revolutionary republicanism and monarchism which defined much of his legacy would still have happened. And if Lenin had not existed, the conflict between capitalism and communism would likely still have been present.

I’m reading “Socialism Betrayed” by Roger Keeran and Thomas Kenny, and it’s startling how in the very first pages of the book they lay out their thesis: the great man theory is true, and the people of society do not matter.

The collapse of the Soviet Union did not occur because of an internal economic crisis or popular uprising. It occurred because of the reforms initiated at the top by the Communist Party of the Soviet Union (CPSU) and its General Secretary Mikhail Gorbachev

Socialism Betrayed

Really?! It didn’t happen because of nationalist movements among the subjugated peoples of the USSR, like the Estonians, Latvians and Lithuanians? It didn’t happen because of mass movements which defined the collapse of every other Warsaw Pact nation in Europe? It didn’t happen because of the well-documented shortages and flailing USSR economy propped up almost entirely by oil and gas money? How easy it is to do history when you can define your villain and ignore all context!

I can already tell that this book will be dumb. Real dumb. Probably as bad as “The End of Growth” for how much it will ignore the facts to suit an opinion. Why are all the dumbest books I read the anti-capitalist ones?

Joel Kurtzman is the opposite of Richard Heinberg

I just wanted to start by saying I’ve become much more lackadaisical about these posts recently. My work is getting interesting, so I’m not putting as much time and effort into my research prior to posting. I’m mostly shooting from the hip based on whatever comes to mind. I still enjoy this though so I’ll keep doing it, and I hope my couple of readers don’t mind the decline in quality.

With that said, it’s so interesting that Joel Kurtzman detects the exact opposite problem from Richard Heinberg. For those who remember, Richard Heinberg wrote “The End of Growth,” in which he posited that there would be no more economic growth after 2010 (lol, lmao even). He claimed that this was because the world had entered an inescapable supply crunch: there just wasn’t enough stuff to go around (especially oil!) and our economy was already well past the carrying capacity of the planet. This meant that we couldn’t keep growing, because without more stuff to put in our factories we couldn’t make products to sell to people. We would all have to get by with less.

Hilariously, Joel Kurtzman detects the opposite problem from his vantage point in 1987. He detects a severe overproduction of commodities and finished goods, caused by the industrialization of the global south and its competition with America, Europe, and Japan. In Kurtzman’s thesis, we are entering an inescapable race to the bottom where wages will fall further and further as companies try to make money while the prices of goods fall. Not only that, but the nations of the world have financed their overproduction through the accumulation of debt, which they won’t be able to pay off as prices fall, meaning there will be a debt collapse and further unemployment.

I’m sure both authors would think me uncharitable towards their theses, but that was my reading from their books.

The point is, I think both of them are suffering from extreme recency bias. Heinberg was writing after a decade in which constricted oil supply had caused a rise in prices, followed by an economic crash. He thought the constricted supply would continue forever and that the low-growth era following the crash was permanent.

Kurtzman was writing after a supply crunch had turned into a supply glut. OPEC’s oil embargo of the 70s had forced the world’s economies to become more efficient and induced many producers to step up their own oil production. By the mid-80s, rising oil investment had turned into an oil boom, and to maintain market share individual OPEC countries increased production without the consent of the group. This, alongside new technologies to make oil use more efficient, led to an oil glut and depressed prices. Add to this that prices were falling in other sectors, and Kurtzman thought the trend would continue forever.

Both Kurtzman and Heinberg astutely identified trends in their immediate present, and then extrapolated those trends infinitely into the future to arrive at their desired policy goals. For Heinberg it was degrowth; for Kurtzman, protectionism. Both failed to understand that behavior changes with changing conditions. Heinberg didn’t realize that a rise in oil prices would spur investment in new extraction methods (fracking) and more efficient usage of oil (hybrid/electric cars). Kurtzman didn’t understand that falling commodity prices let companies produce more for less, nor that the American economy didn’t need manufacturing jobs to keep wages high. If more stuff is being produced while still profitable, then consumers win because prices go down. And American consumers won most of all, because tech jobs were replacing laborious manufacturing jobs.

I know pontificating is a hard job; I think all the pontifications I’ve made on this blog have been off the mark (though at least I don’t ask for money). But I find it fascinating that these two authors erred in exactly the same way and arrived at completely divergent answers. I’d love to have Kurtzman from 1987 debate Heinberg from 2010. Don’t let them use historical data; just have each explain to the other why commodity prices must remain high (or low) for the foreseeable future. I wonder whose head would explode first.

Follow up: what did Joel Kurtzman think of the 90s and 2000s?

I wrote a post last week about Joel Kurtzman’s “The Decline and Crash of the American Economy,” a book from the 80s that posited that America’s best days were behind it. Kurtzman’s central thesis appears to be:

  • Manufacturing is moving overseas, causing America to run a trade deficit
  • To buy foreign goods, America and Americans are becoming indebted to the rest of the world
  • Foreign investment is flooding into American stocks and American debt, causing us to lose control of our own economy
  • The much touted “service jobs” and “information age economy” are a mirage
  • As a result of the above four facts, the American economy is entering a period of decline and crash which can only be solved by strong protectionism and government control of the economy

This was all written in the 80s, and to an old-school leftist I guess it all seemed very sensible. I could imagine Jeremy Corbyn or Bernie Sanders making these exact arguments in 1980, while adding a few more worker-centric chapters of their own. The problem is that this thinking has largely been supplanted by modern economics.

Manufacturing is not the only thing an economy does. The knowledge economy, which Kurtzman scoffed at as the “information age economy,” has rapidly eclipsed all the manufacturing that came before it and continues to propel America forward. Likewise, foreign investment flooding into America is by no means bad, as it allowed American companies and the government to finance themselves with debt or equity. If foreign investment were fleeing America, that would be cause for concern. Being in debt is not a biblical sin for an economy. We all take on debt all the time, because the value of having a car or a house now is greater than the value of the money we will use to pay off that debt over 5 to 20 years. The same is true for companies expanding, and foreign investment flooding into America means companies can issue debt much more cheaply than they otherwise could.

Furthermore, Kurtzman’s prescription was largely abandoned in the 90s. Both Republicans and Democrats largely made peace with free trade (although the two most recent presidents have bucked this trend). There is a strong argument to be made that tariffs on foreign goods hurt the American economy as much as they do the foreign economy, for a number of reasons. Tariffs create a walled garden for certain goods, allowing noncompetitive industries to remain in business longer than they should. In turn these noncompetitive industries suck up investment and compete for resources, making it harder for actually competitive companies to expand as they should be able to. There is only so much supply of money, parts, and workers; if Ford were heavily subsidized by tariffs, would Tesla have been able to take off? Finally, tariffs alter the incentive calculus for a company, because once tariffs are part of the political equation, companies can increase their profits more by demanding higher and higher tariffs from the government than by actually improving production. This caused some Latin American countries to enter a tariff spiral where goods became more and more expensive, because rather than compete with the rest of the world, companies put their effort into demanding higher and higher tariffs.

In the 90s and the 2000s America largely abandoned Kurtzman’s thesis and his prescriptions. Newsroom angst aside, the trade deficit kept expanding, NAFTA remained in place, the service and information sectors were seen as avenues of growth, and debt kept piling up. If Kurtzman then thought the Financial Crisis was proof of his theory, he would have been rather sad that America came out of the crisis much better than most of the nations he said it was indebted to, such as Japan, Latin America, and Europe.

Reading Kurtzman’s book is like reading politics from a bygone age. I once read a book about “the Crime of ’73,” a much-maligned bill which removed the right of silver-bullion holders to have their silver minted into dollars. Pro-silver advocates despised this bill so utterly that it eventually launched William Jennings Bryan as a presidential candidate, a candidacy he might not have gained had the silver movement not been so motivated and powerful. Yet reading about it today, it’s hard to understand why this economic debate was filled with such hatred and vitriol, hard to understand the motivations of the players and how, for them, this was the defining issue of their age. Because honestly, America moved past that debate long ago: silver isn’t money and neither is gold; dollars are. I feel almost the same way about Kurtzman’s book. The last two presidents notwithstanding, most of my adult life has been shaped by a bipartisan agreement on free trade and the importance of the information economy over traditional manufacturing. I just wonder what Kurtzman would think now.

Energy Return on Energy Investment, a very silly concept

Today I’d like to address one concept that I read about in Richard Heinberg’s The End of Growth, Energy Return on Energy Investment or EROEI. The concept is an attempt to quantify the efficiency of a given energy source, and in the hands of Heinberg and other degrowthers it is a way to “prove” that we are running out of usable energy.

EROEI is a simple and intuitive concept: take the amount of energy produced by a given source and divide it by the amount of energy it costs to set up and use that source. Oil is a prime example. At the beginning of the 20th century oil extraction was easy, since oil simply seeped out of the ground in many places. Drilling a small oil well won’t cost you that much; hell, you can probably do it with manpower alone. In that case the oil gushing forth will easily give you a good energy return.

In the 21st century, however, things have become harder. Oil wells require powerful machines to drill (which costs energy), and the amount and quality of the oil you get out is often lower. Add to that the fact that modern wells require huge amounts of metal and plastics, all of which cost energy to produce and even more energy to transport to their location, then add the energy it took to find the oil in the first place using complex geological surveys and seismographic data, and taken together some people claim that the EROEI for a modern oil well is already less than 1, meaning more energy is being put in than we get out.
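
The arithmetic itself is a one-liner; here is a minimal sketch of the ratio as defined above, with illustrative numbers rather than real measurements:

```python
# EROEI = energy delivered / energy invested, in matching units.
# All numbers below are illustrative, not measured values.
def eroei(energy_out, energy_in):
    return energy_out / energy_in

print(eroei(100, 1))    # early 20th-century gusher: 100 units out per 1 in
print(eroei(100, 120))  # hypothetical modern well: ~0.83, the "less than 1" claim
```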

And oil isn’t the only fuel source heading towards an EROEI of less than 1. Modern mining techniques for coal require bigger and bigger machines, natural gas requires more and more expansive facilities, and even solar panels require minerals that are more and more difficult to acquire. It seems everything but hydro power and (perhaps) nuclear power is becoming harder and harder to produce, sending energy returns down further and further.

This phenomenon, where the EROEI for our energy sources falls below 1, is supposed to presage an acute energy crisis and the economic cataclysm that degrowth advocates have been warning us about. If we’re getting out less energy than we’re putting in, then we’re really not even gaining, are we? The problem is, I’m struggling to see how EROEI is even a meaningful way to look at this.

First let me note that not all energy is created equal. Energy in certain forms is more usable to us than in others. A hydroelectric dam holds water which (being elevated above its natural resting place) acts as a store of potential energy. The release of that water drives a turbine to produce electricity. But you can’t fly a plane using water power, nor keep it plugged in during flight. Jet fuel is another store of potential energy, and it has a number of advantages over elevated water: it is very easy to use and transport, so you can fill a tank with it, move it to wherever your plane is, then fill the plane’s tanks from there.

If the only two energy sources in the world were jet fuel and hydroelectric power, we would still find it beneficial to somehow produce jet fuel using hydroelectric power, even though that would necessitate an EROEI of less than one. Although this conversion would yield less total energy, the energy would be in a more useful form. People would happily extract oil using hydroelectric power, then run refineries using hydroelectric power, because jet fuel has so much utility. This utility means that (supply being equal) jet fuel would command a higher price than hydroelectric power per unit of energy. And so the economic advantages would make the EROEI disadvantages meaningless.
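
Here is that hypothetical worked through in numbers (all prices invented purely for illustration): the conversion is an energy loss but an economic gain.

```python
# Invented prices, chosen only to illustrate the utility argument above.
hydro_price = 0.01     # $ per MJ of hydroelectricity
jet_fuel_price = 0.03  # $ per MJ of jet fuel (a more useful form of energy)
efficiency = 0.5       # assume 2 MJ of electricity -> 1 MJ of jet fuel

energy_in = 2.0                      # MJ of hydro power spent
energy_out = energy_in * efficiency  # MJ of jet fuel produced

print(energy_out / energy_in)        # EROEI = 0.5, an "energy loss"
profit = energy_out * jet_fuel_price - energy_in * hydro_price
print(round(profit, 2))              # $0.01 per 2 MJ converted, an economic gain
```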

This is the fatal flaw of EROEI in my mind. The fact that some forms of energy are more useful than others means we can’t directly compare energy out and energy in. The energy that is used to run a modern oil well comes to it from the grid, which is usually powered by coal, solar, wind, or nuclear, none of which can be used to fuel a plane. Converting these forms of energy into oil is an economic gain even if it is an energy loss. Furthermore EROEI estimates are generally overly complex and try to account for every joule of energy used in extraction, even when those calculations don’t really make sense. Let me give you an example:

A neolithic farmer has to plow his own fields, sow his own seeds, and reap his own corn. Not only that, but the sun’s rays must shine upon his fields enough to let the crops grow. Billions of kilocalories of solar energy hit his plants over a growing season, and most of that is lost during the plants’ growth, because photosynthesis is not all that efficient to begin with. The plants will have used billions of kilocalories of energy, and from them the farmer gets a few thousand kilocalories a day. Most of the energy is lost.

This is the kind of counting EROEI tries to do, applied to farming. When you count up every joule of energy that went into the farmer’s food, you find his food will necessarily provide him with an EROEI of less than one, thanks to the first law of thermodynamics. But this isn’t a problem, because Earth isn’t a closed system, and neither are our oil wells. We are blasted by sunlight every minute, our core produces energy from decaying radionuclides, our tides are driven in part by the moon’s gravity; there is so much energy hitting us that we could fuel the entire world for a thousand years and never run out. The problem is that in some scenarios that energy isn’t useful. You can’t fly a plane with solar or geothermal or gravitational energy, but with them you can power an oil well. So we happily use the energies we have lots of (including using solar power to grow useful plants and animals!) to help us extract the energies with greater utility.

I think EROEI failed from the very beginning for this very reason. It ignores economic realities and the massive amount of energy that surrounds us, and instead argues from the laws of thermodynamics. Yes, in any closed system usable energy eventually runs out, but it isn’t even clear that our universe is a closed system, and the Earth definitely is not, so we need to face up to economic reality on this.

Interpretatio graeca for Chinese myths and legends

I’ve been reading an interesting book from 1931. It discusses the motifs and references used in Chinese art, highlighting the Taoist, Confucian, and Buddhist stories that many of them derive from. However, the book has a problem: the author was clearly trying to relate every Chinese story back to the stories he was more familiar with, mainly Indian Buddhist stories but also Roman and Greek ones. The Romans used to do this all the time; they called it “interpretatio graeca.” The Romans figured that every god or goddess in every culture was merely a manifestation of a god they already knew, so they would “interpret” foreign gods as being the same as or similar to their own Roman/Greek gods. So Ra, the chief god of the Egyptians, got conflated with Apollo in Roman writings, because they shared a sun motif so they must be identical, right? But Ra was not the same as Apollo, and Chinese myths are not the same as Indian myths, yet the author of this book keeps conflating the two and interpreting Chinese myths through a lens of Indian myths.

The book itself is called Outlines of Chinese Symbolism & Art Motives (sic) by C.A.S. Williams. In many respects it works well as an overview of the history and stories that make up a lot of Chinese art, and as a primer on Chinese art culture. And yet it falls into this trap again and again of interpreting everything unfamiliar through the lens of the familiar. I understand that this can make things easier for the reader: “This god is the king of the gods, he rules the sky and causes lightning” may be harder to remember than “he’s like Zeus.” But saying “he’s like Zeus” imports a bunch of assumptions that aren’t true to what the Chinese sky god is actually like.

I wonder if this is in part because of outdated theories in comparative religion. There was a vibe for a time of assuming that all myths and legends were just borrowed or stolen from earlier cultures. Jupiter and Zeus weren’t an original idea; they must have been borrowed by the Greeks and Romans from some previous culture that had a sky god wielding thunderbolts and ruling the other gods. The theory went on to say that every single sky god in history was just a borrowing of a borrowing from an “original” sky god dreamed up 10,000 years ago. But the other option is to realize that “sky god causes thunder” is an easy thing for different peoples to come up with independently. Assuming that every myth in history was borrowed from somewhere else is also how you get inaccurate claims like “Jesus was just re-branded Mithra” and other ahistorical nonsense. It’s a very human feeling to want to relate everything back to something you already know well, but it doesn’t lead to good history, and so it should not be a feeling indulged in academic writing.

Still, for a book from 1931, Outlines is surprisingly good. I enjoy being able to read the characters and phrases it prints in the original Chinese, and learning the meaning behind some of them from its usually accurate descriptions of etymology. The descriptions of myths and stories generally seem accurate, and the nonstop conflations with Indian myths can be ignored. I got this for $6 at a used book store, and I think it was worth the money.

Pointless prognosticating: what is the “Next Big Thing”?

If you follow the tech industry, you know that everyone’s always searching for the Next Big Thing, and if you followed my series on The American Challenge, you might remember that the book badly missed on some of its predictions of what the Next Big Thing would actually be. This got me thinking: what do I think the Next Big Thing is? What will be the next trillion-dollar industry, the kind of thing countries will want to focus on and people will want to invest in: semiconductors and computers in the 80s, mass-built automobiles in the 1910s, trains in the 1800s. The kind of thing that will change the way we do everything, where if you have a chance to get in at the ground floor and don’t take it, you’ll be kicking yourself in 20 years.

To start with, I’ll talk about others’ predictions.

I’ve heard some people talk about cloud computing as the Next Big Thing, but it’s hard to tell if it’s truly Next or more a continuation of the Current Big Thing. Would it make sense to separate the internet revolution from the computer revolution? Both happened concurrently; the first couldn’t have happened without the second, and the second was sent skyrocketing by the first. So how does cloud computing fit into all this? It’s already a trillion-dollar industry with the largest tech companies in the world all throwing money into it, and even if I can’t personally explain how it works, I can definitely see that others are talking about it as a revolution. But it feels hard to tease it apart from computers and the internet as a whole, and it doesn’t seem like we’re on the ground floor anymore. Microsoft, Google, Amazon, and Meta have all put so much money into their cloud infrastructure that I don’t see any small fries taking pieces off of them. I’d say cloud computing is the current Big Thing.

But that’s mostly semantics. I’ve also heard people say 3D printing is the Next Big Thing. The University of Nottingham, for instance, has a department that wants to be able to 3D print a smartphone, circuitry and all, using just metal and plastics as inputs. Mass production via 3D printing has long been a holy grail of the field, and the ability to custom-manufacture pretty much anything by just fiddling with a computer model would certainly be a game-changer. But 3D printing has so many technological limitations that I still wonder if it will truly take off. Most glaringly, 3D-printed items tend not to work well out of the printer, and fall apart quickly even when they do, which is a big barrier to mass production. Ultimately I wonder if 3D printing will be something like supersonic travel was in the 70s: something seen as the mass-market future that was in fact relegated to specialized roles, while more boring “old-fashioned” things kept their market share.

The Internet of Things is something I’ve never really gotten the hype for. There are certain applications where having a device always connected to the wifi could add value, but most of the hype seems to be marketers trying to sell a subscription service for a device that used to be a one-time purchase, or unrealistic promises that don’t fix the Oracle Problem (i.e., suppose you give your machine a wifi connection so it can always tell you when certain conditions are met; will you trust that your machine is giving you good data, or will you have to double-check each time anyway, negating the benefits of having wifi in your machine?). Frankly, I don’t want anything in my house connected to the wifi unless I expect or need to play YouTube on it.

Another Next Big Thing could be the DNA/protein revolution. The Human Genome Project was a massive success, as was the development of modern mass spectrometry, and a huge amount of modern biochemistry couldn’t exist without these techniques. Our ability to read the sequence of any protein or piece of DNA we want, and to alter them as we please, has definitely given us a leg up in fighting genetic diseases and engineering proteins for a number of different purposes. In theory, biochemistry can let us create proteins to do just about any job that ordinary chemistry does, only faster and better, from highly speculative roles like uranium enrichment and carbon capture to humdrum everyday roles like plastic production. The ability to use genetics and proteomics both to cure our diseases and for industrial purposes is certainly enticing, but I’m still not sure the technology is there or will be there soon. Without getting too jargon-y: proteins can only do their job if they have the correct shape, and our ability to create any shape we want is not fully developed. When you change a single piece of a protein, it can have enormous effects on the protein’s structure and function, and it’s often difficult to even test those effects. Some people have told me that “genes and proteins are the next coding language,” but until it’s as easy to test a protein as it is to test a program, I’m not sure that’s true.

Finally, outer space. Will the next trillion-dollar company be a space company and not a tech company? I’d love that to be true, but I’m not sure. The best argument I’ve heard for the economic viability of space colonies was actually a really dumb and technical one. If you assume that there are already people living on both the Moon and the Earth, then in theory it is cheaper to ship anything from the Moon to the Earth than from the Earth to the Moon (thanks to the Moon’s weaker gravity and lack of atmospheric drag). If we then assume that economies of scale can make producing things on the Moon cost almost the same as producing them on Earth, then any company that moves its production from the Earth to the Moon has a comparative advantage that cannot be taken away, and it can service both the population on the Moon and the population on Earth more cheaply. Thus a Moon colony should be (economically) self-sustaining once it reaches a certain size. There are of course a hell of a lot of assumptions in this plan, and some of them are even bad assumptions, but it is genuinely the only compelling argument I’ve heard for colonizing space other than the Tsiolkovsky argument, which isn’t much more of an argument than “but I WANT it to happen.”
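
The gravity half of that argument can be sketched with the Tsiolkovsky rocket equation, using rough textbook figures (a chemical-rocket exhaust velocity of about 3.5 km/s is assumed; atmospheric drag and landing burns are ignored):

```python
import math

# Tsiolkovsky rocket equation: delta_v = ve * ln(m0 / m1). Rearranged, the
# fraction of launch mass that must be propellant is 1 - e^(-delta_v / ve).
def propellant_fraction(delta_v_km_s, exhaust_velocity_km_s=3.5):
    return 1 - math.exp(-delta_v_km_s / exhaust_velocity_km_s)

print(propellant_fraction(11.2))  # Earth surface to escape: ~96% propellant
print(propellant_fraction(2.4))   # Moon surface to escape: ~50% propellant
```

Before you even count drag, a lunar launch spends roughly half its mass on propellant where an Earth launch spends nearly all of it; that gap is the comparative advantage the argument leans on.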

So what is the Next Big Thing? Honestly I don’t know, and I don’t think anyone does at this point. That was one thing I kept thinking about while reading The American Challenge. JJSS and people like him seemed to think that the best way to run a country was to foresee the Next Big Thing and then invest in it. But JJSS’s predictions on the Next Big Thing hit maybe one in three or one in four, depending on how you score him, and frankly, redirecting national budgets into government projects, with all the bureaucratic inertia and election-cycle thinking that comes with them, just seems like a terrible idea. Better to let the free market create a virtuous cycle where good ideas win and bad ideas lose than to create a government system that can be handcuffed by political or interest-group concerns into throwing good money after bad and ignoring successes in favor of prestigious failures. I don’t know what the Next Big Thing is, but what do you think? Feel free to comment below.

I don’t think Twitter is dying

You can stop tweeting #RIPTwitter

Over this past week, Twitter has gotten weird. Reports are flying that Musk fired literally everybody, that there are no engineers managing the servers, that he demanded everyone work 80 hours or quit and most of them quit. Forgive me for not posting sources, but most of this is ultimately unsourced info from social media anyway. Regardless, people on Twitter are tweeting up a storm about how this is The End of Twitter and how they’ll all move to Facebook or Instagram or Mastodon when Twitter inevitably goes down for good. I don’t think that’s going to happen, at least not for another year or more.

Twitter may lose some of its userbase as its billionaire owner continues to go crazy, but I highly doubt it will be replaced all at once, or even in the next year or so, for a few reasons:

  • 1.) Lack of alternatives

When MySpace lost the battle to Facebook, it was a true battle between two platforms that did mostly the same thing. Both were neck and neck in user count, and both focused on very similar styles of content and posting. Twitter doesn’t have that problem: Facebook and Instagram are nothing like Twitter in its microblogging content or its ability to spread content to every corner of the userbase by latching onto trending topics. And Mastodon has a tiny fraction of Twitter’s total userbase; if it keeps growing every year while Twitter loses half its userbase every year, then in around five years they’ll be neck and neck, like MySpace and Facebook were in 2008 (back-of-envelope math below). Until I see a sustained long-term trend of that nature, I’m not ready to proclaim that This Is The Death Of Twitter.
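
A quick sanity check of that timeline, using rough late-2022 figures and growth rates that are pure assumptions (Twitter around 240 million daily users halving each year, Mastodon around 2 million doubling):

```python
# Assumed starting points and rates, deliberately generous to Mastodon.
twitter, mastodon, years = 240_000_000, 2_000_000, 0
while twitter > mastodon:
    twitter //= 2   # Twitter loses half its userbase each year
    mastodon *= 2   # Mastodon doubles each year
    years += 1
print(years)  # 4 -- even then, the crossover is years of sustained decline away
```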

  • 2.) Institutional Buy-In

Twitter gives institutions something that they really, really want: the ability to spread their message to all its users at almost no cost. There’s a good reason that Justin Trudeau, His Holiness the Pope, and the People’s Daily (the most-read newspaper in China) all have official, active accounts on Twitter. Most would never be caught dead on Reddit in an official capacity, and Facebook, Instagram, and other sites don’t let them reach every user the way Twitter does. Even if everyone with a net worth under 1 million dollars left Twitter TODAY, the site would likely coast on institutional inertia for quite some time, as institutions would find that tweeting something and having it get picked up by other institutions (especially newspapers) was still a great way to get their viewpoint out into the wider world. Institutions don’t change rapidly, and even if Twitter does die it could take years for many of them to migrate off of it. And the key is that as long as those institutions remain on Twitter, Twitter will still have value to many different users. Users who like to troll politicians’ comments, or bloggers and journalists looking to keep up with what the institutions are putting out, will stay on Twitter as long as the institutions do. So even if you start posting your dog pictures solely to Instagram, I doubt the Washington Post newsroom will abandon Twitter any time soon.

  • 3.) The Court of Lord Musk

People like to see billionaires as unaccountable god-kings creating or destroying everything in their path. This is partly because that is the image most billionaires cultivate, and partly because they are certainly held less accountable than those of us who work for a living. But Musk isn’t the sole proprietor of Twitter, or even the sole proprietor of Musk Enterprises. There is a legion of accountants, lawyers, and investors who check and double-check his every move. It seems strange to say that a man who flouts both the SEC and slander laws is being checked and double-checked, but the very fact that he has never been punished for what he’s done is a testament to the work of his lawyers, accountants, and investors. These intermediaries act as a moderating influence on Musk the auteur CEO, and so will likely ensure that no matter what he does, the bills keep getting paid and the lights stay on at Twitter.

  • 4.) Ease of use

Twitter has already been integrated into just about everything imaginable. I only have a Twitter handle (@streamsofconsc) to tweet out my daily blog posts. But WordPress (and basically every other publishing platform) has made it super easy to link your Twitter handle to your blog and auto-post everything you publish with no added work. Mastodon isn’t integrated into this ecosystem and probably won’t be any time soon.

  • 5.) I’ve seen this game before with Musk

This is a bit personal, but I’ve predicted the downfall of Musk before myself. I was part of the Musk hate-culture in r/enoughmuskspam for a fair bit, and fell easily into the echo chamber which pushed a narrative where Musk was constantly on the edge of destruction. I eventually got out, but it made me realize how easily hatred and castigation get amplified in such an echo chamber. Twitter is currently a strong echo chamber declaring the death of the platform and the End of Musk, and since there’s no social benefit to going against the grain, the most hyperbolic and outrageous claims of destruction are shared and amplified. This reminds me all too much of the patterns I saw with the hate-culture surrounding previous Musk ventures, and it makes me skeptical about people’s claims for this one.

I don’t think Twitter will go down because Musk fired too many of the people doing server maintenance. I don’t think Twitter will be replaced by Mastodon within the next few years. I don’t think Musk will be charged with market manipulation or treason for how he’s purchased a major avenue of public speech and trashed it. And I don’t think the people declaring the death of Twitter today will ever look back and admit they were wrong (if they are indeed proven wrong), any more than the people who declared the death of Tesla back in 2018, or the peak-oilers of the early 2000s. I think most will either forget entirely or claim they were “early, but not wrong,” eternally pushing back their predicted death-date as they get more and more wrong by the year.

I may be wrong on this, and I’ll try to revisit this post in a year or so to either give my mea culpa or to declare how much smarter I am than everyone, but at this point I’d happily take the gamble that Twitter won’t be dying any time soon.