“I go with the athletes, not the science”

Sorry I haven’t written about finance in a while, I know science+finance (SciFi, if you will) was kinda my niche, but since I got serious about my fitness I’ve been recommended a lot of fitness content by the Almighty Algorithm, and it’s gotten me thinking.

Today’s topic requires just a tiny bit of background. As I wrote about, I’ve been following the advice of Dr Mike Israetel in part because he says all the right science-y shibboleths to make me believe he knows what he’s talking about. But I’ve also gotten recommended content from many other lifters who push back against some of his claims.

To an extent their pushbacks pass the smell test as well: they reference the same concepts that Dr Mike (and others) discuss, but they interpret those concepts differently. So the disagreement between Dr Mike’s “science-based” advice and other people’s advice seems to be a legitimate disagreement over the science, rather than a denial of science and the substitution of personal preference in its stead.

But other parts of this disagreement strike me as more… thoughtless. I watched a video critiquing some of the science-based conclusions, and it stated (paraphrased) “people say this move is terrible, but then you see world record power lifters doing it and you think hmmm, maybe it’s not so terrible after all.”

I think this appeal to authority has no place in a science-based discussion. Now yes, every scientific theory on exercise must be tested and proven *outside* the lab as well as in the lab. If a conclusion only works in a controlled lab environment then it isn’t necessarily best in the “real world.” But saying “well the best power lifters do this so the science must be wrong” is kind of absurd, because maybe they could be *better* if they actually listened to the science.

It reminds me of a story about Pliny the Elder. Pliny was a wealthy Roman politician whose wealth was derived mainly from vast agricultural estates. Not only that, he had extensive sources of the best knowledge available in the Roman world. So in his book Natural History, he draws upon his knowledge and experience to categorically state that *if you do not honor the gods, you will not be successful in agriculture*. And if you asked any of the Roman agriculturalists of his era, they’d probably give you the same answer.

Is the science on agriculture wrong? If all the best farmers honor the gods, is that the only way to succeed?

No.

So if the best power lifters in the world are doing a certain move that science says is terrible, maybe the science is actually right and the power lifters are succeeding due to their own innate abilities combined with all their other training. I’d hazard a guess that a single move isn’t make-or-break for their training at all, and defending a move with this appeal to authority doesn’t really seem logical. It seems more like casting about for evidence to support an idea that you’d like to be true.

Science must be refuted with science. You have to be able to use real-world data and say “lab results say this move is bad but here’s all the evidence showing that people who eschew the move generally fail and people who use the move generally succeed.” You can’t point to a single anecdote and say “well some people who use it succeed,” because then you’d be pointing to Pliny the Elder and saying “well I guess honoring the gods does improve your farm, because this guy was a really successful farmer and that’s what he did.”

Anyway, exercise science still seems to be in its infancy. I hope it gets more rigorous and comprehensive in the future, but it still seems to need some time before we can believe its claims as much as we can believe virology or chemistry.

Exercise and shibboleths

I’ve been trying to lose weight and gain muscle for years. But despite being in the target Young Male demographic, I never listened to Joe Rogan, or Logan Paul, or any of the exercise/fitness influencers. Part of that was that they just didn’t interest me. Part of that was that fitness is filled with a lot of pseudoscience, and as a scientist myself I could see that almost everything said online was tinged with nonsense and falsehood. Everyone is looking for “one weird trick” to get abs of steel and 4% body fat, which leads to a proliferation of voodoo practitioners giving terrible advice and selling you supplements.

I stayed away from online exercise discussions.

But while idly scrolling one day, I found a video by Dr Mike Israetel of Renaissance Periodization. And for the first time in my life, I’m hooked. I’m watching his videos, I’m trying to learn his techniques, I’m putting into practice what he says I should be doing.

I think a large part of this sudden switch is that Dr Mike seems to have legit credentials. A teaching record at Lehman College, a genuine publication history: this guy is clearly doing science, not voodoo. But I think his shibboleths matter even more than his credentials.

Put simply, Mike Israetel says all the right words as a scientist to make me (a fellow scientist) believe he knows what he’s saying. There are certain words that started out in science but have reached the mainstream: anyone can talk about carbohydrates and calories. But few people know what a motor unit is, or can accurately talk about the immune system. Dr Mike is saying things that pass the smell test to me (I am a fellow biologist, though not an exercise scientist specifically), and that helps me believe him when he says things I might otherwise be skeptical of.

And those shibboleths… make me nervous. Because I know I’m not actually doing research, I’m not actually seeking out all sides of the debate and forming my own rational conclusions. There are hundreds of hucksters selling you on “the best way” to do exercise, so am I trusting Dr Mike for all the wrong reasons? Maybe he knows his biochemistry, but his exercise science is dogshit. I’d never know.

And even if Dr Mike is truly giving me the most accurate, up-to-date information in the scientific literature, that information could be wrong, and I could spend my time following baseless advice and getting less fit than if I’d just trusted the gymbro with a 6-pack and pecs.

I haven’t looked for any advice outside of Dr Mike, because to be honest I don’t have the time or the background necessary to know if he’s *really* got the goods or is a huckster like all the others. I have the background to know he knows his biochemistry, but beyond that I’m lost. But as someone without much time to exercise anyway, I feel like latching on to a charismatic Youtube professor is at least better than latching on to any other charismatic Youtuber, and is hopefully better than flying blind like how I used to exercise.

Time will tell.

So just how *do* you get good at teaching?

As a scientist with dreams of becoming a professor, I know teaching is part of the package. Whether it’s a class of undergraduates or a single student in a lab, your knowledge isn’t worth anything if you cannot teach it to others. I always say: no one would have cared about Einstein if he couldn’t accurately explain his theories. It doesn’t matter how right you are; science demands you explain your reasoning, and if you can’t explain in such a way as to convince others, you still have a ways to go as a scientist.

Einstein was a teacher. After discovering the Theory of Relativity, he wrote and lectured so as to teach his theory to everyone. Likewise I must be a teacher. Whether it’s teaching basic concepts to a class of dozens or high-level concepts to an individual or a small group, teaching is part of science, and mandatory for a professor.

But how do I get good at it?

The first problem is public speaking. I don’t think I get nervous speaking in public, but I do have a tendency to go too fast, such that my words don’t articulate what I’m actually thinking. It’s hard to realize that the concepts you know in your head will be new and novel to the whole world that lives *outside* your head. When teaching these concepts to someone else, you need to go step by step so that they understand the logical progression; you can’t just make a logical leap because you already know the intervening steps.

So OK, I need to practice speaking more, but beside that, what’s the best method for teaching? And here we get to the heart of why I’m writing this post, *I don’t know and I don’t think anyone does*.

Every decade it seems sociologists find One Weird Trick to make students learn, and every decade it seems that trick is still leaving many students behind. When I went to school, teaching was someone standing at the front of the class, giving a lecture, after which students would go home and do practice problems. This “classic” style of teaching is now seen as passé at best, outright harmful at worst, and while it’s still the norm, it’s actively shunned by most newer teachers.

Instead, teachers now have a battery of One Weird Tricks to get students to *really* learn. “ACTIVE learning” is the word of the day, the teacher shouldn’t just lecture but should involve the students in the learning process.

For instance, the students could each hold remote controls (clickers) with the numbers 1 through 4 on them. Then the teacher will put up a multiple-choice question at random points during class, and the students will use their clicker to give the answer they think is correct. There’s no grade for this except participation, and the students’ answers are anonymized, but the teacher will give the correct answer after all the students answer, and a pie chart will show the students how most of their classmates answered. So the theory is that this will massively improve student learning in the following ways:

  • Students will have a low-stakes way to test their knowledge and see if they’re right or wrong, rather than the high-stakes tests and homework that they’re graded on. They may be more willing to approach the problem with an open mind, rather than being stressed about how it will affect their grade.
  • The teacher will know what concepts the students are having trouble on, and can give more time to those prior to the test.
  • Students stay more engaged in class, rather than falling asleep, and likewise teachers feel more validated with an attentive class.

The only problem is that the use of clickers has been studied, and it has failed to improve student outcomes. Massive studies and meta-analyses, covering dozens of classes and thousands of students, find that clickers don’t improve students’ learning at all over boring old lectures.

Ok, how about this One Weird Trick: “flipped classrooms.” The idea is that normally the teacher lectures in class and the students do practice problems at home. What if instead the students’ homework is to watch the lecture as a video, then in class students work on problems and the teacher goes around giving them immediate and personalized feedback on what they’re doing right or wrong?

In theory this again keeps students far more active: they’re less likely to sleep through class, and the immediate feedback they receive while working through the problem sets helps teachers and students know what they need to work more on. Even better, this One Weird Trick was claimed to narrow the achievement gap in STEM classes.

But another large meta-analysis showed that flipped classrooms *again* don’t improve student learning, and in fact *widen* the achievement gap between minority and white students. Not at all what we wanted!

In theory, science teaches us the way to find the truth. Our methods of storing information have gotten better and better and better as we’ve used science to improve data handling, data acquisition, and data transmission. I read both of those meta-analyses on my phone, whereas even just 30 years ago I would have had to physically go to a University Library and check out one of their (limited) physical journals if I wanted to read the articles and learn if Active Learning is even worth it or not.

But while we’ve gotten so much better at storing information, have we gotten any better at teaching it? We’ve come up with One Weird Trick after One Weird Trick, and yet the most successful (and common) form of teaching is a single person standing in front of 20-30 students, just talking their ears off. A style of teaching not too far removed from Plato and Aristotle, more than 2,000 years ago.

I want to get better at teaching, and I think public speaking is part of that. But beyond just speaking gooder, does anyone even know what good teaching *is*?

Gene drives and gingivitis bacteria

One piece of sci-fi technology that doesn’t get much talk these days is gene drives. When I was an up and coming biology student, these were the subject of every seminar, the case study of every class, and they were going to eliminate malaria worldwide.

Now though, you hardly hear a peep about them. And I don’t think, like some of my peers, that this is because anti-technology forces have cowed scientists and policy-makers into silence. I don’t see any evidence that gene drives are quietly succeeding in every test, or that they are being held back by Greenpeace or other anti-GMO groups.

I just think gene drives haven’t lived up to the hype.

Let me step back a bit: what *is* a gene drive? A gene drive is a way to manipulate the genes of an entire species. If you modify the genes of a single organism, then when it reproduces, only about 50% of its progeny will inherit whatever modification you gave it. Unless your modification confers a lot of evolutionary fitness to the organism, there is no way to make every one of the organism’s descendants have your modification.

But a gene drive can do just that. In fact, a gene drive can confer an evolutionary disadvantage to an organism, and you can still guarantee all of the organism’s descendants will have that gene. The biggest use-case for gene drives is mosquitoes. You can give mosquitoes a gene that prevents them from sucking human blood, but since this confers an evolutionary disadvantage, your gene won’t last many generations before evolution weeds it out.

But if you put your gene in a gene drive, you can in theory release a population of mosquitoes carrying this gene and ensure all of their descendants have the gene and thus won’t attack humans. In a few generations, a significant fraction of all mosquitoes will have this gene, thus preventing mosquito bites as well as a whole host of diseases mosquitoes bring.
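The inheritance arithmetic here can be sketched with a toy model. This is an illustration only: the 1% release fraction is made up, and it assumes idealized, perfect “homing,” where every carrier passes the drive to all of its offspring.

```python
# Toy model of how a gene drive beats Mendelian inheritance.
# Assumption (idealized): perfect homing, i.e. any mosquito carrying
# one copy of the drive passes it to ALL offspring, versus the usual
# ~50% chance for an ordinary single-copy modification.

def next_gen_carriers(carrier_freq):
    """Carrier fraction after one round of random mating with a drive:
    an offspring lacks the drive only if BOTH parents lack it."""
    return 1 - (1 - carrier_freq) ** 2

freq = 0.01  # release modified mosquitoes into 1% of the population
for _ in range(10):
    freq = next_gen_carriers(freq)
print(f"carrier fraction after 10 generations: {freq:.4f}")
```

With ordinary Mendelian inheritance a rare modification would stay around its 1% release level; with homing, the drive saturates essentially the whole population within about ten generations, which is exactly why the technology generated so much excitement.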

Now this is a lot of genetic “playing God,” and I’m sure Greenpeace isn’t happy about it. But environmentalist backlash has never managed to stamp out 100% of genetic technology. CRISPR therapies and organisms are on the rise, GMO crops are still planted worldwide, environmentalists may hold back progress but they cannot stop it.

But talk about gene drives *has* slowed considerably and I think it’s because they just don’t work as advertised.

See, to be effective a gene drive requires an evolutionary contradiction: it must reduce an organism’s fitness but still be passed on to the progeny. Mosquitoes don’t just bite humans for fun: we are some of the most common large mammals in the world, and our blood is rich in nutrients. For mosquitoes, biting us is a necessity of life. So if you create a gene drive that knocks out this necessity, you are making the mosquitoes who carry your gene drive less evolutionarily fit.

And gene drives are not perfect. The gene they carry can mutate, and even if redundancy is built in, that only means more mutations will be necessary to overcome the gene drive. You can make it more and more improbable that mutations will occur, but you cannot prevent them forever. So when you introduce a gene drive, hoping that all the progeny will carry this gene that prevents mosquitoes biting humans, eventually one lucky mosquito will be born that is resistant to the gene drive’s effects. It will have an evolutionary advantage because it *will* bite humans, and so like antibiotic resistant bacteria, it will grow and multiply as the mosquitoes who still carry the gene drive are outcompeted and die off.
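That dynamic can be sketched with a minimal simulation. All the numbers here (homing efficiency, fitness cost, mutation rate) are made up, and tracking allele classes this way is a haploid caricature rather than real population genetics, but it captures the logic: the drive sweeps first, then the resistant mutant takes over.

```python
# Three competing allele classes in a toy haploid population: the drive
# cheats inheritance but pays a fitness cost (its carriers can't bite),
# while a rare resistant mutant keeps full fitness AND is immune to the
# drive's copying trick. All parameter values are invented.
drive, wild, resistant = 0.10, 0.90, 0.0
HOMING = 0.95     # chance the drive overwrites a wild allele it meets
COST = 0.10       # relative fitness penalty of carrying the drive
MUTATION = 1e-4   # per-generation chance a wild allele becomes resistant

for _ in range(200):
    # homing: the drive converts a share of the wild alleles
    converted = HOMING * drive * wild
    drive, wild = drive + converted, wild - converted
    # mutation: a sliver of wild alleles become drive-resistant
    resistant, wild = resistant + MUTATION * wild, wild * (1 - MUTATION)
    # selection: the drive pays its fitness cost, then renormalize
    total = drive * (1 - COST) + wild + resistant
    drive = drive * (1 - COST) / total
    wild, resistant = wild / total, resistant / total

print(f"drive={drive:.3f} wild={wild:.3f} resistant={resistant:.3f}")
```

Under these assumptions the drive sweeps the population within a dozen or so generations, but the resistant allele, seeded by mutation while wild alleles still existed, outcompetes it within a couple hundred: the drive wins the battle and loses the war.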

Antibiotics did not rid the world of bacteria, and gene drives cannot rid the world of mosquitoes. Evolution is not so easily overcome.

I tell this story in part to tell you another story. Social media was abuzz recently thanks to a guerilla marketing campaign for a bacteria that is supposed to cure tooth decay. The science can be read about here, but I was first alerted to this campaign by stories of an influencer who would supposedly receive the bacteria herself and then pledged to pass it on to others by kissing them. Bacteria can indeed be passed by kissing, by the way.

But like gene drives, this bacteria doesn’t seem to be workable in the context of evolution. Tooth decay happens because certain bacteria colonize our mouth and produce acidic byproducts which break down our enamel. Like mosquitoes, they do not do this just for fun. The bacteria do this because it is the most efficient way to get rid of their waste.

The genetically modified bacteria was supposed to not produce any acidic byproducts, and so if you colonized someone’s mouth with this good bacteria instead of the bad bacteria, their enamel would never be broken down by the acid. But this good bacteria cannot just live in harmony and contentment; life is a war for resources, and this good bacteria will be fighting with one hand tied behind its back.

Any time you come into contact with the bad bacteria, it will likely outcompete the good bacteria because it’s more efficient to just dispose of your waste haphazardly than it is to wrap it in a nice, non-acidic bundle first. Very quickly the good bacteria will die off and once again be replaced by bad bacteria.

So I’m quite certain this little marketing campaign will quietly die once it’s shown the bacteria doesn’t really do anything. And since I’ve read that there aren’t even any peer-reviewed studies backing up this work, I’m even more certain of its swift demise.

Biology has brought us wonders, and we have indeed removed certain disease scourges from our world. Smallpox, rinderpest, and hopefully polio very soon: it is possible to remove pests from our world. But it takes a lot more work than simply releasing some mosquitoes or kissing someone with the right bacteria. And that’s because evolution is working against you every step of the way.

Crying over Cryo-EM

OK so the title is hyperbole, but I’ve definitely struggled recently with my cryo-electron microscopy. I guess here I’ll give an overview of what exactly electron microscopy is and why I’ve struggled.

Professor Jensen of Caltech has a great series of videos on cryo-EM: why we use it, how we use it, and what it is. Anyone interested in the technology should watch it, but for my own purposes:

  • Cryo-electron microscopy consists of freezing a sample and then shooting electrons at it to see its 3d structure at near-atomic scales.
  • We’re using it to study a number of proteins that cause diseases. In particular we want to know how the 3d shape of a certain protein creates that protein’s function. And how that function can then go on to cause a disease.
  • So we purify a specific protein, make a cryo-grid from that purified protein, and then look at that cryo-grid under electron microscopy hoping to get a good 3d structure.

But that’s where the problems start. First of all, purifying a protein to 99.9% purity is no small feat, especially when you’re taking proteins out of actual patient samples. I’ve dearly struggled to get the required purity that would be needed to make good grids for imaging.

But once I have some “pure” protein, I need to add it to a grid to image it. A cryo-grid is a 1 millimeter by 1 millimeter circle about 1 micrometer thick. On that grid are cut out many 1 micrometer by 1 micrometer squares. And in each square are a mesh of 100 nanometer by 100 nanometer holes. When I add a tiny drop of my protein sample (which is in water) onto the grid, the hope is that the proteins will settle down into the holes. I will then “blot” the sample by pressing some paper onto both sides of the sample, which wicks away all the water not in the holes. I then instantly plunge the sample into liquid ethane, freezing all the liquid in the holes in an instant.

What you get is supposed to be a grid covered in a tiny thin layer of ice, and in each hole the ice contains your proteins of interest. Since they were flash frozen in ethane, the ice here is “vitreous,” which means glass-like. It’s see-through just like glass. And so a beam of electrons can pass into the ice to create an image of the proteins inside the ice.

But there are problems. Let’s get back to making the grid: most proteins are hydrophilic, which means water-loving. The opposite of hydrophilic is hydrophobic, which means water-hating, like oil. Oil and water don’t mix, and neither do hydrophobic and hydrophilic things. Our grids are made of copper covered in a layer of carbon, and that stuff is naturally hydrophobic, meaning it doesn’t interact well with the hydrophilic proteins (and the water they are in).

So before adding proteins we have to glow discharge our grids. This means putting them in a machine that shoots broken-up water molecules at them. Those broken-up water molecules have oxygen in them, and some of them will bind to the grid creating oxygen-containing compounds. Those compounds are very hydrophilic, so the whole grid becomes hydrophilic enough for the proteins to interact with it.

At some point we got a new glow discharger, and I swear that it started destroying my grids. Like I said the grids are tiny and fragile, 1 millimeter across, 1 micrometer thick! This glow discharger shoots water at them, and the new one shot the water so hard that it was punching through my grids and destroying them completely at the microscopic level. I couldn’t see the damage because it’s microscopic, but after adding the protein to my grids and flash-freezing them, I’d look at them under a microscope and see nothing but a completely destroyed grid. I finally just stopped trusting it completely and moved on to using a new glow discharger that’s a bit weaker.

So OK, I solved the glow discharge problem, but now here comes the ice problem. Like I said above, you want the proteins to be encased in glass-like vitreous ice. If you have no ice, well, you have no proteins. And if the ice is too thick, it’s no longer glass-like and you can’t see through it. I kept landing on both sides of those extremes: first I had ice so thick I couldn’t see anything, then I had no ice at all. You are supposed to manage this problem by configuring your blotting time, which is how long you wick away the water before plunging the grid into the liquid ethane. Shorter blot time, thicker ice; longer blot time, thinner ice or no ice at all. Try long and short times to get the ice just right.

And yet I was using ultra-short blot times and still getting thick and thin ice, seemingly at random. On balance I got more grids with no ice at all, so I kept thinking I needed to drop the blot time more and more. My adviser said that there is a minimum blot time of about 2 seconds and you never want to go lower than that, but I tried 2 seconds and the ice was still way too thin or non-existent. That seems to say that my blot time is still too long, yet 2 seconds is as short as I can go.

I finally asked an expert in the chemistry department, who suggested I use their facilities instead. He also suggested that 1 second of blot time is perfectly fine, and so that was what I did. I FINALLY seemed to start getting good grids, so let’s hope it holds out.

So I’ve struggled with glow discharging, and then blot times, as well as protein purity. I’ve finally got some good grids, and I hope I can collect a lot of data on them. If I do that, I may be able to get 3d structural information using AI and a whole bunch of analysis. We’ll see though, we’ll see.

Good idea: financially supporting workers displaced by AI. Bad idea: taxing companies for displacing workers with AI.

AI is again the topic of the day and people are discussing what to do about the coming “job-pocalypse.” It seems AI can do anything we humans can do, only better, and so 30% or more of jobs will be destroyed and replaced by AI. Leaving aside how accurate that prediction is, if 30% of all jobs will be impacted then it does warrant a public policy response. Everyone’s got their own personal favorite, but one I see come up again and again is that companies should face a hefty tax any time they replace a worker with AI.

To be blunt, taxing companies for replacing workers with AI is a terrible idea. Let’s leave aside the argument of “how do you prove it,” and cut straight to the fact that the government should not be taxing technological progress. Just to start with some history, how many farmers were displaced by tractors? Millions. In 1900, 40% of Westerners worked on farms; now it’s less than 5%. Tractors meant that a single farmer could do the labor of tens or hundreds of men, and so they could fire many of their farm hands and replace them with tractors. But does anyone reading this wish nearly half of us were still farmers? Should the government have heavily taxed tractors to preserve the idyllic rural farm life?

The argument in favor of taxing companies that replace workers with a machine is that the company is becoming more profitable at the expense of the worker, and should pay it back. The current hullabaloo is about being replaced by AI, but in the 20th century similar calls were made when factory workers were being replaced by robots. The problem with this argument is that it ignores society. The worker and the company are not the only two pieces of the equation; society in general benefits when companies become more efficient. Technology is deflationary, and it has allowed many products to drop in price, or at least not increase as rapidly as wages in general. Food today costs less as a percent of annual income than at nearly any time in history, and a large part of that is because the cost of food is decoupled from the cost of labor. So farm hands being replaced by tractors helped all of society by giving us cheaper food, and all of society would have been harmed if taxes had been instituted to prevent tractors from becoming commonplace.

Are the workers harmed when their jobs are replaced by AI? Yes of course. But society itself is helped and so all of society should bear the costs of helping the workers. We should of course offer unemployment benefits and job retraining to those affected. We should not let them go by the wayside the way we did to blue collar factory workers in the 20th century.

But neither should we shoot society in the foot by blocking technological progress that will help all of us. AI replacing jobs will mean products become cheaper relative to wages, just as happened with food. A lot of people also spread nonsense that unemployment will skyrocket as the displaced workers can’t find other jobs. They misunderstand economics: there will always be demand for labor. The price of some goods will decrease thanks to AI, which means people can buy more of those goods, or buy other goods that they had put off buying because money was tight and they were forced to choose. As prices fall, demand will rise, raising demand for labor in other areas, and a new equilibrium will be reached. Jobs lost to AI don’t mean the workers will be forever jobless, any more than the 35% of the population displaced by tractors meant that unemployment skyrocketed in the 20th century. Time and time and time again technology has replaced the jobs of workers, and the workers have found new jobs. It will happen again with AI.

The Lunar advantage

This post is gonna be weird and long.

I often have weird thoughts that I wish I could put into a book or story. My thought today is about comparative advantage. Comparative advantage is an economic concept that explains why people and countries can specialize into certain areas of work to become more efficient.

For example, in Iceland the cost of electricity is very low, which is why Iceland has attracted a lot of investment in industries that require lots of electricity, such as aluminum smelting. On the other hand countries like Bangladesh have a low cost of labor, which is why labor intensive activities such as clothing manufacturing invest there. It doesn’t make sense for a company to put an aluminum smelter in places where electricity is expensive, nor does it make sense to put a clothing factory where labor is expensive. Iceland and Bangladesh have their own comparative advantages at this moment in time, and that explains their patterns of industry.

Let’s imagine for a moment that there was a fully autonomous colony on the moon. People lived and worked there without needing to import air, water, or food from Earth. They can trade with Earth, but if Earth were cut off they could still make their own goods, just as if our country were cut off from the world we could still make our own food, drink our own water, breathe our own air. Let’s say they use super-future space technology to extract water and oxygen from moon rocks, and grow crops using moon soil.

If there were such a moon colony, we would assume there would be trade with Earth. Certainly the costs of moving goods from Earth to the moon and vice versa are enormous. But it was once unbelievably dangerous to cross the oceans, and people still did it because the profits were worth it. We would expect that the moon would have some comparative advantage compared to Earth and vice versa, which would make trade profitable. This comparative advantage is the same reason Iceland sells aluminum products to Bangladesh, which in turn sells Iceland clothing.

So with all this in mind, I assert that the moon’s comparative advantage would naturally be in large, heavy goods, but not because of the moon itself but because of the journey.

Let me give another example: suppose there is a factory on Earth making steel and a factory on the Moon making steel. Let’s also say the iron and carbon for the steel can be gotten just as easily on the Moon as on Earth. I assert that the one on the Moon has a comparative advantage because of space travel. Sending goods from the Earth to the Moon means spending a lot of energy climbing out of Earth’s deep gravity well and thick atmosphere, then spending still more energy to slow yourself down for a moon landing. By contrast it takes much less energy to accelerate off the atmosphere-less surface of the moon, and landing on Earth costs far less energy because you can use the atmosphere itself to brake your fall.
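The asymmetry can be put into rough numbers with the Tsiolkovsky rocket equation. The delta-v figures below are approximate textbook values (real missions vary with trajectory and vehicle), so treat this as an order-of-magnitude sketch rather than a mission plan:

```python
import math

# Approximate delta-v budgets in km/s (rough, illustrative figures):
EARTH_TO_MOON = 9.4 + 3.1 + 0.9 + 1.9  # launch to low Earth orbit,
                                        # trans-lunar injection, lunar orbit
                                        # insertion, powered lunar landing
MOON_TO_EARTH = 1.9 + 0.9              # lunar ascent, trans-Earth injection;
                                        # Earth's atmosphere then brakes the
                                        # landing essentially for free

def mass_ratio(delta_v, exhaust_velocity=4.4):
    """Tsiolkovsky rocket equation: initial mass per unit final mass.
    An exhaust velocity of 4.4 km/s is roughly a hydrogen/oxygen engine."""
    return math.exp(delta_v / exhaust_velocity)

print(f"Earth -> Moon: ~{mass_ratio(EARTH_TO_MOON):.0f} kg at liftoff per kg delivered")
print(f"Moon -> Earth: ~{mass_ratio(MOON_TO_EARTH):.1f} kg at liftoff per kg delivered")
```

Under these assumptions, delivering a kilogram of steel from Earth to the Moon takes on the order of ten times more liftoff mass than the reverse trip, which is exactly the asymmetry the argument rests on.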

So a moon steel factory can send packages of steel to the Earth at a rather low transport cost compared to vice versa. That gives an advantage to the moon steel factory: if there are shortages on Earth, the moon factory can fill them at a rather low cost, while Earth cannot do the same to fill a need on the moon. The transport costs are not symmetric, and they are in the moon’s favor. I would assert that, all else being equal, investment for steelmaking would flow into the moon and out of the Earth.

Of course the “all else being equal” is the rub. Air, water, and food are hard to come by on the moon. Iron and carbon might be easier but all the mining equipment is already here on Earth. We would have to do a lot of work and build a lot of technology to make a moon-base even possible. But in theory economies of scale and future-technology could make it possible and even economical. And at that point it might enter a virtuous cycle due to these asymmetrical transport costs I mentioned. It will always be cheaper to send goods from the moon to the Earth, than vice versa.

It’s just a random thought I’ve had and I want to put it in a work of fiction. In some sci-fi universe, a moon colony is economically sustained by this comparative advantage compared to Earth. But I’ve never gotten the courage to write this story so until now it’s just been an idle thought in my head.

The AI pause letter seems really dumb

I’m late to the party again, but a few months ago a letter began circulating requesting that AI development “pause” for at least 6 months. Separately, AI developers like Sam Altman have called for regulation of their own industry. These things are supposedly happening because of fears that AI development could get out of control and harm us, or even kill us all in the words of professional insanocrat Eliezer Yudkowsky, who went so far as to suggest we should bomb data centers to prevent the creation of a rogue AI.

To get my thoughts out there, this is nothing more than moat building and fear-mongering. Computers certainly opened up new avenues for crime and harm, but banning them or pausing development of semiconductors in the 80s would have been stupid and harmful. Lives were genuinely saved because computers made it possible for us to discover new drugs and cure diseases. The harm computers caused was overwhelmed by the good they brought, and I have yet to see any genuine argument made that AI will be different. Will it be easier to spread misinformation and steal identities? Maybe, but that was true of computers too. On the other hand the insane ramblings about how robots will kill us all seem to mostly amount to sci-fi nerds having watched a lot of Terminator and the Matrix and being unable to separate reality from fiction.

Instead, these pushes for regulation seem like moat-building of the highest order. The easiest way to maintain a monopoly or oligopoly is to build giant regulatory walls that ensure no one else can enter your market. I think it’s obvious Sam Altman doesn’t actually want any regulation that would threaten his own business; after all, he threatened to leave the EU over new regulation. Instead he wants the kind of regulation that is expensive to comply with but doesn’t actually prevent his company from doing anything it wants to do. He wants to create huge barriers to entry so he can continue developing his company without competition from new startups.

The letter to “pause” development also seems nakedly self-serving: one of the signatories was Elon Musk, and immediately after calling for said pause, Musk turned around and bought thousands of graphics cards to improve Twitter’s AI. It seems the pause in research should only apply to other people, so that Elon Musk has the chance to catch up. And I think that’s likely the case with most of the famous signatories of the pause letter: people who realize they’ve been blindsided and are scrambling to catch up.

Finally we have the “bomb data centers” crazies who are worried the Terminator, the Paperclip Maximizer, or Roko’s Basilisk will come to kill them. This viewpoint involves a lot of magical thinking, as it is never explained just how an AI will recursively improve itself to the point that it can escape the confinement of its server farm and kill us all. In fact, at times these folks have explicitly rejected any such speculation about how an AI could escape, asserting instead that it simply will and that speculation about the mechanism is meaningless. This is in contrast to more grounded end-of-the-world scenarios like climate change or nuclear proliferation, where there is a very clear through-line as to how these things could cause the end of humanity.

Like I said, I take this viewpoint the least seriously, but I want to end with my own speculation about Yudkowsky himself. Other members of his caucus have indeed demanded that AI research be halted, but I think Yudkowsky skipped straight to the “bomb data centers” point of view both because he’s desperate for attention and because he wants to shift the Overton Window.

Yudkowsky has in fact spent much of his adult life railing about the dangers of AI and how it will kill us all, and in this one moment where the rest of the world is at least amenable to fears of AI harm, they aren’t listening to him but are instead listening (quite reasonably) to the actual experts in the field like Sam Altman and other AI researchers. Yudkowsky wants to maintain the limelight, and the best way to do so is often to make the most over-the-top dramatic pronouncements in the hopes of getting picked up and spread by detractors, supporters, and people who just think he’s crazy.

Secondarily he would probably agree with AI regulation, but he doesn’t want that to be his public platform because he thinks that’s too reasonable. If some people are pushing for regulating AI and some people are against it, then the compromise from politicians who are trying to seem “reasonable” would be for a bit of light regulation which for him wouldn’t go far enough. Yudkowsky instead wants to make his platform something insanely outside the bounds of reasonableness, so that in order to “compromise” with him, you’ll have to meet him in the middle at a point that would include much more onerous AI regulation. He’s just taking an extreme position so he has something to negotiate away and still claim victory.

Personally? I don’t want any AI regulation. I can go to the store right now and buy any computer I want. I can go to a cafe and use the internet without giving away any of my real personal information. And I can download and install any program I want as long as I have the money and/or bandwidth. And that’s a good thing. Sure, I could buy a computer and use it to commit crimes, but that’s no reason to regulate who can buy computers or what type they can get, which is exactly what the AI regulators want to happen with AI. Computers are a net positive to society, and the crimes you can commit on them like fraud and theft were already crimes people committed before computers existed. Computers allow some people to be better criminals, so we prosecute those people when they commit crimes. But computers allow other people to cure cancer, so we don’t restrict who can have one and how powerful it can be. The same is true of AI. It’s a tool like any other, so let’s treat it like one.

A possible cure for Duchenne Muscular Dystrophy

Sarepta Therapeutics may soon have a cure for Duchenne Muscular Dystrophy (DMD). It’s called SRP-9001, and while I hesitate to say it’s a Dragonball Z reference, I’m not sure why else it has that number. Either way it’s an interesting piece of work, and I thought I’d write about what I know of it.

DMD is caused by a mutation in the protein dystrophin, a protein which is vital for keeping muscle fibers stiff and sound. Our muscles move because muscle fibers pull themselves together, which shortens them along an axis and therefore pulls together anything they are attached to. The muscle cell pulling on itself creates an incredible amount of force, and dystrophin is necessary to make sure that force doesn’t damage the muscle cell itself. When dystrophin is mutated in DMD, the muscle cells pulling on themselves begin to deform and destroy themselves, which leads to the characteristic wasting away of DMD sufferers. The expected lifespan of someone with DMD is only around 20-30 years.

Dystrophin is a massive protein: fully 0.1% of the human genome is made up of just the dystrophin gene. However, a number of the mutations which cause DMD are point mutations, mutations in a single DNA nucleotide. If just that one nucleotide could be fixed, in theory the disease could be cured. For a long time, genetic engineering efforts like CRISPR/Cas9 have targeted DMD for treatments based on this idea of fixing that one nucleotide.
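To illustrate how devastating a single wrong nucleotide can be, here’s a toy Python sketch. The sequence and the tiny codon table are made up for illustration (this is not real dystrophin DNA): one letter changing can turn an amino-acid codon into a premature stop codon, truncating the protein.

```python
# Toy codon table: just the codons this example uses.
# TGG codes for tryptophan; TGA is a stop codon -- one letter apart.
CODON_TABLE = {"GAA": "Glu", "TGG": "Trp", "TGA": "STOP", "AAA": "Lys"}

def translate(dna):
    """Translate codons in order until a stop codon (toy model)."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "STOP":
            break  # premature stop: the rest of the protein is never made
        protein.append(aa)
    return protein

healthy = "GAATGGAAA"  # Glu-Trp-Lys
mutant = "GAATGAAAA"   # single G->A change: Glu, then a premature STOP

print(translate(healthy))  # ['Glu', 'Trp', 'Lys']
print(translate(mutant))   # ['Glu']
```

A nonsense point mutation like this truncates the protein at the stop; fixing that one letter would restore the full sequence, which is the intuition behind the single-nucleotide-repair approaches.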

However, Sarepta seems to be working on an entirely new approach: deliver a new gene to the patient which can replace the functionality of the non-functional dystrophin. The gene encodes what is called micro-dystrophin, a protein less than half the length of true dystrophin that still contains some of the necessary domains, like the actin-binding domain. The size matters because of how genetic engineering in humans actually works (these days). How do you get a new gene into a human? Normally, you must use a virus. But the viruses of choice (like AAV) are so small that the complete dystrophin gene simply would not fit in them. Micro-dystrophin, being so much smaller, is needed in order to fit the treatment into a virus.
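The packaging constraint can be sketched with rough published figures (approximate numbers I’m supplying myself, not anything from Sarepta’s filings): AAV vectors package roughly 4.7 kilobases of DNA, the full dystrophin coding sequence runs about 11 kb, and micro-dystrophin constructs are on the order of 4 kb.

```python
# Approximate sizes in kilobases (rough published figures, not exact):
AAV_CAPACITY_KB = 4.7       # typical AAV packaging limit
DYSTROPHIN_CDS_KB = 11.0    # full dystrophin coding sequence
MICRO_DYSTROPHIN_KB = 4.0   # engineered mini-gene keeping the key domains

def fits_in_aav(gene_kb):
    """Can a gene of this size be packaged into an AAV vector?"""
    return gene_kb <= AAV_CAPACITY_KB

print(fits_in_aav(DYSTROPHIN_CDS_KB))    # False: full gene is >2x too big
print(fits_in_aav(MICRO_DYSTROPHIN_KB))  # True: micro-dystrophin fits
```

So the engineering problem isn’t just biology, it’s a payload budget: the gene had to be cut down by more than half while keeping the domains that do the structural work.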

So the idea would be that DMD patients cannot produce working dystrophin, but when SRP-9001 is given to them it would give them the genes to create micro-dystrophin for themselves. Then once their muscles begin creating this micro-dystrophin, it would spread throughout the muscle cell and take up the job of strengthening and stiffening the muscle cell just like normal dystrophin does. In this way the decay of their muscles would slow and hopefully they’d live much much longer.

SRP-9001’s road to FDA approval is not yet complete. They’ve run some nice clinical trials showing that their gene therapy does successfully deliver micro-dystrophin genes into patients, and that the patients then use those genes to produce the micro-dystrophin protein. However, as of right now they are still conducting Phase 3 clinical trials and awaiting an FDA decision on expedited approval. That approval won’t come until June 22nd at the earliest, but I believe it would make SRP-9001 the first FDA-approved gene therapy for DMD.

AI art killed art like video killed the radio star

Everyone knows the song “Video Killed the Radio Star” by the Buggles, it was one of the earliest big hits on MTV (back when it was still called Music Television). The song is pretty good, but it also speaks to a genuine fear and wonder about our world, that changing technology upends our social fabric and destroys our livelihood. The radio star who just wasn’t pretty enough for video, or couldn’t compete with the big production values of music videos, or just didn’t like dancing and being seen at all. That radio star is the Dickensian protagonist of the modern age, as they are tossed aside and replaced when new technology comes along.

This Luddite fear has persisted throughout history. The loom-smashing followers of Ned Ludd are only the most famous, but there were also silent-film actors who never made it in talkies, and photo-realistic painters who could never compete with a camera. John Henry died trying to beat a steam drill. In each case, an argument could be made that the new technology removed some important human element. The painters could claim that photography wasn’t “true art”. And the loom smashers too probably believed that their handcrafts were more “real” and more deserving of respect than the soulless cloth that replaced them.

So why is AI art any different? Why should we care about the modern Luddites who want to ban it or restrict it? I say we shouldn’t.

AI art steals from other artists to make its images

common argument

No more than any artist “steals” when they learn from the old masters. It is a grievous misunderstanding of how AI works to claim that it cuts and pastes from other images, and an AI training on a dataset of art is no different than an art student doing the same, whether in university or on their own. The counter-argument I’ve heard is “why are you ascribing rights to an AI that should only belong to humans! Yes, humans can learn from other art, but AI shouldn’t have the right to!” I’m not ascribing anything to AI: the person who coded the AI and the person who used the AI have the right to use any images they can find, just as an artist does. And just as the output of an artist learning from old masters is itself new art, so too is the output of coding or using an AI that has been trained on old works.

AI art is soulless

common argument

As soulless as loom-made fabric is compared to hand-made. Or as soulless as a photograph is compared to a hand-painted picture. Being made with a machine doesn’t detract from something for me, and I think only bias causes it to detract from others.

AI art takes money out of artists’ pockets, it should be banned to protect the workers’ paychecks

common argument

Why is the money of the workers more important than the money of the consumers? Loom-made fabric competes with hand-spun fabric; should we smash looms to keep the tailors’ wages up? Are we ok with having everything cost more because it would hurt someone’s business if they had to compete against a machine? The counter-argument I’ve seen to this is that the old jobs replaced by automation were all terrible drudgery and it’s good that they were replaced, whereas art is the highest of human expressions and should never be replaced. Again, I think this is presentism and a misreading of history. I’m sure there were tailors and seamstresses who thought sewing and making fabric was the absolute bomb, who loved their job and thought that their clothes had so much heart and soul that they were works of art in and of themselves. And I know there are artists in the modern day for whom most of their work is dull drudgery.

Thinking that your job and only your job is the highest form of human expression and should never be replaced, well, to me that just shows a clear lack of empathy towards everyone else on earth. No one’s job is safe from automation, but all of society reaps the benefits of automation. We can all now afford far more food, more clothing, more everything, since we started automating manual labor. Labor saving creates jobs, it doesn’t destroy them; it frees people to put their efforts towards other tasks. We need to make sure that the people who lose their jobs due to automation are still cared for by society, but we should not halt technological progress just to protect them. AI art allows creators and consumers to have more art available than they otherwise would. Game designers can whip up art far more quickly, and role-players can get a character portrait without having to pay, in the same way that the loom let us have far more clothing than we otherwise would.

AI art is always terrible

common argument

I find it funny that this often comes paired in internet discourse with “I’m constantly paranoid and wondering if the picture I’m looking at was made by AI or not.” There’s a very Umberto Eco-esque argument going on in anti-AI spaces. AI is both terrible and easily spotted, but also insidious and you never quite know if what you’re seeing is AI, and also everyone is now using AI art instead of “real” art.

If real art is better than AI art, wouldn’t there still be a market for it? There’s still a market for good food even though McDonald’s exists; if AI art is terrible and soulless, then it isn’t really a danger to anyone who can make good art themselves. And if AI art is always terrible, then why are so many people worried about whether the picture they’re seeing is AI-made or not? Shouldn’t it always be obvious?

This is very obviously an emotional argument. If you can convince someone that a picture was not made with AI, they’ll defend it. If you convince them it was made with AI, they’ll attack it.

This was a vague, disconnected rant, but I’ve become somewhat jaded about the AI arguments going around. I had thought that modern society had mostly grown out of Luddism. And to be frank, many of the people I see making anti-AI arguments are supposedly pro-science and pro-rationalism. But it seems that ideology only works so long as their “tribe” is never threatened.