Gene drives and tooth-decay bacteria

One piece of sci-fi technology that doesn’t get much talk these days is gene drives. When I was an up-and-coming biology student, they were the subject of every seminar and the case study of every class, and they were going to eliminate malaria worldwide.

Now though, you hardly hear a peep about them. And I don’t think, like some of my peers, that this is because anti-technology forces have cowed scientists and policy-makers into silence. I don’t see any evidence that gene drives are quietly succeeding in every test, or that they are being held back by Greenpeace or other anti-GMO groups.

I just think gene drives haven’t lived up to the hype.

Let me step back a bit: what *is* a gene drive? A gene drive is a way to manipulate the genes of an entire species. If you modify the genes of a single organism, each of its offspring has at most a 50% chance of inheriting that modification. Unless your modification confers a big fitness advantage, there is no way to make every one of the organism’s descendants carry it.

But a gene drive can do just that. In fact, a gene drive can confer an evolutionary disadvantage on an organism, and you can still guarantee all of the organism’s descendants will have that gene. The biggest use-case for gene drives is mosquitoes. You can give mosquitoes a gene that prevents them from sucking human blood, but since this confers an evolutionary disadvantage, your gene won’t last many generations before evolution weeds it out.

But if you put your gene in a gene drive, you can in theory release a population of mosquitoes carrying this gene and ensure all of their descendants have the gene and thus won’t attack humans. In a few generations, a significant fraction of all mosquitoes will have this gene, preventing mosquito bites as well as the whole host of diseases mosquitoes carry.
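
To make the inheritance math concrete, here’s a toy simulation (my own illustrative sketch in Python, with made-up numbers, not taken from any actual gene-drive model): ordinary Mendelian inheritance leaves the gene’s frequency where it started, while a homing drive that converts heterozygotes pushes it toward 100%.

```python
# Toy model: frequency of an introduced gene over generations,
# with and without a homing gene drive. Purely illustrative numbers.

def next_freq(p, homing=0.0):
    """One generation of random mating.

    Without homing, heterozygotes pass the gene to 50% of gametes
    (Hardy-Weinberg: the frequency stays put). With homing efficiency
    `homing`, a heterozygote converts that fraction of its wild-type
    copies, so it transmits the gene to (1 + homing) / 2 of gametes.
    """
    q = 1 - p
    return p * p + 2 * p * q * (1 + homing) / 2

p_plain, p_drive = 0.05, 0.05   # release modified mosquitoes at 5% of the population
for gen in range(1, 11):
    p_plain = next_freq(p_plain, homing=0.0)
    p_drive = next_freq(p_drive, homing=0.9)   # 90% homing efficiency (assumed)
    print(f"gen {gen:2d}: no drive {p_plain:.2f}   drive {p_drive:.2f}")
```

With no drive the gene just sits at its release frequency; with 90% homing it reaches essentially the whole population within about ten generations.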

Now this is a lot of genetic “playing God,” and I’m sure Greenpeace isn’t happy about it. But environmentalist backlash has never managed to stamp out 100% of genetic technology. CRISPR therapies and CRISPR-edited organisms are on the rise, and GMO crops are still planted worldwide; environmentalists may slow progress, but they cannot stop it.

But talk about gene drives *has* slowed considerably and I think it’s because they just don’t work as advertised.

See, to be effective a gene drive requires an evolutionary contradiction: it must reduce an organism’s fitness but still be passed on to its progeny. Mosquitoes don’t bite humans just for fun; we are among the most common large mammals in the world, and our blood is rich in nutrients. For female mosquitoes, biting us provides the protein they need to produce eggs. So if you create a gene drive that knocks out this necessity, you are making the mosquitoes that carry your gene drive less evolutionarily fit.

And gene drives are not perfect. The gene they carry can mutate, and even if redundancy is built in, that only means more mutations are needed to overcome the drive. You can make those mutations more and more improbable, but you cannot prevent them forever. So when you introduce a gene drive, hoping that all the progeny will carry this gene that stops mosquitoes from biting humans, eventually one lucky mosquito will be born that is resistant to the drive’s effects. It will have an evolutionary advantage because it *will* bite humans, and so, like antibiotic-resistant bacteria, it will grow and multiply as the mosquitoes that still carry the gene drive are outcompeted and die off.
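
Here’s the same toy model extended with that one lucky mosquito (again just an illustrative sketch with assumed numbers): once a resistant lineage exists, ordinary selection does the rest.

```python
# Continuing the toy model: suppose the drive has pushed the "no biting"
# gene to near-fixation, but it costs its carriers some fitness, and a
# rare mutant appears that the drive can no longer convert. Simple
# haploid-style selection with illustrative numbers only.

drive_freq, resistant_freq = 0.999, 0.001   # one lucky resistant mosquito
cost = 0.2                                  # assumed 20% fitness cost of not biting humans

for gen in range(1, 51):
    w_drive, w_res = 1 - cost, 1.0          # relative fitness each generation
    total = drive_freq * w_drive + resistant_freq * w_res
    drive_freq = drive_freq * w_drive / total
    resistant_freq = resistant_freq * w_res / total
    if gen % 10 == 0:
        print(f"gen {gen}: drive carriers {drive_freq:.3f}, resistant {resistant_freq:.3f}")
```

Even starting at one in a thousand, the resistant line takes over within a few dozen generations; the exact speed depends entirely on the assumed fitness cost.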

Antibiotics did not rid the world of bacteria, and gene drives cannot rid the world of mosquitoes. Evolution is not so easily overcome.

I tell this story in part to tell you another story. Social media was abuzz recently thanks to a guerrilla marketing campaign for a bacterium that is supposed to prevent tooth decay. The science can be read about here, but I was first alerted to this campaign by stories of an influencer who supposedly received the bacterium herself and then pledged to pass it on to others by kissing them. Bacteria can indeed be passed by kissing, by the way.

But like gene drives, this bacterium doesn’t seem workable in the context of evolution. Tooth decay happens because certain bacteria colonize our mouths and produce acidic byproducts that break down our enamel. Like mosquitoes, they do not do this just for fun. The bacteria do it because it is the most efficient way to get rid of their waste.

The genetically modified bacterium is engineered not to produce any acidic byproducts, so if you colonized someone’s mouth with this good bacterium instead of the bad bacteria, their enamel would never be broken down by acid. But the good bacterium cannot just live in harmony and contentment; life is a war for resources, and this good bacterium will be fighting with one hand tied behind its back.

Any time you come into contact with the bad bacteria, they will likely outcompete the good bacterium, because it’s more efficient to just dispose of your waste haphazardly than to wrap it in a nice, non-acidic bundle first. Very quickly the good bacterium will die off and once again be replaced by bad bacteria.

So I’m quite certain this little marketing campaign will quietly die once it’s shown the bacterium doesn’t really do anything. And since I’ve read that there aren’t even any peer-reviewed studies backing up this work, I’m even more certain of its swift demise.

Biology has brought us wonders, and we have indeed removed certain disease scourges from our world. Smallpox, rinderpest, and hopefully polio very soon: it is possible to remove pests from our world. But it takes a lot more work than simply releasing some mosquitoes or kissing someone with the right bacteria. And that’s because evolution is working against you every step of the way.

Buying a desktop in 2023

I bought my last desktop in 2014.  It was a very high-end machine at the time, and while I’ve had several new laptops since then, the desktop long remained the workhorse of my gaming setup.  But with the recent AI craze, I found that my desktop didn’t have enough power to run stable-diffusion (the AI art program) or even GPT4All (an open-source ChatGPT alternative).

So I decided to finally get a new desktop, and it was harder than expected.  I bought my 2014 desktop at Fry’s Electronics, which went under during the pandemic.  With them gone, the only computer stores nearby are a fleet of Best Buys.  Best Buy isn’t bad, but I’ll warn you that it won’t come across well in this story.

When I went to Best Buy for a new computer, I only knew I wanted a machine powerful enough to run stable-diffusion.  And I figured that in this day and age, maybe I don’t need a desktop to do the most powerful computing.  Desktops seem like dinosaurs these days, most of my coworkers only have laptops or tablets.  I even know some people whose only computer is their phone.  So maybe I just need a top-end laptop to do what I want? 

But looking for laptops in Best Buy felt like trawling a souk for antiquities.  There was a huge language barrier, and no one seemed to know what I wanted.

I did some homework online, and it turns out that AIs don’t just need a powerful graphics card, they need a very special type of card: an NVIDIA card with a lot of VRAM.  NVIDIA is needed because only its cards have “CUDA,” which is what makes the AIs go.  CUDA is NVIDIA’s suite of on-card libraries for complex math and parallel computing.  I know the AMD stans will tell me that there are libraries to run stable-diffusion on AMD, but installing stable-diffusion is already a pain, and trying to install CUDA work-arounds using barely-commented GitHub files is too much work for a simple hobby.
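
For what it’s worth, the check I trust more than any shelf tag is a few lines of Python with PyTorch installed (just a sketch, and it assumes the machine will actually let you install Python and PyTorch, which a store demo unit usually won’t):

```python
# Quick check: does this machine have a CUDA-capable NVIDIA card,
# and how much dedicated VRAM does it report? Requires PyTorch.
import torch

if not torch.cuda.is_available():
    print("No usable CUDA device found (AMD/Intel graphics, or missing drivers).")
else:
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {vram_gb:.1f} GB dedicated VRAM")
```

If that prints “no usable CUDA device” or a tiny VRAM number, the machine isn’t going to run stable-diffusion comfortably, whatever the tag says.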

And in addition to an NVIDIA card, you also need the card to have plenty of VRAM.  VRAM stands for video RAM, and it’s needed to let graphics cards work their best.  How it was explained to me is that your PC and your graphics card are like 2 major cities connected by a single dirt path.  Each city has its own big highway system, so moving data within them is quick and easy, but moving data between them is slooooooooooooooooow.  So modern cards use VRAM, which is like a data warehouse for GPU-land.

This is important because GPU-land is the part of the computer specialized for complex math.  In the old days, the demand for math processing was primarily driven by video games, which needed to calculate position and momentum of thousands of characters and particles across 3D space.  This is why GPUs are most associated with video games, but recently crypto-mining and AI have also emerged as major drivers of GPU demand since they have their own high-end math requirements.

Without enough VRAM, every time the GPU does a calculation it has to store its answer in the main system memory, then ask for that answer back if it needs it for the next calculation. It goes sort of like this:

The computer says: “What’s the square root of 2+7 over 77+23?”

The GPU says “OK 2+7 is 9.  Now what was in the denominator?”

Computer: “77+23”

GPU: “OK 77+23 is 100.  Now what was in the numerator?”

Computer: “Well, you just told me 2+7 was 9”

GPU: “OK 9/100 is 0.09.  Is that all you wanted?”

Computer: “You forgot to square-root it”

GPU: “OK, the square root of 0.09 is 0.3”

Computer: “Did you say 0.3000000000000000004?  Sounds right to me”

GPU: “Don’t forget to check for floating point errors.  See you next time!”

That’s a lot of cars going back and forth along the dirt road, and it makes for slow computing.  But with plenty of VRAM, the GPU can store all its answers locally and only talks to the computer when it’s finished calculating.  This clears a hell of a lot of traffic off the road, and without enough VRAM most modern AIs just don’t work.
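
If you want to see the dirt road for yourself, here’s a rough benchmark sketch in Python with PyTorch (it assumes a CUDA card, and the exact numbers will vary wildly by machine): one version ships every intermediate result back to system RAM, the other keeps everything in VRAM.

```python
# The "dirt road" in code: round-tripping intermediate results between
# system RAM and the GPU, versus keeping them in VRAM the whole time.
# Requires PyTorch and a CUDA-capable card; timings vary by machine.
import time
import torch

x = torch.randn(4096, 4096, device="cuda")
_ = x @ x                                 # warm-up so setup cost isn't timed
torch.cuda.synchronize()

def round_trip(steps=50):
    y = x
    for _ in range(steps):
        y = ((y @ x) / 4096**0.5).cpu()   # ship the result down the dirt road...
        y = y.to("cuda")                  # ...and haul it back for the next step
    torch.cuda.synchronize()

def stay_in_vram(steps=50):
    y = x
    for _ in range(steps):
        y = (y @ x) / 4096**0.5           # result never leaves the card
    torch.cuda.synchronize()

# (the division by 4096**0.5 just keeps the numbers from overflowing)
for fn in (round_trip, stay_in_vram):
    start = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f} seconds")
```

The stay-in-VRAM version should win by a wide margin, which is exactly why “shared memory” is a poor substitute for the real thing.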

So I knew I wanted a lot of VRAM, and the internet told me 16GB was a good number.  I also knew I needed an NVIDIA graphics card.  But finding all that at Best Buy was an exercise in frustration.  

I would walk up to a computer to check its specs.  The tag says it has an NVIDIA card and 16GB of RAM.  Is that 16GB the system memory or the card’s VRAM?  The tag doesn’t say.  It also lists a 512GB solid state drive for storage.  So if that 16GB really is VRAM, this is exactly what I want, right?  But on closer inspection of the actual computer and not the tag, it says it has an Intel graphics chip.  It seems this model of laptop can come with either Intel or NVIDIA graphics, and while the tag says NVIDIA, the computer itself says Intel.  So this is not what I want.

The next computer over does say NVIDIA, and it’s got a whole terabyte of storage.  It still says 16GB RAM, so I guess it’s a buy, right?  Well, dxdiag is a simple Windows command that tells you the computer’s specs, and I run it on this computer just to check.  It turns out that the 16GB is really 6GB of display memory and 8GB of shared memory.  I guess Best Buy uses base 8 math where 6+8=16.  That would explain their prices, but 6+8 isn’t what I’m looking for.

Even worse, I do some searching and find that only display memory is “true” VRAM.  The 8GB of shared memory is actually just normal RAM that is “reserved” for the graphics card.  Using the analogy from above, it’s like the GPU city owns a warehouse in the Computer city, so when it has too much data it can offload it there for pickup later.  The problem is that to move that data it still has to go back and forth down the dirt path between the two cities, which means it’s still very slow.  So for my purposes, 6+8=0.

But here’s the thing: I’m not an expert, so I don’t know if “display memory” really is the same thing as “VRAM.”  I’m only assuming it is.  But maybe I’m wrong and the VRAM is listed elsewhere?  I flag down a Best Buy employee and ask him what display memory actually is.  He tells me, “Oh, it makes the graphics card go faster, but it doesn’t make it more powerful.”  That’s incredibly generic, so I ask him if “display memory” is the same as VRAM.  He says, “I think kinda, yeah,” and at that point I realize he doesn’t know any more than I do, so I thank him for his time and leave.

I need true VRAM, so now I just start running dxdiag on every computer on the floor.  I find that all of them are set up like the 6+8 laptop, and none of them have much “true” VRAM.  Looking online, it also seems like NVIDIA has sneakily given its laptop cards the same names as its desktop cards despite the laptop cards having much lower specs.  I knew the 4070 and 3060 were “good” NVIDIA cards, but the laptop versions are paltry imitations of the real thing and not good enough for AI.  So it turns out I do need a desktop.

OK, well, I’m still at Best Buy, so I wander over to their desktop area.  I no longer trust tags, so I just run dxdiag on anything I see.  And there I seem to strike the motherlode: 24GB of display memory, holy crap that’s a lot of VRAM!!  Oh, it’s an AMD card.  Well, AMD may be cheaper and have way more VRAM, but it doesn’t have the CUDA, so it’s a no-go.

I finally go over to Geek Squad, Best Buy’s in-house specialists, and ask if they do build-a-desktop services.  It turns out no, that’s a service they discontinued a long time ago.  I can buy parts to build it myself, but Best Buy can’t build it for me.  I asked who could build me a computer, and every member of Geek Squad, plus a randomly patrolling employee, told me to try Micro Center instead.  So I headed there.

Micro Center was the exact opposite of Best Buy.  As soon as I started looking at graphics cards an employee came up to ask if I had any questions.  I asked him my questions about VRAM and display memory and he was able to point me to a specific card that had plenty of VRAM and which he told me was very good for AI.  He also gave me ideas of other cards I could buy if I wanted to move up or down in power and price, and when I finally settled on which card to buy, he then offered to pick out every part I needed for a computer and put them together for me. 

This was exactly what I needed, a build-a-desktop service with an expert who could actually help me buy something.  We went over all the parts, and I made whatever changes I wanted from what he suggested.  Then 2 days later I had a desktop built for just $2,000.  That may seem like a lot, but laptops with way less power were selling for $1,800, and the only laptop that seemed even capable of doing what I wanted had a $2,500 price tag.  I only just got the desktop back to my house, so I still have a few weeks before I find all the things I hate about it, but I’m already liking Micro Center a lot more than Best Buy.

Overall, buying a computer in 2023 is still as overcomplicated a mess as it’s always been.  If you just need to write emails to your grandkids, Best Buy has $180 laptops that will probably do you fine.  But if you want the kind of power needed to play modern games and do modern activities, trying to parse all the various GPUs with their CUDAs and VRAMs and so on is way more of a hassle than it should be.

I wish more computer sellers were knowledgeable about what they’re selling.  I don’t need all of them to be experts in AI hardware, but if they could at least tell me what all the parts mean, I’d have been a lot happier.  Shouldn’t a car salesman be able to explain miles-per-gallon and what a hybrid is?  As it stands, I was dumbstruck by how helpless most salesfolks were, and how little the GPU business has changed in decades.  In 2008 the late Shamus Young wrote an article complaining about how confusing it was to buy a graphics card, and nothing has gotten better since then.

Maybe someday I can ask an AI what kind of graphics card I need to run it.  Then ask the AI to build it, and maybe ask the AI to install itself on there for me.  Some people are scared of AI, but I think if Skynet ever does become self-aware and try to self-replicate, just reading its own hardware requirements will give it enough of an aneurysm to drop it back down to pre-sentience.  Until then, I can’t say I’m looking forward to doing all this again in a few years’ time.

Choosing your facts based on your beliefs; everyone believes they are the rational one

It’s very common and very well-known that people will, to an extent, choose their facts to fit their beliefs. But for many people, the facts they choose aren’t even well-founded in the first place.

If you are a conservative, you probably prefer generally lower taxes, and you can find well-credentialed economists who prefer lower taxes and lower spending over higher taxes and higher spending. Likewise a liberal or leftist can find economists who support higher taxes and higher spending. The issue is not “settled,” and as with most things in economics (besides rent control, which is near-universally agreed to be bad) there are voices on either side.

But there are some things that are uncontroversially accepted as true by all the experts in the field, and yet some people argue against them anyway.

When I was in school, I remember a debate about teaching evolution. To cut to the chase, many Christians (not all, by a long shot) have thought that evolution undermines their religion, and no matter how much evidence there is for it, these Christians will choose facts to fit their beliefs. That includes denying evolution, but also denying the fossil record (which supports evolution) and the age of the Earth (which supports evolution). It sometimes means denying modern microbiology and cancer biology (which are evolution in action). It’s fairly well-known to anyone outside those circles that this is a Dumb Thing To Do, and that picking your facts based on your beliefs just leaves you looking stupid.

But then I found that while the Christians do it, the anti-Christians do it too.

Let’s be clear: some atheists are just people who don’t believe in God. That’s fine, everyone has their beliefs. But some atheists are better termed anti-theists: they are people who oppose religion and its existence entirely. And it is these atheists who have constructed their own equivalents of “Intelligent Design” to support their ideas. Often these theories try to prove that Christianity is not only false, but that it is a complete con from start to finish and that no one truly believes in it anyway.

The Atheist version of Intelligent Design is the “Jesus Myth Theory.” This is the idea that not only was Jesus just a mortal man (not the son of God), but that there was never even a person called Jesus at all, and that this is proof that Christianity was an invented scam. To be blunt, this idea has no more credibility than Intelligent Design, but so-called rational atheists who turn up their noses at the stupid Christians with their stupid Intelligent Design will still believe it, because they have chosen their facts based on their beliefs. I may write a post later about the evidence for Jesus’ existence, but the point I’m trying to make is that even communities which are adamant about their own rationality can wind up being suckered into myths just because those myths agree with what they want to believe.

Let’s get one thing straight: EVERYONE believes that they’re rational. Everyone believes that their opinions are backed by evidence, backed by science, fundamentally true, and that only the dumb and misled would ever believe something different. That’s what makes the self-professed “Rationalist” community so misguided: claiming you’re the only community focused on rational beliefs is just admitting that you’ve never spoken to a community different from your own.

EVERY community believes they are the rational ones, believes they are driven by facts and not emotions, believes that the others are ignoring facts to suit their opinions. And the Rationalist community has its own Intelligent Design theories, just as the Atheist and the Christian communities do. A good Rationalist, Atheist, or Christian should of course never believe something just because their compatriots believe it, or just because it would support some part of their ideology, but a good Rationalist, Atheist, or Christian must also recognize that they probably have biases themselves and that their own community probably harbors an “Intelligent Design” theory all its own.

In the hallowed halls of Twitter and social media it’s widely believed that only the Left of the political spectrum knows and respects science, and that all right-wing beliefs are obviously false and disproven by data. The exact inverse is believed on the Right. I know both communities are havens of their own misinformation. I have seen too many on the Left tell me that supply and demand don’t exist, that building more housing doesn’t lower rents and costs, and that inflation is driven only by corporate greed and not by supply or demand. I have likewise seen the misinformation on the Right over gun deaths, drug crime, vaccines, and the like. I’m sure some of my own beliefs are misinformation, but we are all the heroes of our own stories, and so self-reflection is very hard.

But I just wrote this post because, even if I’m only screaming into the void, I wanted to remind people that everyone thinks they are rational. Your political enemies, who you consider irrational and emotional idiots, are human just like you, and they arrived at their beliefs through the exact same human mechanisms you did. Are you sure anything and everything you believe is true? Are you sure there could never be any evidence that supports your opponents? Don’t dismiss people as idiots just because they believe something else; most humans are just as rational as you.

Crying over Cryo-EM

OK so the title is hyperbole, but I’ve definitely struggled recently with my cryo-electron microscopy. I guess here I’ll give an overview of what exactly electron microscopy is and why I’ve struggled.

Professor Jensen of Caltech has a great series of videos on cryo-EM: why we use it, how we use it, and what it is. Anyone interested in the technology should watch them, but for my own purposes:

  • Cryo-electron microscopy consists of freezing a sample and then shooting electrons at it to see its 3D structure down to near-atomic scales.
  • We’re using it to study a number of proteins that cause diseases. In particular, we want to know how a certain protein’s 3D shape creates that protein’s function, and how that function can then go on to cause a disease.
  • So we purify a specific protein, make a cryo-grid from that purified protein, and then look at that cryo-grid under the electron microscope, hoping to get a good 3D structure.

But that’s where the problems start. First of all, purifying a protein to 99.9% purity is no small feat, especially when you’re taking proteins out of actual patient samples. I’ve dearly struggled to reach the purity needed to make good grids for imaging.

But once I have some “pure” protein, I need to add it to a grid to image it. A cryo-grid is a circle about 1 millimeter across and about 1 micrometer thick. Cut into that grid are many 1 micrometer by 1 micrometer squares, and in each square is a mesh of 100 nanometer by 100 nanometer holes. When I add a tiny drop of my protein sample (which is in water) onto the grid, the hope is that the proteins will settle down into the holes. I then “blot” the sample by pressing paper onto both sides of it, which wicks away all the water not in the holes, and immediately plunge it into liquid ethane, freezing the liquid in the holes in an instant.

What you get is supposed to be a grid covered in a thin layer of ice, with your proteins of interest trapped in the ice in each hole. Since the sample was flash-frozen in ethane, the ice is “vitreous,” which means glass-like: it’s see-through, just like glass. And so a beam of electrons can pass through the ice to create an image of the proteins inside it.

But there are problems. Let’s get back to making the grid: most proteins are hydrophilic, which means water-loving. The opposite of hydrophilic is hydrophobic, which means water-hating, like oil. Oil and water don’t mix, and neither do hydrophobic and hydrophilic things. Our grids are made of copper covered in a layer of carbon, and that surface is naturally hydrophobic, meaning it doesn’t interact well with the hydrophilic proteins (and the water they are in).

So before adding proteins we have to glow discharge our grids. This means putting them in a machine that shoots broken-up water molecules at them. Those broken-up water molecules have oxygen in them, and some of them will bind to the grid creating oxygen-containing compounds. Those compounds are very hydrophilic, so the whole grid becomes hydrophilic enough for the proteins to interact with it.

At some point we got a new glow discharger, and I swear it started destroying my grids. Like I said, the grids are tiny and fragile: 1 millimeter across, 1 micrometer thick! A glow discharger shoots those broken-up water molecules at the grids, and the new one shot them so hard that it was punching through my grids and wrecking them at the microscopic level. I couldn’t see the damage because it’s microscopic, but after adding protein to my grids and flash-freezing them, I’d look at them under the microscope and see nothing but a completely destroyed grid. I finally stopped trusting that machine and moved to a different, somewhat gentler glow discharger.

So OK, I solved the glow discharge problem, but now here comes the ice problem. Like I said above, you want the proteins to be encased in glass-like vitreous ice. If you have no ice, well, you have no proteins. And if the ice is too thick, it’s no longer glass-like and you can’t see through it. I kept landing on both sides of those extremes: first I had ice so thick I couldn’t see anything, then I had no ice at all. You are supposed to manage this by adjusting your blotting time, which is how long you wick away the water before plunging the grid into the liquid ethane. A shorter blot time gives thicker ice; a longer blot time gives thinner ice or no ice at all. You try longer and shorter times until the ice is just right.

And yet I was using ultra-short blot times and still getting thick and thin ice, seemingly at random. On balance I got more grids with no ice at all, so I kept thinking I needed to drop the blot time more and more. My adviser said that there is a minimum blot time of about 2 seconds and you never want to go lower than that, but I tried 2 seconds and the ice was still way too thin or non-existent. That seems to say my blot time is still too long, yet 2 seconds is as short as I can go.

I finally asked an expert in the chemistry department, who suggested I use their facilities instead. He also said that 1 second of blot time is perfectly fine, and so that is what I did. I FINALLY seemed to start getting good grids, so let’s hope it holds out.

So I’ve struggled with glow discharging, then blot times, as well as protein purity. I’ve finally got some good grids, and I hope I can collect a lot of data on them. If I do, I may be able to get 3D structural information using AI and a whole bunch of analysis. We’ll see though, we’ll see.

Quantum Computers: Hype or Hopeless?

I’m not an expert on quantum computers by any means, but I do like to blog about things I know a little about. So bear with me.

One of the funnest seminars I ever attended wasn’t even in my major. I’m a biologist by trade, but occasionally in grad school I’d wander over to the physics department to eat their pizza and have a gander at the science they were touting (they occasionally came for our pizza too, so fair’s fair). On one occasion, the CEO of a quantum computing (QC) company (formerly a professor of physics) came by to talk about the exciting new happenings in QC and give us some history.

At that point I had only a surface understanding of QC, so I had a lot of fun learning how his company’s QC worked and the challenges he’d overcome. He also had a great sense of humor and a fun presentation. At the end he took lots of questions, and I was impressed by how little he sugarcoated or overhyped anything. He was very open and honest that the field still had work to do and that working QCs weren’t just around the corner, whereas reading news articles you’d think they were 5 years away at most.

When it came time for my question, I asked the only thing that I, a biologist, could think of: “When will we have logical qubits?”

To back up a bit: just as classical computers store information in bits, QCs store information in qubits. Classical bits are binary, existing as either 1 or 0. Qubits are quantum mechanical in nature; they exist in a superposition of states, so that every time you measure one you get 1 or 0 with some probability. This superposition is why QCs can do all those amazing things that people talk about, like breaking encryption or what have you.
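
The usual way to write this is |ψ⟩ = α|0⟩ + β|1⟩, where the amplitudes α and β set the measurement probabilities. Here’s a toy numpy sketch of just that one idea (measurement statistics only; it ignores everything else that makes real qubits interesting):

```python
# A single qubit as two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring gives 0 with probability |alpha|^2
# and 1 with probability |beta|^2, and destroys the superposition.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)    # an equal superposition

p_zero = abs(alpha) ** 2
shots = rng.random(10_000) < p_zero              # True means "measured 0"
print(f"P(0) = {p_zero:.2f}; observed {shots.mean():.3f} over 10,000 measurements")
```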

But this superposition is also highly unstable. Any interaction with the outside world will destroy it, rendering the qubit useless. This has long been the bane of QC companies and researchers: how do you make a qubit that doesn’t fall apart before you can actually use it? When the superposition falls apart, it leads to an error, and error correction in QC is the collection of tricks researchers use to either keep the superposition stable or restore the information when it fails.

The holy grail of error correction is the “logical qubit.” A single qubit can of course fall apart at any time, so it is a poor store of information. But what if many qubits could be networked together in some way, such that if one fails, the others correct it back to its previous value? Together, all these qubits would allow the information to be held indefinitely, even if the superposition in one or many of the individual qubits fails. And so together these qubits would act as a single “logical qubit”: a stable qubit that reliably holds information, as opposed to the physical qubits that fall apart when you look at them funny.
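
The real quantum schemes (surface codes and the like) are far beyond anything I can sketch here, but the flavor of the payoff can be shown with the classical analogue: a repetition code with majority voting. This is only a caricature (you cannot simply copy quantum states), but it shows why more physical units can mean fewer logical errors.

```python
# A classical caricature of the "logical qubit" idea: store one logical
# bit in many noisy physical bits and recover it by majority vote.
# Real quantum error correction is far more subtle, but the payoff is
# the same: more physical units, fewer logical errors.
import random

def logical_error_rate(n_physical, p_flip, trials=100_000):
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p_flip for _ in range(n_physical))
        if flips > n_physical // 2:          # majority vote gives the wrong answer
            errors += 1
    return errors / trials

for n in (1, 3, 7, 15):
    print(f"{n:2d} physical bits -> logical error rate {logical_error_rate(n, 0.05):.5f}")
```

With a 5% error rate per physical bit, the majority vote over 15 bits almost never returns the wrong answer; quantum error correction aims at that same trade, just with vastly more machinery per logical qubit.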

It has been theorized that a thousand or more physical qubits will be needed to make a single logical qubit, and the technology for networking qubits into logical qubits is still not fully formed. So when I asked the seminar speaker how far off logical qubits were, he humbly said that they may not even be possible. In his view, quantum computers may be useful, but their utility and longevity will always be undercut by the fragility of the qubit superposition.

I was kind of stunned, because in my readings on the field it was taken as writ that as soon as you can produce one qubit, you can scale up and produce thousands, and once you produce thousands, you have logical qubits which will make all our QC programs work perfectly.

What’s interesting is that IBM is now saying it will make a QC with over 1,000 qubits, which is around the number supposedly needed to make one logical qubit. Yet by and large I haven’t seen much talk about taking our first crack at producing a logical qubit.

So again I’d like to ask the question: how long until we have a logical qubit? If qubits will always be unstable superpositions, then I doubt a mass-market consumer QC will ever be workable. And while the hype for logical qubits seemed ever-present when they were still a far-off dream, it seems to have subsided as they get closer to being tested for validity. I wonder if they were always nothing but hype.

The Lunar advantage

This post is gonna be weird and long.

I often have weird thoughts that I wish I could put into a book or story. My thought today is about comparative advantage. Comparative advantage is an economic concept that explains why people and countries specialize in certain areas of work to become more efficient.

For example, in Iceland the cost of electricity is very low, which is why Iceland has attracted a lot of investment in industries that require lots of electricity, such as aluminum smelting. On the other hand, countries like Bangladesh have a low cost of labor, which is why labor-intensive industries such as clothing manufacturing invest there. It doesn’t make sense to put an aluminum smelter where electricity is expensive, nor does it make sense to put a clothing factory where labor is expensive. Iceland and Bangladesh have their own comparative advantages at this moment in time, and that explains their patterns of industry.

Let’s imagine for a moment that there was a fully self-sufficient colony on the Moon. People live and work there without needing to import air, water, or food from Earth. They can trade with Earth, but if Earth were cut off they could still make their own goods, just as our country, if cut off from the world, could still grow its own food, drink its own water, and breathe its own air. Let’s say they use super-future space technology to extract water and oxygen from Moon rocks and grow crops in lunar soil.

If there were such a Moon colony, we would expect there to be trade with Earth. Certainly the costs of moving goods from Earth to the Moon and vice versa are enormous. But it was once unbelievably dangerous to cross the oceans, and people still did it because the profits were worth it. We would expect the Moon to have some comparative advantage relative to Earth and vice versa, which would make trade profitable. This comparative advantage is the same reason Iceland sells aluminum products to Bangladesh, which in turn sells Iceland clothing.

So with all this in mind, I assert that the Moon’s comparative advantage would naturally be in large, heavy goods, not because of the Moon itself but because of the journey.

Let me give another example: suppose there is a factory on Earth making steel and a factory on the Moon making steel. Let’s also say the iron and carbon for the steel can be gotten just as easily on the Moon as on Earth. I assert that the one on the Moon has a comparative advantage because of space travel. Sending goods from Earth to the Moon means spending a lot of energy climbing out of Earth’s deep gravity well and thick atmosphere, then spending more energy to slow down for a Moon landing. By contrast, it takes much less energy to launch off the airless, low-gravity surface of the Moon, and landing on Earth costs far less energy because you can use the atmosphere itself to brake your fall.
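
A rough back-of-the-envelope calculation (textbook constants only, and ignoring everything that makes real rocketry expensive, like hauling your own fuel and fighting drag) shows how lopsided the two gravity wells are:

```python
# Back-of-the-envelope: minimum kinetic energy per kilogram to escape
# each body. Ignores atmospheric drag, gravity losses, and the fact that
# rockets must also lift their own fuel, so real costs are far higher.
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    "Earth": (5.972e24, 6.371e6),  # mass (kg), radius (m)
    "Moon":  (7.342e22, 1.737e6),
}

for name, (mass, radius) in bodies.items():
    v_esc = (2 * G * mass / radius) ** 0.5
    energy_per_kg = v_esc ** 2 / 2          # J/kg, equal to G*M/r
    print(f"{name}: escape velocity {v_esc/1000:.1f} km/s, "
          f"{energy_per_kg/1e6:.1f} MJ per kg")
```

Per kilogram, climbing out of the Moon’s gravity well takes on the order of a twentieth of the energy of climbing out of Earth’s, and the return trip to Earth gets most of its braking from the atmosphere for free.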

So a Moon steel factory can send packages of steel to Earth at a rather low transport cost compared to the reverse. That gives an advantage to the Moon steel factory: if there are shortages on Earth, the Moon factory can fill them at relatively low cost, while Earth cannot do the same to fill a need on the Moon. The transport costs are not symmetric, and they are in the Moon’s favor. I would assert that, all else being equal, investment in steelmaking would flow to the Moon and away from Earth.

Of course, the “all else being equal” is the rub. Air, water, and food are hard to come by on the Moon. Iron and carbon might be easier, but all the mining equipment is already here on Earth. We would have to do a lot of work and build a lot of technology to make a Moon base even possible. But in theory, economies of scale and future technology could make it possible and even economical. And at that point it might enter a virtuous cycle thanks to the asymmetrical transport costs I mentioned: it will always be cheaper to send goods from the Moon to Earth than vice versa.

It’s just a random thought I’ve had, and I want to put it in a work of fiction: in some sci-fi universe, a Moon colony is economically sustained by this comparative advantage over Earth. But I’ve never worked up the courage to write that story, so until now it’s just been an idle thought in my head.

If the weavers get replaced by machines, who will buy the clothes?

I’ve seen way too many articles about AI casting doom and gloom, saying that it will “replace millions of jobs” and that this will lead to societal destruction as the newly jobless have no more money to spend.  The common refrain is “when AIs replace the workers, who will buy the products?”

This is just another fundamental misunderstanding of AI and technology.  AI is a multiplier of human effort, and what once took 10 men now takes 1.  That doesn’t mean that 9 men will be homeless on the street because their jobs were “replaced.”  The gains reaped from higher productivity are reinvested in the economy, and new jobs are created.

When the power loom replaced handloom weavers, those weavers were indeed displaced.  But they could eventually find new jobs in the factories that produced looms, and in the other factories that were springing up.  When computers replaced human calculators, those calculators could find jobs programming and producing computers.

For centuries now, millennia even, technology has multiplied human effort.  It used to take dozens of people to move a single rock, until several thousand years ago someone had the bright idea of using ropes, pulleys, and wheels.  Then suddenly rocks could be moved easily.  But that in turn meant the demand for moving rocks shot up to meet this newer, cheaper equilibrium, and great wonders like the Pyramids and Stonehenge could suddenly be built.

The same will be true of AI.  AI will create as many new jobs as it replaces.  There will be people to produce the AI, people to train the AI, people to ensure the AI has guardrails and doesn’t do something that gets the company trending on Twitter.  And there will be ever more people using the AI, because demand is not fixed: demand for products will rise to meet the increase in supply generated by the AI.  People will want more and more stuff, and that will lead to more and more people using AI to produce it.

This is something that people get hung up on: they think that demand is fixed.  So when something that multiplies human effort gets created, they assume that since the same amount of product can be produced with less effort, everyone will get fired.  Except that demand is not fixed; people have infinite wants and finite amounts of money.

Technological progress creates higher-paying jobs: subsistence farmers become factory workers, factory workers become skilled workers, skilled workers enter the knowledge economy of R&D.  These new higher-paying jobs create people who want more stuff, because people always want more stuff, and who now have the money to pay for it.  This in turn increases demand, leading to more people being employed in the industry even though jobs are being “replaced” by machines.

To bring it all back to weavers, more people are working in the textile industry now than at any point in human history, even though we replaced weavers with looms long ago.

AI will certainly upend some jobs.  Some people will be unable or unwilling to find new jobs, and governments should work to support them with unemployment insurance and retraining programs.  But it will create so many new jobs as well.  People aren’t satisfied with how many video games they can purchase right now, how much they can go out to restaurants, how much housing they can purchase, etc.  People always want more, and as they move into higher paying jobs which use AI they will demand more.  That in turn will create demand for the jobs producing those things or training the AIs that produce those things. 

It has all happened before and it will happen again.  Every generation thinks that theirs is the most important time in the universe, that their problems are unique and that nothing will ever be the same.  Three years ago we had people saying that “nothing will ever be the same” due to COVID, and yet in just three short years we’ve seen life mostly go back to normal.  A few changes on the margins, a little more work from home and a little more consciousness about staying home when sick, but life continued despite the once-a-century upheaval.

Life will also continue after AI.  AI will one day be studied alongside the plow, the loom, and the computer.  A labor-saving device that is an integral part of the economy, but didn’t lead to its downfall.

The AI pause letter seems really dumb

I’m late to the party again, but a few months ago a letter began circulating requesting that AI development “pause” for at least 6 months. Separately, AI developers like Sam Altman have called for regulation of their own industry. These things are supposedly happening because of fears that AI development could get out of control and harm us, or even kill us all in the words of professional insanocrat Eliezer Yudkowsky, who went so far as to suggest we should bomb data centers to prevent the creation of a rogue AI.

To get my thoughts out there: this is nothing more than moat-building and fear-mongering. Computers certainly opened up new avenues for crime and harm, but banning them or pausing semiconductor development in the 80s would have been stupid and harmful. Lives were genuinely saved because computers made it possible for us to discover new drugs and cure diseases. The harm computers caused was overwhelmed by the good they brought, and I have yet to see any genuine argument that AI will be different. Will it be easier to spread misinformation and steal identities? Maybe, but that was true of computers too. On the other hand, the insane ramblings about how robots will kill us all seem to mostly amount to sci-fi nerds having watched a lot of Terminator and the Matrix and being unable to separate reality from fiction.

Instead, these pushes for regulation seem like moat-building of the highest order. The easiest way to maintain a monopoly or oligopoly is to build giant regulatory walls that ensure no one else can enter your market. I think it’s obvious Sam Altman doesn’t actually want any regulation that would threaten his own business; he threatened to leave the EU over new regulation. Instead he wants the kind of regulation that is expensive to comply with but doesn’t actually prevent his company from doing anything it wants to do. He wants to create huge barriers to entry so he can continue developing his company without competition from new startups.

The letter to “pause” development also seems nakedly self-serving: one of the signatories was Elon Musk, and immediately after calling for said pause, Musk turned around and bought thousands of graphics cards to improve Twitter’s AI. It seems the pause in research should only apply to other people, so that Elon Musk has a chance to catch up. And I think that’s likely the case with most of the famous signatories of the pause letter: people who realize they’ve been blindsided and are scrambling to catch up.

Finally we have the “bomb data centers” crazies, who are worried the Terminator, the Paperclip Maximizer, or Roko’s Basilisk will come to kill them. This viewpoint involves a lot of magical thinking, as it is never explained just how an AI will find a way to recursively improve itself to the point that it can escape the confinement of its server farm and kill us all. In fact, at times these folks have explicitly rejected any such speculation about how an AI could escape, in favor of asserting that it simply will, and have claimed that speculating about the “how” is meaningless. This is in contrast to more reasonable end-of-the-world scenarios like climate change or nuclear proliferation, where there is a very clear through-line as to how these things could cause the end of humanity.

Like I said, I take this viewpoint the least seriously, but I want to end with my own speculation about Yudkowsky himself. Other members of his caucus have indeed demanded that AI research be halted, but I think Yudkowsky skipped straight to the “bomb data centers” point of view both because he’s desperate for attention and because he wants to shift the Overton window.

Yudkowsky has in fact spent much of his adult life railing about the dangers of AI and how it will kill us all, and in this one moment when the rest of the world is at least amenable to fears of AI harm, they aren’t listening to him but are instead listening (quite reasonably) to the actual experts in the field like Sam Altman and other AI researchers. Yudkowsky wants to keep the limelight, and the best way to do that is often to make the most over-the-top dramatic pronouncements in the hope of getting picked up and spread by detractors, supporters, and people who just think he’s crazy.

Secondarily he would probably agree with AI regulation, but he doesn’t want that to be his public platform because he thinks that’s too reasonable. If some people are pushing for regulating AI and some people are against it, then the compromise from politicians who are trying to seem “reasonable” would be for a bit of light regulation which for him wouldn’t go far enough. Yudkowsky instead wants to make his platform something insanely outside the bounds of reasonableness, so that in order to “compromise” with him, you’ll have to meet him in the middle at a point that would include much more onerous AI regulation. He’s just taking an extreme position so he has something to negotiate away and still claim victory.

Personally? I don’t want any AI regulation. I can go to the store right now and buy any computer I want. I can go to a cafe and use the internet without giving away any of my real personal information. And I can download and install any program I want, as long as I have the money and/or bandwidth. And that’s a good thing. Sure, I could buy a computer and use it to commit crimes, but that’s no reason to regulate who can buy computers or what type they can get, which is exactly what the AI regulators want to happen with AI. Computers are a net positive to society, and the crimes you can commit with them, like fraud and theft, were already crimes people committed before computers existed. Computers allow some people to be better criminals, so we prosecute those people when they commit crimes. But computers allow other people to cure cancer, so we don’t restrict who can have one or how powerful it can be. The same is true of AI. It’s a tool like any other, so let’s treat it like one.

A possible cure for Duchenne Muscular Dystrophy

Sarepta Therapeutics may soon have a treatment out for Duchenne Muscular Dystrophy (DMD). It’s called SRP-9001, and while I hesitate to say it’s a Dragonball Z reference, I’m not sure why else it has that number. Either way it’s an interesting piece of work, and I thought I’d write about what I know of it.

DMD is caused by mutations in the protein dystrophin, a protein which is vital for keeping muscle fibers stiff and sound. Our muscles move because muscle fibers pull themselves together, which shortens them along one axis and therefore pulls together anything they are attached to. The muscle cell pulling on itself creates an incredible amount of force, and dystrophin is necessary to make sure that force doesn’t damage the muscle cell itself. When dystrophin is mutated in DMD, the muscle cells pulling on themselves do begin to deform and destroy themselves, which leads to the characteristic muscle wasting of DMD sufferers. The expected lifespan of someone with DMD is only around 20-30 years.

Dystrophin is a massive protein: fully 0.1% of the human genome is made up of just the dystrophin gene. However, a number of the mutations that cause DMD are point mutations, mutations in a single DNA nucleotide. If just that one nucleotide could be fixed, in theory the disease could be cured. For a long time, genetic engineering and CRISPR/Cas9 efforts have targeted DMD for treatments based on this idea of fixing that one nucleotide.

However, Sarepta seems to be working on an entirely different approach: deliver a complete gene to the patient that can replace the functionality of the non-functional dystrophin. This replacement is called micro-dystrophin, and it is less than half the length of true dystrophin. However, it still contains some of the necessary domains of dystrophin, like the actin-binding domain. This matters because of how genetic engineering in humans actually works (these days). How do you get a new gene into a human? Normally, you must use a virus. But the viruses of choice (like AAV) are so small that the complete dystrophin gene simply would not fit in them. Micro-dystrophin, being so much smaller, is needed in order to fit the treatment into a virus.
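
The size arithmetic is roughly this (approximate figures from memory, so treat the exact numbers as assumptions rather than Sarepta’s specs):

```python
# Rough size arithmetic (approximate figures; exact numbers vary by
# construct): why full-length dystrophin cannot ride in an AAV capsid.
aav_capacity_kb = 4.7            # typical AAV packaging limit
full_dystrophin_cds_kb = 11.0    # ~11 kb coding sequence (the gene with introns is >2,000 kb)
micro_dystrophin_kb = 4.0        # engineered mini-gene keeping key domains

for name, size in [("full-length dystrophin CDS", full_dystrophin_cds_kb),
                   ("micro-dystrophin", micro_dystrophin_kb)]:
    fits = "fits" if size <= aav_capacity_kb else "does NOT fit"
    print(f"{name}: ~{size:.0f} kb -> {fits} in ~{aav_capacity_kb} kb of AAV capacity")
```

Whatever the exact figures, the point is that full-length dystrophin is more than double what an AAV capsid can carry, while micro-dystrophin squeaks in.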

So the idea is that DMD patients cannot produce working dystrophin, but when SRP-9001 is given to them, it provides the gene to create micro-dystrophin for themselves. Once their muscles begin making this micro-dystrophin, it would spread throughout the muscle cell and take up the job of strengthening and stiffening the muscle cell just like normal dystrophin does. In this way the decay of their muscles would slow, and hopefully they’d live much, much longer.

SRP-9001’s road to FDA approval is not yet complete. They’ve run some nice clinical trials showing that their gene therapy does successfully deliver the micro-dystrophin gene into patients, and that the patients then use that gene to produce the micro-dystrophin protein. However, as of right now they are still running Phase 3 clinical trials and still awaiting an expedited approval decision from the FDA. That decision won’t come until June 22nd at the earliest, but I believe approval would make SRP-9001 the first FDA-approved gene therapy for DMD.

So what’s going on with Amyloid Beta and Alzheimer’s disease?

This will be a very #streamsofconsciousness post where I ramble a bit about my work.

As I’ve said before, I study Amyloid Beta in Alzheimer’s disease. I am very new to this field, so much of what is surprising to me might be old hat to the experts. But I’m quite flummoxed about what exactly Amyloid Beta is doing in both diseased and healthy brains. When I started this job, I read papers indicating that Amyloid Beta (henceforth AB) forms these large filaments, and like a bull in a china shop those large filaments sort of knock around and cause damage. Damaging the brain in that way is obviously a hazard, and would lead to exactly the type of neurodegeneration that is a hallmark of Alzheimer’s disease.

So because of this, it’s my job to extract these large AB filaments and take pictures of them. That way we can see exactly what they look like and why they do so much damage. But then this simple picture changed. The filaments are made up of thousands of individual AB peptides, and I read papers saying these individual peptides might actually be what causes the disease, by disrupting the neurons and causing them to die. But if that’s the case, then what are the filaments doing? Are they still causing damage by being big and huge, or are they entirely benign and a red herring? If they are benign, then my studying them and taking pictures of them might be leading us down a dead end.

And now I’ve found that AB is also necessary for the development of a healthy brain. This in itself is not too out there: any medicine can turn into a poison if the dose is wrong. So this could easily be too much of a good thing, or a good thing in the wrong place: normally AB helps the brain, but in Alzheimer’s disease something goes wrong and AB starts killing nerve cells. But still, it’s surprising.

The paper I read indicates that AB is necessary for the process of synaptic plasticity. No time to get into all the details, but synaptic plasticity underlies the formation of memories in the brain. Mice that do not have AB have a harder time forming memories and completing tasks than mice with AB. So now I’m at the point where AB is actually necessary for the formation of memories in a healthy brain, but then sOmEtHiNg happens and it causes Alzheimer’s disease, which is characterized by deficits in memory. So what is happening?!?!?

I… don’t know. I don’t know if anyone knows. But I wish I had the tools to study this further. The difficulty is that I’m not sure I do. My setup is geared towards looking at those giant AB filaments I talked about earlier. Filaments have a big, rigid form, and you can do structural analysis on them to get what is essentially a 3D model. But all these papers talking about the role of AB in healthy brains are talking about its small monomeric form. Small monomers don’t form rigid structures in quite the same way; they are more akin to a floppy noodle, with no rigid form to hang your hat on, and so no clean 3D model can be made of them. So maybe I’m using the wrong tool for the job. Or maybe it really IS the filaments that are doing the damage. I’m just not sure at this time, and it’s racking my brain trying to figure out where I should go next.