Gene drives and gingivitis bacteria

One piece of sci-fi technology that doesn’t get much talk these days is gene drives. When I was an up-and-coming biology student, these were the subject of every seminar and the case study of every class, and they were going to eliminate malaria worldwide.

Now though, you hardly hear a peep about them. And I don’t think, like some of my peers, that this is because anti-technology forces have cowed scientists and policy-makers into silence. I don’t see any evidence that gene drives are quietly succeeding in every test, or that they are being held back by Greenpeace or other anti-GMO groups.

I just think gene drives haven’t lived up to the hype.

Let me step back a bit: what *is* a gene drive? A gene drive is a way to manipulate the genes of an entire species. If you modify the genes of a single organism, then when it reproduces, at most 50% of its progeny will inherit the modification, per ordinary Mendelian inheritance. Unless your modification confers a large fitness advantage on the organism, there is no way to make every one of the organism’s descendants carry it.

But a gene drive can do just that. In fact, a gene drive can confer an evolutionary disadvantage on an organism, and you can still guarantee all of the organism’s descendants will have that gene. The biggest use case for gene drives is mosquitoes. You can give mosquitoes a gene that prevents them from sucking human blood, but since this confers an evolutionary disadvantage, your gene won’t last many generations before evolution weeds it out.

But if you put your gene in a gene drive, you can in theory release a population of mosquitoes carrying this gene and ensure all of their descendants have the gene and thus won’t attack humans. In a few generations, a significant fraction of all mosquitoes will have this gene, preventing mosquito bites as well as the whole host of diseases mosquitoes carry.
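
To make the mechanics concrete, here is a minimal toy simulation of my own (assumed parameters, not anything from the actual gene drive literature). The only difference between the two runs is “homing”: a drive allele copies itself over the wild allele in heterozygotes, so carriers pass it to nearly all offspring instead of the Mendelian half.

```python
import random

def drive_frequency(generations=12, pop=10_000, release=0.05, homing=True):
    """Track the frequency of a drive allele 'D' vs the wild allele 'w'.

    Each offspring draws one allele from each of two random parents.
    With homing on, a D/w heterozygote is converted to D/D, which is
    the defining trick of a homing gene drive."""
    pool = ['D'] * int(2 * pop * release)
    pool += ['w'] * (2 * pop - len(pool))
    freqs = []
    for _ in range(generations):
        offspring = []
        for _ in range(pop):
            a, b = random.choice(pool), random.choice(pool)
            if homing and {a, b} == {'D', 'w'}:
                a = b = 'D'  # the drive copies itself onto the wild chromosome
            offspring += [a, b]
        pool = offspring
        freqs.append(pool.count('D') / len(pool))
    return freqs

print('Mendelian :', [round(f, 2) for f in drive_frequency(homing=False)])
print('Gene drive:', [round(f, 2) for f in drive_frequency(homing=True)])
# Expected shape: the Mendelian run hovers near the 5% release frequency,
# while the homing run sweeps toward 100% within a handful of generations.
```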

Now this is a lot of genetic “playing God,” and I’m sure Greenpeace isn’t happy about it. But environmentalist backlash has never managed to stamp out 100% of genetic technology. CRISPR therapies and organisms are on the rise, and GMO crops are still planted worldwide. Environmentalists may slow progress, but they cannot stop it.

But talk about gene drives *has* slowed considerably and I think it’s because they just don’t work as advertised.

See, to be effective a gene drive requires an evolutionary contradiction: it must reduce an organism’s fitness yet still be passed on to its progeny. Mosquitoes don’t bite humans just for fun. We are among the most common large mammals in the world, and our blood is rich in nutrients; for mosquitoes, biting us is a necessity of life. So if you create a gene drive that knocks out this necessity, you are making the mosquitoes who carry it less evolutionarily fit.

And gene drives are not perfect. The gene they carry can mutate, and even if redundancy is built in, that only means more mutations are needed to overcome the drive. You can make resistance more and more improbable, but you cannot prevent it forever. So when you introduce a gene drive, hoping that all the progeny will carry this gene that prevents mosquitoes from biting humans, eventually one lucky mosquito will be born resistant to the drive’s effects. It will have an evolutionary advantage because it *will* bite humans, and so, like antibiotic-resistant bacteria, it will grow and multiply while the mosquitoes still carrying the gene drive are outcompeted and die off.
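
The takeover dynamic is just ordinary selection. The numbers below are assumptions for illustration, not estimates from any real mosquito population, but they show how a one-in-a-million resistant mutant with a modest fitness edge still wins, given enough generations:

```python
def resistant_fraction(p0=1e-6, s=0.2, generations=120):
    """Deterministic haploid selection: resistant mosquitoes have relative
    fitness (1 + s) because, unlike drive carriers, they still bite humans.

    p0 -- starting fraction of resistant individuals (one in a million)
    s  -- assumed fitness advantage per generation (20%, hypothetical)
    """
    p, history = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        history.append(p)
    return history

traj = resistant_fraction()
for gen in (0, 40, 80, 120):
    print(f"generation {gen:3d}: resistant fraction ~ {traj[gen]:.4f}")
# The resistant lineage compounds like interest: invisible for dozens of
# generations, then suddenly everywhere, just like antibiotic resistance.
```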

Antibiotics did not rid the world of bacteria, and gene drives cannot rid the world of mosquitoes. Evolution is not so easily overcome.

I tell this story in part to tell you another story. Social media was abuzz recently thanks to a guerrilla marketing campaign for a bacterium that is supposed to cure tooth decay. The science can be read about here, but I was first alerted to the campaign by stories of an influencer who had supposedly received the bacterium herself and pledged to pass it on to others by kissing them. Bacteria can indeed be passed by kissing, by the way.

But like gene drives, this bacterium doesn’t seem workable in the context of evolution. Tooth decay happens because certain bacteria colonize our mouths and produce acidic byproducts that break down our enamel. Like mosquitoes, they do not do this just for fun. The bacteria do it because it is the most efficient way to get rid of their waste.

The genetically modified bacterium was engineered to produce no acidic byproducts, so if you colonized someone’s mouth with this good bacterium instead of the bad bacteria, their enamel would never be broken down by acid. But the good bacterium cannot just live in harmony and contentment. Life is a war for resources, and this good bacterium will be fighting with one hand tied behind its back.

Any time you come into contact with the bad bacteria, they will likely outcompete the good bacterium, because it’s more efficient to dispose of your waste haphazardly than to wrap it in a nice, non-acidic bundle first. Very quickly the good bacterium will die off and once again be replaced by the bad.
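
The same selection arithmetic as the mosquito sketch above applies in the mouth, with made-up fitness numbers purely for illustration: even if the engineered strain starts at 99% of the population, a cheaper acid-producing invader at 1% reclaims the niche.

```python
def reinvasion(good=0.99, w_good=0.95, w_bad=1.00, generations=200):
    """Replicator dynamics for two strains competing for one niche.

    w_good < w_bad encodes the assumption that neatly packaging your
    waste costs energy the acid-dumping strain doesn't pay."""
    history = []
    for _ in range(generations):
        bad = 1.0 - good
        mean_w = good * w_good + bad * w_bad
        good = good * w_good / mean_w
        history.append(good)
    return history

traj = reinvasion()
for gen in (0, 50, 100, 150):
    print(f"generation {gen:3d}: engineered strain ~ {traj[gen]:.2f}")
# With bacterial generations measured in hours, a 5% fitness gap lets the
# acid producers retake the mouth in a matter of weeks.
```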

So I’m quite certain this little marketing campaign will quietly die once it’s shown the bacterium doesn’t really do anything. And since I’ve read that there aren’t even any peer-reviewed studies backing up this work, I’m even more certain of its swift demise.

Biology has brought us wonders, and we have indeed removed certain disease scourges from our world. Smallpox, rinderpest, and hopefully soon polio show it is possible to remove pests from our world. But it takes a lot more work than simply releasing some mosquitoes or kissing someone with the right bacteria. And that’s because evolution is working against you every step of the way.

Crying over Cryo-EM

OK so the title is hyperbole, but I’ve definitely struggled recently with my cryo-electron microscopy work. I guess here I’ll give an overview of what exactly cryo-electron microscopy is and why I’ve struggled.

Professor Jensen of Caltech has a great series of videos on cryo-EM: why we use it, how we use it, and what it is. Anyone interested in the technology should watch it, but for my own purposes:

  • Cryo-electron microscopy consists of freezing a sample and then shooting electrons at it to see its 3D structure at near-atomic scales.
  • We’re using it to study a number of proteins that cause diseases. In particular, we want to know how the 3D shape of a certain protein creates that protein’s function, and how that function can then go on to cause a disease.
  • So we purify a specific protein, make a cryo-grid from that purified protein, and then look at that grid under the electron microscope, hoping to get a good 3D structure.

But that’s where the problems start. First of all, purifying a protein to 99.9% purity is no small feat, especially when you’re taking proteins out of actual patient samples. I’ve dearly struggled to reach the purity needed to make good grids for imaging.

But once I have some “pure” protein, I need to add it to a grid to image it. A cryo-grid is a circle about 1 millimeter across and 1 micrometer thick. Cut into that grid are many 1 micrometer by 1 micrometer squares, and in each square is a mesh of 100 nanometer by 100 nanometer holes. When I add a tiny drop of my protein sample (which is in water) onto the grid, the hope is that the proteins will settle down into the holes. I then “blot” the sample by pressing paper onto both sides, which wicks away all the water not in the holes, and instantly plunge the sample into liquid ethane, freezing the liquid in the holes in an instant.
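
For a sense of scale, here’s a back-of-envelope count using the rough dimensions I just gave (the hole spacing is a guess for illustration); everything here sits far below what the naked eye can resolve.

```python
# Back-of-envelope scale check using the rough dimensions above; the
# one-hole-width spacing between holes is an assumed value.
grid_width_um = 1000  # grid is ~1 mm across
square_um     = 1     # each square window is ~1 um on a side
hole_nm       = 100   # each hole is ~100 nm across
pitch_nm      = 2 * hole_nm  # assume holes sit on a 200 nm pitch

squares = (grid_width_um // square_um) ** 2      # ignores grid bars and edges
holes_per_square = (square_um * 1000 // pitch_nm) ** 2
print(f"~{squares:,} squares per grid")
print(f"~{holes_per_square} holes per square")
print(f"~{squares * holes_per_square:,} potential imaging holes per grid")
# The eye resolves roughly 100 um at best, so a grid 'destroyed' at this
# scale can look perfectly fine on the bench.
```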

What you get is supposed to be a grid covered in a thin layer of ice, and in each hole the ice contains your proteins of interest. Since the sample was flash-frozen in ethane, the ice is “vitreous,” which means glass-like: see-through just like glass. A beam of electrons can thus pass into the ice to create an image of the proteins inside.

But there are problems. Let’s get back to making the grid: most proteins are hydrophilic, which means water-loving. The opposite of hydrophilic is hydrophobic, which means water-hating, like oil. Oil and water don’t mix, and neither do hydrophobic and hydrophilic things. Our grids are made of copper covered in a layer of carbon, which is naturally hydrophobic, meaning it doesn’t interact well with the hydrophilic proteins (and the water they are in).

So before adding proteins we have to glow discharge our grids. This means putting them in a machine that shoots broken-up water molecules at them. Those fragments contain oxygen, and some of them bind to the grid, creating oxygen-containing compounds. Those compounds are very hydrophilic, so the whole grid becomes hydrophilic enough for the proteins to interact with it.

At some point we got a new glow discharger, and I swear it started destroying my grids. Like I said, the grids are tiny and fragile: 1 millimeter across, 1 micrometer thick! A glow discharger shoots broken-up water molecules at them, and the new one shot them so hard that it was punching through my grids and destroying them completely at the microscopic level. I couldn’t see the damage because it’s microscopic, but after adding protein to my grids and flash-freezing them, I’d look at them under the microscope and see nothing but a completely destroyed grid. I finally stopped trusting it and moved on to a different glow discharger that’s a bit weaker.

So OK, I solved the glow discharge problem, but then came the ice problem. Like I said above, you want the proteins encased in glass-like vitreous ice. If you have no ice, well, you have no proteins. And if the ice is too thick, it’s no longer glass-like and you can’t see through it. I kept landing on both extremes: first ice so thick I couldn’t see anything, then no ice at all. You are supposed to manage this by tuning your blotting time, which is how long you wick away the water before plunging the grid into the liquid ethane. A shorter blot time gives thicker ice; a longer blot time gives thinner ice, or no ice at all. You try different times until the ice is just right.

And yet I was using ultra-short blot times and still getting thick and thin ice, seemingly at random. On balance I got more grids with no ice at all, so I kept thinking I needed to drop the blot time further. My adviser said there is a minimum blot time of about 2 seconds and you never want to go lower, but I tried 2 seconds and the ice was still way too thin or nonexistent. That seemed to say my blot time was still too long, yet 2 seconds was as short as I could go.

I finally asked an expert in the chemistry department, who suggested I use their facilities instead. He also said that 1 second of blot time is perfectly fine, so that is what I did. I FINALLY seemed to start getting good grids, so let’s hope it holds out.

So I’ve struggled with glow discharging, then blot times, as well as protein purity. I’ve finally got some good grids, and I hope I can collect a lot of data on them. If I do, I may be able to get 3D structural information using AI and a whole bunch of analysis. We’ll see though, we’ll see.

Good idea: financially supporting workers displaced by AI. Bad idea: taxing companies for displacing workers with AI.

AI is again the topic of the day, and people are discussing what to do about the coming “job-pocalypse.” Supposedly AI can do anything we humans can do, only better, and so 30% or more of jobs will be destroyed and replaced by AI. Leaving aside how accurate that prediction is, if 30% of all jobs will be impacted then it does warrant a public policy response. Everyone’s got their own personal favorite, but one I see come up again and again is that companies should face a hefty tax any time they replace a worker with AI.

To be blunt, taxing companies for replacing workers with AI is a terrible idea. Let’s leave aside the argument of “how do you prove it” and cut straight to the fact that the government should not be taxing technological progress. To start with some history: how many farmers were displaced by tractors? Millions. In 1900 about 40% of Westerners worked on farms; now it’s less than 5%. Tractors meant a single farmer could do the labor of tens or hundreds of men, and so farmers could replace many of their farm hands with machines. But does anyone reading this wish nearly half of us were still farmers? Should the government have heavily taxed tractors to preserve the idyllic rural farm life?

The argument in favor of taxing companies that replace workers with a machine is that the company is becoming more profitable at the expense of the worker and should pay it back. The current hullabaloo is about being replaced by AI, but in the 20th century similar calls were made when factory workers were being replaced by robots. The problem with this argument is that it ignores society. The worker and the company are not the only two pieces of the equation; society in general benefits when companies become more efficient. Technology is deflationary, and it has allowed many products to drop in price, or at least not rise as rapidly as wages. Food today costs less as a percent of annual income than at nearly any time in history, and a large part of that is because the cost of food has been decoupled from the cost of labor. So farm hands being replaced by tractors helped all of society by giving us cheaper food, and all of society would have been harmed if taxes had been instituted to keep tractors from becoming commonplace.

Are the workers harmed when their jobs are replaced by AI? Yes, of course. But society itself is helped, and so all of society should bear the costs of helping the workers. We should of course offer unemployment benefits and job retraining to those affected. We should not let them fall by the wayside the way we did blue-collar factory workers in the 20th century.

But neither should we shoot society in the foot by blocking technological progress that will help all of us. AI replacing jobs will mean products become cheaper relative to wages, just as happened with food. A lot of people also spread the nonsense that unemployment will skyrocket as displaced workers fail to find other jobs. They misunderstand economics: there will always be demand for more labor. The price of some goods will decrease thanks to AI, which means people can buy more of those goods, or buy other goods they had put off buying because money only stretches so far. As prices fall, demand rises, raising the demand for labor in other areas until a new equilibrium is reached. Jobs lost to AI don’t mean the workers will be forever jobless, any more than the 35% of the population displaced by tractors meant unemployment skyrocketed in the 20th century. Time and time again technology has replaced workers’ jobs, and the workers have found new jobs. It will happen again with AI.
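
Here’s a toy numerical version of that equilibrium logic, with entirely made-up numbers (the price, budget, and demand elasticity are all assumptions), just to show the direction of the effect:

```python
# Hypothetical numbers throughout; a direction-of-effect sketch, not a
# forecast. Demand follows a constant-elasticity curve Q = Q0 * (P/P0)**eps.
eps    = -0.8          # assumed price elasticity (inelastic, like food)
P0, Q0 = 10.0, 100.0   # baseline price and quantity, arbitrary units
P1 = P0 * 0.7          # automation cuts the price by 30%

Q1 = Q0 * (P1 / P0) ** eps          # consumers respond along the demand curve
spend0, spend1 = P0 * Q0, P1 * Q1
print(f"quantity bought : {Q0:.0f} -> {Q1:.0f} (people consume more)")
print(f"spending on good: {spend0:.0f} -> {spend1:.0f}")
print(f"income freed for other goods: {spend0 - spend1:.0f}")
```

Under these assumptions, consumers end up with more of the cheaper good *and* money left over, and that leftover spending is what creates the demand for labor elsewhere.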

The Lunar advantage

This post is gonna be weird and long.

I often have weird thoughts that I wish I could put into a book or story. My thought today is about comparative advantage, an economic concept that explains why people and countries specialize in certain areas of work to become more efficient.

For example, in Iceland the cost of electricity is very low, which is why Iceland has attracted a lot of investment in industries that require lots of electricity, such as aluminum smelting. Countries like Bangladesh, on the other hand, have a low cost of labor, which is why labor-intensive industries such as clothing manufacturing invest there. It doesn’t make sense to put an aluminum smelter where electricity is expensive, nor a clothing factory where labor is expensive. Iceland and Bangladesh have their own comparative advantages at this moment in time, and that explains their patterns of industry.

Let’s imagine for a moment that there were a fully self-sufficient colony on the moon. People live and work there without needing to import air, water, or food from Earth. They can trade with Earth, but if Earth were cut off they could still make their own goods, just as our country, if cut off from the world, could still grow its own food, drink its own water, and breathe its own air. Let’s say they use super-future space technology to extract water and oxygen from moon rocks and grow crops in moon soil.

If such a moon colony existed, we would expect trade with Earth. Certainly the costs of moving goods between Earth and the moon are enormous. But it was once unbelievably dangerous to cross the oceans, and people still did it because the profits were worth it. We would expect the moon to have some comparative advantage over Earth and vice versa, which would make trade profitable. Comparative advantage is the same reason Iceland sells aluminum products to Bangladesh, which in turn sells Iceland clothing.

So with all this in mind, I assert that the moon’s comparative advantage would naturally be in large, heavy goods: not because of the moon itself, but because of the journey.

Let me give an example: suppose there is a factory on Earth making steel and a factory on the Moon making steel. Let’s also say the iron and carbon for the steel can be gotten just as easily on the Moon as on Earth. I assert that the factory on the Moon has a comparative advantage because of space travel. Sending goods from the Earth to the Moon means spending a huge amount of energy climbing out of Earth’s deep gravity well and thick atmosphere, then spending more energy to slow down for a moon landing. By contrast, it takes far less energy to accelerate off the atmosphere-less surface of the moon, with its shallow gravity well, and landing on Earth costs almost nothing, as you can use the atmosphere itself to brake your fall.
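
To put rough numbers on that asymmetry: using ballpark delta-v figures of the kind found on publicly circulated delta-v maps (treat them as order-of-magnitude, and note the rocket equation below ignores staging), the propellant penalty is wildly lopsided.

```python
import math

# Ballpark delta-v legs in km/s; approximate figures, not mission plans.
dv_earth_to_moon = 9.4 + 3.1 + 0.9 + 1.9  # surface->LEO, trans-lunar, lunar orbit, landing
dv_moon_to_earth = 1.9 + 0.9              # lunar ascent, trans-Earth; aerobraking lands free

def liftoff_mass_per_kg(dv_km_s, isp_s=450.0):
    """Tsiolkovsky rocket equation: m0/mf = exp(dv / (Isp * g0)).
    Isp ~ 450 s is roughly a hydrogen/oxygen engine; single stage assumed."""
    return math.exp(dv_km_s * 1000.0 / (isp_s * 9.81))

for route, dv in [("Earth -> Moon", dv_earth_to_moon),
                  ("Moon -> Earth", dv_moon_to_earth)]:
    print(f"{route}: dv ~ {dv:4.1f} km/s, "
          f"~{liftoff_mass_per_kg(dv):4.1f} kg at liftoff per kg delivered")
```

Roughly thirty-some kilograms on the pad for every kilogram landed on the moon, versus about two for the return trip: that is the asymmetry the moon steel factory gets for free.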

So a moon steel factory can send packages of steel to Earth at a rather low transport cost compared to the reverse. That gives the moon factory an advantage: if there are shortages on Earth, the moon factory can fill them cheaply, while Earth cannot do the same to fill a need on the moon. The transport costs are not symmetric, and they are in the moon’s favor. I would assert that, all else being equal, investment in steelmaking would flow into the moon and out of the Earth.

Of course, the “all else being equal” is the rub. Air, water, and food are hard to come by on the moon. Iron and carbon might be easier, but all the mining equipment is already here on Earth. We would have to do a lot of work and build a lot of technology to make a moon base even possible. But in theory, economies of scale and future technology could make it possible and even economical. And at that point it might enter a virtuous cycle thanks to these asymmetrical transport costs: it will always be cheaper to send goods from the moon to the Earth than vice versa.

It’s just a random thought I’ve had, and I want to put it in a work of fiction: some sci-fi universe where a moon colony is economically sustained by this comparative advantage over Earth. But I’ve never gotten up the courage to write that story, so for now it remains an idle thought in my head.

The AI pause letter seems really dumb

I’m late to the party again, but a few months ago a letter began circulating requesting that AI development “pause” for at least 6 months. Separately, AI developers like Sam Altman have called for regulation of their own industry. These things are supposedly happening because of fears that AI development could get out of control and harm us, or even kill us all in the words of professional insanocrat Eliezer Yudkowsky, who went so far as to suggest we should bomb data centers to prevent the creation of a rogue AI.

To get my thoughts out there: this is nothing more than moat-building and fear-mongering. Computers certainly opened up new avenues for crime and harm, but banning them or pausing semiconductor development in the ’80s would have been stupid and harmful. Lives were genuinely saved because computers made it possible to discover new drugs and cure diseases. The harm computers caused was overwhelmed by the good they brought, and I have yet to see any genuine argument that AI will be different. Will it be easier to spread misinformation and steal identities? Maybe, but that was true of computers too. On the other hand, the insane ramblings about how robots will kill us all seem to mostly amount to sci-fi nerds having watched a lot of Terminator and The Matrix and being unable to separate reality from fiction.

Instead, these pushes for regulation seem like moat-building of the highest order. The easiest way to maintain a monopoly or oligopoly is to build giant regulatory walls that ensure no one else can enter your market. I think it’s obvious Sam Altman doesn’t actually want any regulation that would threaten his own business; he threatened to leave the EU over exactly that kind of rule. What he wants is regulation that is expensive to comply with but doesn’t actually prevent his company from doing anything it wants to do: huge barriers to entry behind which he can keep developing his company without competition from new startups.

The letter to “pause” development also seems nakedly self-serving. One of the signatories was Elon Musk, and immediately after calling for the pause he turned around and bought thousands of graphics cards to improve Twitter’s AI. It seems the pause in research should only apply to other people, so that Elon Musk has a chance to catch up. And I think that’s likely the case with most of the famous signatories of the pause letter: people who realize they’ve been blindsided and are scrambling to catch up.

Finally we have the “bomb data centers” crazies, who are worried the Terminator, the Paperclip Maximizer, or Roko’s Basilisk will come to kill them. This viewpoint involves a lot of magical thinking, as it is never explained just how an AI will recursively improve itself to the point that it can escape the confinement of its server farm and kill us all. In fact, at times these folks have explicitly rebuked any speculation on how an AI could escape, in favor of asserting that it just will, and have claimed that speculation on the how is meaningless. This is in contrast to more grounded end-of-the-world scenarios like climate change or nuclear proliferation, where there is a very clear through-line as to how these things could cause the end of humanity.

Like I said, I take this viewpoint the least seriously, but I want to end with my own speculation about Yudkowsky himself. Other members of his caucus have indeed demanded that AI research be halted, but I think Yudkowsky skipped straight to the “bomb data centers” position both because he’s desperate for attention and because he wants to shift the Overton Window.

Yudkowsky has in fact spent much of his adult life railing about the dangers of AI and how it will kill us all, and in this one moment when the rest of the world is at least amenable to fears of AI harm, they aren’t listening to him but instead (quite reasonably) to the actual experts in the field, like Sam Altman and other AI researchers. Yudkowsky wants to keep the limelight, and the best way to do that is often to make the most over-the-top dramatic pronouncements in the hope of being picked up and spread by detractors, supporters, and people who just think you’re crazy.

Secondarily, he would probably agree with AI regulation, but he doesn’t want that to be his public platform because he thinks it’s too reasonable. If some people push for regulating AI and some push against it, the compromise from politicians trying to seem “reasonable” would be a bit of light regulation, which for him wouldn’t go far enough. Yudkowsky instead makes his platform something far outside the bounds of reasonableness, so that in order to “compromise” with him, you have to meet him in the middle at a point that includes much more onerous AI regulation. He’s taking an extreme position so he has something to negotiate away and can still claim victory.

Personally? I don’t want any AI regulation. I can go to the store right now and buy any computer I want. I can go to a cafe and use the internet without giving away any of my real personal information. And I can download and install any program I want as long as I have the money and/or bandwidth. That’s a good thing. Sure, I could buy a computer and use it to commit crimes, but that’s no reason to regulate who can buy computers or what type they can get, which is exactly what the AI regulators want to happen with AI. Computers are a net positive to society, and the crimes you can commit with them, like fraud and theft, were crimes people committed before computers existed. Computers allow some people to be better criminals, so we prosecute those people when they commit crimes. But computers allow other people to cure cancer, so we don’t restrict who can have one or how powerful it can be. The same is true of AI. It’s a tool like any other, so let’s treat it like one.

A possible cure for Duchenne Muscular Dystrophy

Sarepta Therapeutics may have a cure out for Duchenne Muscular Dystrophy (DMD). It’s called SRP-9001, and while I hesitate to say it’s a Dragonball Z reference, I’m not sure why else it has that number. Either way it’s an interesting piece of work, and I thought I’d write about it and what I know of it.

DMD is caused by a mutation in the protein dystrophin, a protein vital for keeping muscle fibers stiff and sound. Our muscles move because muscle fibers pull themselves together, shortening along an axis and therefore pulling together anything they are attached to. The muscle cell pulling on itself creates an incredible amount of force, and dystrophin is necessary to make sure that force doesn’t damage the muscle cell itself. When dystrophin is mutated in DMD, the muscle cells pulling on themselves do begin to deform and destroy the muscle cell, which leads to the characteristic muscle wasting of DMD sufferers. The expected lifespan of someone with DMD is only around 20-30 years.

Dystrophin is a massive protein; fully 0.1% of the human genome is made up of just the dystrophin gene. However, a number of the mutations that cause DMD are point mutations, changes to a single DNA nucleotide. If just that one nucleotide could be fixed, in theory the disease could be cured. For a long time, genetic engineering efforts, including CRISPR/Cas9, have targeted DMD with treatments based on this idea of fixing that one nucleotide.

However, Sarepta seems to be working on an entirely different approach: deliver a complete gene to the patient that can replace the functionality of the non-functional dystrophin. The replacement is called micro-dystrophin, and it is less than half the length of true dystrophin, while still containing some of the necessary domains, like the actin-binding domain. The small size matters because of how genetic engineering in humans actually works these days. How do you get a new gene into a human? Normally, you must use a virus. But the viruses of choice (like AAV) are so small that the complete dystrophin gene simply would not fit in them. Micro-dystrophin, being so much smaller, is needed in order to fit the treatment into a virus.
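
The size arithmetic is the whole story here. These are approximate, commonly cited figures (treat them as ballpark), but they show why packaging the full gene was never an option:

```python
# Approximate, commonly cited sizes in base pairs; ballpark, not exact.
dystrophin_gene  = 2_200_000  # the genomic locus, the largest known human gene
dystrophin_cdna  = 11_000     # just the protein-coding sequence
aav_capacity     = 4_700      # roughly what an AAV capsid can package
micro_dystrophin = 4_000      # rough size of a micro-dystrophin construct

print(f"full gene vs AAV: {dystrophin_gene / aav_capacity:,.0f}x over capacity")
print(f"cDNA vs AAV     : {dystrophin_cdna / aav_capacity:.1f}x over capacity")
print(f"micro-dystrophin fits (leaving room for regulatory elements)? "
      f"{micro_dystrophin < aav_capacity}")
```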

So the idea is that DMD patients cannot produce working dystrophin, but SRP-9001 gives them the genes to create micro-dystrophin for themselves. Once their muscles begin producing micro-dystrophin, it spreads throughout the muscle cell and takes up the job of strengthening and stiffening the cell, just as normal dystrophin does. In this way the decay of their muscles would slow, and hopefully they’d live much, much longer.

SRP-9001’s road to FDA approval is not yet complete. Sarepta has run some nice clinical trials showing that the drug does successfully deliver micro-dystrophin genes into patients, and that the patients then use those genes to produce the micro-dystrophin protein. However, as of right now they are still running Phase 3 trials and awaiting an FDA decision on expedited approval. That decision won’t come until June 22nd at the earliest, but I believe approval would make SRP-9001 the first FDA-approved gene therapy for DMD.

AI art killed art like video killed the radio star

Everyone knows the song “Video Killed the Radio Star” by the Buggles; it was one of the earliest big hits on MTV (back when it was still called Music Television). The song is pretty good, but it also speaks to a genuine fear and wonder about our world: that changing technology upends our social fabric and destroys our livelihoods. The radio star who just wasn’t pretty enough for video, or couldn’t compete with the big production values of music videos, or just didn’t like dancing and being seen at all. That radio star is the Dickensian protagonist of the modern age, tossed aside and replaced when new technology comes along.

This Luddite fear has recurred throughout history. The loom-smashing followers of Ned Ludd are only the most famous example. There were silent-film actors who never made it in talkies. There were photo-realistic painters who could never compete with a camera. John Henry died trying to beat a steam drill. In each case, an argument could be made that the new technology removed some important human element. The painters could claim that photography wasn’t “true art.” And the loom smashers too probably believed their handcrafts were more “real” and more deserving of respect than the soulless cloth that replaced them.

So why is AI art any different? Why should we care about the modern Luddites who want to ban it or restrict it? I say we shouldn’t.

AI art steals from other artists to make its images

common argument

No more than any artist “steals” when they learn from the old masters. It is a grievous misunderstanding of how AI works to claim that it cuts and pastes from other images, and an AI training on a dataset of art is no different from an art student doing the same, whether at university or on their own. The counter-argument I’ve heard is: “why are you ascribing rights to an AI that should only belong to humans? Yes, humans can learn from other art, but AI shouldn’t have the right to!” I’m not ascribing anything to AI. The person who coded the AI and the person who used it have the right to learn from any images they can find, just as an artist does. And just as the output of an artist who learned from the old masters is itself new art, so too is the output of coding or using an AI trained on old works.

AI art is soulless

common argument

As soulless as loom-made fabric compared to hand-made, or as a photograph compared to a hand-painted picture. Being made with a machine doesn’t detract from something for me, and I think only bias makes it detract for others.

AI art takes money out of artists’ pockets, it should be banned to protect the workers’ paychecks

common argument

Why is the money of the workers more important than the money of the consumers? Loom-made fabric competes with hand-spun fabric; should we smash looms to keep tailors’ wages up? Are we OK with everything costing more because it would hurt someone’s business to compete against a machine? The counter-argument I’ve seen is that the old jobs replaced by machines were all terrible drudgery and it’s good they were replaced, whereas art is the highest of human expressions and should never be replaced. Again, I think this is presentism and a misreading of history. I’m sure there were tailors and seamstresses who thought sewing and making fabric was the absolute bomb, who loved their jobs and thought their clothes had so much heart and soul that they were works of art in and of themselves. And I know there are artists in the modern day for whom most of the work is dull drudgery.

Thinking that your job, and only your job, is the highest form of human expression and should never be replaced, well, to me that just shows a clear lack of empathy towards everyone else on earth. No one’s job is safe from automation, but all of society reaps automation’s benefits. We can all now afford far more food, more clothing, more everything, since we started automating manual labor. Labor-saving technology creates jobs, it doesn’t destroy them; it frees people to put their efforts towards other tasks. We need to make sure that people who lose their jobs to automation are still cared for by society, but we should not halt technological progress just to protect them. AI art lets game designers whip up art far more quickly and lets role-players get a character portrait without having to pay; it gives creators and consumers far more art than they would otherwise have, in the same way the loom gave us far more clothing than we would otherwise have.

AI art is always terrible

common argument

I find it funny that this often comes paired in internet discourse with “I’m constantly paranoid, wondering whether the picture I’m looking at was made by AI or not.” There’s a very Umberto Eco-esque argument going on in anti-AI spaces: AI art is both terrible and easily spotted, yet also insidious, so you never quite know if what you’re seeing is AI, and also everyone is now using AI art instead of “real” art.

If real art is better than AI art, wouldn’t there still be a market for it? There’s still a market for good food even though McDonald’s exists. If AI art is terrible and soulless, then it isn’t really a danger to anyone who can make good art themselves. And if AI art is always terrible, why are so many people worried about whether the picture they’re seeing is AI-made or not? Shouldn’t it always be obvious?

This is very obviously an emotional argument. If you can convince someone that a picture was not made with AI, they’ll defend it. If you convince them it was made with AI, they’ll attack it.

This was a vague, disconnected rant, but I’ve become sort of jaded about the AI arguments I’ve seen going on. I had thought that modern society had somewhat grown out of Luddism. And to be frank, many of the people I see making anti-AI arguments are supposedly pro-science and pro-rationalism. But it seems that ideology only holds so long as their “tribe” never gets threatened.

AI art is fun

I don’t have much to say on the AI art public debate yet, other than that it feels overly vitriolic. But I can definitely say that AI art has been fun to make. I’d love to have the time to put some of these together into a one-off RPG campaign, but I don’t know when I’d have the time. I’d also like to spend some time learning how to run an AI art program on my own computer and train it on specific images I choose, rather than relying on the web-browser-based programs trained on a whole host of unnecessary data.

But that’s for another time. For now, AI art is fun to toy around with while dreaming of what could come very soon.

Not feeling good, but hope to have a post tomorrow

As the title says. Work is stressful when you’re not sure if you’re looking in the right direction. We have a lot of seemingly contradictory data coming in from our experiments, but those contradictions are hopefully pushing us in the direction of new science. Just how the shape of proteins relates to their disease states is still new ground, and I’m proud to be working in it. But it’s a difficult field to get data in.

I sometimes look back and wish I could have worked in an earlier time. You look at, say, Gregor Mendel’s pea plants and think that would have been a scientific endeavor you could do easily. Or discovering new elements in the 19th century, when all you needed was an atomic mass and the mass ratios of an oxygen or fluorine salt. That research seems so much easier in hindsight, since high school kids replicate some of those experiments today.

But I know it isn’t so simple. It’s easy to replicate those experiments now, first because we know what we’re looking for, and second because our technology is so much better. The structure of benzene is a classic case: students learn to draw benzene very early in an organic chemistry class, but its structure confounded the best and brightest for decades in the 19th century. They didn’t have scales as accurate as ours, easy access to light-based detection methods, or nuclear-based detection methods; hell, they didn’t even have a theory of protons and neutrons. They knew carbon and hydrogen existed, but they weren’t at all sure how those fit onto a periodic table yet, and they weren’t certain there wasn’t some new element hiding inside benzene. Putting together an experiment to prove the structure of benzene, using only 19th-century knowledge and 19th-century technology, is a lot harder than it sounds, and my wishing I could have worked on that discovery instead of my current one is “grass-is-greener-ism.”

Another good one for any discussion: we often laugh at those silly medievals who believed the sun goes around the earth. I mean, even some Greek philosophers proposed heliocentrism, but alas the medievals were just too closed-minded, right? But actually, the geocentric theory seemed parsimonious for a good long while. Here’s a fun thought experiment: how would you disprove geocentrism using only what you could find in the 10th century? No telescope, no pictures from orbit, just observations of the sky. If you know your astronomy, you know there are certain irregularities in the orbits of planets as viewed from earth, and those are a good argument against geocentrism. Yet it was also noted that there is no perception of movement when standing on the earth, and that was taken as an argument against heliocentrism. It wasn’t until Galileo’s theory of relative motion that a cogent counter-argument was put in place, so if you wanted to prove heliocentrism in the 10th century you’d also have to do the hard work of demonstrating relativity, as Galileo did.

Copernicus’s model of heliocentrism is often seen as revolutionary, but it still needed endless epicycles to explain the observations, more even than geocentrism, making it not much better than Ptolemy’s model. So if you wanted to argue for heliocentrism by attacking epicycles, you’d also need to do the hard math Kepler did in establishing how orbits can be calculated as ellipses. It really isn’t as easy a problem as it sounds.

So yeah, work is hard but I guess it’s always been hard. We think all the easy discoveries have been made, but those discoveries were made when they were hard to make.

The danger of small patterns

As I’ve probably said before, I work as a researcher. When you’re doing difficult or expensive research, you don’t usually have the time or money to do a whole lot of replications. That goes doubly if you’re working with patients or patient samples. But since science is all about finding patterns, how can you find patterns in a small dataset?

There are statistical tools that can help, but even before the hypothesis-testing phase, you need to know which direction your hypothesis should go. For that, we tend to look at the small patterns that aren’t yet statistically significant and try to see what they mean. The danger comes when data is slow to arrive: you want to work on your project, but you have no new data to work with. So you go back to whatever you have, the “small patterns,” and start extrapolating. “If this pattern holds, what could it mean for this disease?”

Then you start getting attached to a hypothesis with no data behind it. When you do get data, you may interpret it in light of the small pattern you already detected, a pattern that may not even hold. That’s the problem with small patterns: you get to thinking they mean more than they do.
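
It’s easy to quantify how treacherous small datasets are. A quick simulation (pure noise by construction, with an arbitrarily chosen sample size and correlation threshold) shows how often two completely unrelated variables produce a “pattern”:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n, trials, threshold = 5, 20_000, 0.7
hits = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # independent of xs by construction
    if abs(pearson_r(xs, ys)) > threshold:
        hits += 1
print(f"n={n}: |r| > {threshold} in {100 * hits / trials:.1f}% of pure-noise datasets")
# With five samples, pure noise clears this 'strong correlation' bar in
# roughly a fifth of datasets; an apparent pattern is often just chance.
```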

The human brain is a pattern-matching machine. Our first calendars came about from noticing that the seasons of a year come in patterns, and that certain stars can be seen during the hot season while others appear during the colder one. But people also thought they detected patterns in how events on earth followed certain stars appearing in the sky. One pattern held true: there is a correlation between which stars you can see and the season in your local area. The other, that the stars foretell events on earth, was false. Yet both patterns were studied and believed for thousands of years.

I hope I don’t stay attached to bad patterns for quite so long as that, but it’s hard to avoid. When you’ve got all the time in the world and not enough data, you get attached to the small patterns you think you’ve detected. And that attachment can hold even when the pattern turns out not to be real.