Ginkgo Bioworks: the economics of genetic engineering

Yesterday I discussed the science of genetic engineering, or at least its application to synthetic biology. Today I'd like to discuss how Ginkgo Bioworks is trying to monetize genetic engineering and capture the full value of its total addressable market.

To recap, genetic engineering is used for the production of biological molecules. If you have a drug for curing a disease, you'll need to produce mass quantities of it both to get through clinical trials and to sell to patients down the road. In modern cases, that drug will usually be produced in specially made genetically modified organisms, and then purified out of those organisms using a specific purification pathway. The end result is a pure drug, which is something the FDA demands and patients really want, since it cuts down on variability and potential side effects. This is the business Ginkgo Bioworks wants to get into: they want to be the ones producing those genetically modified organisms and validating those purification pathways. The organism and the pathway then become akin to intellectual property (IP) for the production of that drug. So say you're a company that owns a drug but has no ability to produce it at scale. Ginkgo will develop a production pathway and charge you the lowest possible price for doing so (making zero profit themselves). They do this because their contract specifies a revenue-sharing agreement whereby they get a cut should you manage to sell your drug in the future. This system is what gave Ginkgo such a ridiculous valuation based on TAM: if they can be the lowest-cost provider of drug production pathways, then every single company will want to contract with them, and they'll get revenue from every single drug on the market.

The problem is… that's not how it seems to have worked. First, Ginkgo wants to drive down the cost of producing these production pathways, but they're competing with companies that already work at economies of scale far greater than theirs. Let's start with just the first step of producing a production pathway: you have to get DNA for your drug and insert it into an organism. There are already many companies that will do this job for you if you're willing to pay up. Those companies include heavy hitters like GenScript (market cap of more than 3x what Ginkgo was at its peak) and Thermo Fisher (the 600 lb gorilla of this sector). These companies have driven down the cost of DNA, genetically modified organisms, and other tools to the point that Ginkgo doesn't look like much competition. Now, to my knowledge Thermo Fisher and GenScript won't make an entire pathway for you, but they will sell you a large part of the pathway for dirt cheap and then sell you the tools to finish it up yourself. That still means that for many of the steps, Ginkgo is competing with companies far larger than it, which are better able to deploy economies of scale. So Ginkgo might not even be offering you the best price possible when you compare with using some of the big boys instead. And remember, they need to be offering the best price possible, since they don't even make money by selling you this process; instead they need to entice you to sign the deal where they get a portion of your future revenue.

Then there's the fact that their business model relies on successes but self-selects for failures. It's important to start by remembering that most drugs which go through clinical trials will fail to make any money whatsoever. Ginkgo's business model is to produce a drug production pathway, sell it for zero profit, and bank on the revenue-sharing portion to make money, while understanding that most of these revenue-sharing agreements won't produce any revenue at all. But then what type of drug discovery company will even take such an agreement? A large drug company (Johnson & Johnson, Pfizer) already has the in-house tools to produce a drug production pathway; it has little reason to enter a revenue-sharing agreement, especially when Ginkgo's cost might compare unfavorably with just buying stuff from Thermo Fisher and doing the rest themselves. A small drug company is exactly the type Ginkgo needs to go after, but what type of small drug company? A small company that has lots of money and a product it is very certain is a hit will also be dissuaded by the revenue-sharing agreement: why fork over so much future revenue unnecessarily? On the other hand, a small drug company with less money, or a drug company that has a product it isn't sure of, would willingly bet on Ginkgo, but those are also the customers least likely to succeed at bringing their drug through clinical trials. If they have no money they could easily go bust before they make it, and if they're unsure of their drug then it probably means their scientists know it's a long shot. So Ginkgo's business model forces it to self-select for the customers least likely to make it much money through the revenue-sharing agreement.
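To make the adverse-selection point concrete, here's a toy expected-value calculation. Every number in it (royalty rate, peak revenue, success probabilities) is invented for illustration; Ginkgo's actual deal terms aren't public, and `deal_ev` is just my hypothetical sketch of the structure.

```python
# Toy expected-value model of one revenue-sharing deal. Every number here
# is invented for illustration; Ginkgo's real deal terms aren't public.

def deal_ev(royalty_rate: float, peak_annual_revenue: float, p_success: float) -> float:
    """Expected royalty income from one at-cost foundry deal.
    The foundry work is billed at cost, so it contributes no profit;
    all of the expected value has to come from the royalty term."""
    return p_success * royalty_rate * peak_annual_revenue

# An industry-average candidate vs. the cash-poor, long-shot customer the
# model self-selects for (both probabilities are hypothetical):
print(deal_ev(royalty_rate=0.05, peak_annual_revenue=500e6, p_success=0.10))  # 2500000.0
print(deal_ev(royalty_rate=0.05, peak_annual_revenue=500e6, p_success=0.03))  # 750000.0
```

Same foundry work, same cost to Ginkgo either way, but the selection effect cuts the expected payoff per deal by more than half before Ginkgo ever sees a dime.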

And that's important because the revenue sharing is supposed to be how the company grows larger, and until it grows larger it can never compete with the big boys on economies of scale, and therefore never address its total addressable market, because there will always be big companies for whom it's cheaper not to work with Ginkgo at all. This is a chicken-and-egg problem: they need to grow large to reach economies of scale and drive down the cost of their services, and they need to drive down the cost of their services to make those revenue-sharing agreements more enticing to sign, but as long as their prices remain higher they're stuck in a holding pattern. It's important to note at this point that Ginkgo had a loss from operations of about $650 million in Q3 2022 alone, against expected total 2022 revenue of around $500 million. They lost more in a single quarter than their expected year-long revenue, and that trend shows no sign of changing. Their cash on hand at the end of all this was $1.3 billion, and with plenty of stock to sell and loans to take out, they can continue this business for a while yet. I'll talk more in a future post about their burn rate and their losses, but it's important to note that this is where the company is: growing, but not necessarily at a rate that will let it achieve lift-off. It needs to find some way to make its revenue-sharing business model work, either by driving down its costs so much that other companies have to use its services, or by somehow enticing more winners instead of losers to use them. The only part of the firm that is close to break-even is the "biosecurity" arm, a COVID-monitoring and diagnostic service that will likely fade as the salience of COVID fades. Perhaps they can pivot to new avenues of biosecurity (flu monitoring?), but either way this work is much lower margin than the synthetic biology revolution that was supposed to propel their TAM, and stock price, into the stratosphere.
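For a rough sense of scale, here's the crudest possible runway arithmetic using the figures above. Caveat: the accounting loss includes big non-cash charges (stock compensation, impairments), so actual cash burn is lower and the true runway longer; treat this as a worst-case bound, not a forecast.

```python
# Crude runway estimate from the figures quoted above. The Q3 2022 loss
# from operations includes non-cash charges, so real cash burn is lower
# and the true runway is longer than this worst-case bound.
cash_on_hand = 1.3e9    # dollars, cash on hand
quarterly_loss = 650e6  # dollars, Q3 2022 loss from operations

print(f"{cash_on_hand / quarterly_loss:.1f} quarters")  # 2.0 quarters, worst case
```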

Ginkgo Bioworks: the science of genetic engineering

Last time I discussed Ginkgo Bioworks ($DNA) and how their absurd valuation over this past year was in part due to valuing them by their TAM (Total Addressable Market) on the assumption that they'd grow rapidly to meet it. Today I'd like to lay out exactly what Ginkgo does so that tomorrow I can discuss why I think they've been failing. Full disclosure: I'll be writing this post assuming my audience is non-scientists, so if anything I write is obvious to you because you studied it yourself, feel free to skip ahead.

Ginkgo is in the industry of synthetic biology, which is an application of genetic engineering. In synthetic biology, you manufacture biological things (proteins, cells, other molecules) to perform a specific job. The classic example is producing insulin for sale to diabetics. Prior to synthetic biology, insulin could only be procured from the living organisms that produced it, usually cows and pigs, which make small amounts of it in their daily lives. Since the total amount of insulin you got from butchering a cow was tiny, the cost was astronomical. But insulin gets produced because it is coded for by a piece of DNA called a gene, and by cloning the gene for human insulin into a bacterial cell you can grow huge colonies of those cells and extract the insulin from them instead. This revolutionized the production of insulin and led to a steady reduction in prices, to the point that today insulin can be purchased for just $25 at Walmart. But how exactly does the cloning and gene editing work? And how does Ginkgo hope to make money off of it?

To start with, we should understand the central dogma of biology: DNA codes for RNA, RNA codes for proteins. If you give a cell a piece of DNA, it can make RNA based on that DNA, and then make proteins based on that RNA. Since insulin is a small protein, producing it is relatively straightforward: insert a piece of DNA into the cell which codes for the RNA which codes for insulin, and in a relatively short amount of time the cell will use the DNA you gave it to produce the insulin you wanted. But of course first you have to get the DNA for insulin and put it into your cell, and these are no small problems!
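If you like to think in code, here's a minimal sketch of the central dogma. The gene is made up for the example, and the codon table is truncated to just the handful of codons that toy gene uses (real genes run hundreds of codons):

```python
# Minimal central-dogma sketch: transcribe a DNA coding sequence to mRNA,
# then translate mRNA codons into amino acids. The codon table is truncated
# to just the codons this toy gene uses.
CODON_TABLE = {
    "AUG": "Met", "UUC": "Phe", "GUU": "Val", "AAC": "Asn", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Read the mRNA three letters (one codon) at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGTTCGTTAACTAA"            # a made-up 5-codon "gene"
print(translate(transcribe(gene)))  # ['Met', 'Phe', 'Val', 'Asn']
```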

So how do you put a piece of DNA into a cell and force the cell to make proteins off of it? To start with, the DNA has to be readable and usable by the cell you're going to put it into; for example, it has to have the right kind of introns and exons or the cell won't use it right. Most DNA contains both exons and introns: the exons are the parts that will actually code for a protein, while the introns get removed through splicing and have their own special properties we'll talk about some other day. The important part here is that bacteria do not perform splicing, and non-human eukaryotes (yeast cells, insect cells, non-human mammal cells) splice differently than humans do. If you want your DNA to actually code for insulin, you need to use only the exons and none of the introns, and you also need to ensure the cell doesn't try to splice away your exons anyway. Let's also note that DNA won't even be used by a cell if it doesn't come with a promoter, a string of DNA at the beginning of a gene that tells the cell "please turn me into RNA." Humans have different promoters in our genes than other organisms do, so you'll need to add a promoter that works for the cell type you're using (bacterial, insect, yeast, mammalian). Then there's the fact that DNA (really the RNA, but let's skip a step here) is read in 3-letter codes called codons. Each codon matches a certain tRNA, and each tRNA comes with an amino acid attached. But not all tRNAs are created equal: some organisms have more or less of a certain tRNA, and so will make protein more or less efficiently depending on which codons you give them. Codon optimization is another tool for making sure your piece of DNA gets efficiently transcribed and translated into protein, by using the right codons in the right cells. All these factors (exons, promoters, codons) need to be altered so that your DNA can be used by the cell you are going to put it in.
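As a toy illustration of that last step: codon optimization in its simplest form just swaps each amino acid for whatever codon the host organism reads most efficiently. The "preferred codon" table below is invented for the example, not real codon-usage data for any organism:

```python
# Simplest-possible codon optimization: re-encode a protein using each
# amino acid's most-preferred codon in the target host. This frequency
# table is a tiny invented stand-in, not real codon-usage data.
PREFERRED_CODON = {
    "Met": "ATG", "Phe": "TTT", "Val": "GTG", "Asn": "AAC",
}

def codon_optimize(protein: list[str]) -> str:
    """Swap every residue for the (hypothetical) host's favorite codon."""
    return "".join(PREFERRED_CODON[aa] for aa in protein)

print(codon_optimize(["Met", "Phe", "Val", "Asn"]))  # ATGTTTGTGAAC
```

Real codon optimizers weigh many codons per amino acid and avoid problem motifs, but the core idea is this same table lookup.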

OK, so you've altered your DNA a whole bunch so that it codes for insulin once it's inside a cell. Now you just have to get it there. With bacteria the system is moderately simple: many bacteria have evolved mechanisms such that they will willingly pick up just about any piece of DNA they find when subjected to major stresses (heat shock, electric shock). So you zap some bacteria in a tube along with your DNA of interest and some of them will pick it up. Then, if your DNA carries a selection marker for antibiotic resistance and you grow the bacteria on antibiotic plates, the ones that survive are the ones that picked up your DNA, and they can now be grown up to start making protein. But that's just bacteria; a lot of drugs would be better produced in higher-order eukaryotes, because those organisms are more biochemically similar to us. Eukaryotes, however, have a nucleus that is a barrier to foreign DNA, so you have to be extra clever (sometimes using retroviruses or CRISPR) to get your DNA into a eukaryote and get it to make your insulin. And that's just for insulin, something we figured out decades ago! There are always new proteins, or modified versions of old proteins, being tested as new drugs, and every single one of them goes through this process in order to be produced using synthetic biology.
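To see why the antibiotic trick matters, here's a toy simulation of transformation followed by selection. The uptake rate is invented for the example; real transformation efficiencies vary enormously by method, strain, and DNA quality:

```python
# Toy simulation of bacterial transformation plus antibiotic selection.
# The uptake rate is invented; real efficiencies vary wildly.
import random

random.seed(0)                # reproducible toy run
N_CELLS = 1_000_000           # bacteria in the tube
UPTAKE_RATE = 1e-4            # hypothetical fraction that picks up the plasmid

# Which cells took up the plasmid (and with it, the antibiotic-resistance marker)?
transformed = [random.random() < UPTAKE_RATE for _ in range(N_CELLS)]

# Plate everything on antibiotic: only transformed cells survive to form colonies.
colonies = sum(transformed)
print(f"{colonies} colonies from {N_CELLS:,} cells plated")  # roughly 100
```

The point of the selection marker is exactly this needle-in-a-haystack math: only a tiny fraction of cells take up the DNA, and the antibiotic kills off everything else so you don't have to find them by hand.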

Changing the sequence of the DNA you're using, removing the introns so it only has the exons, changing the promoter, optimizing the codons, getting the DNA into cells: all of these are time-consuming to do and to validate. I won't get into the specifics of how they're done, but some junior researchers may work on just this in the lab for the entirety of their early research career (before they get their own project). This is not a simple process, and it is definitely an area where Ginkgo thinks they can make a splash. The problem is that they won't be the first and only player; there are already a number of companies out there who will do this job for you. Academic labs generally don't use those services because they're too expensive, and private-sector labs already have competitors to choose from besides Ginkgo Bioworks. There is definitely a market here, but it's a competitive one.

But remember, this is still just about getting the cell to make a protein! We still need to purify the protein out of those cells in order to sell it and use it! And this too is no small problem: the USA and other countries all have regulations requiring that drugs sold to consumers meet certain standards of quality and purity. Each batch must be identical so the drug works the same way each time, and the drug must be at the highest possible purity so no contaminants can mask or alter its effect. So purifying the protein out of your cells is another problem that Ginkgo and other companies need to solve when they are doing synthetic biology. I'll talk about purification some other time, but given how much I wrote above about just getting the right DNA into cells so they can produce insulin, I hope you can appreciate that this is a long and involved process. This is the work that Ginkgo Bioworks wants to do: they want to do all of it in exchange for money and take over the synthetic biology industry. But their business model is strange indeed. All this work (getting the right DNA, getting it into cells, producing protein, purifying protein) is done in what they call the foundry, and they want to run that part of the business at cost, meaning it won't turn a profit and will sell its services for the lowest possible amount to remain break-even. So how does Ginkgo expect to make a profit? Tune in next time, where I explain the wonderful world of IP and revenue sharing, and why that is the part of their business that I think Ginkgo has failed at.

Tau vs A-beta in Alzheimer’s disease

There remains, in the medical community, a disagreement over what exactly causes Alzheimer’s disease. If you even look at what kinds of drugs people are trying to make to treat the disease, there is no consensus on what mechanism the drugs should target.

The above picture was taken from "History and progress of hypotheses and clinical trials for Alzheimer's disease" by Liu et al. (2019). It shows all the different clinical trials being run on Alzheimer's disease, and all the different kinds of drugs being used in those trials. What's interesting is that those drugs don't all target the same or even similar things. Some drugs target Amyloid Beta, some target Tau, some target things you may never have heard of, like neurotransmitters or the mitochondrial cascade. For most diseases we know the cause: every drug in a clinical trial to treat COVID is in some way targeting the coronavirus itself, because that's what causes COVID. But there is no consensus on what causes Alzheimer's disease, so the drugs in clinical trials all target completely different things.

With no consensus, it's hard to see a path forward for either research or drug discovery. There are a number of antibody-based drugs on the market today which are supposed to clear out either Amyloid Beta plaques or Tau tangles, but there's no consensus in the field on whether those are even the causative agents of Alzheimer's disease. So if you're a drug company investigating this, you're making a bet not only that your product will do what it's supposed to (clear out tangles and plaques), but ALSO that you know the One True Cause of Alzheimer's disease and that 70% of researchers are just wrong when they think the causative agent might be something else.

I'm not saying this is entirely a bad thing; I think it's good that researchers and drug discovery companies are willing to gamble big to cure Alzheimer's disease, despite knowing that the science isn't even settled on what causes it. But I am saying that I'm not exactly surprised that we've been studying this for decades with no effective cures in sight.

Difficult post: what even is imposter syndrome?

There's an old joke about a guy going to a fancy party. The party was attended by only the richest and most famous Americans, from Hollywood stars to CEOs to national politicians, so the guy wasn't sure he really belonged. He voiced his concern to another guy he met at the party, saying "I'm not sure what I've done to be invited to this. I mean, unlike most folks here I didn't do anything myself, I was only doing what they told me to do." The other guy says to him, "Well sure, Neil, but most of us never walked on the Moon."

It's an old joke, but it gets to the heart of what's been called "imposter syndrome": people thinking that they aren't as special or as capable or as important as they really are, people who despite their long list of achievements feel like "imposters" when others congratulate them or talk glowingly about them. It's been said that this is especially common in Academia, but I don't know if I buy that, since I've only been told that factoid by Academics. Every industry thinks it's special and unique, and I don't know if a poll or study would find imposter syndrome any more common in Academia than in Journalism, Tech, or any other white-collar field.

But what if you really are an imposter? What if you really aren't as good as people think you are, your work isn't as deserving of the praise it gets, and you're just hanging on with the certainty that any deep look at your work would show you for what you really are? I know for a fact that Academics aren't usually capable of looking closely at each other's work; the sheer number of retracted papers each year speaks to the fact that even the journals and committees that are paid to keep out imposters don't work all the time. And beyond retractions, there's the truism that you never know someone else's work as well as you know your own. So when I feel like my work just isn't good enough, and feel helpless not knowing how to improve it, platitudes about "well, everyone feels imposter syndrome" aren't necessarily the solution.

When something fails in science, you can either overturn the hypothesis or conclude that you did the experiment wrong. When something fails again and again in science, you either have strong evidence that the hypothesis is wrong or strong evidence that you're really bad at doing the experiment. If everyone but you is able to do the experiment and get the results, then the hypothesis is probably correct. That's what it feels like sometimes in the lab: I have no reason to believe the experiment itself is wrong, because I see that others have been able to do it flawlessly. And so I can only conclude that I'm really bad at doing the experiment, meaning maybe I'm not cut out for this "science" thing.

I just don't know what I could be doing wrong. If I had some idea, then I could design an experiment to determine whether I'm doing it wrong, or my sample is wrong, or my hypothesis is wrong. But I have no reason to doubt the hypothesis, little reason to doubt the sample, and all the reason in the world to doubt my own abilities. I know I have my flaws: I'm lacking in manual dexterity and attention span, and I have poor motivation when things don't work, which sometimes leads me to do more bad work because the work I did just prior was bad. So I'm not sure if I'm the problem or if something else is, and I'm not sure what that says about me in science.

Pointless prognosticating, what is the “Next Big Thing”

If you follow the Tech industry, you know that everyone’s always searching for the Next Big Thing, and if you remember my series on The American Challenge, you might remember that I talked about how that book badly missed on some of its predictions of what The Next Big Thing would actually be. This got me thinking, what do I think the Next Big Thing is? What do I think will be the next trillion-dollar industry, the type of thing countries will want to focus on and people will want to invest in, things like semiconductors and computers in the 80s, mass-built automobiles in the 1910s, or trains in the 1800s. The kind of thing that will change the way we do everything, and if you have a chance to get in at the ground floor you’ll be kicking yourself in 20 years if you don’t take it.

To start with, I’ll talk about others’ predictions.

I've heard some people talk about Cloud Computing as the Next Big Thing, but it's hard to tell if it's truly Next or more a continuation of the Current Big Thing. Like, would it make sense to separate the internet revolution from the computer revolution? Both happened concurrently; the first couldn't have happened without the second, and the second was truly skyrocketed by the first. So how does Cloud Computing fit into all this? It's already a trillion-dollar industry with the largest tech companies in the world all throwing money into it, and even if I can't personally explain how it works, I can definitely see that others are talking about it as a revolution. But again, it feels hard to tease it apart from computers and the internet as a whole, and it doesn't seem like we're on the ground floor anymore. Microsoft, Google, Amazon and Meta have all put so much money into their cloud infrastructure that I don't see any small fries really taking pieces off of them. I'd say Cloud Computing is the current Big Thing.

But that's mostly semantics. I've also heard people say 3D printing is the Next Big Thing. The University of Nottingham, for instance, has a department that wants to be able to 3D print a smartphone, circuitry and all, using just metal and plastics as inputs. The ability to mass-produce using 3D printing has long been a holy grail of the field, and the ability to custom-manufacture pretty much anything by just fiddling with a computer model would certainly be a game-changer. But 3D printing has so many technological limitations that I still wonder if it will truly take off. Most glaringly, 3D printed items tend not to work well straight out of the printer, and to fall apart quickly even when they do, which is a big barrier to mass production. Ultimately I just wonder if 3D printing will turn out more like supersonic travel in the 70s: something that was seen as the mass-market future but was in fact relegated to specialized roles, while more boring "old fashioned" things kept their market share.

The Internet of Things is something I've never really gotten the hype for. There are certain applications where having a device always connected to the wifi could add value, but most of the hype seems to come from marketers trying to sell a subscription service for a device that used to be a one-time purchase, or from unrealistic promises that don't fix the Oracle Problem (i.e., suppose you give your machine a wifi connection so it can always tell you when certain conditions are met; will you trust that your machine is giving you good data, or will you have to double-check each time anyway, negating the benefits of having wifi in your machine?). Frankly, I don't want anything in my house to be connected to the wifi unless I expect or need to play YouTube on it.

Another Next Big Thing could be the DNA/protein revolution. The Human Genome Project was a massive success, as was the development of modern mass spectrometry, and a huge amount of modern biochemistry couldn't exist without these techniques. Our ability to read the sequence of any protein or piece of DNA we want, and to alter them in any way we please, has definitely given us a leg up in fighting genetic diseases and engineering proteins for a number of different purposes. In theory, biochemistry can let us create proteins to do just about any job that ordinary chemistry does, only faster and better. This ranges from highly speculative roles like uranium enrichment and carbon capture to humdrum everyday roles like plastic production. The ability to use genetics and proteomics both to cure our diseases and for industrial purposes is certainly enticing, but I'm still not sure the technology is there or will be there soon. Without getting too jargon-y: proteins can only do their job if they have the correct shape, and our ability to create any shape we want is not fully developed. When you change a single piece of a protein, it can have enormous effects on the protein's structure and function, and it's often difficult to even test those effects. Some people have told me that "genes and proteins are the next coding language," but until it's as easy to test a protein as it is to test a program, I'm not sure that's true.

Finally, outer space. Will the next trillion-dollar company be a space company and not a tech company? I'd love for that to be true, but I'm not sure. The best argument I've heard for the economic viability of space colonies was actually a really dumb and technical one. If you assume that there are already people living on both the Moon and the Earth, then in theory it is cheaper to ship anything from the Moon to the Earth than from the Earth to the Moon (due to the Moon's weaker gravity and lack of atmospheric drag). If we then assume that economies of scale can make producing things on the Moon cost almost the same as producing them on Earth, then any company that moves its production from the Earth to the Moon has a comparative advantage that cannot be taken away, and it can service both the population on the Moon and the population on Earth more cheaply. Thus a Moon colony should be (economically) self-sustaining once it reaches a certain size. There are of course a hell of a lot of assumptions baked into this plan, and some of them are bad assumptions, but it is genuinely the only compelling argument I've heard for colonizing space, other than the Tsiolkovsky argument, which isn't much more of an argument than "but I WANT it to happen."
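To put rough numbers on that shipping asymmetry, here's the Tsiolkovsky rocket equation, Δv = vₑ · ln(m₀/m_f). The Δv figures below are commonly quoted ballparks (exact values vary by source and trajectory), and this of course ignores everything else that makes lunar industry hard:

```python
# The "dumb and technical" argument in numbers: Tsiolkovsky's rocket
# equation gives the fueled-to-dry mass ratio m0/mf needed for a delta-v.
# Delta-v figures are rough ballparks; exhaust velocity is typical for
# chemical rockets.
import math

V_EXHAUST = 3.0         # km/s, typical chemical-rocket exhaust velocity
DV_EARTH_TO_ORBIT = 9.4 # km/s, Earth surface to low Earth orbit (incl. drag/gravity losses)
DV_MOON_TO_ORBIT = 1.9  # km/s, lunar surface to low lunar orbit (no atmosphere)

def mass_ratio(delta_v: float, v_e: float = V_EXHAUST) -> float:
    """Fueled-to-dry mass ratio m0/mf required for a given delta-v."""
    return math.exp(delta_v / v_e)

print(f"Earth launch: m0/mf = {mass_ratio(DV_EARTH_TO_ORBIT):.1f}")  # ~23x
print(f"Moon launch:  m0/mf = {mass_ratio(DV_MOON_TO_ORBIT):.1f}")   # ~1.9x
```

Because the propellant requirement is exponential in Δv, launching a kilogram off the Moon takes a tiny rocket compared to launching it off the Earth, which is the whole comparative-advantage argument in one line of math.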

So what is the Next Big Thing? Honestly, I don't know, and I don't think anyone does at this point. That was one thing I kept thinking about while reading The American Challenge. JJSS and people like him seemed to think that the best way to run a country was to foresee the Next Big Thing and then invest in it. But JJSS's predictions went maybe 1 for 3 or 1 for 4, depending on how you want to score him, and frankly, redirecting national budgets into government projects, with all the bureaucratic inertia and election-cycle thinking that comes with them, just seems like a terrible idea. Better to let the free market create a virtuous cycle where the good ideas win and the bad ideas lose than to create a government system that can be handcuffed by political or interest-group concerns into throwing good money after bad and ignoring successes in favor of prestigious failures. I don't know what the Next Big Thing is, but what do you think? Feel free to comment below.

The End of Growth part 5: How much more improvement is possible?

As I continue The End of Growth by Richard Heinberg, I'm struck most of all by his lack of creativity. When thinking about the future, most of us can conjure up some ideas of how the world could become a modestly better place to live. Cars will become electric so no more filling up with gas; telework will become more common so we can all work from home; over 400 clinical trials are currently studying Alzheimer's disease, and maybe one of them will cure it. These are all things that could change our society for the better and would contribute to economic growth. More efficient cars mean transportation is cheaper, so people can partake in more of it; in a very real way the supply of transportation is increased, leading to an increase in GDP and a decrease in prices. And this is true of pretty much all technological advancement: technology is supposed to be deflationary, growing our economy while reducing prices. Yet Richard Heinberg doesn't really see how technology could ever improve our lives from his lofty vantage point of 2011:

We may be able to further improve the functionality of the Microsoft Office software package, the speed of transactions on the computer, computer storage capacity, or the number of sites available on the internet. Yet on many of these development trajectories we will face a point when the value of yet another improvement will be lower than its cost to the consumer

Yeah, let me stop you right there, Rick. If the cost is greater than the utility, then the product is unprofitable and it fails. Like the Nimslo camera or the Quibi streaming platform, the world of tech is littered with big fails where product designers made something consumers wouldn't buy. But here's the secret, Rick: if people do buy it, then it is adding value to their lives greater than the price they pay for it. Richard Heinberg wants to paint a picture where our ever-improving technology isn't actually bringing any net good to consumers, yet by definition it IS, otherwise the consumers just wouldn't buy it. Consumers aren't brainwashed automatons (as much as marketers wish they were); you can't force them to buy something they don't want. And consumers over the years have proven very willing to turn up their noses at goods and services which bring them less value than they cost.

He continues:

At this point, further product “improvements” will be driven almost solely by aesthetic considerations […] for many consumer products this stage was reached decades ago.

Damn, Rick, you're right: the only reason people buy iPhones instead of old rotary phones is the aesthetics, not that you can access the whole world at the touch of a screen. And TVs, who needs a big plasma TV? Hell, life was better in black and white anyway! And don't get me started on ovens, pots, and dishware; sure, these modern fancy kitchen appliances are less likely to burn your house down or leach carcinogens into your food, but is that really worth the cost?

If it sounds like I'm mocking Richard Heinberg, it's because I am. I diagnose him with a terminal lack of creativity, and an inability to see the improvements in life happening all around him. Every year consumer products, not just our electronics but our cookware, our houseware, our vehicles, continue to improve and become safer, more efficient, and more useful. But Rick can't understand why Microsoft Office became a subscription service, and so questions whether technological improvement is even possible. Here's a thought, Rick: maybe you aren't the target market for improving technology? Maybe you'd be happier with a typewriter and a sundial, and thus don't represent the average consumer? I can tell you that as a scientist, modern Microsoft Office is WAY better for me than what we had a decade ago. Since all my programs and files are in the cloud, I can sit down at any computer anywhere in the world and do my work without lugging a PC everywhere I go. I can also collaborate easily with people anywhere on Earth, because our shared files live in the cloud and we can work on them together, instead of editing on our local machines and sending versions back and forth through email.

My job has become immeasurably easier since Richard Heinberg wrote his book in 2011; the increased utility from technological advances in computer software, computer hardware, and internet communication has made me more productive and a hell of a lot happier. Technology has worked great for me, and I'm glad to pay for the privilege. Rick can stick to his sundials if he really thinks technology peaked in the past.

The best way to learn something is to just use it

Short post today, but as I've tried to teach things to people, I've found the best way for them to learn is to just use the knowledge. We work with amino acids in our lab (the 20 building blocks of all proteins), and many new lab members have come up to me asking how I know so much about amino acids and how they can learn. What class did I take, what class should they take, is there a book I studied? The honest answer is that I learned by doing. When I first studied the amino acids, I was told to learn their shapes by drawing a protein which would spell out my name, but since half the letters in my name don't have a corresponding amino acid, I dropped that idea pretty quickly. For the rest of the semester I knew just enough to do well on the test, but couldn't list off the amino acids with any fluency. Once I began working in a biochemistry lab, though, it all fell into place. Suddenly, having to remember every day that Lysine and Arginine are positively charged helped me remember their structures, and eventually I could recall the side chains of most amino acids with little difficulty. This never would have happened if I had only studied them in a class; I had to learn by using.
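For the curious, here's the sort of fact that finally stuck once I used it daily, written as a minimal lookup. It's deliberately simplified: side-chain charges at physiological pH, with histidine as the usual borderline case.

```python
# Simplified side-chain charges at physiological pH. Histidine is the
# classic borderline case (pKa near 6), so it gets a hedged label here.
CHARGE = {
    "Lys": "+", "Arg": "+", "His": "+/0",  # basic residues
    "Asp": "-", "Glu": "-",                # acidic residues
}

def charge_of(residue: str) -> str:
    """Look up a residue's side-chain charge; everything else is neutral."""
    return CHARGE.get(residue, "0")

print(charge_of("Lys"), charge_of("Asp"), charge_of("Gly"))  # + - 0
```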

Liquid-liquid phase separation

I don’t have a deep topic to write about today because I’m busy at work, but I thought I’d write on a subject that I’ve been studying myself, partly in order to make sure I understand it.

Phase separation is a common and easily understood way that matter segregates: when water boils, the gaseous vapor and the liquid water separate from each other due to differences in their density. A liquid-liquid phase separation is the same thing, only it's two liquids separating rather than two different states of matter. One example is oil and water: as everyone knows, oil and water will separate from each other when placed in a glass. This is in part due to each liquid's preference for its own kind. Water is hydrophilic and so interacts strongly with other water molecules and poorly with hydrophobic things, while oil is hydrophobic and interacts strongly with itself and poorly with hydrophilic things. What's less appreciated is that liquid-liquid phase separation is also important in cell biology.

If you remember what a cell looks like from a high school textbook, you can probably recall that it has things like a nucleus, mitochondria (the powerhouse of the cell!), and some weirder things like an endoplasmic reticulum. These are all examples of membrane-bound organelles. Just as our organs perform special duties within the body, so too do organelles perform special duties in the cell. The membranes that surround these organelles spatially separate their contents from the rest of the cell, allowing them to perform their functions more efficiently. But there are also specialized parts of a cell that have no surrounding membrane. The nucleolus in particular is a specialized area of the nucleus with its own special functions, but it is not separated from the rest of the nucleus by any membrane whatsoever. So how does the nucleolus prevent all its contents from diffusing into the nucleus and ruining whatever process it is performing? It does so through a liquid-liquid phase separation.

It's a bit too technical to explain here, but just as oil and water have chemical properties that separate them in a glass, so too does the nucleolus have chemical properties that separate it from the wider nucleus. And it turns out that a large number of functional areas of the cell segregate themselves this way, bounded not by a membrane but by the physical properties of the medium in which they reside. These phase separations allow many different areas of the cell to carry out their own specialized functions without the cell constantly having to build and remove membranes around them, and thus allow for more efficient activity inside the cell.
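If you want a feel for the underlying thermodynamics, here's a minimal sketch using the regular-solution (Flory-Huggins-style) free energy of mixing. This is generic textbook physical chemistry for demixing, not a model of the nucleolus specifically: when unlike molecules dislike touching each other enough (a large interaction parameter χ), the free-energy curve develops two minima and the mixture splits into two coexisting phases.

```python
# Minimal demixing sketch: the regular-solution free energy of mixing,
#   dG_mix/RT = x*ln(x) + (1-x)*ln(1-x) + chi*x*(1-x),
# where x is the fraction of one component and chi measures how much the
# two components dislike contacting each other. For chi > 2 the curve has
# two minima, i.e. the mixture is happier split into two phases.
# Generic textbook thermodynamics, not a model of the nucleolus.
import math

def g_mix(x: float, chi: float) -> float:
    """Free energy of mixing per molecule, in units of RT."""
    return x * math.log(x) + (1 - x) * math.log(1 - x) + chi * x * (1 - x)

for chi in (1.0, 3.0):
    xs = [i / 100 for i in range(1, 100)]  # avoid x = 0 and x = 1
    g = [g_mix(x, chi) for x in xs]
    # count interior local minima as a crude "does it demix?" test
    minima = sum(1 for i in range(1, len(g) - 1) if g[i] < g[i - 1] and g[i] < g[i + 1])
    print(f"chi = {chi}: {minima} local minimum(s)")
    # chi = 1.0 -> 1 minimum (mixes); chi = 3.0 -> 2 minima (demixes)
```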

So that's an overview of liquid-liquid phase separation. I'm still learning it myself, so I hope I got everything correct, but if I miswrote anything, cell biologists out there should feel free to correct me.

In science, be willing to say something’s wrong

This is a short one today, it’s been a busy week. I just wanted to share an anecdote from my work:

We've been operating under a certain hypothesis for as long as I've worked here. We think that if we do a certain experiment a certain way, we'll get certain results. We haven't managed to get those results yet, but we are tweaking and revising the experiments in an attempt to do so. Yesterday I randomly ran into a professor who shared with me a paper he had just published, a paper which seemed to indicate that the results we were searching for may not be possible, or at least might not be possible using the experiment we were doing. Now why had we believed our experiment would work? Well, we read a different paper that seemed to indicate it would.

So now I have a conundrum: I have an old paper that says what we're doing will definitely work, and a new paper saying maybe it won't. What do I do? I start by re-reading both papers to make sure I'm not misunderstanding them, and I come upon something I never realized: the old paper may not have proven what it thought it proved. Maybe the results from the old paper are actually closer to the results from the new paper, but were just interpreted wrong. If that's the case, then the new paper is correct and our experiment won't work. We read the old paper and believed its interpretation, but we didn't put enough effort into validating that its interpretation was correct based on its data; we assumed the paper had done that well enough. With the benefit of the new paper, we can see that maybe its interpretation was wrong.

This is a very heavy conclusion: the paper we have been basing our research on might have reached a wrong conclusion. It's a harsh accusation, but in science it's sometimes necessary to speak out and make these accusations. You can't keep going down the wrong path, or you'll never get anywhere.

You shouldn’t go too far down a scientific rabbit hole

Sometimes when you get scientific data that doesn’t make sense, the best use of your time is to say “well that’s weird,” and just redo the experiment. I’ve been in many labs where strange data, be it unknown proteins in a mass spectrometry sample or unknown shapes under an electron microscope, have gotten people’s minds aflutter as they try to figure out what it all means. Is it contamination, is it scientifically interesting, is it something that should be expected but we just don’t know about it? Humans are innately curious, scientists most of all, so when presented with a mystery it’s natural to want to solve it. And a scientific mystery should be easier to solve than most because not only are the experiments set up with numerous controls that can be checked against, but there is a wealth of data in the literature that might point to an answer. When you see something you don’t recognize, it’s easy to dive deep into the literature searching for some paper or clue which might tell you what you’re looking at.

But this isn’t always the best use of your time. Sometimes stuff is just weird for dumb reasons and if you spend weeks trying to figure out why then that’s weeks you’re not spending working on your actual projects. Chasing false leads can also blind you to the more important (if less mysterious) true leads that you should be following. All this to say, my lab is currently in the midst of a mystery that I don’t think is very important and I wish we could all just agree it’s mysterious and get back to more mundane but solvable problems.