Teaching isn’t easy

I’m going to come right out and say that I don’t know if I’m a good teacher.  I’m a passionate teacher; I like to see students learning and growing, but I don’t know if I’m a good one.  And honestly, in my position I don’t know if I can be a good one.

I’m a researcher at a major research institution.  One of the first rungs of the “science-as-a-career” ladder is usually for students to join a lab as unpaid volunteers, either for course credit or just for fun.  They get trained, learn to help out with some of the lab’s duties, and may even do some actual science.  Eventually they may move into a semi-paid position in which their work in the lab pays for some of their tuition, before finally moving to a paid position around their graduation.  From there, the scientific world is their oyster.  But this first rung, with untrained students, is to me the hardest.  Nobody really knows what work in a lab is like until they do it.  When I was a kid I had a picture in my mind that scientists spent all their time sitting and thinking, but it’s actually a job that requires moving, doing, skillful techniques, and a lot of hand-eye coordination.  These are all skills that a student needs to learn to progress as a researcher, and I don’t know if I’m doing well as a teacher.

When I work with these students, the biggest issue is imparting to them the necessary knowledge.  This starts with “what is the work we are doing and why.”  A student may have just learned about DNA replication, for instance, but that doesn’t necessarily give them the background to understand why DNA-intercalating molecules are known carcinogens.  And it definitely doesn’t give them knowledge of all the previous research in this field that brought us to that conclusion.  So you need to get them up to speed on some of the facts of the field: “here’s what these molecules are, this is why they’re important, this is how we are studying them.”

Furthermore, a lab is nothing like a classroom.  There is no textbook filled with the One Holy Truth that they can study; textbooks only get written about settled science that is decades old.  Instead there are papers and literature of all kinds that they need to read, scattered across many journals and each focusing on a different question.  These scattered papers don’t even make a coherent story unless you know how to read and understand them and draw your own conclusions.  So additionally we must teach them the skills necessary to gain knowledge on their own.

Finally, there’s teaching them the things we actually do in lab: the techniques, the protocols, and even the proper methods for safety and cleaning.  Teaching them all there is to know about working and being in a lab is probably the most important part of keeping them safe, but it’s also difficult to teach in any way but by rote.  You just tell them what to do and tell them to keep trying until they do it right; I don’t really have the skills necessary to teach physical activities any other way.

So with all that said, there’s a lot of teaching that needs to go on between senior and junior lab members, and personally I don’t know if I’m up to the task.  I try to help them learn on their own, but I seem to always just give them the answer when they can’t figure it out.  I try to help them do things in lab, but only by doing it myself and letting them watch how I do it.  I just don’t know if what I’m doing is the best or most helpful way to teach them, but teaching is such a small part of my job that I don’t have the headspace to “get good” at it either.  I hope I’m teaching them, and I hope they can leave this lab with good memories of their time here, but I just don’t know.

Biotech update: Vertex Pharmaceuticals and CTX001

I’ve said before that I don’t feel like I can reasonably invest in any biotech company since they all feel like a gamble, but for the gamblers out there I took a look at the science behind Vertex Pharmaceuticals (VRTX).

Vertex, together with its partner CRISPR Therapeutics, has a drug called CTX001 which has been in the news as it seeks FDA approval to treat sickle cell anemia and beta thalassemia.  Sickle cell anemia happens when the hemoglobin in your blood has a mutation that makes it fold into the wrong shape.  This makes red blood cells become sickle shaped instead of their usual donut shape, and these sickle-shaped red blood cells get caught in the tiny capillaries of your body.  This causes damage and a lack of energy, as blood isn’t able to efficiently transfer nutrients and waste into and out of your cells.  Sickle cell anemia reduces one’s life expectancy to around 40-60 years.  Beta thalassemia is another hemoglobin disease, this time caused by reduced production of hemoglobin itself.  Less hemoglobin means fewer nutrients and less waste can be transferred by the blood, meaning the body can’t work as efficiently.  Beta thalassemia in its major form has a life expectancy of around 20-30 years.

Despite the fact that both diseases are caused by mutations in hemoglobin, the mutations are very different from each other, so it surprised me that both were being treated by a single CRISPR drug.  How CRISPR works is that a protein uses a piece of guide RNA to very specifically target itself towards an area on a gene of interest.  The protein can then cut into that gene of interest, and if a template piece of DNA is supplied along with the protein, that DNA can be incorporated into the gene by the cell’s DNA repair machinery.  This process is somewhat random in nature: it’s hard to ensure that your template DNA gets incorporated, and even harder to ensure that it is incorporated in just the right orientation, just the right position, and just the right way so as not to cause problems down the line.  Since sickle cell and beta thalassemia are caused by mutations in very different places within the hemoglobin gene, a CRISPR drug targeted at the sickle cell mutation site should not be able to also hit the beta thalassemia mutation site.
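To make the targeting idea concrete, here’s a minimal Python sketch of how a guide sequence picks out one site and not another.  The sequences and the exact-match rule are invented for illustration; real Cas9 uses a ~20-nucleotide RNA guide, tolerates some mismatches, and requires an adjacent PAM sequence, none of which is modeled here.

```python
# Toy sketch of CRISPR targeting: a guide sequence directs where the cut goes.
# All sequences are made up; this ignores PAM sites, mismatches, and strand.

def find_cut_site(genome: str, guide: str) -> int:
    """Return the index where the guide matches the genome, or -1 if no match."""
    return genome.find(guide)

genome = "ATGCCGTACGGATTCAAGGT"
guide = "TACGGATT"  # matches exactly one spot, so the cut is targeted

print(find_cut_site(genome, guide))       # 6: the guide's unique match position
print(find_cut_site(genome, "AAAAAAAA"))  # -1: a guide for a different site
                                          # simply doesn't bind here
```

The point is just that a guide matched to the sickle cell mutation site won’t also land on a differently located mutation site.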

But the trick is that CTX001 isn’t targeting hemoglobin, it’s targeting fetal hemoglobin.  When a baby is in the womb, it needs to take oxygen from its mother’s blood stream to survive.  If a baby’s hemoglobin were the same as its mother’s, this process would be inefficient because both the baby’s and the mother’s hemoglobin would bind oxygen equally well, and not enough oxygen would flow from the mother’s blood into the baby’s.  It would be like a tug of war where both sides are of equal strength.  However, fetal hemoglobin binds oxygen more strongly than adult hemoglobin, and this ensures that a baby can take the oxygen it needs from its mother’s blood stream.  Fetal hemoglobin usually stops being produced around the time the baby is born, and afterwards the body switches over to purely adult hemoglobin by around six months after birth.  What CTX001 does is try to switch on the production of fetal hemoglobin in people suffering from sickle cell anemia and beta thalassemia.  If they can produce fetal hemoglobin, then it can compensate for the fact that their normal hemoglobin isn’t working properly, which should reduce their symptoms and prolong their lives.

CTX001 does this by altering the promotion of the fetal hemoglobin gene.  The promoter region of a gene is the segment that helps the gene get transcribed into new mRNA.  That mRNA will then get translated into a new protein.  The promoter of fetal hemoglobin does not usually allow the gene to be transcribed into adulthood, so no fetal hemoglobin gets made.  But altering the gene’s promotion would allow it to be transcribed, and thus translated, and so fetal hemoglobin would be produced in the body.  Now here’s where it gets a bit tricky: they aren’t actually altering the promoter region of fetal hemoglobin, but rather the promoter region of another gene called BCL11A.  I wanted to explain how promoters work, but there’s more to explain now because biology is complicated, so bear with me:

The reason the promoter region of fetal hemoglobin doesn’t normally allow transcription (and thus production of the protein) is a repressor called BCL11A.  BCL11A is a protein that sits on the promoter of fetal hemoglobin and refuses to budge; this prevents any other protein from accessing the fetal hemoglobin gene and thus prevents fetal hemoglobin from being transcribed.  Now BCL11A is produced by its own gene, and CTX001 alters the promoter region of BCL11A in such a way that no BCL11A can be produced.  Without BCL11A, there is nothing to repress the promotion of fetal hemoglobin.  Without the repression of fetal hemoglobin, its promoter region is accessible and it can be transcribed.  With the transcription of fetal hemoglobin, the fetal hemoglobin protein will be produced in the body.  And with the production of fetal hemoglobin, the diseases caused by malformed adult hemoglobin (sickle cell anemia and beta thalassemia) should be reduced.
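The double-negative logic above (disrupt the repressor’s gene so the repressed gene switches back on) can be summarized as a toy truth table.  The function and variable names are illustrative shorthand, not a biochemical model:

```python
# Toy truth table of the BCL11A double-negative: knocking out the repressor's
# promoter turns fetal hemoglobin production back on.

def fetal_hemoglobin_produced(bcl11a_promoter_intact: bool) -> bool:
    bcl11a_present = bcl11a_promoter_intact  # intact promoter -> repressor made
    fetal_hb_repressed = bcl11a_present      # repressor sits on fetal Hb promoter
    return not fetal_hb_repressed            # no repression -> transcription

print(fetal_hemoglobin_produced(True))   # False: untreated adult, no fetal Hb
print(fetal_hemoglobin_produced(False))  # True: after CTX001 disrupts BCL11A
```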

But it’s still not over!  How the hell would CTX001 find every red blood cell in the body and do its thing?  It doesn’t have to!  Hematopoietic stem cells are the stem cells which produce red blood cells (and it’s red blood cells which carry the hemoglobin or fetal hemoglobin in the blood).  Hematopoietic stem cells can be extracted from the patient’s blood and then altered with CTX001 so that they will produce fetal hemoglobin.  The cells which are successfully altered can then be transferred back into the patient.  Before that happens, though, the patient is given busulfan to kill off some of the stem cells which are producing the malformed hemoglobin, so that the new stem cells producing fetal hemoglobin can reproduce and become the majority.  The patient is then monitored for improvements in their sickle cell anemia or beta thalassemia condition.

So this process is long, involved, and complicated.  Just to list the things that could go wrong: when altering the promoter, the DNA could accidentally be mutated towards being cancerous; killing off so many stem cells using busulfan could have harsh side effects; the infused hematopoietic stem cells might not reproduce and become the majority; and even then, the promoter DNA might not be altered enough for fetal hemoglobin to become the majority of the hemoglobin in the body.  But I’m sure every step is heavily monitored by Vertex during the treatment process.  So is Vertex Pharmaceuticals a buy?  I have no idea.  If you believe the Efficient Market Hypothesis then all their upside is already priced in, but they’re in phase 3 of clinical trials, and if you’re a gambling man I see nothing wrong with their scientific thesis.  So idk, go ahead?

Biotech seems far more speculative than other tech

There’s a mantra that gets repeated by everyone around me: biotech is the next big thing.  I’m willing to believe that on average the biotech industry will probably grow faster than the market, maybe even faster than the tech industry, over the next 20 or 30 years.  What I’m less enthused by is the prospect of trying to pick and invest in the winners of that market without getting stuck holding the losers.  I feel like biotech in general will have a much larger standard deviation on its returns: a small number of companies will make out like bandits and a very, very large number of companies will make nothing.  This is generally true in most markets, but in biotech you have the added barrier of the government to think about.

When a tech company brings a new product to market, they design it, test it, then try to sell it to consumers.  But when a biotech company brings a new product to market, they often have the added hurdle of the government.  They need to design a product, test it, ask the government for permission to sell it, and then sell it to consumers.  These consumers are usually healthcare patients, because the product is usually a drug or medical device.  The government in this case is protecting us from bad products in healthcare, but in turn this puts up a barrier to entry that ensures only a few products get through and take all the money in the market.  There’s a large market for crappy but cheap smartphones that retail for far less than an iPhone or an Android; there isn’t any market for crap drugs that only “sort of” cure your disease.

50 years ago biotech’s second biggest area was agribusiness, but today all the biggest movers and shakers are related to medicine in some way.  Everyone is working in an industry where money only comes in if you can improve the health of a patient.  Even for the non-medical companies, the “shovel salesmen” of the biotech gold rush, the products they sell will only get bought by companies which are themselves trying to make a drug or a device that will prolong the life of a patient.  So any biotech giant I wanted to invest in, be it Pfizer or Merck or Johnson and Johnson, feels like a crapshoot with the FDA.  If Pfizer’s next biggest drugs don’t get approval, Pfizer’s stock will go way down.  And if the FDA approves a “better Tylenol” for the mass market, then Johnson and Johnson could drop.  So biotech feels like I’m investing in the future of the FDA more than the future of the market.

And then there’s Thermo Fisher, the biggest shovel salesman of the biotech gold rush.  They make the products used in labs all over the world; I know even my lab uses a lot of Thermo Fisher brand products.  Even here the future seems less certain than it is for, say, Amazon or Google, because all the labs which buy Thermo Fisher products are still at the whims of the FDA.  Everyone buys polypropylene tubes from Thermo Fisher, but what if the FDA decides polypropylene leaves behind microplastics which harm patients and mandates that polypropylene never be used in medical devices or drug manufacturing?  Then Thermo and every company like them would be scrambling for a substitute, and there’s no way of predicting that Thermo would come out of that mess the victor.  So shovel salesmen make for safer, but by no means safe, bets.

And finally there are the small players in biotech, the startups and mid-sized companies which hope to build the products of the future.  They are the most speculative companies of them all, because they’re often pre-revenue companies hoping that whatever drug or device they own the IP for can get through the FDA’s hurdles and reach the mass market.  These hurdles are very high, and there’s no money in getting past the first few just to fall at the last one.  So when you invest in a company like that you’re investing in a business of hope and hype, and since even the greatest experts in biotechnology can’t predict which drug or device will work for patients, there’s little chance of someone like me making all the right predictions.

So I guess biotech might be the future, but the future is too murky to invest in.  I’d keep my money in biotech ETFs and hope for the best.

Don’t just mindlessly avoid things that are dangerous

This post may be a little weird, but I didn’t know how else to title it. I want to talk about hazards in science and how they need to be handled. The key point I want to make is that science by its nature requires us to work with obscure and sometimes dangerous chemicals, but they shouldn’t be feared or avoided; rather, we should be aware of the dangers and use those chemicals with the proper precautions.

At a previous lab I worked at, we had to wear special gloves when handling one of the chemicals we used. This chemical was toxic enough to seep through your skin and into your bones and begin leaching the calcium out of them, and because of its formulation it would also seep through normal lab gloves. So we wore special safety gloves when handling it and took special precautions: we always wore two pairs of gloves over each other, and if we ever noticed we had spilled any we would immediately remove our gloves and start washing our hands. These precautions were the ones endorsed by the National Science Foundation and pretty much everyone who had ever worked with this chemical, and in all my time working with it, thanks to those precautions, no one was ever harmed by it.

At one point a visiting scientist was working in our lab alongside me, and his experiment required him to use this toxic chemical. I could tell he was nervous and unsure of himself; he was wearing two sets of gloves but didn’t want to touch the bottle in order to pour the chemical into his reaction vessel. He kept saying that he didn’t understand if he was doing it right and wanted to know if we had any special tool or instrument that would pour the chemical for him. Finally I simply took the bottle containing the chemical and poured it myself, saying to him, “you don’t lack understanding, you just lack confidence.”

I think the overcautious approach the visiting scientist had may have come from misunderstanding the repeated emphasis we put on safety. Yes, we work with dangerous chemicals and we have to be safe when using them, but overestimating a danger is as inaccurate as underestimating it, and proper lab safety doesn’t mean avoiding the lab work at all costs. We use these chemicals because we have to; they’re the only ones with the right properties for our experiments, and so any scientist needs the confidence and capability to use them themselves. A healthy amount of precaution is good, but if it makes you too scared to pick up a bottle then you’ve gone too far. You have to be able to read the scientific literature on a chemical and understand how dangerous it actually is, so you can use it when you need it.

I know this post was a bit rambly, but it’s something I’ve been thinking about.

Why was everyone in the 60s so high on supersonic air travel?

I get a small sense of morbid schadenfreude from reading old books on economics.  Occasionally the authors make some of the most insightful predictions I’ve ever read about the nature and direction of the economy of their future (our past), but more often they miss wildly and I get to feel superior while reading a book on the bus.  I’ve now noticed a pattern among writers from the 60s: a whole lot of people expected supersonic air travel to be the Next Big Thing.  I’ve already written about how The American Challenge predicted it as one of the most important challenges that Europe needed to invest in.  I’ve now started reading The New Industrial State by John Kenneth Galbraith, in which he singles out supersonic air travel as “an indispensable industry” of the modern economy.  As I’ve noted before, supersonic passenger planes never quite took off as advertised, but it’s a fun little exercise to look at why people might have expected them to do better than they did.

At first glance, supersonic travel seems like nothing less than the next logical step in human travel.  First we walked, then we invented wheels to carry our stuff, then we built ships, then railroads, then automobiles, then planes.  Each step in the evolution of human transportation seemed to bring an increase in speed and thus a huge economic advantage, so it seemed only natural that supersonic travel would follow this pattern.  But I think the constant increases in speed blinded people to the more important increases in efficiency.  Airplanes are much faster than cars and ships, yet to this day far more international trade is conducted by land and sea than by air.  In order for airplanes to compete as a mode of travel, they not only had to be faster; the gain in speed had to outweigh the increase in cost.  For moving people around this gain is easy to realize, as none of us wants to sit on a boat for four weeks to get to our destination.  But for moving cargo that gain is much harder, because the cargo doesn’t care about its speed and the cargo’s owner only cares how much fuel he has to spend moving it from A to B.  So speed only leads to efficiency in some cases; in others, the higher cost of fuel means more speed brings less efficiency.

The same tradeoff between speed and efficiency exists for supersonic versus subsonic planes.  The supersonic Concorde could of course do a transatlantic route in just under 3 hours, and this gain in speed was appreciated by its many passengers.  But the far greater gain in efficiency came from planes like the Boeing 747 and other “Jumbo Jets” that could take hundreds of passengers across that same route using significantly less fuel per passenger.  That meant a ticket on a 747 could be a small fraction of the price of a Concorde ticket, and there just weren’t enough ultra-high-class passengers to make the Concorde cost-efficient.
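A back-of-the-envelope calculation shows why per-seat fuel burn dominated.  The figures below are rough round numbers I’m assuming purely for the sake of the arithmetic, not exact specifications for either aircraft:

```python
# Illustrative per-passenger fuel comparison (assumed round figures:
# Concorde ~100 seats on ~75 t of fuel per crossing, 747 ~400 seats on ~90 t).

def fuel_per_passenger(fuel_tonnes: float, seats: int) -> float:
    """Tonnes of fuel burned per seat on one crossing."""
    return fuel_tonnes / seats

concorde = fuel_per_passenger(75, 100)  # 0.75 t per passenger
jumbo = fuel_per_passenger(90, 400)     # 0.225 t per passenger

print(round(concorde / jumbo, 1))  # 3.3: roughly 3x the fuel per seat
```

Even with generous assumptions, the supersonic plane burns several times the fuel per seat, which is why the ticket could never compete on price.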

It just seems like nobody did their due diligence on a cost-benefit analysis for supersonic transportation; or rather, they looked ahead with starry-eyed wonder and proclaimed that “technology” would in some way make supersonic travel efficient enough to compete.

Science thought: all of proteomics is based on shape

You are what your proteins are.  That was the maxim of a biochemistry teacher I had: proteins are the molecules performing all your bodily functions, and any genetic trait or variation will normally not affect you unless it can in some way affect your proteins.  But proteins themselves can be difficult to wrap your head around, even for trained biochemists.

I thought about this conundrum while listening to a discussion between my peers.  A collaborator has a theory that a certain protein and a certain antibody will bind to each other, and they have demonstrated this to be true via Western blot.  On the other hand, when we image the samples using electron microscopy, we don’t see them binding.

Binding, like all protein functions, depends on the shape of the protein, or more specifically a combination of shape and charge.  You may have seen gifs of a kinesin protein walking along microtubules; that only happens because kinesin has the right shape and the right charges to do so.  If kinesin were shaped more like collagen (long, thick rods) then it wouldn’t be able to move at all, and if collagen were shaped like ribosomal proteins (globular and flexible) then it could never be used as structural support.  Each protein can perform its job only because it is shaped in the correct way.

Shape also determines protein interactions.  You may have heard of how antibodies can bind so tightly and so specifically that they can be used to detect even tiny amounts of protein.  An antibody detects a protein by binding to some 3D shape that makes up part of the protein.  An antibody that detects kinesin might bind to one of its “legs,” an antibody that detects collagen would have to bind to some part of its rod-like structure, and so on.  That’s important because proteins can change their shape.  If a protein is boiled or put in detergent, then its shape will disintegrate and it will become more like a floppy noodle of amino acids.  Now, there are some antibodies that can only bind to a protein when it’s been disintegrated into a floppy noodle, but those same antibodies will not detect the protein when it’s in its “native” shape.  Because, as you’d expect, the native shape of kinesin (two feet, able to move) looks nothing like the floppy noodle it turns into when it’s boiled and put in detergent.

So back to the mystery above: there is an antibody that binds to a certain protein in Western blot, but we can’t make it bind in electron microscopy.  Well, Western blotting first requires boiling the protein and adding detergent to run it through a gel, while electron microscopy keeps the protein in its native shape.  It’s very likely that this antibody can only bind the floppy-noodle form of the protein (what you get after boiling and detergent) but cannot bind the native form, and that’s why we aren’t seeing binding in electron microscopy.  As always, shape is important.
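The proposed explanation boils down to one condition: the antibody’s binding site is only exposed when the protein is denatured.  Here’s a toy sketch of that logic, with invented epitope names; it’s shorthand for the reasoning, not a structural model:

```python
# Toy conformation-dependent binding: a "buried" epitope is only reachable
# once the protein is denatured into its floppy-noodle form.

def exposed_epitopes(protein_folded: bool) -> set:
    if protein_folded:
        return {"surface_loop"}              # native shape hides the interior
    return {"surface_loop", "buried_core"}   # denaturing exposes everything

def antibody_binds(epitope: str, protein_folded: bool) -> bool:
    return epitope in exposed_epitopes(protein_folded)

# Western blot (boiled + detergent -> denatured) vs. EM (native shape):
print(antibody_binds("buried_core", protein_folded=False))  # True: blot signal
print(antibody_binds("buried_core", protein_folded=True))   # False: no EM signal
```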

Dear Scientists, publish your damn methods

Dear Scientists,

I’m a scientist myself.  I’ve written papers, I’ve published papers, I know it’s often long and boring work that isn’t as exciting as seeing good data and telling your friends about it.  I’ve sat in a room with 3 other people just to edit a single paragraph, and god it was dull.  So I can understand if writing your actual paper isn’t the rip-roaring adventure that gets you up in the morning.  

At the same time, science is only as good as the literature. One of our fundamental scientific tenets is the principle of uniformity, that is, that anyone should be able to do the same experiment and get the same answer.  If you and I get different answers when we do the experiment then something is definitely wrong, and failed replications have taught us a lot about how much bad science is out there.  On the other hand, any failed replication can be met with the excuse that the replicator “didn’t do the experiment right.”  The original authors will claim that something done by the replicator was not done exactly as they had done it, and that this is the source of the error.  I would fire back that it is your job as a scientific writer to give all the details necessary for a successful replication.  If there is something very minor that has to be done in a specific way in order to replicate your experiments, then you need to state that clearly in the methods section of your paper.  Anything not stated in your methods is assumed unimportant to the outcome by definition, so if it is important, put it in the methods.

Even worse than the above are the scientific papers which publish no methods to begin with!  I can’t tell you how many times I’ve been looking for the methods of a paper only to find a note saying “methods performed as previously described,” which links to another paper saying “methods performed as previously described,” which links to another paper, on and on, until I’m trying to find some paper from 1980 just to know what someone in 2021 did.  I don’t think “as previously described” is sufficient; if the methods are identical then you can just copy and paste them in as supplemental material.  It’s the 21st century, memory and bandwidth are very, very cheap, and there is no need for a restrictive word count on your methods.

But the worst of the worst, and the reason I wrote this article, is that I found a paper claiming “methods performed as previously described” which did not link or cite any paper whatsoever.  I have no way of knowing which previously described method this paper is referring to, and in fact no way of knowing whether they are making this all up!  I would go so far as to say this is scientific malpractice: the methods are totally undescribed and thus the experiment is unfalsifiable, because anything I did in an attempt to replicate it might be wrong when I don’t know how it was done in the first place!

So please, scientists, publish your damn methods.  Here’s an idea that I’m hoping will catch on: if you don’t have room in the body of your paper and are publishing your methods as a supplement, just copy/paste from whatever document you used to do the experiment.  Most methods are written in the past tense in a paper but the present tense during an experiment, and the experimental protocol often includes extra information such as “make sure not to do the next step until X occurs,” information that is often omitted in the published paper.  I would say that this information is not in fact extraneous and should be included; if there is some precise ordering of steps that needs to happen, then that information should be shared with the world.  So whatever protocol you used to do the experiment, with marginal notes and handy tips, just throw the entire thing into your supplemental information as a “methods” section and stop playing hide the pickle with your experiment by citing ever-older papers.

Weekend thoughts: not everything that evolved is acted on by evolution

So I understand biology, I’ve researched biology for most of my adult life, and one of the fundamental tenets of biology is the Theory of Evolution.  I don’t think it’s an overstatement to say that evolution is as central to modern biology as Quantum Theory is to modern physics, almost everything we do and study ties back to it in some way.  But like all scientific theories, evolution is widely misunderstood on the internet, and not just by dogmatic creationists but even by the science journalists and appreciators who we would expect to understand it.  It often comes back to a simple statement:

Not everything evolved to be the way it is today

Evolution is a process where certain traits are selected for or against, but not all traits undergo this selection pressure at all times.  Some traits are relatively “silent,” in that mutations affecting them don’t give significant advantages or disadvantages, so there is no selection pressure.  Some traits are downstream of the selection pressure: while they are affected by a trait which is being selected for or against, they themselves are not under selection pressure and so don’t get acted upon.  And some traits are just the best of a bad bunch; evolution does not make things perfect, it makes things good enough to thrive within their niche.

Let me give a few examples.  I occasionally see or hear a discussion about “why would humans evolve to get cancer?”  The misunderstanding here is thinking that cancer is just a phenotype that can be selected for or against, like height or hair color.  Most cancer is somatic in nature, meaning it does not come directly from the inherited genes but from mutations upon those genes.  These mutations were not inherited, nor will they be passed down (unless they occur in the sex cells), so it isn’t true that evolution is even acting upon them.  OK, but why did the human body evolve to allow these mutations to happen?  That’s just the best of a bad bunch: the human DNA repair and replication machinery isn’t perfect, and there are big tradeoffs that would have to be made for our DNA to not allow mutations whatsoever (if that were even possible).  The human DNA machinery does a very good job at what it evolved to do, replicating and repairing DNA with high fidelity, and just because it fails sometimes doesn’t mean that it evolved to fail; it means that there was no mutation that created machinery which never failed.
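The difference between a trait under selection and a neutral one can be seen in a minimal Wright-Fisher-style simulation: the selected allele is pushed toward fixation, while the neutral one just drifts.  All parameters here are arbitrary illustrative choices, not measurements of anything real:

```python
# Minimal sketch of selection vs. neutral drift: each generation is a random
# resampling of the population, with selection skewing the sampling odds.
import random

def next_generation(freq: float, pop: int, fitness_advantage: float) -> float:
    """Allele frequency after one generation of sampling `pop` individuals."""
    # Selection skews the probability of drawing a carrier; drift comes from
    # the randomness of finite sampling.
    p = freq * (1 + fitness_advantage)
    p = p / (p + (1 - freq))
    carriers = sum(random.random() < p for _ in range(pop))
    return carriers / pop

random.seed(0)
selected = neutral = 0.5
for _ in range(200):
    selected = next_generation(selected, pop=500, fitness_advantage=0.05)
    neutral = next_generation(neutral, pop=500, fitness_advantage=0.0)

# The selected allele typically ends near 1.0 (fixed); the neutral allele
# wanders and can end up anywhere, including lost entirely.
print(selected, neutral)
```

A lost or fixed allele stays that way (there’s no mutation in this sketch), which is also why traits with no fitness effect can end up “weird” without evolution ever having chosen them.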

Likewise, there isn’t always an evolutionary reason behind every other weird aspect of our bodies.  Why do wisdom teeth cause us pain?  Or our spines?  These things evolved during times when we lived differently than we do today, so trying to understand them in the evolutionary context of 2022 just doesn’t help.  Clickbait article writers like to point to these as “why evolution is not always helpful,” but they’re simply examples of the author misunderstanding evolution more than anything else.  So if you ever find yourself thinking “why did we evolve to be weird like this,” first ask yourself if the question even makes sense.

Google will have a fully self-driving car on road by 2020

In 2015, Google claimed they would have a fully self-driving car within 5 years, completely removing humans from the equation.

Lol. Lmao even.

I’ve at times thought myself too much a pessimist, but self-driving cars is a technology where I feel that several companies and hype machines are knowingly barking up the wrong tree. Self-driving cars aren’t a technological problem, they are truly a political and legal problem. Let me explain.

We have had for many years the technology capable of making a fully autonomous car using sensors and automatic feedback for controls, and it only took a few years of Google engineering before they were able to make a program which could drive with greater fidelity than most any human. Fidelity in this case means ability to get there and back in a reasonable amount of time while adhering to road safety. Obviously a car doesn’t have an ego, so it can be programmed not to speed, to drive defensively, to obey traffic laws etc. And the split second reaction times required when zooming down the freeway are more easily handled by a computer than a human anyway. But that isn’t the barrier to self-driving cars in my view, the barrier is what happens when things go wrong.

If a self-driving car is responsible for a crash, who is held responsible? In the real world, responsibility in crashes is assigned in order to pay restitution and prevent future harm. Someone has to pay for the victim’s hospital bills, and it might be necessary to prevent future harm by prohibiting unsafe drivers from driving. Under pretty much every imaginable circumstance, the driver of the car is presumed solely at fault if their car causes a crash, but under a few specific circumstances the manufacturer of the car, or even the person who last worked on it, can be held at fault if the driver acted correctly and the car did not respond to their inputs.

But who is at fault if a self-driving Google Lexus crashes? Let’s cut to the chase: Lexus will not be at fault in any sense, and in Google’s visionary world there would be no pedals or steering wheel in the self-driving Google car, so no “driver” as such. The only answer, then, is that Google itself must be at fault as the writer of the self-driving algorithm. This isn’t an open question; someone must be at fault to pay restitution, and there is very little possibility that the passenger of a car with no way to influence it could be held liable. But is Google, or any company for that matter, willing to take on the burden of fault for every possible crash their cars could get into? Google has handily sidestepped this problem by pointing out that so far their cars have never been in an at-fault crash, but that really isn’t an answer. All software fails eventually; that is an iron law of nature no matter what the programmers say. There will always be a bug in the code, an unexpected edge case, or an update pushed out without proper oversight. And so eventually Google’s car will cause a crash and someone must be held responsible. This isn’t just one person’s hospital bills either: if Google’s car causes a crash and there are no pedals or steering wheel, Google would be responsible for the harm to people in both cars. I surmise that Google is unwilling to take on that responsibility.

So this truly is a question that cannot be sidestepped, and I think that is why, even though the tech is “there” for self-driving cars, none have come to the mass market. You can make a car navigate 99.99% of all driving problems with ease, but no one is willing to be responsible for the 0.01% of times their car will fail. Even though humans might only navigate 99% of driving problems with ease, and thus even though self-driving cars are already “better” than us, we take on the burden of responsibility when we fail, as defined by laws and legally mandated insurance. In exchange for this burden we get the privilege of getting from place to place much faster than we otherwise would. Google would only get the privilege of our money in exchange for taking on that burden, and I suspect the economics of the exchange don’t yet work for them.
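The scale of that 0.01% is worth spelling out with some back-of-the-envelope arithmetic. A minimal sketch, assuming a hypothetical one million trips per day and treating the 99.99% and 99% figures purely as the illustrative rates used above, not real data:

```python
# Back-of-the-envelope: even a tiny per-trip failure rate adds up at scale.
# The trip count and success rates here are hypothetical illustrations.

def expected_failures(trips: int, success_rate: float) -> float:
    """Expected number of failed trips given a per-trip success rate."""
    return trips * (1.0 - success_rate)

daily_trips = 1_000_000  # hypothetical fleet-wide trips per day

robot_failures = expected_failures(daily_trips, 0.9999)  # 0.01% failure rate
human_failures = expected_failures(daily_trips, 0.99)    # 1% failure rate

print(robot_failures)  # ~100 failures per day
print(human_failures)  # ~10,000 failures per day
```

Even at a hundredth of the human failure rate, a fleet at scale still produces failures every single day, and each one needs someone legally on the hook for it.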

No one likes a scientific buzzkill

When doing science, you’ll often come upon mysteries you didn’t expect.  Some of these are exciting and will lead to new discoveries, but some of them are depressing because they mean you did the experiment wrong.  Take a common example: you do an experiment and get a result you didn’t expect.  You obviously want to know why, so you spend a lot of time and effort looking into what part of the experiment could have caused the unexpected result.  Eventually you find the answer: maybe contamination of your samples, or a flaw in your experimental design.  Either way, you didn’t find something new and exciting; you just made more work for yourself, since you spent a lot of time chasing down an answer only to find out you need to do the experiment all over again.

But for those first few days or even weeks you can constantly feel like you’re on the precipice of some new discovery, something grand and publishable that everyone will see and be enlightened by.  I have had feelings like that, and it’s always been a letdown to realize that there was nothing cool and exciting about my unexpected results; they simply came from not doing things correctly.  The danger, of course, is getting too into it: going down rabbit holes wondering why your data looks weird when the quick and simple answer is “do the experiment again and it will look right.”  You can spend many years and millions of dollars just to learn that.  I thus usually try to look at results with a very pessimistic eye: it’s unlikely I just discovered something earth-shattering, because if it were this easy to discover then someone else would have already done so.  This can at times feel like the joke about the two economists who see a $20 bill on the sidewalk, but it’s a mindset that promotes healthy skepticism.

With all that said, the hardest part of this for me is making sure other people are healthy skeptics.  We’re all scientists in a lab, and we all have that part of our brain that wants to solve a mystery and will spend way too much time and effort trying to do so.  It’s easy all the while to convince yourself that the answer to the mystery is big and groundbreaking enough to justify all the time spent, but so often it just isn’t.  I’ve been dancing around the point for a while, but essentially: some people in my lab are looking at data which they think reveals amazing undiscovered insights into a disease we are researching.  I see the data and assume it’s some unknown contaminant, and that we should just redo the experiment.  We could spend our time looking more at the data or spend our time redoing the experiment, and I fear that if we spend too much time on the former, we’ll all be really bummed out when it turns out to be a contaminant and we have to go back and spend more time on the latter.  But I can’t stop people from getting excited; like The X-Files, we all want to believe.