Why was everyone in the 60s so high on supersonic air travel?

I get a small sense of morbid schadenfreude reading old books on economics.  Occasionally the authors make some of the most insightful predictions I've ever read about the nature and direction of the economy of their future (our past), but more often they miss wildly and I get to feel superior while reading a book on the bus.  I've noticed a pattern, though, among writers from the 60s: a whole lot of people expected supersonic air travel to be the Next Big Thing.  I've already written about how The American Challenge predicted it as one of the most important challenges that Europe needed to invest in.  I've now started reading The New Industrial State by John Kenneth Galbraith, in which he singles out supersonic air travel as "an indispensable industry" of the modern economy.  As I've noted before, supersonic passenger planes never quite took off as advertised, but it's a fun little exercise to look at why people might have expected them to do better than they did.

At first glance, supersonic travel seems like nothing less than the next logical step in human transportation.  First we walked, then we invented wheels to carry our stuff, then we built ships, then railroads, then automobiles, then planes.  Each step in the evolution of human transportation seemed to bring an increase in speed and thus a huge economic advantage, so it seemed only natural that supersonic travel would follow the pattern.  But I think the constant increases in speed blinded people to the more important increases in efficiency.  Airplanes are much faster than cars and ships, yet to this day far more international trade is conducted by land and sea than by air.  In order for airplanes to compete as a mode of travel, they not only had to be faster, the gain in speed had to outweigh the increase in cost.  For moving people around that tradeoff is easy to justify, since none of us wants to sit on a boat for four weeks to get to our destination.  But for moving cargo it is much harder, because the cargo doesn't care how fast it travels and the cargo's owner only cares how much fuel he has to spend moving it from A to B.  So speed translates into efficiency only in some cases; in others, the higher fuel cost means that more speed delivers less efficiency.

The same dichotomy between speed and efficiency exists for supersonic versus subsonic planes.  The supersonic Concorde could of course cross the Atlantic in about three and a half hours, and that gain in speed was appreciated by its passengers.  But the far greater gain in efficiency came from planes like the Boeing 747 and other "Jumbo Jets," which could carry hundreds of passengers across the same route using significantly less fuel per passenger.  That meant a ticket on a 747 could be a small fraction of the price of a Concorde ticket, and there just weren't enough ultra-high-class passengers to make the Concorde cost-efficient.
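
To make the per-passenger arithmetic concrete, here is a minimal back-of-the-envelope sketch.  The seat counts and fuel burns below are rough, assumed ballpark figures for illustration only, not precise performance data:

```python
# Very rough comparison of fuel burned per passenger on a transatlantic crossing.
# All figures are assumed ballpark values for illustration, not real performance data.

concorde = {"seats": 100, "fuel_tonnes": 75}   # assumption: ~100 seats, ~75 t of fuel per crossing
jumbo    = {"seats": 400, "fuel_tonnes": 70}   # assumption: ~400 seats, ~70 t of fuel per crossing

for name, plane in [("Concorde", concorde), ("747", jumbo)]:
    per_passenger_kg = plane["fuel_tonnes"] * 1000 / plane["seats"]
    print(f"{name}: ~{per_passenger_kg:.0f} kg of fuel per passenger")

# Under these assumptions the Concorde burns roughly four times as much fuel per
# passenger, before even counting its higher purchase and maintenance costs.
```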

It just seems like nobody did their due diligence on a cost-benefit analysis for supersonic transportation, or instead they looked ahead with starry-eyed wonder and proclaimed that "technology" would somehow ensure that supersonic travel became efficient enough to compete.

Science thought: all of proteomics is based on shape

You are what your proteins are.  That was the maxim of a biochemistry teacher I had: proteins are the molecules performing all your bodily functions, and a genetic trait or variation will normally not affect you unless it can in some way affect your proteins.  But proteins themselves can be difficult to wrap your head around, even for trained biochemists.

I thought about this conundrum while listening to a discussion between my peers.  A collaborator has a theory that a certain protein and a certain antibody will bind to each other, and they have demonstrated this to be true via Western blot.  On the other hand, when we image the samples using electron microscopy, we don't see them binding.

Binding, like all protein functions, depends on the shape of the protein, or more specifically on a combination of shape and charge.  You may have seen gifs of a kinesin protein walking along microtubules; that only happens because kinesin has the right shape and the right charges to do so.  If kinesin were shaped more like collagen (long, thick rods) then it wouldn't be able to move at all, and if collagen were shaped like ribosomal proteins (globular and flexible) then it could never serve as structural support.  Each protein can perform its job only because it is shaped in the correct way.

Shape also determines protein interactions.  You may have heard of how antibodies can bind so tightly and so specifically that they can be used to detect even tiny amounts of a protein.  An antibody detects a protein by binding to some 3D shape that makes up part of the protein.  An antibody that detects kinesin might bind to one of its "legs," an antibody that detects collagen would have to bind to some part of its rod-like structure, and so on.  That's important because proteins can change their shape.  If a protein is boiled or put in detergent, its shape falls apart and it becomes more like a floppy noodle of amino acids.  Now, some antibodies can only bind to a protein once it has been unraveled into that floppy noodle, but those same antibodies will not detect the protein when it's in its "native" shape.  As you might expect, the native shape of kinesin (two feet, able to move) looks nothing like the floppy noodle it turns into when it's boiled and put in detergent.

So back to the mystery above: there is an antibody that binds a certain protein in a Western blot, but we can't make it bind in electron microscopy.  Well, Western blotting requires first boiling the protein in detergent to run it through a gel, while electron microscopy keeps the protein in its native shape.  It's very likely that this antibody can only bind the floppy noodle form of the protein (what you get after boiling and detergent) but cannot bind the native form, and that's why we aren't seeing it in electron microscopy.  As always, shape is important.

Dear Scientists, publish your damn methods

Dear Scientists,

I’m a scientist myself.  I’ve written papers, I’ve published papers, I know it’s often long and boring work that isn’t as exciting as seeing good data and telling your friends about it.  I’ve sat in a room with 3 other people just to edit a single paragraph, and god it was dull.  So I can understand if writing your actual paper isn’t the rip-roaring adventure that gets you up in the morning.  

At the same time, science is only as good as the literature.  One of our fundamental scientific tenets is the principle of uniformity, that is, that anyone should be able to do the same experiment and get the same answer.  If you and I get different answers when we do the experiment then something is definitely wrong, and failed replications have taught us a lot about how much bad science is out there.  On the other hand, any failed replication can be brushed aside with the excuse that the replicator "didn't do the experiment right."  The original authors will claim that something was not done exactly as they had done it, and that this is the source of the error.  I would fire back that it is your job as a scientific writer to give all the details necessary for a successful replication.  If there is something very minor that has to be done in a specific way in order to replicate your experiments, then you need to state that clearly in the methods section of your paper.  Anything not stated in your methods is assumed unimportant to the outcome by definition, so if it is important, put it in the methods.

Even worse than the above are the scientific papers that publish no methods to begin with!  I can't tell you how many times I've gone looking for the methods of a paper only to find a note saying "methods performed as previously described," which links to another paper saying "methods performed as previously described," which links to another paper, on and on, until I'm trying to dig up some paper from 1980 just to know what someone in 2021 did.  I don't think "as previously described" is sufficient; if the methods are identical then you can just copy and paste them in as supplemental material.  It's the 21st century, memory and bandwidth are very, very cheap, and there is no need for a restrictive word count on your methods.

But the worst of the worst, and the reason I wrote this article, is that I found a paper claiming "methods performed as previously described" which did not link or cite any paper whatsoever.  I have no way of knowing which previously described method this paper is referring to, and in fact no way of knowing whether they made the whole thing up!  I would go so far as to call this scientific malpractice: the methods are totally undescribed, and thus the experiment is unfalsifiable, because anything I did in an attempt to replicate it could be dismissed as wrong since I don't know how it was done in the first place!

So please, scientists, publish your damn methods.  Here's an idea that I'm hoping will catch on: if you don't have room in the body of your paper and are publishing your methods as a supplement, just copy/paste from whatever document you used to do the experiment.  Most methods are written in the past tense in a paper but in the present tense during an experiment, and the experimental protocol often includes extra information such as "make sure not to do the next step until X occurs," information that usually gets omitted from the published paper.  I would say that this information is not extraneous at all and should be included; if there is some precise ordering of steps that needs to happen, then that information should be shared with the world.  So whatever protocol you used to do the experiment, with its marginal notes and handy tips, just throw the entire thing into your supplemental information as a "methods" section and stop playing hide the pickle with your experiment by citing ever older papers.

Weekend thoughts: not everything that evolved is acted on by evolution

So I understand biology, I’ve researched biology for most of my adult life, and one of the fundamental tenets of biology is the Theory of Evolution.  I don’t think it’s an overstatement to say that evolution is as central to modern biology as Quantum Theory is to modern physics, almost everything we do and study ties back to it in some way.  But like all scientific theories, evolution is widely misunderstood on the internet, and not just by dogmatic creationists but even by the science journalists and appreciators who we would expect to understand it.  It often comes back to a simple statement:

Not everything evolved to be the way it is today

Evolution is a process where certain traits are selected for or against, but not all traits undergo this selection pressure at all times.  Some traits are relatively "silent," in that mutations affecting them don't give significant advantages or disadvantages, so there is no selection pressure.  Some traits are downstream of the selection pressure: while they are affected by a trait which is being selected for or against, they themselves are not under selection pressure and so don't get acted upon.  And some traits are just the best of a bad bunch; evolution does not make things perfect, it makes things good enough to thrive within their niche.

Let me give a few examples.  I occasionally see or hear a discussion about "why would humans evolve to get cancer?"  The misunderstanding here is thinking that cancer is just a phenotype that can be selected for or against, like height or hair color.  Most cancer is somatic in nature, meaning it does not come directly from the inherited genes but from mutations upon those genes.  These mutations were not inherited, nor will they be passed down (unless they occur in the sex cells), so evolution isn't even acting upon them.  OK, but why did the human body evolve to allow these mutations to happen?  That's just the best of a bad bunch: the human DNA repair and replication machinery isn't perfect, and there are big tradeoffs that would have to be made for our DNA to not allow mutations whatsoever (if that were even possible).  The human DNA machinery does a very good job at what it evolved to do, replicating and repairing DNA with high fidelity, and just because it fails sometimes doesn't mean that it evolved to fail; it means there was no mutation that created machinery which never failed.
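
To see why some rate of somatic mutation is essentially unavoidable, here is a small order-of-magnitude sketch.  The mutation rate, genome size, and lifetime division count are commonly quoted ballpark figures, used here purely as illustrative assumptions:

```python
# Order-of-magnitude sketch: even highly accurate DNA replication still produces
# somatic mutations, simply because of how many divisions a body performs.
# All numbers are rough, commonly quoted ballpark values, used only for illustration.

mutation_rate_per_bp = 1e-9    # assumption: ~1 error per billion base pairs per division, after repair
genome_size_bp       = 3.2e9   # haploid human genome, roughly 3.2 billion base pairs
lifetime_divisions   = 1e16    # assumption: rough number of cell divisions in a human lifetime

mutations_per_division = mutation_rate_per_bp * genome_size_bp
lifetime_mutations     = mutations_per_division * lifetime_divisions

print(f"~{mutations_per_division:.1f} new mutations per cell division")
print(f"~{lifetime_mutations:.1e} somatic mutations over a lifetime of divisions")

# Even at one error per billion base pairs, the sheer number of divisions gives an
# astronomical number of chances for a cancer-relevant mutation to appear.
```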

Likewise there isn’t always an evolutionary reason behind every other weird aspect of our bodies.  Why do wisdom teeth cause us pain?  Or our spines?  These things evolved during times when we lived differently to today, so trying to understand them in the evolutionary context of 2022 just doesn’t help.  Clickbait article writers like to point to these as “why evolution is not always helpful.” But they’re simply examples of the author misunderstanding evolution more than anything else.  So if you ever find yourself thinking “why did we evolve to be weird like this,” first ask yourself if the question even makes sense.

Google will have a fully self-driving car on the road by 2020

In 2015, Google claimed they would have a fully self-driving car within 5 years, completely removing humans from the equation.

Lol. Lmao even.

I've at times thought myself too much a pessimist, but self-driving cars are a technology where I feel that several companies and hype machines are knowingly barking up the wrong tree. Self-driving cars aren't a technological problem; they are truly a political and legal problem. Let me explain.

We have had the technology to make a fully autonomous car using sensors and automatic feedback controls for many years, and it only took a few years of Google engineering to produce a program that could drive with greater fidelity than most any human. Fidelity in this case means the ability to get there and back in a reasonable amount of time while adhering to road safety. Obviously a car doesn't have an ego, so it can be programmed not to speed, to drive defensively, to obey traffic laws, etc. And the split-second reaction times required when zooming down the freeway are more easily handled by a computer than by a human anyway. But that isn't the barrier to self-driving cars in my view; the barrier is what happens when things go wrong.

If a self-driving car is responsible for a crash, who is held responsible? In the real world, responsibility in crashes is assigned in order to pay restitution and prevent future harm. Someone has to pay for the victim’s hospital bills, and it might be necessary to prevent future harm by prohibiting unsafe drivers from driving. Under pretty much every imaginable circumstance, the driver of the car is presumed solely at fault if their car is responsible for a crash, but under a few specific circumstances the manufacturer of the car or even the person who last worked on it can be held at fault if the driver acted correctly and the car did not respond to their inputs.

But who is at fault if a self-driving Google Lexus crashes? Let's cut to the chase: Lexus will not be at fault in any sense, and in Google's visionary world there would be no pedals or steering wheel in the self-driving Google car, so no "driver" as such. The only answer then is that Google itself must be at fault as the writer of the self-driving algorithm. This isn't an open question; someone must be at fault to pay restitution, and there is very little possibility that the passenger of a car, with no way to influence it, could be held liable. But is Google, or any company for that matter, willing to take on the burden of fault for every possible crash their cars could get into? Google has handily sidestepped this problem by pointing out that so far their cars have never been in an at-fault crash, but that really isn't an answer. All software fails eventually, that is an iron law of nature no matter what the programmers say. There will always be a bug in the code, an unexpected edge case, or an update pushed out without proper oversight. And so eventually Google's car will cause a crash and someone must be held responsible. This isn't just one person's hospital bills either: if Google's car causes a crash and there are no pedals or steering wheel, Google would be responsible for the harm to people in both cars. I surmise that Google is unwilling to take on that responsibility.

So this truly is a question that cannot be sidestepped, and I think that is why even though the tech is “there” for self-driving cars, none have come to the mass market. You can make a car navigate through 99.99% of all driving problems with ease, but no one is willing to be responsible for the 0.01% of times their car will fail. So even though humans might only navigate 99% of driving problems with ease, and thus even though self-driving cars are already “better” than us, we take on the burden of responsibility when we fail, as defined by laws and legally mandated insurance. In exchange for this burden we get the privilege of going place to place much faster than we would otherwise. Google would only get the privilege of our money in exchange for taking on that burden, and I suspect the economics of the exchange don’t yet work for them.
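
To put those percentages in perspective, here is a tiny sketch of what a 0.01% per-trip failure rate would mean at fleet scale. The fleet size and trips per day are made-up assumptions for illustration; only the 0.01% figure comes from the argument above:

```python
# Hypothetical illustration of why "handles 99.99% of driving problems" still leaves
# someone holding a lot of liability at scale. Fleet size and trip counts are assumptions.

failure_rate_per_trip = 1e-4       # the 0.01% of trips where the car gets it wrong
fleet_size            = 1_000_000  # assumption: a fleet of one million self-driving cars
trips_per_car_per_day = 2          # assumption: two trips per car per day

failures_per_day = failure_rate_per_trip * fleet_size * trips_per_car_per_day
print(f"~{failures_per_day:.0f} failures per day across the fleet")
print(f"~{failures_per_day * 365:.0f} failures per year")

# At that scale the manufacturer isn't insuring against a freak event; it is budgeting
# for hundreds of incidents every single day.
```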

No one likes a scientific buzzkill

When doing science, you'll often come upon mysteries you didn't expect.  Some of these are exciting and will lead to new discoveries, but some of them are depressing because they mean you did the experiment wrong.  Take a common example: you do an experiment and find a result you didn't expect.  You obviously want to know why you got the result you did, and so you spend a lot of time and effort looking into what part of the experiment could have been causing the unexpected result.  Eventually you find that the answer was contamination of your samples or a mistake in your experimental setup.  Either way you didn't find something new and exciting, you just made more work for yourself: you spent a lot of time chasing down an answer only to find out you need to do the experiment all over again.

But for those first few days or even weeks you can feel constantly like you're on the precipice of some new discovery, something grand and publishable that everyone will see and be enlightened by.  I have had feelings like that, and it's always been a letdown to realize that there was nothing cool and exciting about my unexpected results; they simply came from not doing things correctly.  The danger, of course, is getting too into it: spending a lot of time and money chasing rabbit holes, wondering why your data looks weird, when the quick and simple answer is "do the experiment again and it will look right."  You can spend many years and millions of dollars just to learn that.  I thus usually try to look at results with a very pessimistic eye: it's unlikely I just discovered something literally earth-shattering, because if it were this easy to discover then someone else would have already done so.  This can at times feel like the joke about the two economists who see $20 on the sidewalk, but it's a mindset that promotes healthy skepticism.

With all that said, the hardest part of this for me is making sure other people are healthy skeptics.  We're all scientists in a lab and we all have that part of our brain that wants to solve a mystery and will spend way too much time and effort trying to do so.  It's easy all the while to convince yourself that the answer to the mystery is big and groundbreaking enough to justify all the time spent, but so often it just isn't.  I've been dancing around the point for a while, but essentially: some people in my lab are looking at data which they think reveals amazing undiscovered insights into a disease we are researching.  I see the data and assume it's some unknown contaminant causing it, and that we should just redo the experiment.  We could spend our time looking deeper into the data or spend our time redoing the experiment, and I fear that if we spend too much time on the former we'll all be really bummed out when it does turn out to be a contaminant and we have to go back and spend more time on the latter.  But I can't stop people from getting excited; just like The X-Files, we all want to believe.

Friday feelings: the importance of communication

I don’t want to get into too many specifics here, but this week was a lesson in the importance of communication.  Science is a collaborative process, the days of one person making discoveries have long since passed, and everything we do these days requires not just a team but multiple teams working in tandem.  With that comes a requirement for all teams to be on the same page so they can work together instead of going in circles.  My team recently received a sample to test but has no idea what the sample is or how it was produced.  Without this knowledge, how can we know what is in the sample?  If I see something odd in the sample, how can I know whether it’s important and must be removed or whether it’s a normal and expected part of the process?  And importantly, how can we replicate this work in our own lab if we don’t know how it was produced in the other lab?

Collaboration is of course difficult; we all have our own things to do, and communicating with our collaborators sometimes only helps them and not us, so we don't want to spend energy on it.  Still, it's necessary if a collaboration is going to work, and collaboration is a thing that helps all of us.

Just as important as collaborative communication is scientific communication to the wider community, usually through papers.  I've recently thought that scientific journals should also raise the standards to which they hold paper writers; too many will publish indecipherable images and vague methods that cannot be replicated at all, and your best bet in that case is usually the arduous process of calling the original scientist on the phone and asking him or her what the hell they did.  It's like reading a recipe that just says "cook it until it's finished."  What the hell does that even mean?  If you read a paper and you don't know how the method was done, how can you ever build off that paper?  I'm not trying to accuse people of scientific misconduct or anything, I'm just trying to say that if I have no idea what you did, I'm not going to cite your paper or use it for my own research.  Good communication is important.

How can you fix science that has become engineering?

One of the toughest questions in science is simply “when do you admit you were wrong?”  It’s never an easy thing to do, but we all understand that in the scientific method sometimes our most beautiful, most beloved hypotheses turn out not to describe the world as it truly is.  But people are human and it’s only natural that they’d prefer their favorite hypothesis to be right, and of course there’s always the possibility that just around the corner is some new evidence that will finally prove them right…

This process of clinging to an unsupported hypothesis in the face of repeated failures is something I discussed in a previous post.  There, I described working in a lab where we treated our hypothesis more as an engineering problem: we felt we knew that what we were doing was possible, if only we could do it right.  Repeated failures never swayed our view on this point, and rather than admit it might be impossible, we would just double down and try again.  When that sort of thinking infects a lab, how do you treat it?  How do you get scientists to go back to being scientists, to go back to accepting or rejecting hypotheses based on the evidence rather than taking them as gospel before even doing the experiment?

I think one thing that might help this process would be a revolution in the publishing industry in which null results were considered publishable.  Right now it is very rare to get a paper published that says "we failed to prove something new."  Novelty is desired, overturning established paradigms is desired, and failing to accomplish either basically condemns your work to the trash bin, totally unpublishable.  I have often thought that null results should still be archived, if only to tell future scientists where the pitfalls lie and dissuade them from wasting more time on a fruitless endeavor.  But until null results are as publishable as positive results, people will have a substantial incentive to redo failed experiments in the hope that this time they will succeed; to do otherwise would force them to admit defeat and start all over from the beginning.

How do you read in a language you only half understand?

Whenever I learn a new language, there always comes a time when I start to get good enough at it to recognize and understand certain words, but not good enough to know every word I come across.  I can read half a sentence but not the whole sentence, understand half a paragraph but not the whole paragraph.  This is a difficult time for a learner because you’re just on the cusp of truly using the language to read, but you don’t feel good enough to actually use it because you only understand half of what you read.  How do you get better?

The answer (so I've been taught) is you still try to read.  Even if you don't understand everything, even if you only understand half of it, you try to read what you can so you can get familiar with the language and start learning by using it.  Most words we know were probably never defined for us explicitly; did anyone ever define the word "anyone" to you?  Instead, as learners we pick them up from context clues and other hints, and start using them the way we read or heard them.  This can occasionally lead to hilarity, like the time I heard someone describe a child as homely instead of comely, but it can also lead to learning as you start to use and understand each new word you read.

So if I'm reading something and I come upon words I don't understand, I was taught not to look each one of them up, but instead to just keep reading and try to figure them out as I go.  I may read a sentence that says "he went to the 餐厅, and after he'd finished his meal he…".  Although I don't know what 餐厅 means directly, it seems that "he" ate there, so it must be some sort of eating place.  Now whenever I see that word again I check whether it seems to have something to do with eating, and if it does then I can learn by usage that 餐厅 means "a place where you eat."  Through this process I can slowly pick up the language through usage rather than stopping to look up every word.

But here's the secret: this trick also works with scientific writing.  Scientific writing is filled to the brim with jargon and odd definitions.  What is an SDS-PAGE?  What is an HPLC?  And not only are the words difficult, the concepts are difficult: why did they use centrifugation to separate out the nucleus?  Why does electron microscopy not let you visualize the less rigid parts of a protein?  When you start out as a scientist you are often told to read scientific papers, and scientific papers can feel like reading a foreign language!  But the same rules apply as with a foreign language: you don't always have to know every word when you're starting out, or even every concept.  It's more important to develop scientific language fluency so that you can get the big idea out of a paper and understand it when speaking with others.  For example, they used HPLC to separate a protein of interest from all the other proteins in a cell.  OK, so HPLC is a purification technique; I don't need to know how it works if all I'm interested in is that protein of interest.  I can move on to what the paper says about the protein, secure in the knowledge that it is indeed pure.  If HPLC becomes more important later on, then I can do a quick search or a deep dive to understand more of it, but it isn't always necessary to know every single word or technique in a paper.

Reading scientific papers is a skill, one I've had to devote a lot of time to getting better at, but once you develop knowledge of the jargon and techniques it gets a lot easier, and importantly you develop the skills necessary to learn any new jargon or techniques that you come across.  And that is the real skill: not the knowledge of specific things but the ability to learn new things.  That is what truly makes a scientist.

Science has its holy wars too

In my continuing ramblings about what science is versus what it ought to be, I thought I'd touch briefly on a topic that is well understood within the community but doesn't seem to be understood outside of it: the question of how a scientific hypothesis becomes scientific dogma.  I don't mean dogma in a negative sense; in my area of science a dogma is simply something that is beyond question because all the evidence points to it being true.  The "central dogma" of biology, for example, is that DNA is where genetic information is stored, RNA is the messenger of that information, and protein executes the functions demanded by the information.  DNA->RNA->proteins is a dogma taught to every aspiring biologist and bored high school student, and it underpins every piece of modern biology we do.

But dogmas don't become dogmas out of nothing; there must be a mountain of evidence in their favor, and usually there is also a prior dogma or competing hypothesis that they must replace.  This last bit is important.  It has often been said that you can't reason someone out of a position they did not reason themselves into, but equally true is that you often can't reason them out of something that they did reason themselves into either.  People just don't like changing their minds.  And so when a new hypothesis comes along challenging an old dogma, scientists don't just accept it straight away; instead they will demand more and more evidence for it while continuing to cling to what they learned under the old dogma.  Science advances not through persuasion but through retirement, as the defenders of the old dogma retire and get replaced by people who learned the new hypothesis.  And those people in turn accept the hypothesis fully and turn it into a dogma to be taught to students who don't yet have the full knowledge base to understand why something is true, but who can be taught that it is true; hence, dogma.

During the upwelling of a new hypothesis, though, holy wars can happen.  I don't mean fighting and purges; I mean the kind of holy wars that nerds engage in, the kind of demeaning of those on the "other side" in the sense of "oh, you have a GameCube instead of a PC? I should have known you were a console peasant."  These holy wars infect science too.  Scientists try to be nice out of professionalism, of course, but they will spend enormous effort undercutting each other's theories, and at times even undercutting each other's professional trajectories, in their bid to garner support for their own theory.  This may seem needlessly cruel, but there is an element of rational self-interest: if you think your theory is true then supporting the truth against the false is good praxis, and in more base terms there is only so much funding to go around, so ensuring that your dogma or theory is held in higher esteem will ensure your side is the one receiving the lion's share of scientific funding.

I know this all sounds like pointless waffle, but I was specifically reminded of it when I recently saw a few talks on Alzheimer's disease.  The holy war over Alzheimer's can't be summed up in a short blog post, but some people think Alzheimer's is caused by a protein called "A-beta" and some think it is caused by one called "tau."  A few hold a compromise position that perhaps both proteins are necessary, but most of the scientists I've seen presenting talks hold to one side or the other, and both sides are competing to become the new dogma.  For the most part these two sides talk past each other: if you think that A-beta is the cause of Alzheimer's disease then there isn't much point in researching tau, and vice versa.  But occasionally you'll find both sides present at a symposium, and there they will feel the need to defend themselves to the audience and slyly denigrate the opposing position.  Never to the level of insults (in public), but instead to the level of "I respectfully suggest that those other scientists have grossly misunderstood the evidence."  Which is a very kind way of saying fuck you.