Friday feelings: the importance of communication

I don’t want to get into too many specifics here, but this week was a lesson in the importance of communication.  Science is a collaborative process; the days of one person making discoveries alone have long since passed, and everything we do these days requires not just a team but multiple teams working in tandem.  With that comes a requirement for all teams to be on the same page so they can work together instead of going in circles.  My team recently received a sample to test, but we have no idea what the sample is or how it was produced.  Without this knowledge, how can we know what is in the sample?  If I see something odd in the sample, how can I know whether it’s important and must be removed, or whether it’s a normal and expected part of the process?  And importantly, how can we replicate this work in our own lab if we don’t know how it was produced in the other lab?

Collaboration is of course difficult; we all have our own things to do, and communicating with our collaborators sometimes only helps them and not us, so we don’t want to spend energy on it.  Still, it’s necessary if a collaboration is going to work, and collaboration is a thing that helps all of us. 

Just as important as collaborative communication is scientific communication to the wider community, usually through papers.  I’ve recently thought that scientific journals should also raise the standards to which they hold paper writers; too many will publish indecipherable images and vague methods that cannot be replicated at all, and your best bet in such cases is usually the arduous process of calling the original scientist on the phone and asking them what the hell they did.  It’s like reading a recipe that just says “cook it until it’s finished.”  What the hell does that even mean?  If you read a paper and you don’t know how the method was done, how can you ever build off that paper?  I’m not trying to accuse people of scientific misconduct or anything; I’m just saying that if I have no idea what you did, I’m not going to cite your paper or use it for my own research.  Good communication is important.

How can you fix science that has become engineering?

One of the toughest questions in science is simply “when do you admit you were wrong?”  It’s never an easy thing to do, but we all understand that in the scientific method sometimes our most beautiful, most beloved hypotheses turn out not to describe the world as it truly is.  But people are human and it’s only natural that they’d prefer their favorite hypothesis to be right, and of course there’s always the possibility that just around the corner is some new evidence that will finally prove them right…

This process of clinging to an unsupported hypothesis in the face of repeated failures is something I discussed in a previous post.  There, I described working in a lab where we treated our hypothesis more as an engineering problem: we felt we knew that what we were doing was possible, if only we could do it right.  Repeated failures never swayed our view on this point, and rather than admit it might be impossible, we would just double down and try again.  When that sort of thinking infects a lab, how do you treat it?  How do you get scientists to go back to being scientists, to go back to accepting or rejecting hypotheses based on the evidence and not taking them as gospel before even doing the experiment?

I think one thing that might help this process would be a revolution in the publishing industry in which null results would be considered publishable.  Right now it is very rare to get a paper published that says “we failed to prove something new.”  Novelty is desired, overturning the established paradigms is desired, and failing to accomplish either basically condemns your work to the trash bin, totally unpublishable.  I have often thought that null results should still be archived, if only to tell future scientists where the pitfalls lie and dissuade them from wasting more time on a fruitless endeavor.  But until null results are as publishable as positive results, people will still have a substantial interest in redoing failed experiments just in the hope that this time it will succeed, to do otherwise would force them to admit defeat and start all over from the beginning.

How do you read in a language you only half understand?

Whenever I learn a new language, there always comes a time when I start to get good enough at it to recognize and understand certain words, but not good enough to know every word I come across.  I can read half a sentence but not the whole sentence, understand half a paragraph but not the whole paragraph.  This is a difficult time for a learner because you’re just on the cusp of truly using the language to read, but you don’t feel good enough to actually use it because you only understand half of what you read.  How do you get better?

The answer (so I’ve been taught) is you still try to read.  Even if you don’t understand everything, even if you only understand half of it, you try to read what you can so you can get familiar with the language and start learning by using it.  Most words we know were probably never defined to us specifically; did anyone ever define the word “anyone” to you?  Instead, as learners we pick them up through context clues and other hints, and start using them the way we read or heard them.  This can occasionally lead to hilarity, like the time I heard someone describe a child as homely instead of comely, but it can also lead to learning as you start to use and understand each new word you read.

So if I’m reading something and I come upon words I don’t understand, I was taught not to look each one of them up, but instead to just keep reading and try to figure them out as I go.  I may read a sentence that says “he went to the 餐厅, and after he’d finished his meal he…”.  Although I don’t know what 餐厅 means directly, it seems that “he” ate there, so it must be some sort of eating place.  Now whenever I see that word again I check whether it seems to have something to do with eating, and if it does then I can learn by usage that 餐厅 means “a place where you eat.” Through this process I can slowly pick up the language through usage rather than stopping to look up every word.

But here’s the secret: this trick also works with scientific writing.  Scientific writing is filled to the brim with jargon and odd definitions.  What is an SDS-PAGE?  What is an HPLC?  And not only are the words difficult, the concepts are difficult too: why did they use centrifugation to separate out the nucleus?  Why does electron microscopy not let you visualize the less-rigid parts of a protein?  When you start out as a scientist, you are often told to read scientific papers, and scientific papers can feel like you’re reading a foreign language!  But the same rules apply as with reading a foreign language: you don’t always have to know every word when you’re starting out, or even every concept.  It’s more important to develop scientific language fluency so that you can get the big idea out of a paper and understand it when speaking with others.

For example, say they used HPLC to separate a protein of interest from all the other proteins in a cell.  OK, so HPLC is a purification technique; I don’t need to know how it works if all I’m interested in is that protein of interest.  I can move on to what the paper says about the protein, secure in the knowledge that it is indeed pure.  If later on HPLC becomes more important, then I can do a quick search or deep dive to understand more of it, but it isn’t always necessary to know every single word or technique in a paper.

Reading scientific papers is a skill, one I’ve had to devote a lot of time to getting better at, but once you develop knowledge of the jargon and techniques it gets a lot easier, and importantly you develop the skills necessary to learn any new jargon or techniques that you come across.  And that is the real skill: not the knowledge of specific things but the ability to learn new things.  That is what truly makes a scientist.

Science has its holy wars too

In my continuing ramblings about what science is versus what it ought to be, I thought I’d touch briefly on a topic that is well understood in the community but doesn’t seem understood outside of it: the question of how a scientific hypothesis becomes scientific dogma.  I don’t mean dogma in a negative sense; in my area of science a dogma is simply something that is beyond question because all the evidence points to it being true.  The “central dogma” of biology, for example, is that DNA is where genetic information is stored, RNA is the messenger of that information, and protein executes the functions demanded by the information.  DNA->RNA->proteins is a dogma taught to every aspiring biologist and bored high school student, and it underpins every piece of modern biology we do.

But dogmas don’t become dogmas out of nothing; there must be a mountain of evidence in their favor, and additionally there is usually a prior dogma or competing hypothesis that they must replace.  This last bit is important.  It has often been said that you can’t reason someone out of a position they did not reason themselves into, but equally true is that you often can’t reason them out of something they did reason themselves into either.  People just don’t like changing their minds.  And so when a new hypothesis comes along challenging an old dogma, scientists don’t just accept it straight away; instead they will demand more and more evidence for it while continuing to cling to what they learned under the old dogma.  Science advances not through persuasion but through retirement, as the adherents of the old dogma retire and get replaced by people who learned the new hypothesis.  And those people in turn accept the hypothesis fully and turn it into a dogma to be taught to students who don’t yet have the full knowledge base to understand why something is true but who can be taught that it is true.  Hence, dogma.

During the upwelling of a new hypothesis though, holy wars can happen.  I don’t mean fighting and purges; I mean the kind of holy wars that nerds engage in, the kind of demeaning of those on the “other side” in the sense of “oh you have a GameCube instead of a PC? I should have known you were a console peasant.”  These holy wars infect science too.  Scientists try to be nice for professionalism’s sake of course, but they will spend enormous effort undercutting each other’s theories, and at times even undercutting each other’s professional trajectories, in their bid to garner support for their own theory.  This may seem needlessly cruel, but there is an element of rational self-interest: if you think your theory is true, then supporting the truth against the false is good praxis, and in more base terms there is only so much funding to go around, so ensuring that your dogma or theory is held in higher esteem will ensure your side is the one receiving the lion’s share of scientific funding.

I know this all sounds like pointless waffle, but I was specifically reminded of it when I recently saw a few talks on Alzheimer’s disease.  The holy war over Alzheimer’s can’t be summed up in a short blog post, but some people think Alzheimer’s is caused by a protein called “A-beta” and some think it is caused by one called “tau”.  A few hold a compromise position that perhaps both proteins are necessary, but most of the scientists I’ve seen presenting talks hold to one side or the other, and both sides are competing to become the new dogma.  For the most part these two sides talk past each other: if you think that A-beta is the cause of Alzheimer’s disease then there isn’t much point in researching tau, and vice versa.  But occasionally you’ll find both sides present at a symposium, and there they will feel the need to defend themselves to the audience and slyly denigrate the opposing position.  Never to the level of insults (in public), but instead to the level of “I respectfully suggest that those other scientists have grossly misunderstood the evidence.”  Which is a very kind way of saying fuck you.

When science becomes engineering, it ceases to be science

I just wanted to talk about the pitfalls of science for a moment.  We all know what science is “supposed” to be, you take evidence and create a theory about the world, then you test your theory rigorously to see if it is true, incorporating the new evidence from each round of testing to create a better and better theory.  But although that’s normally what science is in a macro sense, in a micro sense it isn’t always.  Science in a micro sense is the work done by students and researchers at labs all across the globe.  They don’t always have a theory, they don’t always do a good job testing their theories, and importantly for today, they don’t always incorporate new evidence into their theory to see if it is really true.

I worked in a lab before that didn’t incorporate new evidence.  We were trying to make… something.  It isn’t important what that something was, but it was pharmaceutical in nature.  We didn’t know exactly what it would look like, but we would know it when we saw it.  Our day-to-day science was to run large experiments and, in each experiment, look for our special “something”.  If we didn’t find it this time, then we’d change our parameters and try again to run the experiment and look for our “something”.  Each time we failed to find our “something” we would use the evidence to change our experiment: we would think that maybe some part of our process was destroying the “something,” maybe the “something” was present in quantities too small to detect, maybe we just ran the experiment improperly and should try again.  What we would never do is think that maybe our “something” doesn’t even exist, that maybe we were doing experiments and collecting data in search of a mirage, and that we should take our repeated null results as evidence that our hypothesis just isn’t true.

We didn’t think that because our minds had been set that this was an engineering problem, not a scientific one.  Scientifically we felt the something *must* exist, everything we’d ever studied said it must, and yet time after time we found it conclusively *not existing* despite our best efforts to find it.  If we could just get the engineering right: tweak the experiment, alter our detection methods, make sure to do it all correctly, then surely we’d find it.  But maybe that was all a lie and it just never existed.

I left that lab, and to this day they still haven’t found their special something.  They still work on it, and I’m sure many labs around the world still work diligently looking for a something that may or may not be there.  But on a micro level I feel that that lab had stopped doing science.

I wish PBS Spacetime would do more planetary science

For those who don’t know, PBS Spacetime is an awesome YouTube series where real-life astrophysicist Matt O’Dowd discusses the most fascinating facts and theories about modern physics. They’ve had videos on everything from string theory to general relativity to alien spaceships buzzing our solar system. I’ve loved almost every video and topic they’ve discussed, but one glaring omission that I’d love to see more of is planetary science, especially the formation of our solar system.

Our solar system is a weird and wonderful place, and there’s plenty to talk about that they haven’t gotten to. I’m particularly interested in the topic of solar system formation. When I read articles about exoplanets and foreign stars, they often discuss the Hot Jupiters and Super Earths that might be orbiting those stars. These stories make our solar system, with its cold Jupiter and its regular-sized Earth, seem kind of lame. But how abnormal is our solar system? Are we out of the ordinary, or very ordinary indeed?

One really cool set of hypotheses I’ve read up on are the Nice model and the Grand Tack. I don’t have near enough astrophysics background to explain these, but together they paint an exciting picture in which, during the early formation of the solar system, Jupiter and Saturn began to drift inward on orbits closer and closer to the sun. Eventually they reached orbits much closer to Mars’ orbit than to their present ones, before orbital resonances kicked them back out again into their present orbits. These hypotheses would answer a lot of questions about our early solar system: the smallness of Mars relative to the Earth and Venus, how the current gas giants could have formed and reached positions so far away from the sun, and perhaps even the Late Heavy Bombardment of the inner solar system. I’ve often been curious whether they could also explain why our sun doesn’t have a Hot Jupiter, aka a gas giant orbiting very, very close to the Sun. As stated, Jupiter and Saturn migrated inward before eventually turning around and migrating back out again. If they had not stopped, might they have become a set of Hot Jupiters? Did the Hot Jupiters around other stars migrate inward to their positions the way Jupiter and Saturn once migrated, but without ever turning back?

It’s a tantalizing topic for me, which is why I’d love to see a PBS Spacetime episode on it.

Some questions about a new Miracle Cure for degrading “Forever Chemicals” such as PFAS

Earlier I was sent a wonderful article by the BBC about a new breakthrough in degrading “forever chemicals” known as PFAS. PFAS, aka “per- and polyfluoroalkyl substances”, are common chemicals used to make all sorts of household products from paints to pans to wrappers. They are highly resistant to water and oil, which is why they’re so often used, but that same property makes them difficult to degrade. Because they are difficult to degrade, they stick around and have been linked to some harmful health effects if they are present at very high levels. This is why the new breakthrough is so important: the ability to degrade these chemicals before they build up to harmful levels would be very useful.

After reading the BBC article I went to the paper itself to understand the science behind the breakthrough. Now here’s where I have questions, and because I am not an expert here I’d love it if some actual science experts could help me understand this. To start with, PFAS is basically a long string of carbon atoms ending in a carboxylic acid, and attached to each carbon atom is a bunch of fluorine atoms. Prior research demonstrated that the carboxylic acid can be popped off using high temperature (120 degrees C) and a polar, aprotic solvent (water is protic, DMSO is aprotic; aprotic means that it can’t donate hydrogen bonds, which water does easily). Once the carboxylic acid is popped off by this high temperature and specific solvent, all the fluorine ions are readily removed by the addition of NaOH. In the main body of the research this step was simultaneous with the popping off of the carboxylic acid (aka at 120 degrees C), but later on the paper said that removing the fluorines could also happen at lower temperatures.
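Roughly speaking (and this is just my own cartoon of the overall transformation as described above, not the paper’s actual step-by-step mechanism), the process looks something like:

```latex
% Decarboxylation in a hot polar aprotic solvent (e.g. DMSO):
\mathrm{C_nF_{2n+1}\text{--}COO^-}
  \xrightarrow{\ \mathrm{DMSO},\ 120^{\circ}\mathrm{C}\ }
  \mathrm{C_nF_{2n+1}^-} + \mathrm{CO_2}

% Subsequent defluorination by hydroxide (heavily condensed):
\mathrm{C_nF_{2n+1}^-} + \mathrm{OH^-}
  \longrightarrow
  \text{small carbon-containing products} + (2n{+}1)\,\mathrm{F^-}
```

The second line compresses a lot of chemistry into one arrow; the point is simply that, if degradation is complete, every fluorine ends up as fluoride in solution.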

Now this is all very cool but one of my questions is: what happens to all those fluorines? And especially what will happen if we try to industrialize this process to degrade PFAS on a large scale? It appears that the fluorines remain as F- ions in the solution, but from my understanding if even a small amount of water gets into the solution, they will readily turn into HF, a very dangerous acid. If this process is scaled up, it seems conceivable that the concentration of PFAS will be increased in the reaction vessel to more efficiently use space and heat for degradation, meaning the concentration of fluorine following degradation will also be increased, meaning that the possibility for high concentrations of HF will also increase. So basically: is this process ready for prime time, or do we need to add another step to safely remove the F- ions? Fluorine as an atom is very hard to move around, requiring special permits and special containers, so I can’t imagine you can just package and ship it to some plant for re-use in new PFAS production. So what’s the next step? What’s a good way to remove or neutralize the fluorine so it can either be safely disposed of or sold for re-use? I’d love if any scientists could help me understand this.
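To put rough numbers on that worry, here’s a back-of-envelope sketch (my own arithmetic, not from the paper) of how much fluoride full degradation would release, assuming PFOA as a representative PFAS:

```python
# Back-of-envelope sketch: fluoride released when PFOA is fully defluorinated.
# Assumes PFOA (C7F15-COOH, ~414 g/mol, 15 fluorines per molecule) as a
# representative PFAS; real feedstocks and yields will differ.

PFOA_MOLAR_MASS = 414.07   # g/mol
F_PER_MOLECULE = 15        # fluorine atoms per PFOA molecule
F_MOLAR_MASS = 19.0        # g/mol

def fluoride_released(pfoa_grams: float) -> float:
    """Grams of F- produced if every fluorine is mineralized."""
    moles_pfoa = pfoa_grams / PFOA_MOLAR_MASS
    return moles_pfoa * F_PER_MOLECULE * F_MOLAR_MASS

# Degrading 1 kg of PFOA frees roughly 0.7 kg of fluoride:
print(round(fluoride_released(1000.0), 1))
```

Even at this toy scale, a kilogram of PFOA yields the better part of a kilogram of fluoride, so any industrial process would need a plan for it.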

A Practical Guide for going to space.  Final thoughts.

Writing this series has been, for me, very therapeutic.  I’ve always been interested in space and space travel.  There’s still a lot more to talk about, for instance SSTOs (single stage to orbit) and why many think they’re the future of space travel, or the particular difficulties of landing on any planet with an atmosphere.  But overall I wanted this to be a fun little introduction to how space travel works and how it was done in the Apollo program.  Once I learned how it worked I started noticing how basically no movies or games (besides Kerbal Space Program) do it justice.  I can’t tell you how many times I’ve noticed that most spaceships in movies or games don’t actually orbit anything; they just float around relatively motionless compared to whatever body they are near.  The International Space Station for its part is moving incredibly fast, completing an orbit roughly every 90 minutes.

Still it was fun to get this all out there and in one coherent place.  Thank you for taking the time to read and learn with me.

A Practical Guide for going to space. Part 4: fuel-saving designs for an easier round trip

In the last three days I’ve made a series of posts detailing in a general sense how a space mission can go from the Earth to the Moon and back.  On Monday I discussed how to get into orbit and how orbits work generally.  On Tuesday I discussed how to go from an Earth orbit to a Moon orbit, and how to go from orbit to landing on the surface.  And on Wednesday I discussed the return journey from the Moon to Earth and how atmospheric drag can be used to help land on Earth.

Today I’d like to touch on the things I didn’t mention, the things NASA spent a lot of time and money to achieve because they were crucial to mission success.  In particular, NASA spent a lot of time and money figuring out how they could get the greatest amount of mass to the moon using the least fuel and the smallest rockets they could.  Rockets and fuel are big, expensive, and difficult to handle so the less of them you have to use the better.

This weight-saving starts with the first ascent, when the spaceship is getting into orbit.  The rocket that launched from Kennedy Space Center was 363 feet tall and looked like THIS, while the orbiting modules that went to the moon were about 37 feet tall and looked like THIS.  Where did all the rockets go?  Well, the Saturn V rocket itself was big and heavy, and once all its fuel was expended it was detached from the orbiting modules and fell back to Earth, allowing the modules to get into orbit on their own.  This in turn made getting to the moon cheaper and more fuel efficient, because getting those little modules to the moon costs way less fuel than getting a giant Saturn V PLUS those modules to the moon.  This idea of saving weight by detaching from expended rockets was used all over the Apollo and Soviet programs, and will be discussed again shortly.

Next, once the modules get into orbit around the moon, we can save weight again by having only one module descend to the lunar surface while the other remains in orbit.  This significantly reduces the amount of weight we need to get on and off the Moon, and that in turn reduces the fuel usage.  Finally, once on the Moon, the Apollo module would detach from some of its rockets yet again, leaving them on the Moon and sending only a small part of the lunar lander back to orbit, similar to how booster rockets were jettisoned during Earth ascent.

In all these cases, fuel can be saved by simply taking less mass from one place to the other: detaching from the rockets to take less mass from Earth orbit to Moon orbit, detaching the lunar module to take less mass from Moon orbit to Moon landing, and then detaching from some lunar module rockets to take less mass from the Moon landing back to Moon orbit. All of these reduce the weight you have to move and thus save fuel, and since one of the biggest difficulties in going to space is your fuel usage, this is a big help. Originally NASA didn’t want the lunar module to detach from the command module for lunar landing; they wanted to land the entire spacecraft on the moon. This was because detachment and landing would have to be followed by an in-orbit rendezvous to get the astronauts back together for the return-to-Earth part of the mission, and they didn’t know if in-space rendezvous were feasible. But the fuel savings from this method were obvious, so several missions were launched to test our ability to perform rendezvous, and once those were successful the lunar-module version of the mission was given the go-ahead.
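The fuel savings from dropping spent mass can be sketched with the ideal (Tsiolkovsky) rocket equation. The numbers below are made up for illustration, not actual Apollo figures:

```python
import math

G0 = 9.81  # m/s^2, standard gravity

def delta_v(isp_s: float, m_initial: float, m_final: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m_initial / m_final)

# Illustrative (made-up) numbers: a 100 t craft burning 80 t of propellant.
# Single stage: all 20 t of dry mass stays attached the whole time.
single = delta_v(300, 100.0, 20.0)

# Two stages: burn 40 t, drop 10 t of empty tankage, then burn the rest.
stage1 = delta_v(300, 100.0, 60.0)   # 100 t -> 60 t
stage2 = delta_v(300, 50.0, 10.0)    # after dropping 10 t: 50 t -> 10 t
staged = stage1 + stage2

# Same engine, same propellant load, but the staged design gets more
# total delta-v because it stops hauling empty tanks around.
print(round(single), round(staged))
```

The gap between the two totals is exactly the kind of saving the Saturn V’s staging (and the lunar module’s descent-stage jettison) was buying.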

The last trick is something I’d like to make clear about the physics of getting into and out of an orbit.  When I watched the Giant Bomb Let’s Play of Kerbal Space Program, one of the commenters posed the question: “It’s easier to get down from orbit than back into orbit, it must be easier because you have gravity helping you, right?”  This is in fact a misunderstanding: going from orbiting a body to sitting stationary on that body requires the same velocity change, and thus the same fuel, as doing the opposite. You can get down from orbit more cheaply if all you want to do is crash; in that case you can simply shrink your orbit and slam into the body at a few hundred meters per second, saving you a lot on fuel (this is called lithobraking, and a cushioned version of it was used to land the NASA rovers Spirit and Opportunity, whose falls were softened by inflatable airbags). So it will always take the same amount of energy to get from the ground into orbit as it takes to get from orbit to the ground; importantly, however, this does not take into account the atmosphere of a planet. The atmosphere of a planet creates drag which will slow down any craft moving through it, and we can use that to our advantage when we try to land on Earth by letting the atmosphere slow our descent instead of needing to use rockets to slow ourselves like we did on the Moon. This is the final big fuel saving for our trip, and it is why the Apollo capsules landed without their rockets: they didn’t need those rockets to slow themselves, and carrying them would only make descent harder, as they’d need a bigger parachute to slow themselves upon final descent to the ground.
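The up/down symmetry can be made concrete with a little orbital arithmetic. This is a sketch using textbook constants, with the circular-orbit speed formula v = sqrt(mu / r):

```python
import math

MU_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter (G*M)
R_EARTH = 6.371e6     # m, mean Earth radius

def circular_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# At ~200 km altitude the orbital speed is about 7.8 km/s. Ignoring the
# atmosphere, the velocity change needed to build up this speed from rest
# is the same as the velocity change needed to cancel it: the maneuver is
# symmetric, which is why "gravity helping you" doesn't make descent free.
v = circular_speed(200e3)
print(round(v / 1000, 1))  # ~7.8 (km/s)
```

Atmospheric drag is what breaks the symmetry in practice: it cancels that ~7.8 km/s for free on the way down, but only ever costs you extra on the way up.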

All in all, saving fuel and weight is of primary importance to any space mission, and many of the techniques we take for granted had to be calculated and figured out by NASA before they became standard. Everything the Apollo rockets did had hundreds of pages of data and calculations behind it, and even if those decisions aren’t immediately obvious to us, they were all necessary to get to the moon.

A Practical Guide for going to space. Part 3: from the Moon back to Earth

This is the third post in my weeklong series about space travel.  Yesterday’s post can be found here, and in it I explained the basics of getting a spaceship from low Earth orbit to the surface of the moon using the simple concepts of prograde and retrograde burns.  Remember that burning prograde means firing your rockets in such a way that you increase your velocity in the direction of your motion, relative to the body you are orbiting.  Burning retrograde decreases your velocity in that direction.  If you are orbiting around the Earth’s equator, burning prograde means pointing your rocket in the direction of your current motion and executing a burn to gain more velocity in that direction.  

Now that we’ve been to the surface of the moon we can play a few holes of moon golf, and then once finished we can leave the surface of the moon and return to Earth.

The trip from the moon’s surface to low lunar orbit is much like the trip we took in Part 1 from the surface of the Earth to low Earth orbit, only this time there’s no atmosphere to drag us down.  So we only need to gain enough altitude to clear any lunar mountains, then burn horizontally until we have enough horizontal velocity that gravity bends our trajectory around the moon and into an orbit.  If we have too little horizontal velocity, our trajectory will be bent back down to the lunar surface, and if we have too much horizontal velocity we will escape the moon entirely.  Escaping the moon is actually our next step though, so once in orbit we can burn prograde to gain velocity relative to the moon and escape its orbit.  
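For a sense of scale, here’s a quick sketch (textbook constants, not mission data) of the horizontal speed a low lunar orbit requires:

```python
import math

MU_MOON = 4.905e12    # m^3/s^2, Moon's gravitational parameter (G*M)
R_MOON = 1.737e6      # m, mean lunar radius

def moon_circular_speed(altitude_m: float) -> float:
    """Horizontal speed for a circular lunar orbit: v = sqrt(mu / r)."""
    return math.sqrt(MU_MOON / (R_MOON + altitude_m))

# A low lunar orbit at ~100 km needs only about 1.6 km/s of horizontal
# velocity, versus ~7.8 km/s for low Earth orbit -- between that and the
# lack of atmosphere, ascent from the Moon is comparatively cheap.
v_llo = moon_circular_speed(100e3)
print(round(v_llo))  # ~1630 m/s
```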

Once we escape the moon’s orbit, where will we be?  Back in orbit around the earth.  Remember that the moon itself orbits the Earth, and so anything orbiting the moon is also itself orbiting the Earth.  Escaping the moon’s orbit will likely bring us to an elliptical orbit with Earth as its focus.  We gained a lot of velocity relative to both the moon and the earth in order to escape the moon, but we still haven’t escaped the Earth’s orbit.  That’s actually good, we don’t want to escape the Earth (yet), personally I need to get back home.  So now that we’re out of the moon’s orbit and back into an Earth orbit, how do we get back to Earth?  Simply burn retrograde to reduce our velocity relative to Earth.  Doing this will shrink our orbit, just as burning prograde expanded our orbit in part 2.  And once we’ve shrunk our orbit to the point that our orbital trajectory crosses into Earth’s atmosphere, we’re basically guaranteed to get home.  The Earth’s atmospheric drag will slow our craft down, sapping it of horizontal momentum, until our trajectory no longer maintains an orbit but instead is bent towards the planet’s surface by gravity.  
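A rough sketch of what a deorbit-style retrograde burn costs, using the vis-viva equation (the altitudes here are illustrative, not real mission numbers):

```python
import math

MU_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter (G*M)
R_EARTH = 6.371e6     # m, mean Earth radius

def vis_viva(r: float, semi_major: float) -> float:
    """Orbital speed at radius r on an orbit with the given semi-major axis."""
    return math.sqrt(MU_EARTH * (2.0 / r - 1.0 / semi_major))

# Start in a circular orbit at 400 km; burn retrograde so the new perigee
# dips to 60 km, well inside the atmosphere (illustrative numbers only).
r_apo = R_EARTH + 400e3
r_peri = R_EARTH + 60e3
a_new = (r_apo + r_peri) / 2.0          # semi-major axis of the new ellipse

v_circular = vis_viva(r_apo, r_apo)     # speed before the burn
v_after = vis_viva(r_apo, a_new)        # speed just after the burn
dv = v_circular - v_after
print(round(dv))  # on the order of 100 m/s
```

A burn of only about a hundred meters per second drops the perigee into the atmosphere, and from there drag does the rest of the work of shrinking the orbit.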

This was something we couldn’t do on the moon because the moon doesn’t have an atmosphere, but it does bring back the Apollo 13 dilemma that I discussed all the way back in Part 2.  To recap: the Apollo 13 dilemma was about how Apollo 13 would navigate the Earth’s atmosphere to ensure it got home safely.  The astronauts needed to burn retrograde to lose enough velocity such that Earth’s atmosphere would slow them down and they would land on Earth with their parachutes, but how much should they slow down?  If they slowed down too much, they would take a steep plunge through the atmosphere; the intense heat from re-entry might destroy their capsule, and even if it didn’t, the steep trajectory might not give their craft enough time to slow down enough for a safe landing.  However, if they slowed down too little, then they would take a very shallow trajectory.  This shallow trajectory would mean they would not pass through enough of Earth’s dense atmosphere, meaning they would not be slowed down significantly, meaning their trajectory would not be bent into a surface-crossing one.  As they passed through the atmosphere they would be slowed, but not enough, and they would continue on their elliptical Earth orbit.  Their orbit would still cross Earth’s atmosphere, and so each time their orbit passed through the atmosphere the craft would be slowed more and more until their trajectory was bent into a surface-crossing one and it hit the ground.  The problem for the Apollo 13 astronauts was that by then it would be too late.  Their elliptical orbit took days to complete and they didn’t have enough food, water, or oxygen to survive for that long.  They needed to come down to Earth in a single pass.

This dilemma is similar to the one we would face coming back from the moon, we need to burn retrograde such that we will pass through the earth’s atmosphere and let it take enough of our momentum so we can safely land with our parachutes.  Again the calculus for figuring this out is diabolical, and it’s the reason NASA employed so many people just to do calculations during the Apollo program.  But once we are slowed down enough by the atmosphere, our trajectory will be bent into one which crosses the surface of the earth, and from there it’s simply a matter of deploying a parachute at the right time and our craft can gently float down to land on the surface.  Mission accomplished.