I don’t like Factorio: Space Age

I started, stopped, and started this post several times. I just want to get it out the door, so I’m posting it now even though it’s not my best. I’ll have more to post on Factorio after this, but my thesis remains: I loved Factorio on its own, and I don’t like Factorio: Space Age. I don’t think it’s a good expansion pack and I don’t think you should buy it.

Let me ramble about science in the base version of Factorio.

Red science was so simple you could craft it in your inventory. But the long crafting time encouraged you to figure out automation and make that unnecessary. Green science was a step up: it not only tested your automation skills but also encouraged *and* rewarded you for succeeding. To explain: green science needs inserters and belts, which are two things you’ll make a *lot* of in Factorio. If you want to succeed, you’ll need to automate them anyway, so you might as well do it for green science too. Conversely, once you get over the difficulty hill of automating them, you can split off the inserters and belts you’ll need for your factory, because you’re probably building more than your green science needs. So green science encourages you to automate the things you’d have to automate anyway, and rewards you too, since automating those things is a necessary step in growing the factory.

From there, blue science tests a whole new subject: fluid mechanics. Blue science needs plastics, which needs petroleum gas, which needs oil. If you’ve never dealt with Factorio fluids before, blue science demands you learn how. But you’re also rewarded with bots, because blue science unlocks the construction and logistics robots that make the second half of the game so much easier.

Purple science doesn’t feel much different from blue science, but I think the name “production science” is fitting because it’s a real step up in total materials, if not complexity. For the most part purple science uses all the same inputs as blue science, but no matter how much I feel I overbuild, I *always* seem to run out of steel for it! Purple science tests your ability to scale, and scale big, because you always need more steel than you think you need.

Finally, yellow science really feels like a final exam. Like purple science you’ll need an overwhelming volume of inputs, this time copper instead of iron/steel. Blue circuits and batteries both require you to have completely mastered the game’s fluid systems, with multiple steps where chemical plants feed into assemblers and vice versa.

When you finally master yellow, white science is strangely underwhelming. It’s mostly “the same but more”: it requires blue circuits and low density structures just like yellow science (plus extra green and red circuits before Space Age came out), but then adds rocket fuel on top of that and a huge rocket silo that needs to be built. Not exactly a great leap in difficulty, but by then you’re probably just ready for it to end, so it’s in a good place overall.

The thing is, Space Age doesn’t feel like it follows this kind of progression, or any progression. Each planet feels mostly like redoing red and green science. The science pack only demands that you master the basics of automation on this new planet with these new resources. And once you do that, you can leave and never need to return.

It feels… not great. I don’t feel any sense of adventure and progression landing on planet after planet and doing the equivalent of “super simple red/green science, only now with 1 new ingredient no other planet has.”

The space mechanics are like Dyson Sphere Program’s, in that they aren’t realistic at all and I wish they were. I know making Kerbal Space Program *in* Factorio would have been hard, but at the very least I don’t see why a rocket that runs out of fuel starts slowly sinking back to the planet it launched from, yet never falls into the atmosphere and hits the ground. A rocket that loses fuel just continues to drift on its current trajectory. If you want it to fall back to the planet it launched from, then that trajectory should eventually make it hit the ground. But instead Factorio: Space Age has this worst-of-every-world middle ground where things are unintuitive *and* unphysical *and* waste your time. My first ever space ship didn’t have enough fuel to reach its destination planet, so I had no choice but to wait for it to *sloooooooooooooooowly* drift backwards to the first planet before I could give it more fuel to try the journey again. I had no way to speed this up, and I had no reason to think it *would even work that way*, since that’s not how space travel actually works.

Another thing I dislike: I feel like this game had room for the planets to interact with each other more. The space ships are built off the old system for railroads, but they aren’t useful as railroads. The game is clear that you should simply be producing your science on each planet and then shipping it all to Nauvis for research. But why does that have to be the *only* option? Why not let us juggle items and send them back and forth between planets? Because the devs decided every challenge in this expansion pack must have *a single specific solution*, rather than letting the player come up with their own. That’s bad game design and makes the game less fun.

When I played with rails, yes, I would make a starter base for red/green/black science. Then another for blue, another for purple, another for yellow+white. And I’d run a single train line to each of these bases to ship all the science to a single location. But you don’t have to be that lame. You can have train lines running in all directions to ship all raw resources to a centralized location. This can simplify, say, your green chip production if it all happens in one place and you just siphon those chips off to each science that needs them.

Or you can have satellite bases that build intermediate products, say putting all chips in one place and shipping them around. Or a mishmash of both where sometimes you produce everything onsite and only ship the science back and sometimes you’re importing everything just to make science. You can do a lot of things.

You can’t do that in Space Age because of the seemingly arbitrary restrictions on how much stuff can fit in a rocket. 2,000 green chips can fit in a single rocket, but only 300 blue chips. Blue chips stack far more efficiently than that; the only reason for the limit is the feeling that it would be “too easy” if you could ship blue chips around from Fulgora. But would it be easy, or would it be interesting? They clearly wanted you to engage with space shipping, since the entire planet Aquilo punishes you if you don’t, but they didn’t want you to do *enough* space shipping to actually make planet-to-planet production lines like you could with trains in the base game.

And I think that’s a huge missed opportunity, because I’d *love* it if I could be rewarded for interplanetary shipping like this. I’d love to heavily focus Vulcanus on the “low tier” items and Fulgora on the “high tier.” Gleba could specialize in the various oil derivatives with all its bioproducts. Then I could ship whatever I need wherever I need it and have an engaging reason to produce a lot of different space ships with different needs.

It feels like the game quite clearly has exactly one way you have to play and doesn’t want you to experiment; rather, it wants you to find and accept the “right” way. The clearest version of this is the asteroids that will hit your space ships. Fighting the biters in the base game gave huge latitude for experimentation: did you turret creep them? Mass produce grenades and spam them? Drive all around them in a car with autocannons? Go for the defender capsules? There are a lot of different ways to do things and none of them are wrong. You can use a tank or ignore it completely. You can focus on personal laser defense to kill biters up close, or rush artillery to kill them from afar. Do you even care to try uranium ammo? Or nuclear bombs? Or do you just want to plop down a long line of laser turrets and call it a day? The game lets you play how you want, rewards you for experimenting, and never punishes you for trying something “wrong.”

Space Age punishes you for not playing its way. You need to use turrets in space to protect from asteroids. And you need to build ammo in space to feed the turrets. You can’t use lasers like you could on the ground, because then you’d only need to focus on power, so asteroids have 99% damage reduction against the same lasers that can kill a behemoth biter twice their size. And you can’t ship ammo up to the space ship either; that would be too easy. Instead, how much ammo can be shipped to and fro has been heavily curtailed. 25 uranium-coated bullets weigh as much as 1,000 solid iron plates. Check the periodic table and do the math; I assure you it doesn’t add up. Even crazier is that 25 uranium bullets weigh as much as 50 uranium fuel cells. U-238 really isn’t *that* much heavier than U-235, guys.
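Taking the rocket-capacity numbers above at face value, you can back out the implied per-item weights. A quick sketch (the item names are my own labels; in-game these are just the quantities a single rocket can carry):

```python
# Quantities a single rocket can carry, as quoted above
rocket_capacity = {
    "iron_plate": 1000,
    "uranium_bullets": 25,
    "uranium_fuel_cell": 50,
}

def implied_weight_ratio(heavy_item, light_item):
    """If a rocket carries fewer of one item, each unit must implicitly
    'weigh' more: ratio = capacity(light_item) / capacity(heavy_item)."""
    return rocket_capacity[light_item] / rocket_capacity[heavy_item]

# Each batch of uranium bullets "weighs" as much as 40 iron plates,
# and twice as much as a uranium fuel cell.
print(implied_weight_ratio("uranium_bullets", "iron_plate"))         # 40.0
print(implied_weight_ratio("uranium_bullets", "uranium_fuel_cell"))  # 2.0
```

Forty iron plates of mass per magazine of bullets is the part that, as the paragraph above says, doesn’t survive contact with a periodic table.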

And then once you get ammo working, they introduce new asteroids that are 99% resistant to physical damage, all so that you are forced to build rocket turrets instead, which are the new asteroids’ one weakness. Then finally rocket turrets need to be upgraded to Tesla turrets.

There’s no variety here, there’s no experimentation, there’s no reward for trying things your way. You don’t get to try other options like shipping all your ammo up and trying to make it that way. Or focusing on laser turrets instead of gun turrets. Or using walls to ram the asteroids instead of using guns at all. There’s a lot of alternative routes that are just fine to experiment with against biters, but are shot down when you go against asteroids because the devs had a very specific vision in mind for how they wanted space ships to work, and stepping outside of their vision is not allowed.

The game just isn’t fun. The newest planets are hit and miss. Fulgora is nice because it’s a backwards planet: all the most expensive materials are easy to get and all the cheapest materials are harder to get. Vulcanus is my favorite because it actually does something cool: your normal solid products are turned into liquids instead. Gleba is terrible game design and should be deleted entirely. Aquilo is unfinished and boring.

And overall even the new planets aren’t fun when I’m just landing, doing 3 things, and then leaving that planet never to return. I don’t feel like these bases are part of “my” base the way I felt when I made an area for purple science and an area for yellow science. I don’t feel like they connect to each other in any way because they don’t.

And I don’t feel like any of the challenges the game presents are worthwhile in their own right, because they’ve all been made with the mindset of “there is only 1 way to properly complete this challenge, find the way the game devs wanted or else.” They’ve specifically put down guard-rails to prevent you from ever having an original thought that wasn’t the solution they themselves wanted, and it just feels lame. Space ship design should be the greatest avenue for player freedom and creativity, but instead everyone’s space ship is *identical* because the devs needed to make the challenges solvable in only 1 precise way. So no one ships ammo to space, no one tries to smash into the asteroids with walls and build up faster than they take damage. No one tries to do anything except the exact solution the devs wanted, and that is such a shame for a game that until now was so focused on player freedom and expression.

Factorio: Space Age is not a good expansion pack. I thought it would rekindle my love for Factorio, but now I never want to play Factorio again. I had been playing for absolute ages, and had recommended the game to friends. But I can’t recommend this expansion pack to anyone I know, it just isn’t what made Factorio so fun to begin with.

If the government doesn’t do this, no one will

I’m not exactly happy about the recent NIH news. For reference, the NIH has decided to change how it pays for the indirect costs of research. When the NIH gives a 1 million dollar grant, the university that receives the grant is allowed to charge a certain amount of “indirect costs” to support the research.

These add up to a certain percentage tacked onto the price of the grant. For a Harvard grant this was about 65%; for a smaller college it could be 40%. What it meant was that a 1 million dollar grant to Harvard was actually 1.65 million, while a smaller college got 1.4 million. The 1 million was always for the research, but the extra 0.65 or 0.4 million was for the “indirect costs” that made the research possible.

The NIH has just slashed those costs to the bone, saying it will pay no more than 15% in indirect costs. A 1 million dollar grant will now give no more than 1.15 million.
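To make the arithmetic concrete, here’s a minimal sketch of how the totals above work out (the function name and integer-percent rates are my own, purely for illustration):

```python
def total_award(direct, indirect_pct):
    """Total grant = direct costs plus indirect_pct percent tacked on top."""
    return direct + direct * indirect_pct // 100

# A $1M direct grant under the old negotiated rates vs. the new 15% cap
print(total_award(1_000_000, 65))  # 1650000 (old Harvard-level rate)
print(total_award(1_000_000, 40))  # 1400000 (smaller college)
print(total_award(1_000_000, 15))  # 1150000 (new NIH cap)
```

The direct money is unchanged in every case; only the extra slice on top shrinks.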

There’s a lot going on here so let me try to take it step by step. First, some indirect costs are absolutely necessary. The “direct costs” of a grant *may not* pay for certain things like building maintenance, legal aid (to comply with research regulations), and certain research services. Those services are still needed to run the research though, and have to be paid for somehow, thus indirect costs were the way to pay them.

Also some research costs are hard to itemize. Exactly how much should each lab pay for the HVAC that heats and cools their building? Hard to calculate, but the building must be at a livable temperature or no researcher will ever work in it, and any biological experiment will fail as well. Indirect costs were a way to pay for all the building expenses that researchers didn’t want to itemize.

So indirect costs were necessary, but were also abused.

See, unlike what I wrote above, a *university* almost never receives a government grant; a *principal investigator* (called a PI) does instead. The PI gets the direct grant money (the 1 million dollars), but the university gets the indirect costs (the 0.4 to 0.65 million). The PI gets no say over how the university spends that money, and many have complained that far from supporting research, universities are using indirect costs to subsidize their own largess: beautifying buildings, building statues, creating ever more useless administrative positions, all without actually using that money for its intended purpose, which is supporting research.

So it’s clear something had to be done about indirect costs. They were definitely necessary; if there were no indirect costs, most researchers would not be able to do research, as universities won’t let you use their space for free and direct costs don’t always cover renting lab space. But they were abused, in that universities used them for a whole host of non-research purposes.

There was also what I feel is a moral hazard in indirect costs. More prestigious universities, like Harvard, were able to demand the highest indirect costs, while less prestigious universities were not. Why? It’s not like research costs more just because you have a Harvard name tag. It’s just because Harvard has the power to demand more money, so demand they shall. Of course Harvard would use that extra money they demanded on whatever extravagance they wanted.

The only defense of Harvard’s higher costs is that it’s doing research in a higher cost-of-living environment. Boston is one of the most expensive cities in America, maybe the world. But Social Security doesn’t pay you more if you live in Boston than in Kalamazoo. Other government programs hand you a set amount of cash and demand you make ends meet with it. So too could Harvard. They could have used their size and prestige to find economies of scale that would give them proportionally *lower* indirect costs than a smaller university. But they didn’t; they demanded more.

So indirect costs have been slashed. If this announcement holds (and that’s never certain with this administration; them walking it back and them being sued into undoing it seem equally likely), it will lead to some major changes.

Some universities will demand researchers pay a surcharge for using facilities, and that charge will be paid out of direct costs instead. The end result is the university still gets money, but we can hope that money will have a bit more oversight. If a researcher balks at a surcharge, they can always threaten to leave and move their lab.

Researchers as a whole can likely unionize in some states. And researchers, being closer to the university than the government, can more easily demand that this surcharge *actually* support research instead of going to the University’s slush fund.

Or perhaps it will just mean more paperwork for researchers with no benefit.

At the same time some universities might stop offering certain services for research in general, since they can no longer finance that through indirect costs. Again we can hope that direct costs can at least pay for those, so that the services which were useful stay solvent and the services which were useless go away. This could be a net gain. Or perhaps none will stay solvent and this will be a net loss.

And importantly, for now, the NIH budget has not changed. They have a certain amount of money they can spend, and will still spend all of it. If they used to give out grants that were 1.65 million and now give out grants that are 1.15 million, that just means more individual grants, not less money. Or perhaps this is the first step toward slashing the NIH budget. That would be terrible, but no evidence of it yet.

What I want to push back on though, is this idea I’ve seen floating around that this will be the death of research, the end of PhDs, or the end of American tech dominance. Arguments like this are rooted in a fallacy I named in the title: “if the government doesn’t do this, no one will.”

These grants fund PhDs who then work in industry. Some have tried to claim that this change will mean there won’t be bright PhDs to go into industry and work on the future of American tech. But to be honest, this was always privatizing profit and socializing cost. All Americans pay taxes that support these PhDs, but overwhelmingly the benefits are gained by the PhD holder and the company they work for, neither of whom had to pay for it.

“Yes, but we all benefit from their technology!” We benefit from a lot of things. We benefit from Microsoft’s suite of software and cloud services. We benefit from Amazon’s logistics network. We benefit from Tesla’s EV charging infrastructure. *But should we tax every citizen to directly subsidize Microsoft, Amazon, and Tesla?* Most would say no. The marginal benefits to society are not worth the direct costs to the taxpayer. So why subsidize the companies hiring PhDs?

Because people will still do things even if the government doesn’t pay them. Tesla built a nationwide network of EV chargers while the American government couldn’t even build 10 of them. Federal money was not necessary for Tesla to build EV chargers; they built them of their own free will. And before you claim how heavily Tesla is government subsidized, an EV tax credit benefits the *EV buyer*, not the EV seller. Besides, if EV tax credits are such a boon to Tesla, then why not own the fascists by having the Feds and California cut them completely? Take the EV tax credits to 0; that will really show Tesla. But of course no one will, because we all know who the tax credits really support: the buyers, and we want to keep them to make sure people switch from ICE cars to EVs.

Diatribe aside: Tesla, Amazon, and Microsoft have all built critical American infrastructure without a dime of government investment. If PhDs are so necessary (and they probably are), then I don’t doubt the market will rise to meet the need. I suspect more companies will be willing to sponsor PhDs and university research. I suspect more professors will become knowledgeable about IP and will attempt to take their research to market. I suspect more companies will offer scholarships where, after achieving a PhD, you promise to work for the company on X project for Y years. Companies won’t just shrug and go out of business if they can’t find workers; they will in fact work to make them.

I do suspect there will be *less* money for PhDs in this case, however. As I said before, the PhD pipeline in America has been to privatize profits and socialize costs. All American taxpayers pay billions toward the universities and researchers that produce PhD candidates, but only the candidates and the companies they work for really see the gain. But perhaps this can realign the PhD pipeline with what the market wants and needs: fewer PhDs of dubious quality and job prospects, more with necessary and marketable skills.

I just want to push back on the idea that the end of government money is a death knell for industry. If an industry is profitable, and if it sees an avenue for growth, it will reinvest profits in pursuit of growth. If the government subsidizes the training needed for that industry to grow, then it will instead invest in infrastructure, marketing, IP, and everything else. If training is no longer subsidized, then industry will subsidize it itself. If PhDs are really needed for American tech dominance, then I absolutely assure you that even the complete end of the NIH will not end the PhD pipeline; it will simply shift toward company-sponsored or (for the rich) self-sponsored research.

Besides, the funding for research provided by the NIH is still absolutely *dwarfed* by what a *single* pharma company can spend, and there are hundreds of pharma companies *and many many other types of health companies* out there doing research. The end of government-funded research is *not* the end of research.

Now just to end on this note: I want to be clear that I do not support the end of the NIH. I want the NIH to continue, I’d be happier if its budget increased. I think indirect costs were a problem but I think this slash-down-to-15% was a mistake. But I think too many people are locked into a “government-only” mindset and cannot see what’s really out there.

If the worst comes to pass, and if you cannot find NIH funding, go to the private sector, go to the non-profits. They already provided less than the NIH in indirect costs but they still funded a lot of research, and will continue to do so for the foreseeable future. Open your mind, expand your horizons, try to find out how you can get non-governmental funding, because if the worst happens that may be your only option.

But don’t lie and whine that if the government doesn’t do something, then nobody will. That wasn’t true with EV chargers, it isn’t true with biomedical research, and it is a lesson we all must learn if the worst does start to happen.

“I hate them, their antibodies are bull****”

I want to tell two stories today, they may mean nothing individually but I hope they’ll mean something together. Or they’ll mean nothing together, I don’t know. I’ve gotten really into personal fitness and am writing this in between sets of various exercises I can do in my own house.

The first story is from before the pandemic. I used to be a biochemist (still am, but I used to too). During that time I went to a lot of conferences and heard a lot of talks by the Latest and Greatest. One of the most fascinating talks was by a group out of Sweden who were preparing what they called a “cell atlas,” a complete map that could pinpoint the locations of every protein that would be in healthy human cells.

The science behind the cell atlas was pretty sweet. We know that the physical location of proteins in the body really matters: the proteins that transcribe DNA into RNA are only found in the nucleus, because DNA itself is only found in the nucleus. Physical location is very important so that every protein in the body is doing only the job it’s assigned, and not either slacking off or accidentally doing something it isn’t supposed to. The former gives you a wasting disease and the latter may cause cancer.

So knowing the location of these proteins on a subcellular level is actually pretty important. But how can we even determine that? We can’t really zoom into a cell and walk around checking off proteins, can we?

The key was that this group was also really into making their own fluorescent antibodies. They could make antibodies for any human protein and then stick on a fluorescent tag that lights up under the right conditions. Then it was just a task of sticking the antibodies into cells and seeing which part lights up, that tells you where the protein is.

There was a bit more to it of course, I should do a post about how all this relates to Eve Online, but that was the gist of it: put antibodies in cells and see where the cell lights up. Use that to build an atlas of the subcellular locations of the human proteome.

It was some cool science and a nice talk. A few months later I was at another conference and the discussion came up of whether conferences ever really have “good” talks, or whether scientists are incapable of anything above “serviceable.” I proffered the cell atlas talk as one I thought was actually “good”: it was good science explained well. The response I got from one professor stunned me: “oh I hate those people, their antibodies are bullshit.”

I don’t know how or why, but somehow this professor had decided that the in-house antibodies which underpinned the cell atlas project were all poorly made and inaccurate. That then undercut the validity of the entire project. I didn’t press further for this professor’s reasoning or evidence, I could tell he was a bit heated (and drunk) and left it at that. But while I never got any evidence against the cell atlas antibodies, I also never heard much in their favor. They seemed like a big project that just never got much recognition in the circles I ran in.

So was the cell atlas project a triumph of niche science, or a big scam? Well I don’t know, but it reminds me of another story.

As I said above, I’m much more into personal fitness these days. The Almighty Algorithm knows this, and so YouTube serves me up a steady stream of fitness influencer content. I still stay away from anything that isn’t Mike Israetel or a few other “evidence-based” youtubers, but even this small circle has served up its own helping of scientific slapfights.

In this case the slapfight is about “training to failure.” Most fitness influencers agree that you have to train hard if you want results. What exactly counts as “hard” though, that is where the controversy lies.

First of all, what is “training to failure?” Well unfortunately that too is controversial, because everyone has a different definition of what “failure” actually means. But generally, failure is when you are doing some exercise (a pushup, a pullup, a bench press) and you cannot complete the movement. Say you’ve done 5 pullups and you can’t do another, that’s “failure.”

Mike Israetel shows off example workouts of himself training hard, and he claims he’s training with “0 to 1 reps in reserve,” that’s a fancy way of saying he is training very near failure. If he does 5 pullups and claims he has 0 to 1 RIR (reps in reserve), then he is saying he could do AT MOST 1 more pullup, but he might actually fail if he even tried. He does this for almost every movement: bench presses, leg presses, squats, deadlifts, his claim of 0 to 1 RIR means he is doing the exercise until he can either no longer do it, or do it at most 1 more time before failure.

Failure itself is hard to measure, and sometimes you don’t know you’ll fail a move until you try. I once was doing pushups and just suddenly collapsed on my chest, not even knowing what happened. A quick assessment showed my shoulders gave out, and since pushups are supposed to be a chest exercise this implies I was doing them wrong, but that was a case where I clearly trained to failure since I tried to do the motion and failed.

But other fitness influencers have called Mike out on his 0 to 1 RIR claim, they think he isn’t training anywhere close to failure. The claims and counterclaims go back and forth, and unfortunately the namecalling does as well. I’ve kinda lost respect for the youtubers on all sides of this argument because of it.

But it gets back to the same point as the antibody story up above: a scientist is making a claim that they think is well-founded and backed by evidence, other scientists claim it’s all bullshit.

We think of science as very high-minded and such, that science is conducted through solemn papers submitted to austere journals. I don’t think that’s ever been the case; science is conducted as much through catty bickering and backbiting as it is in the peer-reviewed literature. Scientists are still people, and I’m sure a lot of us are happy to take our cues from people we respect without spending the time to go diving into the literature. The literature is long and dense, and you may not even be the right kind of expert to evaluate it. So when someone you respect says a claim is bullshit, I’m sure a lot of people accept that and don’t pay the claim any additional mind.

So is the cell atlas actually good? Is Mike Israetel actually training to failure? I don’t know. I’m not the right kind of scientist to evaluate those claims. The catty backbiting has reduced my opinion of all the scientists involved in these controversies, although I understand that drunk scientists are only human and youtubers need to make a living through drama, so I try not to be too unkind to them.

Still, it’s a reminder that “the science” isn’t a thing that’s set in stone, and “scientists” are not all steely-eyed savants searching dispassionately for Truth. I don’t have any good recommendations from this unfortunately, the only thing I can think of is the bland “don’t believe scientists unquestioningly,” but that’s hardly novel. I guess just realize that scientists can disagree as childishly and churlishly as anyone else.

“I go with the athletes, not the science”

Sorry I haven’t written about finance in a while, I know science+finance (SciFi, if you will) was kinda my niche, but since I got serious about my fitness I’ve been recommended a lot of fitness content by the Almighty Algorithm, and it’s gotten me thinking.

Today’s topic requires just a tiny bit of background. As I wrote about, I’ve been following the advice of Dr Mike Israetel in part because he says all the right science-y shibboleths to make me believe he knows what he’s talking about. But I’ve also gotten recommended content from many other lifters who push back against some of his claims.

To an extent their pushbacks pass the smell test as well, they reference the same concepts that Dr Mike (and others) discuss, but they interpret those concepts differently. So the disagreement between Dr Mike’s “science-based” advice and other people’s advice seems to be a legitimate disagreement over the science, rather than a denial of science and the substitution of personal preference in its stead.

But other parts of this disagreement strike me as more… thoughtless. I watched a video critiquing some of the science-based conclusions, and it stated (paraphrased) “people say this move is terrible, but then you see world record power lifters doing it and you think hmmm, maybe it’s not so terrible after all.”

I think this appeal to authority has no place in a science-based discussion. Now yes, every scientific theory on exercise must be tested and proven *outside* the lab as well as in the lab. If a conclusion only works in a controlled lab environment then it isn’t necessarily best in the “real world.” But saying “well the best power lifters do this so the science must be wrong” is kind of absurd, because maybe they could be *better* if they actually listened to the science.

It reminds me of a story about Pliny the Elder. Pliny was a wealthy Roman politician, whose wealth was derived mainly from vast agricultural estates. Not only that, he had extensive sources of the best knowledge available in the Roman world. So in his book Natural History, he draws upon his knowledge and experience to categorically state that *if you do not honor the gods, you will not be successful in agriculture*. And if you asked any of the Roman agriculturalists of his era, they’d probably give you the same answer.

Is the science on agriculture wrong? If all the best farmers honor the gods, is that the only way to succeed?

No.

So if the best power lifters in the world are doing a certain move that science says is terrible, maybe the science is actually right and the power lifters are succeeding due to their own innate abilities combined with all their other training. I’d hazard a guess that a single move isn’t make-or-break for their training at all, and defending a move with this appeal to authority doesn’t really seem logical. It seems more like casting about for evidence to support an idea that you’d like to be true.

Science must be refuted with science. You have to be able to use real-world data and say “lab results say this move is bad, but here’s all the evidence showing that people who eschew the move generally fail and people who use the move generally succeed.” You can’t point to a single anecdote and say “well, some people who use it succeed,” because then you’d be pointing to Pliny the Elder and saying “well I guess honoring the gods does improve your farm, because this guy was a really successful farmer and that’s what he did.”

Anyway, exercise science still seems to be in its infancy. I hope it gets more rigorous and comprehensive in the future, but it still seems to need some time before we can believe its claims as much as we can believe virology or chemistry.

Exercise and shibboleths

I’ve been trying to lose weight and gain muscle for years. But despite being in the target Young Male demographic, I never listened to Joe Rogan, or Logan Paul, or any of the exercise/fitness influencers. Part of that was that they just didn’t interest me. Part of that was that fitness is filled with a lot of pseudoscience, and as a scientist myself I could see that almost everything said online was tinged with nonsense and falsehood. Everyone is looking for “one weird trick” to get abs of steel and 4% body fat, which leads to a proliferation of voodoo practitioners giving terrible advice and selling you supplements.

I stayed away from online exercise discussions.

But while idly scrolling one day, I found a video by Dr Mike Israetel of Renaissance Periodization. And for the first time in my life, I’m hooked. I’m watching his videos, I’m trying to learn his techniques, I’m putting into practice what he says I should be doing.

I think a large part of this sudden switch is that Dr Mike seems to have legit credentials. A teaching record at Lehman College, a genuine publication history, this guy is clearly doing science, not voodoo. But I think even more than his credentials are his shibboleths.

Put simply, Mike Israetel says all the right words as a scientist to make me (a fellow scientist) believe he knows what he’s saying. There are certain words that started out in science but have reached the mainstream: anyone can talk about carbohydrates and calories. But few people know what a motor unit is, or can accurately talk about the immune system. Dr Mike is saying things that pass the smell test to me (I am a fellow biologist, though not an exercise scientist specifically), and that helps me believe him when he says things I might otherwise be skeptical of.

And those shibboleths… make me nervous. Because I know I’m not actually doing research, I’m not actually seeking out all sides of the debate and forming my own rational conclusions. There are hundreds of hucksters selling you on “the best way” to do exercise, so am I trusting Dr Mike for all the wrong reasons? Maybe he knows his biochemistry, but his exercise science is dogshit. I’d never know.

And even if Dr Mike is truly giving me the most accurate, up-to-date information in the scientific literature, that information could be wrong, and I could spend my time following baseless advice and getting less fit than if I’d just trusted the gymbro with a 6-pack and pecs.

I haven’t looked for any advice outside of Dr Mike, because to be honest I don’t have the time or the background necessary to know if he’s *really* got the goods or is a huckster like all the others. I have the background to know he knows his biochemistry, but beyond that I’m lost. But as someone without much time to exercise anyway, I feel like latching on to a charismatic Youtube professor is at least better than latching on to any other charismatic Youtuber, and is hopefully better than flying blind like how I used to exercise.

Time will tell.

So just how *do* you get good at teaching?

As a scientist with dreams of becoming a professor, I know teaching is part of the package. Whether it’s a class of undergraduates or a single student in a lab, your knowledge isn’t worth anything if you cannot teach it to others. I always say: no one would have cared about Einstein if he couldn’t accurately explain his theories. It doesn’t matter how right you are, science demands you explain your reasoning, and if you can’t explain in such a way to convince others, you still have a ways to go as a scientist.

Einstein was a teacher. After discovering the Theory of Relativity, he wrote and lectured so as to teach his theory to everyone. Likewise I must be a teacher, whether teaching basic concepts to a class of dozens, or teaching high-level concepts to an individual or a small group, teaching is part of science, and mandatory for a professor.

But how do I get good at it?

The first problem is public speaking. I don’t think I get nervous speaking in public, but I do have a tendency to go too fast, such that my words don’t articulate what I’m actually thinking. It’s hard to realize that the concepts you know in your head will be new and novel to the whole world that lives *outside* your head. When teaching these concepts to someone else, you need to go step by step so that they understand the logical progression, you can’t just make a logical leap because you already know the intervening steps.

So OK, I need to practice speaking more, but beside that, what’s the best method for teaching? And here we get to the heart of why I’m writing this post, *I don’t know and I don’t think anyone does*.

Every decade it seems sociologists find One Weird Trick to make students learn, and every decade it seems that trick is still leaving many students behind. When I went to school, teaching was someone standing at the front of the class, giving a lecture, after which students would go home and do practice problems. This “classic” style of teaching is now seen as passé at best, outright harmful at worst, and while it’s still the norm it’s actively shunned by most newer teachers.

Instead, teachers now have a battery of One Weird Tricks to get students to *really* learn. “ACTIVE learning” is the word of the day, the teacher shouldn’t just lecture but should involve the students in the learning process.

For instance, the students could each hold remote controls (clickers) with the numbers 1 through 4 on them. Then the teacher will put up a multiple-choice question at random points during class, and the students will use their clicker to give the answer they think is correct. There’s no grade for this except participation, and the students’ answers are anonymized, but the teacher will give the correct answer after all the students answer, and a pie chart will show the students how most of their classmates answered. So the theory is that this will massively improve student learning in the following ways:

  • Students will have a low-stakes way to test their knowledge and see if they’re right or wrong, rather than the high-stakes tests and homework that they’re graded on. They may be more willing to approach the problem with an open mind, rather than being stressed about how it will affect their grade.
  • The teacher will know what concepts the students are having trouble on, and can give more time to those prior to the test.
  • Students stay more engaged in class, rather than falling asleep, and likewise teachers feel more validated with an attentive class.

The only problem is that the use of clickers has been studied, and it has failed to improve student outcomes. Massive studies and meta-analyses covering dozens of classes and thousands of students show that clickers don’t improve students’ learning at all over boring old lectures.

Ok, how about this One Weird Trick: “flipped classrooms.” The idea is that normally the teacher lectures in class and the students do practice problems at home. What if instead the students’ homework is to watch the lecture as a video, then in class students work on problems and the teacher goes around giving them immediate and personalized feedback on what they’re doing right or wrong?

In theory this again keeps students far more active, they’re less likely to sleep through class and the immediate feedback they receive while working through the problem sets helps the teachers and students know what they need to work more on. Even better, this One Weird Trick was claimed to narrow the achievement gap in STEM classes.

But another large meta-analysis showed that flipped classrooms *again* don’t improve student learning, and in fact *widen* the achievement gap between minority and white students. Not at all what we wanted!

In theory, science teaches us the way to find the truth. Our methods of storing information have gotten better and better and better as we’ve used science to improve data handling, data acquisition, and data transmission. I read both of those meta-analyses on my phone, whereas even just 30 years ago I would have had to physically go to a University Library and check out one of their (limited) physical journals if I wanted to read the articles and learn if Active Learning is even worth it or not.

But while we’ve gotten so much better at storing information, have we gotten any better at teaching it? We’ve come up with One Weird Trick after One Weird Trick, and yet the most successful (and common) form of teaching is a single person standing in front of 20-30 students, just talking their ears off. A style of teaching not too far removed from Plato and Aristotle, more than 2,000 years ago.

I want to get better at teaching, and I think public speaking is part of that. But beyond just speaking gooder, does anyone even know what good teaching *is*?

Gene drives and gingivitis bacteria

One piece of sci-fi technology that doesn’t get much talk these days is gene drives. When I was an up-and-coming biology student, these were the subject of every seminar, the case study of every class, and they were going to eliminate malaria worldwide.

Now though, you hardly hear a peep about them. And I don’t think, like some of my peers, that this is because anti-technology forces have cowed scientists and policy-makers into silence. I don’t see any evidence that gene drives are quietly succeeding in every test, or that they are being held back by Greenpeace or other anti-GMO groups.

I just think gene drives haven’t lived up to the hype.

Let me step back a bit: what *is* a gene drive? A gene drive is a way to manipulate the genes of an entire species. If you modify the genes of a single organism, when it reproduces only at most 50% of its progeny will have whatever modification you give it. Unless your modification confers a lot of evolutionary fitness to the organism, there is no way to make every one of the organism’s descendants have your modification.

But a gene drive can do just that. In fact, a gene drive can confer an evolutionary disadvantage to an organism, and you can still guarantee all of the organism’s descendants will have that gene. The biggest use-case for gene drives is mosquitoes. You can give mosquitoes a gene that prevents them from sucking human blood, but since this confers an evolutionary disadvantage, your gene won’t last many generations before evolution weeds it out.

But if you put your gene in a gene drive, you can in theory release a population of mosquitoes carrying this gene and ensure all of their descendants have the gene and thus won’t attack humans. In a few generations, a significant fraction of all mosquitoes will have this gene, thus preventing mosquito bites as well as a whole host of diseases mosquitoes bring.
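The difference between normal inheritance and a gene drive can be sketched with a toy allele-frequency model. This is my own illustrative simplification, not a model from the gene-drive literature: every carrier of the drive allele pays a fitness cost `s`, and heterozygotes transmit the drive allele with probability `c` (0.5 for normal Mendelian inheritance, near 1 for a working drive).

```python
# Toy model of gene-drive spread (an illustrative simplification, not a
# published model). q = frequency of the drive allele D in the population.
# Assumptions: random mating (Hardy-Weinberg genotype frequencies), every
# D carrier pays a relative fitness cost s, and heterozygotes transmit D
# with probability c instead of the Mendelian 0.5.

def next_freq(q: float, c: float, s: float) -> float:
    """One generation of selection plus (possibly biased) transmission."""
    DD = q * q * (1 - s)            # homozygous carriers, after selection
    Dd = 2 * q * (1 - q) * (1 - s)  # heterozygous carriers, after selection
    dd = (1 - q) ** 2               # non-carriers
    total = DD + Dd + dd
    # DD parents always pass on D; Dd parents pass it on with probability c.
    return (DD + Dd * c) / total

def trajectory(c: float, s: float, q0: float = 0.05, gens: int = 25) -> list:
    qs = [q0]
    for _ in range(gens):
        qs.append(next_freq(qs[-1], c, s))
    return qs

mendelian = trajectory(c=0.5, s=0.1)   # normal inheritance: the cost wins
drive = trajectory(c=0.95, s=0.1)      # biased inheritance: the drive wins

print(f"Mendelian after 25 generations: {mendelian[-1]:.4f}")
print(f"Gene drive after 25 generations: {drive[-1]:.4f}")
```

With normal inheritance the costly allele shrinks toward extinction, while the same allele riding a gene drive sweeps toward 100% despite hurting its carriers, which is exactly the evolutionary contradiction gene drives exploit.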

Now this is a lot of genetic “playing God,” and I’m sure Greenpeace isn’t happy about it. But environmentalist backlash has never managed to stamp out 100% of genetic technology. CRISPR therapies and organisms are on the rise, GMO crops are still planted worldwide, environmentalists may hold back progress but they cannot stop it.

But talk about gene drives *has* slowed considerably and I think it’s because they just don’t work as advertised.

See, to be effective a gene drive requires an evolutionary contradiction: it must reduce an organism’s fitness but still be passed on to its progeny. Mosquitoes don’t just bite humans for fun, we are some of the most common large mammals in the world, and our blood is rich in nutrients. For mosquitoes, biting us is a necessity for life. So if you create a gene drive that knocks out this necessity, you are making the mosquitoes who carry your gene drive less evolutionarily fit.

And gene drives are not perfect. The gene they carry can mutate, and even if redundancy is built in, that only means more mutations will be necessary to overcome the gene drive. You can make it more and more improbable that mutations will occur, but you cannot prevent them forever. So when you introduce a gene drive, hoping that all the progeny will carry this gene that prevents mosquitoes biting humans, eventually one lucky mosquito will be born that is resistant to the gene drive’s effects. It will have an evolutionary advantage because it *will* bite humans, and so like antibiotic resistant bacteria, it will grow and multiply as the mosquitoes who still carry the gene drive are outcompeted and die off.

Antibiotics did not rid the world of bacteria, and gene drives cannot rid the world of mosquitoes. Evolution is not so easily overcome.

I tell this story in part to tell you another story. Social media was abuzz recently thanks to a guerilla marketing campaign for a bacteria that is supposed to cure tooth decay. The science can be read about here, but I was first alerted to this campaign by stories of an influencer who would supposedly receive the bacteria herself and then pledged to pass it on to others by kissing them. Bacteria can indeed be passed by kissing, by the way.

But like gene drives, this bacteria doesn’t seem to be workable in the context of evolution. Tooth decay happens because certain bacteria colonize our mouth and produce acidic byproducts which break down our enamel. Like mosquitoes, they do not do this just for fun. The bacteria do this because it is the most efficient way to get rid of their waste.

The genetically modified bacteria was supposed not to produce any acidic byproducts, and so if you colonized someone’s mouth with this good bacteria instead of the bad bacteria, their enamel would never be broken down by the acid. But this good bacteria cannot just live in harmony and contentment; life is a war for resources, and this good bacteria will be fighting with one hand tied behind its back.

Any time you come into contact with the bad bacteria, it will likely outcompete the good bacteria because it’s more efficient to just dispose of your waste haphazardly than it is to wrap it in a nice, non-acidic bundle first. Very quickly the good bacteria will die off and once again be replaced by bad bacteria.

So I’m quite certain this little marketing campaign will quietly die once it’s shown the bacteria doesn’t really do anything. And since I’ve read that there aren’t even any peer reviewed studies backing up this work, I’m even more certain of its swift demise.

Biology has brought us wonders, and we have indeed removed certain disease scourges from our world. Smallpox, rinderpest, and hopefully polio very soon, it is possible to remove pests from our world. But it takes a lot more work than simply releasing some mosquitoes or kissing someone with the right bacteria. And that’s because evolution is working against you every step of the way.

Crying over Cryo-EM

OK so the title is hyperbole, but I’ve definitely struggled recently with my cryo-electron microscopy. I guess here I’ll give an overview of what exactly electron microscopy is and why I’ve struggled.

Professor Jensen of CalTech has a great series of videos on Cryo-EM. Why we use it, how we use it, and what it is. Anyone interested in the technology should watch it, but for my own purposes:

  • Cryo-electron microscopy consists of freezing a sample and then shooting electrons at it to see its 3d structure at near-atomic scales.
  • We’re using it to study a number of proteins that cause diseases. In particular we want to know how the 3d shape of a certain protein creates that protein’s function. And how that function can then go on to cause a disease.
  • So we purify a specific protein, make a cryo-grid from that purified protein, and then look at that cryo-grid under electron microscopy hoping to get a good 3d structure.

But that’s where the problems start. First of all, purifying a protein to 99.9% purity is no small feat, especially when you’re taking proteins out of actual patient samples. I’ve dearly struggled to get the required purity that would be needed to make good grids for imaging.

But once I have some “pure” protein, I need to add it to a grid to image it. A cryo-grid is a circle about 1 millimeter across and 1 micrometer thick. The grid is cut into many 1 micrometer by 1 micrometer squares, and each square holds a mesh of 100 nanometer by 100 nanometer holes. When I add a tiny drop of my protein sample (which is in water) onto the grid, the hope is that the proteins will settle down into the holes. I will then “blot” the sample by pressing some paper onto both sides of the sample, which wicks away all the water not in the holes. I then instantly plunge the sample into liquid ethane, freezing all the liquid in the holes in an instant.

What you get is supposed to be a grid covered in a tiny thin layer of ice, and in each hole the ice contains your proteins of interest. Since they were flash frozen in ethane, the ice here is “vitreous,” which means glass-like. It’s see-through just like glass. And so a beam of electrons can pass into the ice to create an image of the proteins inside the ice.

But there are problems. Let’s get back to making the grid: most proteins are hydrophilic, which means water-loving. The opposite of hydrophilic is hydrophobic, which means water-hating, like oil. Oil and water don’t mix, and neither do hydrophobic and hydrophilic things. Our grids are made of copper covered in a layer of carbon, and that stuff is naturally hydrophobic, meaning it doesn’t interact well with the hydrophilic proteins (and the water they are in).

So before adding proteins we have to glow discharge our grids. This means putting them in a machine that shoots broken-up water molecules at them. Those broken-up water molecules have oxygen in them, and some of them will bind to the grid creating oxygen-containing compounds. Those compounds are very hydrophilic, so the whole grid becomes hydrophilic enough for the proteins to interact with it.

At some point we got a new glow discharger, and I swear that it started destroying my grids. Like I said the grids are tiny and fragile, 1 millimeter across, 1 micrometer thick! This glow discharger shoots water at them, and the new one shot the water so hard that it was punching through my grids and destroying them completely at the microscopic level. I couldn’t see the damage because it’s microscopic, but after adding the protein to my grids and flash-freezing them, I’d look at them under a microscope and see nothing but a completely destroyed grid. I finally just stopped trusting it completely and moved on to using a new glow discharger that’s a bit weaker.

So OK, I solved the glow discharge problem, but now comes the ice problem. Like I said above, you want the proteins to be encased in glass-like vitreous ice. If you have no ice, well, you have no proteins. And if the ice is too thick, it’s no longer glass-like and you can’t see through it. I kept landing on both of those extremes: first I had ice so thick I couldn’t see anything, then I had no ice at all. You are supposed to manage this problem by configuring your blotting time, which is how long you wick away the water before plunging the grid into the liquid ethane. Shorter blot time, thicker ice; longer blot time, thinner ice or no ice at all. Try long and short times to get the ice just right.

And yet I was using ultra-short blot times and still getting thick and thin ice, seemingly at random. On balance I got more grids with no ice at all, so I kept thinking I needed to drop the blot time more and more. My adviser said that there is a minimum blot time of about 2 seconds and you never want to go lower than that, but I tried 2 seconds and the ice was still way too thin or non-existent. That seems to say that my blot time is still too long, yet 2 seconds is as short as I can go.

I finally asked an expert in the chemistry department, who suggested I use their facilities instead. He also suggested that 1 second of blot time is perfectly fine, and so that was what I did. I FINALLY seemed to start getting good grids, so let’s hope it holds out.

So I’ve struggled with glow discharging, and then blot times, as well as protein purity. I’ve finally got some good grids, and I hope I can collect a lot of data on them. If I do that, I may be able to get 3d structural information using AI and a whole bunch of analysis. We’ll see though, we’ll see.

Good idea: financially supporting workers displaced by AI. Bad idea: taxing companies for displacing workers with AI.

AI is again the topic of the day and people are discussing what to do about the coming “job-pocalypse.” It seems AI can do anything we humans can do, only better, and so 30% or more of jobs will be destroyed and replaced by AI. Leaving aside how accurate that prediction is, if 30% of all jobs will be impacted then it does warrant a public policy response. Everyone’s got their own personal favorite, but one I see come up again and again is that companies should face a hefty tax any time they replace a worker with AI.

To be blunt, taxing companies for replacing workers with AI is a terrible idea. Let’s leave aside the argument of “how do you prove it,” and cut straight to the fact that the government should not be taxing technological progress. Just to start with some history, how many farmers were displaced by tractors? Millions. In 1900, 40% of Westerners worked on farms; now it’s less than 5%. Tractors meant that a single farmer could do the labor of tens or hundreds of men, and so they could fire many of their farm hands to be replaced by tractors. But does anyone reading this wish nearly half of us were still farmers? Should the government have heavily taxed tractors to preserve the idyllic rural farm life?

The argument in favor of taxing companies that replace workers with a machine is that the company is becoming more profitable at the expense of the worker, and it should pay that back. The current hullabaloo is about being replaced by AI, but in the 20th century similar calls were made when factory workers were being replaced by robots. The problem with this argument is that it ignores society. The worker and the company are not the only two pieces of the equation; society in general benefits when companies become more efficient. Technology is deflationary, and it has allowed many products to drop in price or not increase as rapidly as wages in general. Food today costs less as a percent of annual income than at nearly any time in history, and a large part of that is because the cost of food is decoupled from the cost of labor. So farm hands being replaced by tractors helped all of society by giving us cheaper food, and all of society would have been harmed if taxes had been instituted to prevent tractors from becoming commonplace.

Are the workers harmed when their jobs are replaced by AI? Yes of course. But society itself is helped and so all of society should bear the costs of helping the workers. We should of course offer unemployment benefits and job retraining to those affected. We should not let them go by the wayside the way we did to blue collar factory workers in the 20th century.

But neither should we shoot society in the foot by blocking technological progress that will help all of us. AI replacing jobs will mean products become cheaper relative to wages, just as happened with food. A lot of people also spread nonsense that unemployment will skyrocket as the displaced workers can’t find other jobs. They misunderstand economics: there will always be demand for more jobs. The price of some goods will decrease thanks to AI, which means people can buy more of those goods, or buy more of other goods that they had put off buying because money is finite and they were forced to choose. As prices fall, demand will rise, raising demand for labor in other areas, and a new equilibrium will be reached. Jobs lost to AI don’t mean the workers will be forever jobless, any more than the 35% of the population displaced by tractors meant that unemployment skyrocketed in the 20th century. Time and time and time again technology has replaced the jobs of workers, and the workers have found new jobs. It will happen again with AI.

The Lunar advantage

This post is gonna be weird and long.

I often have weird thoughts that I wish I could put into a book or story. My thought today is about comparative advantage. Comparative advantage is an economic concept that explains why people and countries can specialize into certain areas of work to become more efficient.

For example, in Iceland the cost of electricity is very low, which is why Iceland has attracted a lot of investment in industries that require lots of electricity, such as aluminum smelting. On the other hand countries like Bangladesh have a low cost of labor, which is why labor intensive activities such as clothing manufacturing invest there. It doesn’t make sense for a company to put an aluminum smelter in places where electricity is expensive, nor does it make sense to put a clothing factory where labor is expensive. Iceland and Bangladesh have their own comparative advantages at this moment in time, and that explains their patterns of industry.
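The gains from specialization can be shown with a worked example. The numbers below are entirely made up for illustration: suppose each country has 12 labor-hours, “Iceland” needs 2 hours per unit of aluminum and 4 per unit of clothing, while “Bangladesh” needs 6 and 3.

```python
# Worked specialization-and-trade example with made-up numbers
# (hypothetical productivity figures, not real data).
# hours[country][good] = labor-hours needed to make one unit of that good.
hours = {
    "Iceland":    {"aluminum": 2, "clothing": 4},
    "Bangladesh": {"aluminum": 6, "clothing": 3},
}
LABOR = 12  # labor-hours available to each country

# Self-sufficiency: each country splits its labor evenly across both goods.
split = {
    good: sum(LABOR / 2 / hours[c][good] for c in hours)
    for good in ("aluminum", "clothing")
}

# Specialization: each country makes only the good it is cheapest at
# (Iceland -> aluminum, Bangladesh -> clothing) and trades for the other.
specialized = {
    "aluminum": LABOR / hours["Iceland"]["aluminum"],
    "clothing": LABOR / hours["Bangladesh"]["clothing"],
}

print("split:      ", split)        # total world output without trade
print("specialized:", specialized)  # total world output with specialization
```

With the same total labor, specialization produces more of both goods (6 aluminum and 4 clothing versus 4 and 3.5), which is why trade pays even though nobody worked any harder.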

Let’s imagine for a moment that there was a fully autonomous colony on the moon. People lived and worked there without needing to import air, water, or food from Earth. They can trade with Earth, but if Earth were cut off they could still make their own goods, just as if our country were cut off from the world we could still make our own food, drink our own water, breathe our own air. Let’s say they use super-future space technology to extract water and oxygen from moon rocks, and grow crops using moon soil.

If there were such a moon colony, we would expect trade with Earth. Certainly the costs of moving goods between Earth and the moon are enormous. But it was once unbelievably dangerous to cross the oceans, and people still did it because the profits were worth it. We would expect that the moon would have some comparative advantage compared to Earth and vice versa, which would make trade profitable. This comparative advantage is the same reason Iceland sells aluminum products to Bangladesh, which in turn sells Iceland clothing.

So with all this in mind, I assert that the moon’s comparative advantage would naturally be in large, heavy goods, but not because of the moon itself but because of the journey.

Let me give another example: suppose there is a factory on Earth making steel and a factory on the Moon making steel. Let’s also say the iron and carbon for the steel can be gotten just as easily on the Moon as on Earth. I assert that the one on the Moon has a comparative advantage because of space travel. Sending goods from the Earth to the Moon means spending a lot of energy climbing out of Earth’s deep gravity well and thick atmosphere, then spending more energy to slow yourself down for a moon landing. By contrast it takes much less energy to accelerate off the atmosphere-less surface of the moon, and landing on Earth costs far less energy because you can use the atmosphere itself to brake your fall.
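The asymmetry can be made concrete with the Tsiolkovsky rocket equation. The delta-v figures below are commonly cited rough budgets (about 9.4 km/s from Earth’s surface to low orbit including losses, about 1.9 km/s from the lunar surface to low lunar orbit), and the exhaust velocity assumes a good chemical rocket; treat the exact numbers as illustrative, not definitive.

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m_final), so the
# propellant share of liftoff mass is 1 - exp(-delta_v / v_e).
def propellant_fraction(delta_v_kms: float, v_e_kms: float = 4.4) -> float:
    """Fraction of liftoff mass that must be propellant (v_e ~ hydrolox)."""
    return 1 - math.exp(-delta_v_kms / v_e_kms)

# Commonly cited approximate delta-v budgets (illustrative figures):
EARTH_TO_LEO = 9.4  # km/s, Earth surface to low Earth orbit, incl. losses
MOON_TO_LLO = 1.9   # km/s, lunar surface to low lunar orbit

print(f"Earth launch: {propellant_fraction(EARTH_TO_LEO):.0%} propellant")
print(f"Moon launch:  {propellant_fraction(MOON_TO_LLO):.0%} propellant")
# And the return legs are asymmetric too: arriving at Earth can aerobrake
# in the atmosphere almost for free, while landing on the airless Moon
# must burn propellant the whole way down.
```

Under these assumptions a rocket leaving Earth is roughly 88% propellant at liftoff while one leaving the Moon is closer to 35%, which is the energy asymmetry behind the moon factory’s edge.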

So a moon steel factory can send packages of steel to the Earth at a rather low transport cost compared to vice versa. That gives an advantage to the moon steel factory, as if there are shortages on Earth the moon factory can fill them at a rather low cost, while Earth cannot do the same to fill a need on the moon. The transport costs are not symmetric, and they are in the moon’s favor. I would assert that, all else being equal, investment for steelmaking would flow into the moon and out of the Earth.

Of course the “all else being equal” is the rub. Air, water, and food are hard to come by on the moon. Iron and carbon might be easier but all the mining equipment is already here on Earth. We would have to do a lot of work and build a lot of technology to make a moon-base even possible. But in theory economies of scale and future-technology could make it possible and even economical. And at that point it might enter a virtuous cycle due to these asymmetrical transport costs I mentioned. It will always be cheaper to send goods from the moon to the Earth, than vice versa.

It’s just a random thought I’ve had and I want to put it in a work of fiction. In some sci-fi universe, a moon colony is economically sustained by this comparative advantage compared to Earth. But I’ve never gotten the courage to write this story so until now it’s just been an idle thought in my head.