So whatever happened with Aduhelm?

Aduhelm and Leqembi were hot news a few years ago. They are both antibodies that work as anti-Alzheimer's disease drugs by binding to and hopefully destroying amyloid beta. The hypothesis that amyloid beta is the causative agent of Alzheimer's, and that reducing amyloid beta will lessen the disease, is known as the Amyloid Hypothesis. And while the Amyloid Hypothesis is still the most widely supported, I wonder if the failures of Aduhelm and Leqembi to make much of a dent in Alzheimer's disease have damaged the hypothesis somewhat.

Because think about it: the whole job of an antibody is to help your body clear a foreign object. When antibodies bind to something, they trigger your immune system to destroy it. This is why you get inflammation whenever you get a cut or scrape: antibodies bind to whatever microscopic dirt and bacteria enter your body, and your immune system flooding that area to destroy them is what you feel as inflammation.

And we know that Aduhelm and Leqembi are working as antibodies against amyloid beta. They bind strongly to amyloid beta, they induce inflammation when given to Alzheimer's patients (although inflammation in the brain can cause multiple side effects), and tests show that they seem to be reducing the amount of amyloid beta in the patients who take them.

Yet the prognosis for Alzheimer's is not much better with these drugs than without them. Maybe they just aren't destroying *enough* amyloid beta, but as it stands they barely reduce the rate at which Alzheimer's patients decline in mental faculty, and they don't cause patients to improve and regain their mental state at all. Maybe the brain just *can't* be fixed once it's been damaged by amyloid beta, but you'd hope that there would at least be some improvement for patients if the Amyloid Hypothesis is correct.

This has seemingly split the field, with many still supporting the Amyloid Hypothesis but arguing that these drugs don't target amyloid beta correctly, while others have fractured off to study the many, many other possible causes of Alzheimer's disease. Tau, ApoE, neurotransmitters, there's lots of other stuff that might cause this disease, but I want to focus on the final hail mary of the Amyloid Hypothesis: that the drugs aren't targeting amyloid beta correctly.

Because it’s honestly not the stupidest idea. One thing I learned when I researched this topic was the variety of forms and flavors that *any* protein can come in, and amyloid beta is no different.

When it’s normally synthesized, amyloid beta is an unfolded protein, called “intrinsically disordered” because it doesn’t take a defined shape. Through some unknown mechanism, multiple proteins can then cluster together to form aggregates, again of no defined shape. But these aggregates can fold into a very stable structure called a protofilament, and protofilaments can further stabilize into large, long filaments.

Each of these different structures of amyloid beta, from the monomers to the aggregates to the filaments, will have a slightly different overall shape and will bind slightly differently to antibodies. One reason given for why Aduhelm causes more brain bleeds than Leqembi is that Aduhelm binds to the large filaments of amyloid beta, which are often found in the blood vessels of the brain. By siccing the body's immune system on these large filaments, the blood vessels get caught in the crossfire, and bleeding often results.

Meanwhile other antibodies are more prone to target other forms of amyloid beta, such as the protofilaments or the amorphous aggregates.

But what amyloid beta does or what it looks like in its intrinsically disordered state is still unknown, and still very hard to study. All our techniques for studying small proteins like this require them to have a defined shape. Our instruments are like a camera, and amyloid beta is like a hummingbird flapping its wings too fast. We can’t see what those wings look like because they just look like a blur to our cameras.

So maybe we've been looking at the wrong forms of amyloid beta. Rather than the filaments and protofilaments, which are easy to extract, see, and study, maybe we should have been looking at the intrinsically disordered monomers all along; maybe we only studied the filaments and protofilaments because we were *able* to study them, not because they were actually important.

There’s a parable I heard in philosophy class about a drunk man looking for his keys. He keeps searching under the bright streetlight but can never seem to find them. But he’s only searching under the streetlight because *that’s where he can see*, he isn’t searching because *that’s where his keys are*.

Endlessly searching the only places you *can* search won't necessarily bring results; you may instead need to alter your methods to search where you currently can't. And if the Amyloid Hypothesis is to be proven true, that will probably be necessary. Because right now I've heard nothing to write home about with Aduhelm and Leqembi; many doctors won't even prescribe them, because the risk of brain bleeds is greater than the reward of very marginally slowing a patient's mental decline, let alone reversing it.

I no longer directly research Alzheimer’s disease, but the field is in a sad place when just 4 years ago it seemed like it was on the cusp of a breakthrough.

Sad but true, Mike Israetel is a sham

I watched this video doing a breakdown of Mike’s PhD thesis. His thesis is riddled with failures across every page. His research was shoddily done, with worthless statistics, and with technical errors littering every single paragraph that he wrote. The thesis proves that he cannot do research, cannot write research, and probably cannot *read* research either, since he misunderstands many of the papers and articles he actually cites.

This is sad to me, because as long-time readers know, I followed Israetel and took his advice seriously.

Why it matters: You may say that a thesis is just some bs he did in college, and has no bearing on his current position. But Mike Israetel’s entire brand is based around his PhD, that he is a sport *scientist* and not just a jacked dude. He mentions his PhD in his every ad and video, and so he wants viewers and customers to believe that he’s giving them *scientific advice*, which would be based on *research and testing* and not just vibes.

Yet Mike’s thesis is proof that not only can he not do research, nor write a research paper, he can’t even *read* a research paper as he misunderstands and misrepresents the papers he cites. He tells his readers that the science that *other people do* is saying something completely different than what it actually says, and that’s a big problem.

So Mike’s advice and supplements and apps aren’t actually based on science, they’re based on vibes just like every other gym bro on youtube.

Why else it matters: Some have said that Mike’s PhD program wasn’t like a “normal” program, and shouldn’t be held to the same standards. His program works closely with a lot of US olympic athletes, and it wasn’t focused on research that will help the broader public, but on learning the specific techniques to help the specific elite athletes that Mike worked with.

But if that's the case, then Mike has no business claiming that his PhD gives him knowledge applicable to anyone in his general audience. He isn't giving advice that you, the listener, should actually take; his supplements and programs won't help you, specifically. Instead they are tailored toward the special subset of people who are genuine Olympic athletes, who require a very different program to succeed than what an average 9-to-5er needs.

Likewise, if Mike's program wasn't held to the standards of a "normal" PhD, then it should not have *awarded* him a PhD and he shouldn't call himself doctor. The reason a PhD confers upon you the title of "Doctor" is supposed to be that it proves you have met the highest standards for science and scientific communication: that you are not only knowledgeable, but able to use and communicate your knowledge effectively to help the scientific community and educate the non-scientific community at large. But Mike's thesis proves he just can't do that.

He has not met the highest standards for science, and he has not even met a *high school level* standard for scientific communication. And yet he still trades on his title of "PhD," using it as a crutch to gain legitimacy, and as a shield to deflect criticism. It matters that his thesis is worthless and that his PhD was substandard, because it means his crutch should be kicked out from under him, and his shield should be broken like the trash it is.

Finally some have said that many of these criticisms are “nitpicks.” But it matters because a PhD-level of research is supposed to be held to the highest standard of quality. You aren’t supposed to publish something without feeling certain that you can defend its integrity and its conclusions, and yet it is clear Mike’s thesis was written without any thought whatsoever. If he had even re-read his thesis once, he would not have typos and data-fails across whole swaths of it.

I have had many typos in my own blog, but these are stream-of-consciousness posts that I usually type up and publish without a second read; I'm not acting like they're high quality research publications. Mike *is* claiming that his thesis is high quality, that's the whole reason he got a PhD for it, so it being as shoddily researched, as shoddily written, and as completely absent of a point as it is really proves that he should never have been given a PhD in the first place.

So is that the end for my following of Mike Israetel? Will I stop doing weight workouts and go back to running, since everything he says about “how to lose weight” is clearly wrong?

No.

Mike’s research is crap, and it always did skeeve me out that he leaned so hard on his “Dr” label. I’ve never bought his app or his supplements, but that doesn’t mean I can’t take his advice. Most of what he says is the same as what my *real* doctor has told me with regards to losing weight. And while his PhD is bogus it’s clear he’s taken a few undergraduate level science classes and is more knowledgeable than most of the gym bros with a youtube channel.

Ultimately his advice is probably fine on the whole. The low-level advice he gives is mostly the same as what you’ll hear from non-cranks, and the high-level advice he gives is mostly his personal opinions like any other influencer. He’s probably correct in the broad strokes that weight-lifting and caloric deficits are the best way to lose weight. And he’s probably correct that you should focus on exercises that improve your “strength” and ignore exercises that improve your “balance” unless you have an inherent balancing issue you need to improve on. He’s probably also right that the hysteria around Ozempic and other GLP-1 drugs is overblown, and that if they help you lose weight you should go ahead and use them. As he says: it’s ok to save your willpower for other parts of your life.

But I have no reason to believe his specific advice around high level concepts like training to failure, periodization, muscle group activation, etc. If you don’t know what those are, then it’s a good idea to ignore what he says about them and just focus on lifting and (if you’re overweight), cutting calories.

I don’t think Mike is a complete idiot who should be ignored entirely. I think he’s a hustler like any other influencer and if the things he says work for you, then do them. But he’s not backed by science like he claims he is, so ignore any of his ramblings if they don’t work for you. Talk to your doctor instead, or an *actual* exercise scientist, although if Mike’s PhD thesis is “the norm” for that discipline, then most exercise scientists aren’t really scientists at all.

I’ve long lamented that the fitness and sports landscape is overrun by bro-science and dude-logic. It’s ruled by the kinds of shoddy science and appeals to tradition that we would normally call “old wives’ tales.” But when a jacked dude says something crazy, like “you should lie upside down to regain your breath so that your blood rushes to your lungs,” a lot of people might say “well he’s jacked, he must know *something*.”

I had thought Mike Israetel was an escape from the wider landscape, and that he was perhaps a trendsetter for actual science to creep into this mess. But it seems he's just another grifter trying to get rich. Ah well, such is life.

What does it mean to think? 

It may surprise you to know, but I was once a philosopher.  To be more accurate, I was once a clueless college student who thought “philosophy” would be a good major.  I eventually switched to a science major, but not before I took more philosophy classes than most folks ever intend to. 

A concept that was boring back then, but relevant now, is that of the "Chinese Room."  John Searle devised this thought experiment to prove that machines cannot actually think, even if they pass Turing Tests.  The idea goes something like this: 

Say we produce a computer program which takes in Chinese Language inputs and returns Chinese Language outputs, outputs which any speaker of Chinese can read and understand.  These outputs would be logical responses to whatever inputs are given, such that the answers would pass a Turing Test if given in Chinese.  Through these inputs and outputs, this computer can hold a conversation entirely in Chinese, and we might describe it as being “fluent” in Chinese, or even say it can “think” in Chinese. 

But a computer program is fundamentally a series of mathematical operations, "ones and zeros" as we say.  The Chinese characters which are taken in will be converted to binary numbers, and mathematical operations will be performed on those numbers to create an output in binary numbers, which more operations will then turn from binary numbers back into Chinese characters. 

The math and conversions done by the computer must be finite in scope, because no program can be infinite.  So in theory all that math and those conversions can themselves be written down as rules and functions in several (very long) books, such that any person can follow along and perform the operations themselves.  So a person could use the rules and functions in these books to: 1.) take in a series of Chinese characters, 2.) convert the Chinese to binary, 3.) perform mathematical operations to create a binary output, and 4.) convert that binary output back into Chinese. 
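If it helps to see those four steps as something concrete, here is a deliberately toy sketch in Python. The "rule book" is just a tiny hypothetical lookup table I made up for illustration; the real books would be unimaginably larger, but the point is the same: the operator only shuffles symbols and numbers, never understanding any of them.

```python
# A deliberately tiny "rule book": a made-up table mapping one Chinese prompt
# to one Chinese reply. Real rule books would be astronomically larger, but
# they would still just be rules to follow mechanically.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I'm fine, thank you."
}

def to_numbers(text):
    """Step 2: convert Chinese characters into numbers (here, Unicode code points)."""
    return [ord(ch) for ch in text]

def to_characters(numbers):
    """Step 4: convert numbers back into Chinese characters."""
    return "".join(chr(n) for n in numbers)

def follow_the_rules(prompt):
    """Steps 1-4: take Chinese in, mechanically apply the rules, hand Chinese back.
    The person doing this never has to understand a single character."""
    numbers = to_numbers(prompt)                      # step 2
    key = to_characters(numbers)                      # the rules are indexed by raw symbols
    reply = RULE_BOOK.get(key, "对不起，我不明白。")    # step 3 ("Sorry, I don't understand.")
    return to_characters(to_numbers(reply))           # step 4

print(follow_the_rules("你好吗？"))  # prints 我很好，谢谢。
```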

Now comes the "Chinese Room" experiment.  Take John Searle and place him in a room with all these books described above. John sits in this room and receives prompts in Chinese.  He follows the rules of the books and produces an output in Chinese.  John doesn't know Chinese himself, but he fools any speaker/reader into believing he does.  The question is: is this truly a demonstration of "intelligence" in Chinese?  John says no. 

It should be restated  that the original computer program could pass a Turing Test in Chinese, so it stands to reason that John can also pass such a test using the Chinese Room.  But John himself doesn’t know Chinese, so it’s ridiculous to say (says John) that passing this Turing Test demonstrates “intelligence.”   

One natural response is to say that "the room as a whole" knows Chinese, but John pushes back against this.  The Chinese Room only has instructions in it; it cannot take action on its own, therefore it cannot be said to "know" anything.  John doesn't know Chinese and only follows written instructions; the room doesn't know Chinese, in fact it doesn't "know" anything.  Two things which don't know Chinese cannot add up to one thing that does, right? 

But here is where John and I differ, because while I'm certainly not the first one to argue so, I would say that the real answer to the Chinese Room problem is either "yes, the room does know Chinese" or "it is impossible to define what 'knowing' even is." 

Let’s take John out of his Chinese Room and put him into a brain.  Let’s shrink him down to the size of a neuron, and place him in a new room hooked up to many other neurons.  John now receives chemical signals delivered from the neurons behind him.  His new room has a new set of books which tell him what mathematical operations to perform based on those signals.  And he uses that math to create new signals which he sends on to the neurons in front of him.  In this way he can act like a neuron in the dense neural network that is the brain. 

Now let’s say that our shrunken down John-neuron is actually in my brain, and he’s replaced one of my neurons.  I actually do speak Chinese.  And if John can process chemical signals as fast as a neuron can, I would be able to speak Chinese just as well as I can.  Certainly we’d still say that John doesn’t speak Chinese, and it’s hard to argue that the room as a whole speaks Chinese (it’s just  replacing a neuron after all).  But I definitely speak Chinese, and I like to think I’m intelligent.  So where then, does this intelligence come from? 

In fact every single neuron in my brain could be replaced with a John-neuron, each one of which is now a room full of mathematical rules and functions, each one of which takes in a signal, does math, and passes a signal on to the neurons further down the line.  And if all these John-neurons can act as fast as my neurons, they could all do the job of my brain, which contains all of my knowledge and intelligence, even though John himself (and his many rooms) knows nothing about me. 

Or instead each one of my neurons could be examined in detail and turned into a mathematical operation.  "If you receive these specific impulses, give this output."  A neuron can only take finitely many actions, and all the actions of a neuron can be defined purely mathematically (if we believe in realism). 

Thus every single neuron of my brain could be represented mathematically, their actions forming a complete mathematical function, and yet again all these mathematical operations and functions could be written down in books to be placed in a room for John to sit in.  Sitting in that room, John would be able to take in any input and respond to it just as I would, and that includes taking in Chinese inputs and responding in Chinese. 

You may notice that I’m not really disproving John’s original premise of the Chinese Room, instead I’m just trying to point out an absurdity of it.  It is difficult to even say where knowledge begins in the first place.   

John asserts that the Chinese Room is just books with instructions; it cannot be said to "know" anything.  So if John doesn't know Chinese, and the Room doesn't know Chinese, then you cannot say that John-plus-the-Room knows Chinese either.  Where would this knowledge come from? 

But in the same sense none of my neurons "knows" anything; they are simply chemical instructions that respond to chemical inputs and create chemical outputs.  Yet surely I can be said to "know" something?  At the very least (as Descartes once said) can't I Know that I Am? 

And replacing any neuron with a little machine doing a neuron's job doesn't change anything; the neural net of my brain still works so long as the replacement (from the outside) is fundamentally indistinguishable from a "real" neuron, just as John's Chinese Room (from the outside) is fundamentally indistinguishable from a "real" knower of Chinese. 

So how do many things that don’t know anything sum up to something that does?  John’s Chinese Room  is really just asking this very question.  John doesn’t have an answer to this question, and neither do I.  But because John can’t answer the question, he decides that the answer is “it doesn’t,” and I don’t agree with that.   

When I first heard about the Chinese room my answer was that “obviously John *can’t* fool people into thinking he knows Chinese, if he has to do all that math and calculations to produce an output, then any speaker will realize that he isn’t answering fast enough to actually be fluent.”  My teacher responded that we should assume John can do the math and stuff arbitrarily fast.  But that answer really just brings me back to my little idea about neurons from above, if John can do stuff arbitrarily fast, then he could also take on the job of any neuron using a set of rules just as he could take on the job of a Chinese-knower. 

And so really the question just comes back to “where does knowledge begin.”  It’s an interesting question to raise, but raising the question doesn’t provide an answer.  John tries at a proof-by-contradiction by saying that the Room and John don’t know Chinese individually, so you cannot say that together they know Chinese.  I respond by saying that none of my individual neurons know Chinese, yet taken together they (meaning “I”) do indeed know Chinese.  I don’t agree that he’s created an actual contradiction here, so I don’t agree with his conclusion. 

I don’t know where knowledge comes from, but I disagree with John that his Chinese Room thought experiment disproves the idea that “knowledge” underlies the Turing Test. Maybe John is right and the Turing Test isn’t useful, but he needs more than the Chinese Room to prove that.

Ultimately this post has been a huge waste of time, like any good philosophy.  But I think wasting time is sometimes important, and I hope you've had as much fun reading this as I had writing it.  Until next time. 

Declaring victory on my Twitter prediction, conceding defeat on self-driving cars

I’ve made a few predictions over the years here, and I want to talk about two of them.

I'm declaring victory in saying that 2022 was *not* the Year Twitter Died. It was an extremely common opinion in left-of-center spaces that Musk was a terrible CEO, that firing so much Twitter staff would destroy the company, and that it would be dead and overtaken very soon. I can concede the first point; the second two are clearly false.

The evidence from history has shown that firing most of Twitter's staff has *not* led to mass outages, mass hacks, or the death of Twitter's infrastructure. It may seem like I'm debating a strawman, but it's difficult to really convey the ridiculous hysteria I saw, with some claiming that Twitter would soon be dead and abandoned as newer versions of most popular browsers wouldn't be able to access it. Likewise it was claimed that the servers would be insecure and claimed by botnets, and would thus get blocked by any sane browser protection. None of that has happened; Twitter runs just as it did in 2021. It is no less secure and it is not blocked by browsers.

Nor has the mass exodus of users really occurred. Some people think it has because they live in a bubble, but Mastodon was never going to replace Twitter and Bluesky is losing users. And regardless of your opinions on that, the numbers don’t lie.

I've said before that I used to be part of a community that routinely thought Musk's sky was falling. Every Tesla delay would be the moment that *finally* killed the company, every year would be when NASA *finally* kicked SpaceX to the curb, every failed Musk promise would *finally* make people stop listening to him. You've heard of fandoms; I was in a hatedom.

But I learned that all of that was motivated reasoning. EVs aren't actually super easy, and that's the reason Ford and GM utterly failed to build any. It's not that Musk was lucky and would soon be steamrolled by the Big Boys; Musk was smart (and lucky), and the Big Boys wet their Big Boy pants and have still utterly failed in the EV market despite billions of dollars in free government money.

Did Musk receive free government money? Not targeted money, no; any car company on earth could have benefited from the USA/California EV tax credits, it's just that the Detroit automakers didn't make EVs. Then they got handed targeted free money, and they still failed to make EVs.

NASA (and the ESA, and JAXA, and CNSA) haven’t managed to replicate SpaceX’s success in low-cost re-usable rockets sending thousands of satellites into orbit. So now *another* Musk property, Starlink, is the primary way that rural folk can get broadband, because Biden’s billions utterly failed to build any rural broadband.

And of course while Musk has turned most of the left against him, he has turned much of the right for him, which is generally what happens when you switch parties. And now that he’s left Trump, some of the left want to coax him back. Clearly people still listen to him even if you and I do not.

So I was very wrong 10 years ago about Elon Musk being the anti-Midas, but I learned my lesson and started stepping out of my bubble. I was right 3 years ago when I said Twitter isn’t dying, and everything I said still rings true. Big companies still use Twitter because it’s their best way to mass-blast their message to everyone in an age when TV is dying and more people block ads with their browser. The same reason people prefer Bluesky (curate your feed, never see what you don’t want to see) is the same reason Wendy’s, Barstool Sports, and Kendrick Lamar prefer Twitter. They want their message, their brand, to show up in your feed even if you don’t want to see it. It’s advertising that isn’t labeled as an ad.

So that’s what I was right about, now I’m going to write a lot *less* about what I was wrong about, because I hate being wrong.

I was wrong about how difficult it would be to get self-driving cars on all roads. In 2022 I clowned on a 2015 prediction that said self-driving cars would be on every road by 2020. Well it’s 2025, and I’ll be honest 5 years late isn’t that terrible.

At the time I thought that there was a *political-legal* barrier that would need to be overcome: how do you handle insurance of a self-driving car? No system is perfect and if there’s a defect in the LIDAR detector or just a bug in the system, a car *can* cause damage. And if it does, does Google pay the victim, or the passenger, or what? Insurance is a messy, expensive system, split into 50 different systems here in America, and I thought without some new insurance legislation (such as unifying the insurance systems or just creating more clarity regarding self-driving cars), that the companies would realize they couldn’t roll these out without massive risk and headaches.

I was wrong; I've now seen Waymos in every city I've been to.

So it seems the insurance problems weren't insurmountable, and the problem was less hard than I thought. You can read my thoughts about how hard I *thought* those problems were, but to be honest I was wrong.

When will the glaciers all melt?

Glacier National Park in Montana [has] fewer than 30 glaciers remaining, [it] will be entirely free of perennial ice by 2030, prompting speculation that the park will have to change its name – The Ravaging Tide, Mike Tidwell

Americans should plan on the 2004 hurricane season, with its four super-hurricanes (category 4 or stronger) becoming the norm […] we should not be surprised if as many as a quarter of the hurricane seasons have five super-hurricanes – Hell and High Water, Joseph Romm

Two points of order:

  • In 2006, when Mike Tidwell wrote about glaciers, Glacier National Park had 27 glaciers. It now has 26 glaciers, and isn't expected to suddenly lose them all in 5 years.
  • Since 2007, when Joseph Romm wrote about hurricanes, just four hurricane seasons have had four so-called "super-hurricanes," and just one season has had five. The 2004 season has not become the norm, and we are averaging less than 6% of seasons having five super-hurricanes.

I do not write this to dunk on climate science, I write only to dunk on the popular press. The science of global warming is fact, it is not a myth or fake news. But the popular press has routinely misused and abused the science, taking extreme predictions as certainties and downplaying the confidence interval.

What do I mean by that? Think of a roulette wheel, where a ball spins on a wheel and you place a bet as to where it will land. If you place a bet, what is the maximum amount of money you can win (aka the "maximum return")? In a standard game the maximum amount you can win is 36 times what you bet, should you pick the exact number the ball lands on. But remember that in casinos, the House Always Wins. Your *expected* return is only about 95/100 of your bet. You're more likely to lose than to win, and the many, many losses wipe out your unlikely gains if you play the game over and over.

So how should we describe the statistical possibilities of betting on a roulette wheel? We should give the expected return (which is like a mean value of how much money you might win), we should give the *most likely* return (the mode), and we should give the minimum and maximum returns, as well as their likelihood of happening. So if you bet 1$ on a roulette wheel (there's a quick calculation after the list to check these numbers):

  • Your expected return is 0.95$
  • Your most likely return is 0$ (more than half of the time you win nothing, even if betting on red or black. If you bet on numbers, you win nothing even more often).
  • Your minimum return is 0$ (at least you can’t owe more money than you bet), this happens just over half the time if you bet on red/black, and happens more often if you bet on numbers
  • Your maximum return is 36$. This happens 1/38 times, or about 2.6% of the time.
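Here is that quick check, a minimal Python sketch of the standard American-roulette arithmetic (38 pockets: 0, 00, and 1-36). Nothing here is specific to climate modeling; it just confirms the numbers in the list above.

```python
# American roulette: 38 pockets (0, 00, 1-36).
pockets = 38

# Straight-up bet on a single number: a $1 bet returns $36 (your dollar plus $35
# in winnings) with probability 1/38, and $0 otherwise.
p_win_single = 1 / pockets
expected_single = 36 * p_win_single            # 36/38 ≈ 0.947, the "about 95 cents" above
print(f"single number: win {p_win_single:.1%} of the time")
print(f"single number: expected return on $1 = ${expected_single:.3f}")

# Red/black bet: 18 winning pockets out of 38, and a win returns $2 on a $1 bet.
p_win_redblack = 18 / pockets
expected_redblack = 2 * p_win_redblack         # also 36/38; the house edge is the same
print(f"red/black: win nothing {1 - p_win_redblack:.1%} of the time")
print(f"red/black: expected return on $1 = ${expected_redblack:.3f}")
```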

But would I be lying to you if I said “hey, you *could* win 36$”?

By some standards no, this isn’t lying. But most people would acknowledge the hiding of information as a lie of omission. If someone tried to entice someone else to play roulette only by telling them that they could win 36$ for every 1$ they put down, I would definitely consider that lying.

So too does the popular press lie. Climate science is a science of statistics and of predictions. Like Nate Silver's election forecasting, climate models don't just give you a single forecast; they tell you what range of possibilities you should expect and how often you should expect them. For instance, Nate Silver made a point in 2024 that while his forecast showed Harris and Trump with about even odds to win, you shouldn't have expected them to split the swing states evenly and have the election come down to the wire. The most common result (the mode) was for either candidate to win *all* the swing states together, which is indeed what happened.

Bad statisticians and prediction modelers will misstate the range of possible outcomes. They will heavily overstate their certainties, understate the variance, and pretend that some singular outcome is so likely as to be guaranteed.

This kind of bad statistics was central to Sam Wang of the Princeton Election Consortium's 2016 prediction, which gave Hillary Clinton a greater than 99% chance of victory. Sam *massively* overstated the election's certainty, and frequently attacked anyone who dared to caution that Clinton wasn't guaranteed to win.

Nate Silver meanwhile was widely criticized for giving Hillary such a *low* chance of victory, at around 70%. He was “buying into GOP propaganda” so Sam said. Then after the election Silver was attacked by others for giving Clinton such a *high* chance, since by that point we knew she had lost. But 30% chance events happen 30% of the time. Nate has routinely been more right than anyone else in forecasting elections.

I don't doubt that some people read and believed Sam Wang's predictions, and even believed (wrongly) that he was the best in the business. When he was proven utterly, completely wrong, how many of his readers decided that forecasting would never be accurate? How much damage did Sam Wang do to the popular credibility of election modeling?

However much damage Sam did, the popular press has done even more to damage the statistical credibility of science, and here we return to climate change. Climate change is happening and will continue to accelerate for the foreseeable future until drastic measures are taken. But how much the earth will warm, and what effects this will have, have to be modeled in detail and there are large statistical uncertainties, much like Silver’s prediction of the 2016 election.

Yet I have been angry for the last 20 years as the popular press continues to pretend long-shot possibilities are dead certainties, and to understate the range of possibilities. Most of the popular press follows the Sam Wang school.

At the roulette table, you might win 36$, but that's a long-shot possibility. And in 2006 and 2007, we might have predicted that all the glaciers would melt and super-hurricanes would become common. But those were always long-shot possibilities, and indeed these possibilities *have not happened*.

The climate has been changing, the earth has been warming, but you don't have to go back far to see people making predictions so horrendously inaccurate that they destroy the trust of the entire field. If I told you that you were dead certain to win 36$ when putting 1$ on the roulette wheel, you might never trust me again after you learned how wrong I was. Is it any wonder so many people aren't trusting the science these days, when this is how it's presented? When we were told 20 years ago that all the glaciers in America would have melted by now? Or that every hurricane season would be as bad as 2004?

And it isn’t hard either to find numerous even more dire predictions couched in weasel words like “may” and “possibly.” The oceans “may” rise by a foot, such and such city “may” be under water. It’s insidious, because while it isn’t *technically* wrong (“I only said may!”) it makes a long-shot possibility seem far more likely than it really is. Again, it’s a clear lie of omission, and it’s absolutely everywhere in the popular press.

We have to be accurate when modelling our uncertainty. We have to discuss the *full range of possibilities*, not just the possibility we *want* to use for fear-mongering. And we have to accurately state the likelihoods for our possibilities, not just declare the long-shot to be a certainty.

Because the earth *has* warmed. A glacier has disappeared from Glacier National Park and the rest are shrinking. Hurricane season power is greater than it was last century. But writers weren't content to write those predictions, and instead filled books with nonsense overstatements that were not borne out by the data and are easily disproven with a 2025 Google search. When it's so easy to prove you wrong, people stop listening. And they definitely won't listen to you when you "update" your predictions to match the far less eye-catching trend that you should have written all along. Lying loses you trust, even if you tell the truth later.

I think Nate Silver should be taken as the gold standard for modelers, statisticians, and more importantly *the popular press*. You *need* to model the uncertainties, and more importantly you need to *tell people* about those uncertainties. You need to tell them about the longshots, but also about *how longshot they are*. You need to tell them about the most likely possibility too, even if it isn't as flashy. And you need to tell them about the range of possibilities along the bell curve, and accurately represent how likely they all are.

Nate Silver did just this. In 2016 he accurately reported that Trump was still well within normal bounds of winning; an average-sized polling error in his favor was all it would take. He also pointed out that Clinton was a polling error away from an utter landslide (which played much better among the twitterati), and that she was the favorite (but not enough of the favorite to appease the most innumerate writers).

In *every* election Silver has covered, he has been the primary modeler accurately measuring the range of possibilities, and preparing his readers for every eventuality. That gets him dogpiled when he says things that people don't like, but it means he's accurate, and accuracy is supposed to be more important than popularity in science.

So my demand to the popular press is to be more like Nate Silver and less like Sam Wang. Don't overstate your predictions, don't downplay uncertainties, don't make extreme predictions to appeal to your readers. Nate Silver has lost a lot of credibility for his temerity to continue forecasting accurately even in elections that Democrats don't win, but Sam Wang destroyed his credibility in 2016 and has been an utter joke ever since. If science is to remain a force for informing policy, it needs to be credible. And that means making accurate predictions even if they aren't scary enough to grab headlines, or even if they aren't what the twitterati would prefer.

Lying only works until people find you out.

Research labs are literally sucking the blood from their graduate students

I’m going for a “clickbait” vibe with this one, is it working?

When I was getting my degree, I heard a story that seemed too creepy to be real. There was a research lab studying the physiology of white blood cells, and as such they always needed new white blood cells to do experiments on. For most lab supplies, you buy from a company. But when you’re doing this many experiments, using this many white blood cells, that kind of purchasing will quickly break the bank. This lab didn’t buy blood, it took it.

The blood drives were done willingly, of course. Each grad student was studying white blood cells in their own way, and each one needed a plethora of cells to do their experiment. Each student was very willing to donate for the cause, if only because their own research would be impossible otherwise.

And it wasn’t even like this was dangerous. The lab was connected to a hospital, the blood draws were done by trained nurses, and charts were maintained so no one gave more blood than they should. Everything was supposedly safe, sound, by the book.

But still it never seemed enough. The story I got told was that *everyone* was being asked to give blood to the lab, pretty much nonstop. Spouses/SOs of the grad students, friends from other labs, undergrads interning over the summer, visiting professors who wanted to collaborate. The first thing this lab would ask when you stepped inside was “would you like to donate some blood?”

This kind of thing quickly can become coercive even if it’s theoretically all voluntary. Are you not a “team player” if you don’t donate as much as everyone else? Are interns warned about this part of the lab “culture” when interviewing? Does the professor donate just like the students?

Still, when this was told to me it seemed too strange to be true. I was certain the storyteller was making it up, or at the very least exaggerating heavily. The feeling was exacerbated since this was told to me at a bar, and it was a “friend of a friend” story, the teller didn’t see it for themself.

But I recently heard of this same kind of thing, in a different context. My co-worker studied convalescent plasma treatments during the COVID pandemic. For those who don’t know, people who recover from a viral infection have lots of antibodies in their blood that fight off the virus. You can take samples of their blood and give those antibodies to other patients, and the antibodies will help fight the infection. Early in the pandemic, this kind of treatment was all we had. But it wasn’t very effective and my co-worker was trying to study why.

When the vaccine came out, all the lab members got the vaccine and then immediately started donating blood. After vaccination, they had plenty of anti-COVID antibodies in their blood, and they could extract all those antibodies to study them. My co-worker said that his name and a few others were attached to a published paper, in part because of their work but also in part as thanks for their generous donations of blood. He pointed to a figure in the paper and named the exact person whose antibodies were used to make it.

I was kind of shocked.

Now, this all seems like it could be a breach of ethics, but I do know that there are some surprisingly lax restrictions on doing research so long as you're doing research on yourself. There's a famous story of a scientist drinking a culture of a specific bacterium in order to prove that it was that bacterium which caused ulcers. This would have been illegal had he wanted to infect *other people* for science, but it was legal to infect himself.

There’s another story of someone who tried to give themselves bone cancer for science. This person also believed that a certain bone cancer was caused by infectious organisms, and he willingly injected himself with a potentially fatal disease to prove it. Fortunately he lived (bone cancer is NOT infectious), but this is again something that was only legal because he experimented on himself.

But still, those studies were all done half a century ago. In the 21st century, experimenting with your own body seems… unusual at the very least. I know blood can be safely extracted without issue, but like I said above I worry about the incentive structure of a lab where taking students’ blood for science is “normal.” You can quickly create a toxic culture of “give us your blood,” pressuring people to do things that they may not want to do, and perhaps making them give more than they really should.

So I'm of two minds about the idea of "research scientists giving blood for the lab's research projects." All for the cause of science, yes, but is this really ethical? And how much more work would it really have been to get other people's blood instead? I just don't think I could work in a lab like that; I'm not good with giving blood, I get terrible headaches after most blood draws, and I wouldn't enjoy feeling pressured to give even more.

Is there any industry besides science where near-mandatory blood donations would even happen? MAYBE healthcare? But blood draws can cause lethargy, and we don’t want the EMTs or nurses to be tired on the job. Either way, it’s all a bit creepy, innit?

The need for data, the need for good data

Another stream of consciousness, this one will be a story that will make some people go “no shit sherlock,” but it’s a lesson I had to learn on my own, so here goes:

My work wants me to make plans for "professional development": every year I should be gaining skills or insights that I didn't have the year before.  Professional development is a whole topic on its own, but for now let's just know that I pledged to try to integrate machine learning into some of my workflows for reasons.

Machine learning is what we used to call AI.  It’s not necessarily *generative* AI (like ChatGPT), I mean it can be, but it’s not necessarily so.

So for me, integrating machine learning wasn’t about asking ChatGPT to do all my work, rather it was about trying to write some code to take in Big Data and give me a testable hypothesis.  My data was the genetic sequences of many different viruses, and the hypotheses were: “can we predict which animal viruses might spill over and become human viruses?” and “can we predict traits of understudied viruses using the traits of their more well-studied cousins?”.

My problem was data.  

There is actually a LOT of genetic data out there on the internet.  You can search a number of repositories, NCBI is my favorite, and find a seemingly infinite number of genomes for different viruses.  Then you can download them, play around with them, and make machine learning algorithms with them.
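If you've never pulled sequences from NCBI programmatically, here's a minimal sketch of how it can be done with Biopython's Entrez module (assuming Biopython is installed; the search query and email address are just placeholders, not what I actually use):

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # NCBI asks for a contact email with every request

# Search the nucleotide database; the query here is only an example.
search = Entrez.esearch(
    db="nucleotide",
    term="Coronaviridae[Organism] AND complete genome",
    retmax=20,
)
ids = Entrez.read(search)["IdList"]
search.close()

# Fetch the matching records as FASTA and loop over the sequences.
fetch = Entrez.efetch(db="nucleotide", id=ids, rettype="fasta", retmode="text")
for record in SeqIO.parse(fetch, "fasta"):
    print(record.id, len(record.seq))
fetch.close()
```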

But lots of data isn’t useful by itself.  Sure I know the sequences of a billion viruses, what does that get me?  It gets me the sequences of a billion viruses, nothing more nothing less.

What I really need is real-world data *about* those sequences.  For instance: which of these viruses are purely human viruses, purely animal viruses, or infect both humans AND animals?  What cell types does this virus infect?  How high is the untreated mortality rate if you catch it?  How does it enter the cell?

The real world data is “labels” in the language of machine learning, and while I had a ton of data I didn’t have much *labelled* data.  I can’t predict whether an animal virus might become a human virus if I don’t even know which viruses are human-only or animal-only.  I can’t predict traits about viruses if I don’t have any information about those traits.  I can do a lot of fancy math to categorize viruses based on their sequences, but without good labels for those viruses, my categories are meaningless.  I might as well be categorizing the viruses by their taste, for all the good it does me.

Data labels tell you everything that the data can’t, and without them the data can seem useless.  I can say 2 viruses are 99% identical, but what does that even mean?  Is it just two viruses that give you the sniffles and not much else?  Or does one cause hemorrhagic fever and the other causes encephalitis?  

I don't know if that 1% difference is even important.  If these viruses infect 2 different species of animals, it's probably very important.  But if these viruses infect the same animals using identical pathways and are totally identical in every way except for a tiny stretch of DNA, then that 1% is probably unimportant.

Your model is only as good as your data and your data is only as good as your labels.  The real work of machine learning isn’t finding data, it’s finding labelled data.  A lot of machine learning can be about finding tricks to get the data labelled, for instance ChatGPT was trained on things like Wikipedia and Reddit posts because we can be mostly sure those are written by humans.  Similarly if you find some database of viral genomes, and a *different* database of other viral traits (what they infect, their pathway, their mortality rate), then you can get good data and maybe an entire publication just by matching the genomes to their labels.
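As a concrete sketch of that "match the genomes to their labels" step, here's roughly what it looks like in Python with pandas and scikit-learn. The file names, column names, and the host-range label are all hypothetical stand-ins, and the k-mer-counting classifier is just one simple way to test whether the labels are predictable from sequence at all:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score

# Two hypothetical files: one maps accession -> genome sequence,
# the other maps accession -> the label we care about (e.g. host range).
genomes = pd.read_csv("genomes.csv")   # columns: accession, sequence
traits = pd.read_csv("traits.csv")     # columns: accession, host ("human", "animal", "both")

# The "matching genomes to labels" step: an inner join keeps only the viruses
# that appear in both databases, i.e. the ones we actually have labels for.
labelled = genomes.merge(traits, on="accession", how="inner")

# Turn each sequence into k-mer counts (here 4-mers), a simple numeric representation.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
X = vectorizer.fit_transform(labelled["sequence"])
y = labelled["host"]

# A basic classifier plus cross-validation tells us whether the labels
# are even predictable from sequence alone.
model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())
```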

But the low-hanging fruit was picked a long time ago.  I’m trying to use public repositories, and if there was anything new to mine there then other data miners would have gotten to it first. I still want to somehow integrate machine learning just because I find coding so enjoyable, and it gives me something to do when I don’t want to put on gloves.  But clearly if I want to find anything useful, I have to either learn how to write code that will scrape other databases for their labels, create *my own data*, or maybe get interns to label the data for me as a summer project.  

Stay tuned to find out if I get any interns.

I don’t like Factorio: Space Age

I started, stopped, and started this post several times. I just want to get it out the door, so I'm posting it now even though it's not the greatest. I'll have more to post on Factorio after this, but my thesis remains: I loved Factorio on its own, and I don't like Factorio: Space Age. I don't think it's a good expansion pack and I don't think you should buy it.

Let me ramble about science in the base version of Factorio.

Red science was so simple you could craft it in your inventory. But the long time it took encouraged you to figure out automation to make that unnecessary. Green science was a step up: it not only tested your automation skills, but also encouraged *and* rewarded you for successfully doing it. To explain: green science needs inserters and belts, which are two things you'll make a *lot* of in Factorio. If you want to succeed, you'll need to automate them, so you might as well do so since they're also needed for green science. Conversely, once you do get over the difficulty hill of automating them, you can split off the inserters and belts you'll need for your factory, because you're probably building more than what your green science needs. So green science encourages you to automate the things you'll need to automate anyway, but also rewards you, since automating those things is a necessary step in growing the factory.

From there, blue science tests a whole new subject: fluid mechanics. Blue science needs plastics, which needs petroleum gas, which needs oil. If you’ve never dealt with factorio fluids before, blue science demands you learn how. But you’re also rewarded with bots, because blue science unlocks the construction and logistics robots that make the second half of the game so much easier.

Purple science doesn't feel much different than blue science, but I think the name "production science" is fitting because it's a real step up in total materials if not complexity. For the most part purple science uses all the same inputs as blue science, but no matter how much I feel I overbuild, I *always* seem to run out of steel for it! Purple science tests your ability to scale, and scale big, because you always need more steel than you think you need.

Finally, yellow science really feels like a final exam. Like purple science you'll need to have an overwhelming volume of inputs, this time copper instead of iron/steel. Blue Circuits and Batteries both require you to have completely mastered the game's liquid input systems, with multiple steps where chemical plants feed into assemblers and vice versa.

When you finally master yellow, white science is strangely underwhelming. It's mostly "the same but more": it requires blue circuits and low density structures just like yellow science (plus extra green and red circuits before Space Age came out), but then adds rocket fuel on top of that and a huge space launcher that needs to be built. Not exactly a great leap in difficulty, but by then you're probably just ready for it to end, so it's in a good place overall.

The thing is, Space Age doesn’t feel like it follows this kind of progression, or any progression. Each planet feels mostly like redoing red and green science. The science pack only demands that you master the basics of automation on this new planet with these new resources. And once you do that, you can leave and never need to return.

It feels… not great. I don’t feel any sense of adventure and progression landing on planet after planet and doing the equivalent of “super simple red/green science, only now with 1 new ingredient no other planet has.”

The space mechanics are like Dyson Sphere Program, in that they aren't realistic at all and I wish they were. I know making Kerbal Space Program *in* Factorio would have been hard, but at the very least I don't see why a rocket that runs out of fuel starts slowly sinking back to the planet it launched from, yet never falls into the atmosphere and hits the ground. A rocket that loses fuel just continues to drift on its current trajectory. If you want it to fall back to the planet it launched from, then that trajectory should eventually make it hit the ground. But instead Factorio: Space Age has this worst-of-every-single-world middle ground where things are unintuitive *and* unphysical *and* waste your time. My first ever space ship didn't have enough fuel to reach its destination planet, so I had no choice but to wait for it to *sloooooooooooooooowly* drift backwards to the first planet before I could give it more fuel to try the journey again. I had no way to speed this up, and I had no reason to think it *would even work that way*, since that's not how space travel actually works.

Another thing I dislike: I feel like this game had room for having the planets interact with each other more. The space ships are built off the old system for railroads, but the spaceships aren't useful as railroads. The game is clear that you should simply be producing your science on each planet and then shipping it all to Nauvis for research. But why does that have to be the *only* option? Why not make it so that we can juggle items and send them all over to each planet? Because the devs decided every challenge in this expansion pack must have *a single specific solution*, rather than letting the player come up with their own solution. That's bad game design and makes this game less fun.

When I played with rails, yes, I would make a starter base for red/green/black science. Then another for blue, another for purple, another for yellow+white. And I'd run a single train line to each of these bases to ship all the science to a single location. But you don't have to be that lame. You can have train lines running in all directions to ship all raw resources to a centralized location. This can simplify, say, your green chip production if it all happens in one place and you just siphon those chips to each research that needs them.

Or you can have satellite bases that build intermediate products, say putting all chips in one place and shipping them around. Or a mishmash of both where sometimes you produce everything onsite and only ship the science back and sometimes you’re importing everything just to make science. You can do a lot of things.

You can’t do that in space age because of the seemingly arbitrary restrictions on how much stuff can fit in a rocket. 2,000 green chips can fit in a single rocket, but only 300 blue chips. Blue chips stack a lot more efficiently than that, the only reason for this is the feeling that it would be “too easy” if you could ship blue chips around from Fulgora. But would it be easy, or would it be interesting? They clearly wanted you to engage with space shipping, the entire planet Aquilo punishes you if you don’t, but they didn’t want you to do *enough* space shipping to actually make planet-to-planet production lines like you could with trains in the base game.

And I think that's a huge missed opportunity, because I'd *love* it if I could be rewarded for interplanetary shipping like this. I'd love to heavily focus Vulcanus on the "low tier" items and Fulgora on the "high tier." Gleba could specialize in the various oil derivatives with all its bioproducts. Then I could ship whatever I need wherever I need it and have an engaging reason to produce a lot of different space ships with different needs.

It feels like the game quite clearly has exactly one way you have to play and doesn’t want you to experiment, rather it wants you to find and accept the “right” way. The most clear version of this is in the asteroids that will hit your space ships. Fighting the biters in the base game gave a huge latitude for experimentation, did you turret creep them? Mass produce grenades and use grenade spam? Drive all around them in a car with autocannons? Go for the defender capsules? There’s a lot of different ways to do things and none of them are wrong. You can use a tank or ignore it completely. You can focus on personal laser defense to kill biters up close, or rush artillery to kill them from afar. Do you even care to try uranium ammo? Or nuclear bombs? Or do you just want to plop down a long line of laser turrets and call it a day? The game lets you play how you want, rewards you for experimenting, and never punishes you for trying something “wrong.”

Space Age punishes you for not playing its way. You need to use turrets in space to protect from asteroids. And you need to build ammo in space to feed the turrets. You can’t use lasers like you could on the ground, because then you’d only need to focus on power, so asteroids have 99% damage reduction against the same lasers that can kill a behemoth biter twice their size. And you can’t ship ammo up to the space ship either, that would be too easy. Instead ammo has been heavily curtailed with how much of it can be shipped to and fro. 25 uranium-coated bullets weigh as much as 1,000 solid iron plates. Check the periodic table and do the math, I assure you it doesn’t add up. Even more crazy is that 25 uranium bullets weigh as much as 50 uranium fuel cells, U-238 really isn’t *that* much heavier than U-235 guys.

And then once you get ammo working, they introduce new asteroids that are 99% resistant to physical damage. All so that you are forced to build rocket turrets instead, which are the new asteroids' one weakness. Then finally rocket turrets need to be upgraded to tesla turrets.

There’s no variety here, there’s no experimentation, there’s no reward for trying things your way. You don’t get to try other options like shipping all your ammo up and trying to make it that way. Or focusing on laser turrets instead of gun turrets. Or using walls to ram the asteroids instead of using guns at all. There’s a lot of alternative routes that are just fine to experiment with against biters, but are shot down when you go against asteroids because the devs had a very specific vision in mind for how they wanted space ships to work, and stepping outside of their vision is not allowed.

The game just isn't fun. The newest planets are hit and miss. Fulgora is nice because it's a backwards planet: all the most expensive materials are easy to get and all the cheapest materials are harder to get. Vulcanus is my favorite because it actually does something cool: your normal solid products are turned into liquids instead. Gleba is terrible game design and should be deleted entirely. Aquilo is unfinished and boring.

And overall even the new planets aren’t fun when I’m just landing, doing 3 things, and then leaving that planet never to return. I don’t feel like these bases are part of “my” base the way I felt when I made an area for purple science and an area for yellow science. I don’t feel like they connect to each other in any way because they don’t.

And I don’t feel like any of the challenges the game presents are worthwhile in their own right, because they’ve all been made with the mindset of “there is only 1 way to properly complete this challenge, find the way the game devs wanted or else.” They’ve specifically put down guard-rails to prevent you from ever having an original thought that wasn’t the solution they themselves wanted, and it just feels lame. Space ship design should be the greatest avenue for player freedom and creativity, but instead everyone’s space ship is *identical* because the devs needed to make the challenges solvable in only 1 precise way. So no one ships ammo to space, no one tries to smash into the asteroids with walls and rebuild faster than they take damage, no one tries to do anything except the exact solution the devs wanted, and that is such a shame for a game that until now was so focused on player freedom and expression.

Factorio: Space Age is not a good expansion pack. I thought it would rekindle my love for Factorio, but now I never want to play Factorio again. I had been playing for absolute ages, and had recommended the game to friends. But I can’t recommend this expansion pack to anyone I know; it just isn’t what made Factorio so fun to begin with.

If the government doesn’t do this, no one will

I’m not exactly happy about the recent NIH news. For reference, the NIH has decided to change how it pays for the indirect costs of research. When the NIH gives a 1 million dollar grant, the University which receives the grant is allowed to demand an additional amount in “indirect costs” to support the research.

These add up to a certain percentage tacked onto the price of the grant. For a Harvard grant this was about 65%; for a smaller college it could be 40%. What that meant was a 1 million dollar grant to Harvard was actually 1.65 million, while a smaller college got 1.4 million. 1 million was always for the research, but the extra 0.65 or 0.4 million was for the “indirect costs” that made the research possible.

The NIH has just slashed those costs to the bone, saying it will pay no more than 15% in indirect costs. A 1 million dollar grant will now give no more than 1.15 million.
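For concreteness, the arithmetic is just a percentage tacked onto the direct costs. A minimal sketch using the round numbers from this post:

```python
# Total award = direct costs plus the indirect-cost percentage on top.
# Rates are the round numbers used in this post: 65% (Harvard, old),
# 40% (a smaller college, old), and 15% (the new cap).

def total_award(direct_costs: float, indirect_rate: float) -> float:
    """Direct costs plus the indirect costs tacked on top."""
    return direct_costs * (1 + indirect_rate)

direct = 1_000_000  # a 1 million dollar grant
for label, rate in [("Harvard, old", 0.65), ("smaller college, old", 0.40), ("new cap", 0.15)]:
    print(f"{label}: ${total_award(direct, rate):,.0f}")
# Harvard, old: $1,650,000
# smaller college, old: $1,400,000
# new cap: $1,150,000
```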

There’s a lot going on here so let me try to take it step by step. First, some indirect costs are absolutely necessary. The “direct costs” of a grant *may not* pay for certain things like building maintenance, legal aid (to comply with research regulations), and certain research services. Those services are still needed to run the research, though, and have to be paid for somehow; indirect costs were the way to pay for them.

Also some research costs are hard to itemize. Exactly how much should each lab pay for the HVAC that heats and cools their building? Hard to calculate, but the building must be at a livable temperature or no researcher will ever work in it, and any biological experiment will fail as well. Indirect costs were a way to pay for all the building expenses that researchers didn’t want to itemize.

So indirect costs were necessary, but were also abused.

See, unlike what I wrote above, a *university* almost never receives a government grant; a *principal investigator* (called a PI) does instead. The PI gets the direct grant money (the 1 million dollars), but the University gets the indirect costs (the 0.4 to 0.65 million). The PI gets no say over how the University spends that money, and many have complained that far from supporting research, Universities use indirect costs to subsidize their own largess: beautifying buildings, building statues, creating ever more useless administrative positions, all without actually using that money how it’s supposed to be used, supporting research.

So it’s clear something had to be done about indirect costs. They were definitely necessary; if there were no indirect costs, most researchers would not be able to research, as Universities won’t let you use their space for free and direct costs don’t always allow you to rent lab space. But they were abused in that Universities used them for a whole host of non-research purposes.

There was also what I feel is a moral hazard in indirect costs. More prestigious universities, like Harvard, were able to demand the highest indirect costs, while less prestigious universities were not. Why? It’s not like research costs more just because you have a Harvard name tag. It’s just because Harvard has the power to demand more money, so demand they shall. Of course Harvard would use that extra money they demanded on whatever extravagance they wanted.

The only defense of Harvard’s higher costs is that it’s doing research in a higher cost of living environment. Boston is one of the most expensive cities in America, maybe the world. But Social Security doesn’t pay you more if you live in Boston or in Kalamazoo. Other government programs hand you a set amount of cash and demand you make ends meet with it. So too could Harvard. They could have used their size and prestige to find economies of scale that would give them *less* proportional indirect costs than could a smaller university. But they didn’t, they demanded more.

So indirect costs have been slashed. If this announcement holds (and that’s never certain with this administration; walking it back and being sued into undoing it seem equally likely), it will lead to some major changes.

Some universities will demand researchers pay a surcharge for using facilities, and that charge will be paid for out of direct costs instead. The end result is the university still gets money, but we can hope that money will have a bit more oversight. If a researcher balks at a surcharge, they can always threaten to leave and move their lab.

Researchers as a whole can likely unionize in some states. And researchers, being closer to the university than the government, can more easily demand that this surcharge *actually* support research instead of going to the University’s slush fund.

Or perhaps it will just mean more paperwork for researchers with no benefit.

At the same time some universities might stop offering certain services for research in general, since they can no longer finance that through indirect costs. Again we can hope that direct costs can at least pay for those, so that the services which were useful stay solvent and the services which were useless go away. This could be a net gain. Or perhaps none will stay solvent and this will be a net loss.

And importantly, for now, the NIH budget has not changed. They have a certain amount of money they can spend, and will still spend all of it. If they used to give out grants that were 1.65 million and now give out grants that are 1.15 million, that just means more individual grants, not less money. Or perhaps this is the first step toward slashing the NIH budget. That would be terrible, but no evidence of it yet.

What I want to push back on though, is this idea I’ve seen floating around that this will be the death of research, the end of PhDs, or the end of American tech dominance. Arguments like this are rooted in a fallacy I named in the title: “if the government doesn’t do this, no one will.”

These grants fund PhDs who then work in industry. Some have tried to claim that this change will mean there won’t be bright PhDs to go into industry and work on the future of American tech. But to be honest, this was always privatizing profit and socializing cost. All Americans pay taxes that support these PhDs, but overwhelmingly the benefits are gained by the PhD holder and the company they work for, neither of whom had to pay for it.

“Yes but we all benefit from their technology!” We benefit from a lot of things. We benefit from Microsoft’s suite of software and cloud services. We benefit from Amazon’s logistics network. We benefit from Tesla’s EV charging infrastructure. *But should we tax every citizen to directly subsidize Microsoft, Amazon, and Tesla?* Most would say no. The marginal benefits to society are not worth the direct costs to the taxpayer. So why subsidize the companies hiring PhDs?

Because people will still do things even if the government doesn’t pay them. Tesla built a nationwide network of EV chargers while the American government couldn’t even build 10 of them. Federal money was not necessary for Tesla to build EV chargers; they built them of their own free will. And before you falsely claim how much Tesla is government subsidized, an EV tax credit benefits the *EV buyer*, not the EV seller. And besides, if EV tax credits are such a boon to Tesla, then why not own the fascists by having the Feds and California cut them completely? Take the EV tax credits to 0, that will really show Tesla. But of course no one will, because we all really know who the tax credits support: they support the buyers, and we want to keep them to make sure people switch from ICE cars to EVs.

Diatribe aside, Tesla, Amazon, and Microsoft have all built critical American infrastructure without a dime of government investment. If PhDs are so necessary (and they probably are), then I don’t doubt the market will rise to meet the need. I suspect more companies will be willing to sponsor PhDs and University research. I suspect more professors will become knowledgeable about IP and will attempt to take their research to market. I suspect more companies will offer scholarships where, after achieving a PhD, you promise to work for the company on X project for Y years. Companies won’t just shrug and go out of business if they can’t find workers; they will work to create them.

I do suspect there will be *less* money for PhDs in this case, however. As I said before, the PhD pipeline in America has privatized profits and socialized costs. All American taxpayers pay billions towards the Universities and researchers that produce PhD candidates, but only the candidates and the companies they work for really see the gain. But perhaps this can realign the PhD pipeline with what the market wants and needs: fewer PhDs of dubious quality and job prospects, more with necessary and marketable skills.

I just want to push back on the idea that the end of government money is a death knell for industry. If an industry is profitable, and if it sees an avenue for growth, it will reinvest profits in pursuit of growth. If the government subsidizes the training needed for that industry to grow, then it will instead invest in infrastructure, marketing, IP, and everything else. If training is no longer subsidized, then industry will subsidize it itself. If PhDs are really needed for American tech dominance, then I absolutely assure you that even the complete end of the NIH will not end the PhD pipeline; it will simply shift it towards company-sponsored or (for the rich) self-sponsored research.

Besides, the funding for research provided by the NIH is still absolutely *dwarfed* by what a *single* pharma company can spend, and there are hundreds of pharma companies *and many many other types of health companies* out there doing research. The end of government-funded research is *not* the end of research.

Now just to end on this note: I want to be clear that I do not support the end of the NIH. I want the NIH to continue, I’d be happier if its budget increased. I think indirect costs were a problem but I think this slash-down-to-15% was a mistake. But I think too many people are locked into a “government-only” mindset and cannot see what’s really out there.

If the worst comes to pass and you cannot find NIH funding, go to the private sector, go to the non-profits. They already paid lower indirect cost rates than the NIH did, but they still funded a lot of research, and will continue to do so for the foreseeable future. Open your mind, expand your horizons, and try to find out how you can get non-governmental funding, because if the worst happens that may be your only option.

But don’t lie and whine that if the government doesn’t do something, then nobody will. That wasn’t true with EV chargers, it isn’t true with biomedical research, and it is a lesson we all must learn if the worst does start to happen.

“I hate them, their antibodies are bull****”

I want to tell two stories today, they may mean nothing individually but I hope they’ll mean something together. Or they’ll mean nothing together, I don’t know. I’ve gotten really into personal fitness and am writing this in between sets of various exercises I can do in my own house.

The first story is from before the pandemic. I used to be a biochemist (still am, but I used to too). During that time I went to a lot of conferences and heard a lot of talks by the Latest and Greatest. One of the most fascinating was by a group out of Sweden preparing what they called a “cell atlas,” a complete map that could pinpoint the location of every protein found in healthy human cells.

The science behind the cell atlas was pretty sweet. We know that the physical location of proteins in the body really matters: the proteins that transcribe DNA into RNA are only found in the nucleus, because DNA itself is only found in the nucleus. Physical location is very important so that every protein in the body is doing only the job it’s assigned, and not either slacking off or accidentally doing something it isn’t supposed to. The former gives you a wasting disease and the latter may cause cancer.

So knowing the location of these proteins on a subcellular level is actually pretty important. But how can we even determine that? We can’t really zoom into a cell and walk around checking off proteins, can we?

The key was that this group was also really into making their own fluorescent antibodies. They could make antibodies against any human protein and then stick on a fluorescent tag that lights up under the right conditions. Then it was just a task of sticking the antibodies into cells and seeing which part lights up: that tells you where the protein is.

There was a bit more to it of course, I should do a post about how all this relates to Eve Online, but that was the gist of it: put antibodies in cells and see where the cell lights up. Use that to build an atlas of the subcellular locations of the human proteome.

It was some cool science and a nice talk. A few months later I was at another conference and the discussion came up of whether conferences ever really have “good” talks, or if scientists are incapable of anything above “serviceable.” I proffered the cell atlas talk as one I thought was actually “good”: it was good science explained well. The response I got from one professor stunned me: “oh I hate those people, their antibodies are bullshit.”

I don’t know how or why, but somehow this professor had decided that the in-house antibodies which underpinned the cell atlas project were all poorly made and inaccurate, which undercut the validity of the entire project. I didn’t press further for this professor’s reasoning or evidence; I could tell he was a bit heated (and drunk), and I left it at that. But while I never got any evidence against the cell atlas antibodies, I also never heard much in their favor. It seemed like a big project that just never got much recognition in the circles I ran in.

So was the cell atlas project a triumph of niche science, or a big scam? Well I don’t know, but it reminds me of another story.

As I said above, I’m much more into personal fitness these days. The Almighty Algorithm knows this, and so youtube serves me up a steady stream of fitness influencer content. I still stay away from anything that isn’t Mike Israetel or a few other “evidence based” youtubers, but even this small circle has served up its own helping of scientific slapfights.

In this case the slapfight is about “training to failure.” Most fitness influencers agree that you have to train hard if you want results. What exactly counts as “hard” though, that is where the controversy lies.

First of all, what is “training to failure?” Well unfortunately that too is controversial, because everyone has a different definition of what “failure” actually means. But generally, failure is when you are doing some exercise (a pushup, a pullup, a bench press) and you cannot complete the movement. Say you’ve done 5 pullups and you can’t do another, that’s “failure.”

Mike Israetel shows off example workouts of himself training hard, and he claims he’s training with “0 to 1 reps in reserve,” that’s a fancy way of saying he is training very near failure. If he does 5 pullups and claims he has 0 to 1 RIR (reps in reserve), then he is saying he could do AT MOST 1 more pullup, but he might actually fail if he even tried. He does this for almost every movement: bench presses, leg presses, squats, deadlifts, his claim of 0 to 1 RIR means he is doing the exercise until he can either no longer do it, or do it at most 1 more time before failure.
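If the bookkeeping helps, here’s a tiny illustration of the definition (my own sketch, not anything Israetel publishes); the only honest input is your guess at how many total reps you could have done:

```python
def reps_in_reserve(reps_completed: int, estimated_max_reps: int) -> int:
    """RIR: how many more reps you believe you could still have done.

    estimated_max_reps is your honest guess at the most reps you could
    have completed in that set before failing the movement.
    """
    return max(estimated_max_reps - reps_completed, 0)

# 5 pullups done, but I believe 6 was my absolute limit: 1 RIR.
print(reps_in_reserve(reps_completed=5, estimated_max_reps=6))  # 1
# 5 pullups done and 5 was truly my limit: 0 RIR, i.e. training to failure.
print(reps_in_reserve(reps_completed=5, estimated_max_reps=5))  # 0
```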

Failure itself is hard to measure, and sometimes you don’t know you’ll fail a move until you try. I was once doing pushups and just suddenly collapsed on my chest, not even knowing what happened. A quick assessment showed my shoulders had given out, and since pushups are supposed to be a chest exercise, this implies I was doing them wrong. But that was a case where I clearly trained to failure, since I tried to do the motion and failed.

But other fitness influencers have called Mike out on his 0 to 1 RIR claim, they think he isn’t training anywhere close to failure. The claims and counterclaims go back and forth, and unfortunately the namecalling does as well. I’ve kinda lost respect for the youtubers on all sides of this argument because of it.

But it gets back to the same point as the antibody story up above: a scientist is making a claim that they think is well-founded and backed by evidence, other scientists claim it’s all bullshit.

We think of science as very high-minded, conducted through solemn papers submitted to austere journals. I don’t think that’s ever been the case; science is conducted as much through catty bickering and backbiting as it is in the peer-reviewed literature. Scientists are still people, and I’m sure a lot of us are happy to take our cues from people we respect without spending the time to go diving into the literature. The literature is long and dense, and you may not even be the right kind of expert to evaluate it. So when someone you respect says a claim is bullshit, I’m sure a lot of people accept that and don’t pay the claim any additional mind.

So is the cell atlas actually good? Is Mike Israetel actually training to failure? I don’t know. I’m not the right kind of scientist to evaluate those claims. The catty backbiting has reduced my opinion of all the scientists involved in these controversies, although I understand that drunk scientists are only human and youtubers need to make a living through drama, so I try not to be too unkind to them.

Still, it’s a reminder that “the science” isn’t a thing that’s set in stone, and “scientists” are not all steely-eyed savants searching dispassionately for Truth. I don’t have any good recommendations from this unfortunately, the only thing I can think of is the bland “don’t believe scientists unquestioningly,” but that’s hardly novel. I guess just realize that scientists can disagree as childishly and churlishly as anyone else.