Do momentum strategies beat buy-and-hold?

This post has been a LONG time coming, but a while ago I wrote about the rate of return for investing in the S&P 500. In that article, I compared the returns of someone executing a buy-and-hold strategy starting in a certain year and ending 10 years later. Unsurprisingly, the best time to start a 10-year investment was in 1990 or early 1991, as the peak of the DotCom bubble happened 10 years later and you could sell out at the top.

Figure 1: Return over 10 years of a $10,000 investment, assuming buy-and-hold strategy

But what about someone who wants a more sophisticated strategy than simple buy-and-hold? The reason people day-trade is that they hope to beat the market, not just match it. One strategy that I have seen genuine, peer-reviewed literature discussing is the so-called “momentum” strategy of buying while the market is going up and selling while it’s going down. In this way you should avoid big losses (like the DotCom bust) but still have big gains (like the DotCom bubble).

Now, a momentum strategy can be done in different ways. It can look at specific time periods, it can include shorting, it can include sector rotation, etc. But the simplest momentum strategy I found was to simply sell out whenever the market dropped by 20%, and then buy back in when it recovered 20% from the bottom. This is intended to stop losses on the way down and avoid FOMO-ing back in during a bull trap, only buying stocks during a true bull market.

I wrote a program to calculate the return on a $10,000, 10-year investment using that strategy.
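Roughly, the rule looks like this in code (a simplified sketch that assumes a plain list of daily closing prices and ignores dividends and transaction costs):

```python
def momentum_return(prices, initial=10_000, threshold=0.20):
    """Sell after a `threshold` drop from the last peak, buy back after a
    `threshold` rise off the subsequent bottom. Returns the final profit."""
    shares = initial / prices[0]      # start fully invested
    cash = 0.0
    peak = prices[0]                  # highest price seen while invested
    trough = prices[0]                # lowest price seen while in cash

    for p in prices[1:]:
        if shares > 0:                             # invested: watch for a 20% drop
            peak = max(peak, p)
            if p <= peak * (1 - threshold):
                cash, shares = shares * p, 0.0     # sell everything
                trough = p
        else:                                      # in cash: watch for a 20% recovery
            trough = min(trough, p)
            if p >= trough * (1 + threshold):
                shares, cash = cash / p, 0.0       # buy back in
                peak = p

    final_value = shares * prices[-1] if shares > 0 else cash
    return final_value - initial
```

Sliding a rule like that over every possible 10-year window of the S&P data is what produces a chart like Figure 2.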

Figure 2: Return over 10 years of a $10,000 investment, assuming 20% momentum strategy

The results are fairly discontinuous because of the rigidity of the 20% cutoff, but some patterns do emerge. The return is almost identical to buy-and-hold for people who invested in 1990, because for that 10-year period the market never dropped 20% and the sell rule never triggered. Once you get into 1991 however, this strategy would have allowed some people to avoid the worst of the DotCom crash, as they would have sold out when the market dropped hard. In that case they would have done better than a buy-and-hold strategy.

However, that’s just an example of the strategy working at its best. I decided to compare the two strategies directly by subtracting the two graphs from each other, creating the figure below. Any dot on the zero line is a point where buy-and-hold performed identically to momentum. Any dot below is where momentum performed worse, and the few dots above are where it performed better.

Here we see some interesting patterns: the momentum strategy actually performed pretty poorly for anyone who started a 10-year investment in the 2000s. The peaks in the early 90s are people who sold out during the DotCom bust and missed the worst of the losses. The peak around 1999 is people who sold out during the Financial Crisis and missed the worst of the losses. But the declining valley during the 2000s is the result of people who would have sold out during the Financial Crisis, but then waited for the market to climb back above where they had sold before buying back in.

Remember that the momentum strategy involves selling when the market has lost 20% and only re-buying when it’s regained 20% off the bottom. Less than 20% off the bottom and you can argue (as some have this year) that it’s just a “bull trap” and the market still has “another leg down”, i.e. much further to fall. This can result in standing on the sidelines with your cash while the market makes money without you. And using this momentum strategy, that’s exactly what can happen.

I use this to illustrate a point I’ve talked about before: it’s not usually smart to just sit on cash waiting for the market to fall further. Sure the market can fall further, but it can also rise and leave you behind. Time in the market beats timing the market. Furthermore, this experiment is as generous as possible to the momentum strategy: there are no transaction costs (the bid-ask spread is an unavoidable real-world cost) and we ignore dividends (which reward time in the market, so leaving them out flatters the timing strategy). If total returns were taken into account along with transaction costs, it’s debatable whether any 10-year momentum investment would have beaten buy-and-hold. Even as it stands now, only a very few lucky investment windows would have benefited from momentum strategies; most would have done best with buy-and-hold.

Just for kicks, I reran this data with a 10% momentum strategy instead of 20%, and the results were even worse for momentum. Selling out at the first sign of trouble, FOMO’ing back into the first recovery, and then losing all over again makes for a terrible strategy, and with a 10% threshold that’s basically what momentum trading becomes.

I can go on to look at more exotic momentum strategies some other time (for example, shorting stocks that are falling and going long on stocks that are rising), but for now I think I’ve proven my point.

Technical analysis and fundamental analysis cannot both be true

I’ve written before about how I’m not sold on technical analysis being viable, but I’ve seen counter-arguments floating around that say “it doesn’t matter if TA doesn’t make sense, if people believe it and act on it then it will still move the markets.” In this case, knowing TA yourself lets you read the minds of all the other TA-knowers and join in their pumps and their dumps, making money by being part of the crowd. Yet fundamental analysis says that there is an underlying “fair value” for an asset, and good investing is about going long on undervalued assets and short on overvalued ones. Fundamental analysis accepts the possibility of hype and speculation (“the market can stay irrational longer than you can stay solvent”, etc.), but it requires that at SOME point things fall back to earth and assets reach their fair value.

Technical analysis on the other hand implies that the future price of a stock is most strongly connected to its previous prices, not to the value of the underlying asset. This means that previous highs, lows, and averages give a kind of momentum that can be predicted and traded on.

My question is: how can these both be true? Technical analysis assumes that the price reflects all available information; otherwise past trends cannot predict future prices. Fundamental analysis assumes that the price does not reflect all available information; otherwise there would be no over-valued or under-valued stocks and everything would already trade at its fair value.

How can the price of a stock be dependent both on the analysis of the underlying asset AND on the previous prices of the stock, since those can easily move in opposite directions? If there is a conflict between the TA and the FA, who wins? Well, if you believe the Efficient Market Hypothesis, neither wins, because the winning move is to buy once and hold forever. But if you believe FA then FA wins and the price must eventually return to fair value; if you believe TA then TA wins. I don’t think both can be true in all or most cases.

I’d like to know if anyone believes both TA and FA can be true and if so why.

As an aside, Wikipedia explicitly labels Technical Analysis as a pseudoscience.

So who’s still sitting on cash?

A couple of months ago people were clamoring that anyone holding stocks was a moron and it was better to be sitting on cash. Where are those people now?

Back in December, the S&P 500 was hovering around 3800 and we had just emerged from the S&P’s worst-performing year since the Great Recession. With the Federal Reserve continuing to tighten, plenty of folks were scrambling to say that the worst was yet to come and everyone needed to get out of stocks NOW.

Since then, the market has recovered, the FTSE in particular has hit all-time highs, and sentiment is strengthening. Is there still a case to be made that the market will drop another 20% from here? If there is, I’m not seeing it made. If you pulled everything out of the market in December, well, you missed the upswing. Cash isn’t without its downsides.

This was supposed to be a bigger post, but frankly I just don’t have much else to say. Don’t be like this guy: just because stocks can go up as well as down doesn’t mean your best bet is to sell everything and put it under a mattress. On average, the people who make the fewest trades have the best portfolios, and that means buying once and never selling.

Energy Return on Energy Investment, a very silly concept

Today I’d like to address a concept I read about in Richard Heinberg’s The End of Growth: Energy Return on Energy Investment, or EROEI. The concept is an attempt to quantify the efficiency of a given energy source, and in the hands of Heinberg and other degrowthers it is a way to “prove” that we are running out of usable energy.

EROEI is a simple and intuitive concept: take the amount of energy produced by a given source and divide by the amount of energy it costs to set up and use that source. Oil is a prime example. At the beginning of the 20th century oil extraction was easy, since oil just seeped out of the ground in many places. Drilling a small oil well won’t cost you that much, hell you can probably do it with manpower alone. In that case the oil gushing forth will easily give you a good energy return.
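In symbols, it’s just a ratio:

$$\text{EROEI} = \frac{E_{\text{delivered}}}{E_{\text{invested}}}$$

Anything above 1 means the source hands back more energy than it took to develop; anything below 1 means it is, in pure energy terms, a net loss.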

In the 21st century, however, things have become harder. Oil wells require powerful machines to drill (which costs energy), and the amount and quality of the oil you get out is often lower. Add to that the fact that modern wells require huge amounts of metal and plastics, all of which cost energy to produce and even more energy to transport to their location, then add the energy it took to find the oil in the first place using complex geological surveys and seismographic data, and taken together some people claim that the EROEI for a modern oil well is already less than 1, meaning that more energy is being put in than we get out.

And oil isn’t the only fuel source heading towards an EROEI of less than 1. Modern mining techniques for coal require bigger and bigger machines, natural gas requires more and more expansive facilities, even solar panels require minerals that are more and more difficult to acquire. It seems everything but hydro power and (perhaps) nuclear power is becoming harder and harder to produce, sending energy returns down further and further.

This phenomenon, where the EROEI of our energy sources drops below 1, is supposed to presage an acute energy crisis and the economic cataclysm that degrowth advocates have been warning us about. If we’re getting out less energy than we’re putting in, then we’re not really gaining anything, are we? The problem is, I’m struggling to see how EROEI is even a meaningful way to look at this.

First let me note that not all energy is created equal. Energy in certain forms is more usable to us than in others. A hydroelectric dam holds water which (due to its being elevated above its natural resting place) acts as a store of potential energy. The release of that water drives a turbine to produce electricity. But you can’t fly a plane using water power, nor keep it plugged in during flight. Jet fuel is another store of potential energy, and it has a number of advantages versus elevated water. Jet fuel is very easy to use and transport: you can fill a tank with it and move it to wherever your plane is, then fill the plane’s tanks from there.

If the only two energy sources in the world were jet fuel and hydroelectric power, we would still find it beneficial to somehow produce jet fuel using hydroelectric power, even though that would necessitate an EROEI of less than one. Although this conversion would yield less total energy, the energy would be in a more useful form. People would happily extract oil using hydroelectric power, then run refineries using hydroelectric power, because jet fuel has so much utility. This utility means that, supply being equal, jet fuel would command a higher price than hydroelectric power per unit of energy. And so the economic advantages would make the EROEI disadvantages meaningless.
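To put toy numbers on it (every figure here is invented purely for illustration):

```python
# Invented numbers, purely to illustrate why EROEI < 1 can still be profitable.
hydro_in_mj  = 150       # MJ of hydroelectricity spent making the jet fuel
jet_out_mj   = 100       # MJ of jet fuel produced
eroei        = jet_out_mj / hydro_in_mj          # ~0.67: an "energy loss"

price_hydro  = 0.02      # $ per MJ of grid electricity (made up)
price_jet    = 0.06      # $ per MJ of jet fuel (made up)

cost    = hydro_in_mj * price_hydro              # $3.00 of electricity in
revenue = jet_out_mj * price_jet                 # $6.00 of jet fuel out

print(f"EROEI = {eroei:.2f}, profit = ${revenue - cost:.2f}")
# EROEI is well below 1, yet the conversion doubles your money,
# because the output energy is in a far more useful (and valuable) form.
```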

This is the fatal flaw of EROEI in my mind. The fact that some forms of energy are more useful than others means we can’t directly compare energy out and energy in. The energy that is used to run a modern oil well comes from the grid, which is usually powered by coal, solar, wind, or nuclear, none of which can be used to fuel a plane. Converting these forms of energy into oil is an economic gain even if it is an energy loss. Furthermore, EROEI estimates are generally overly complex and try to account for every joule of energy used in extraction, even when those calculations don’t really make sense. Let me give you an example:

A neolithic farmer has to plow his own fields, sow his own seeds, reap his own corn. Not only that, but the sun’s rays must shine upon his fields enough to let the crops grow. Vast amounts of solar energy hit his plants, and most of it is lost during the plants’ growth process because photosynthesis is actually not all that efficient to begin with. Over a season the plants will have absorbed billions of kilocalories of energy, and from them the farmer gets back a few thousand kilocalories of food. Most of the energy is lost.

This is the kind of counting EROEI tries to do, applied to farming. When you count up every joule of energy that went into the farmer’s food, you find his food will necessarily provide him with an EROEI of less than one thanks to the laws of thermodynamics. But this isn’t a problem, because Earth isn’t a closed system, nor are our oil wells. We are blasted by sunlight every minute, our core produces heat from decaying radioactive isotopes, our tides are driven in part by the moon’s gravity; there is so much energy hitting us that we could fuel the entire world for a thousand years and never run out. The problem is that there are some scenarios where that energy isn’t useful. You can’t fly a plane with solar or geothermal or gravitational energy, but you can power an oil well with them. So we happily use the energies we have lots of (including our use of solar power to grow useful plants and animals!) to help us extract the energies with greater utility.

I think EROEI was flawed from the very beginning for this very reason. It ignores economic realities and the massive amount of energy that surrounds us, and instead argues from thermodynamics. Yes, in any closed system usable energy eventually runs out, but it isn’t even clear that our universe is a closed system, and the Earth definitely is not, so we need to face up to economic reality on this.

10x Genomics: what happened to all the DNA stocks?

2020 was the year of COVID, but it was also the year of DNA. Thousands of DNA companies and researchers got in on the pandemic, since there was a sudden surge in demand for COVID testing, contact tracing, and virus research. Now, COVID is an RNA virus, but RNA and DNA work so similarly that most organizations that do one can do both. Even the universities got into it: I know the Genomics Research Core Facility at the university I used to work for got money from city, state, and federal governments to process COVID tests, which was far more profitable for them than sequencing my plasmids once a month.

10x Genomics is a DNA sequencing company that, like so many others, skyrocketed in valuation during 2020. It hit its absolute peak in early 2021 and then fell precipitously through 2022. That describes a lot of companies, but it especially seems to describe DNA companies. Still, I wanted to know if 10x Genomics was a buy, and considering my LinkedIn account got spammed all last year with recruiters and ads for the company, I figured they were at least worth a look. The result? A resounding “eh.”

One of 10x Genomics’ big claims to fame is their ability to perform reads of long segments of DNA, as opposed to the shorter segments read by rival Illumina. Sequencing DNA gets less accurate the longer the DNA is, so Illumina and others use a technique of chopping the DNA into pieces, sequencing each piece, and putting them all back together. This usually works fine because there are enough overlapping pieces to make the puzzle fit: if you read 3 different pieces with the letters “AATT”, “TTGG” and “GGCC” then you can hazard a guess that the full sequence reads “AATTGGCC” and that the 3 pieces simply overlapped each other. This doesn’t work with long runs of a single letter or repeating patterns. If you simply have “AAAA”, “AAAA”, “AAAA” then you actually don’t know how long that string of As is. Those 3 reads could overlap on 2 letters and the result could be “AAAAAAAA”, or they could overlap completely and the result is simply “AAAA.” About 8% of the human genome is made up of these sorts of repeating patterns that Illumina and others are ill-equipped to read, which gives 10x an exploitable niche.
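You can see the problem with a toy version of that puzzle-piece logic (a greatly simplified sketch; real assemblers are far more sophisticated than greedy overlap-merging):

```python
def merge(a, b):
    """Merge two reads using the longest suffix of `a` that is a prefix of `b`."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return a + b   # no overlap found, just concatenate

print(merge(merge("AATT", "TTGG"), "GGCC"))   # AATTGGCC -- unambiguous
print(merge(merge("AAAA", "AAAA"), "AAAA"))   # AAAA -- the repeats collapse
# The true repeat could be anywhere from 4 to 12 As; short reads can't tell you which.
```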

Now, this is a genuinely interesting piece of equipment, and their barcoding of DNA segments to read them in the correct order is a nice bit of chemistry, but was this a company that was ever worth 21 billion dollars? In my opinion *no*. The COVID era was a bubble in a number of ways; the easy-money policies of the post-Financial Crisis era led to the super-duper-easy-money policies of the COVID era. Operation Warp Speed and other pandemic-focused funding sources meant that money was flowing into the biotech sector, and with most of the world still coming out of lockdown, investors were desperate to park their money in companies that were still able to operate. It seemed like the moment for biotech had come, and companies like 10x Genomics rode the wave to the very top. But from where I’m standing this was always an obvious bubble, and people were making claims about DNA companies that were woefully unfounded.

I’ve just written a lot about 10x Genomics, but all of this could apply to most any DNA/RNA startup around. They all had neat ideas, got woefully overvalued during the pandemic, and have since crashed to earth, taking shareholder value along with them. My question today, then, is why: why does it seem like Wall Street investors saw something in DNA/RNA companies that I, a biology researcher, never did? Part of that is that I’m just curmudgeonly by nature, but part of it is that I think a lot of investors have a sci-fi idea of genetics in their heads that isn’t really reflected in the field. I could just point to Cathie Wood and say “she doesn’t know Jack,” but I want to dig a little deeper into some of the strange narratives that surround nucleic acids.

First, “DNA as a coding language.” I wonder if computer scientists just latch onto the word “code” or something, because the genetic code is nowhere near being something that we can manipulate like computer code, and probably won’t be for decades. We’ve been inserting novel DNA into organisms since the 70s, but recently there’s been a spate of investors and analysts who believe that we’re on the cusp of truly programming cells, being able to manipulate them into doing everything we want as cleanly and as easily as a computer. This would definitely unlock a whole host of industries; it’s also not going to happen for a long while yet. DNA codes for proteins, and proteins are the functional units of most biological processes. Nothing you change in the DNA matters until it shows up in the proteins, and we are far, FAR from being able to manipulate proteins as easily as we do computers. You cannot simply say “let’s add a gene to make this wheat crop use less water”; you have to find a protein from another plant that causes it to use less water, insert that gene into wheat, do tests to make sure the wheat crop tolerates the new protein, alter the protein to account for unexpected cross-reactions, and then finally test your finished wheat product to make sure it works as designed. Any one of these steps could require an entire company to do, and each step could prove impossible and bankrupt the company working on it. We can’t just make de novo proteins to do our bidding because we don’t even know yet how to predict what a protein will look like when we code for it. Folding@home and more recent machine learning projects have helped us get partway there, but it will still take many Nobel Prizes before we can make de novo proteins as easily as we make de novo programs. So while there is a genomics revolution going on, it’s still an expensive and time-consuming one, and it’s not going to solve all our problems in a single go.

Second, “DNA as a storage medium.” I’ve said before that while DNA does store and transmit information, that information cannot be well integrated with our modern technology. The readout of DNA information is in RNA and proteins, while the readout of the circuits in your computer is photons on a screen or electrons in a modem. RNA and proteins do not easily produce or absorb electrons and photons, so having DNA communicate with our current technology is not currently doable. In addition, the time lag between reading DNA and making RNA/proteins is astronomical compared to the speed of information retrieval in semiconductors. I sometimes get annoyed by the seconds-long delay it takes to load a webpage, but I’d be tearing my hair out if I had to wait on the minutes-long delay for DNA to be transcribed into RNA! At this time I really don’t see any reason to use DNA as a storage medium, and I certainly don’t see a path to profit for any company trying to use it as such.

Third, curing genetic diseases. There is definitely a market for curing genetic diseases, let me say that first, but many of the hyped-up corporate solutions are not feasible and rely more on sci-fi than actual science. I’ve discussed how even though CRISPR can change a cell’s DNA, bringing the CRISPR and the cell together is much more challenging. The human body has a lot of defenses to protect itself from exogenous DNA and proteins, and getting around those defenses is a challenge. But in addition, I don’t think investors realize that DNA is not the end-all be-all of genetic diseases, and so they tack things onto the Total Addressable Market (TAM) of DNA companies that shouldn’t really be there. Then they get flummoxed when the company has no path to addressing its TAM. Valuing companies based on what they can’t do is a bad investment strategy that I see over and over again with DNA companies. As to genetic diseases themselves: there’s a truism in biology that “you are what your proteins are.” Although DNA codes for those proteins, once they’re made they act all on their own, and some of their actions cannot readily be undone. When a body is growing and developing, its proteins can act up in ways that cause permanent alterations, and after they’ve done so, changing the DNA won’t change things back. There are a number of genetic-linked diseases which are not amenable to CRISPR treatment because by the time the disease is discovered the damage has been done, and changing the DNA won’t undo that damage.

Finally, “move fast and break things” doesn’t work with DNA the way it does with computers. I’ve worked on both coding projects and wet-lab projects. When something goes wrong in my computer code, fixing it can be a long and arduous process, but I have tools available that let me know exactly what the code is doing every step of the way. I can step through the code line by line, find out exactly what went wrong, and use that knowledge to fix things. Nothing is so straightforward when working with DNA: your ability to bugfix is only as good as your ability to read the code, and reading DNA is a difficult and time-consuming process. Not only that, remember how I said above that we don’t know the exact relation between the DNA we put in and the protein product we get out? If I’m trying to make a novel protein using novel DNA and it doesn’t get made, what went wrong? I can’t step through the code on this one, because there’s no way to read out the activity of every RNA polymerase, every ribosome, or every post-transcriptional enzyme in the cell. I can make hypotheses and do experiments to try to guess at what is going on, but I can’t bugfix by stepping through the code; even using Green Fluorescent Protein as a print(“here”) crutch is difficult and time-consuming. And even when I do try to bugfix, the time lag between making a change and seeing the results can be weeks, months, or years depending on what system I’m working in, a far cry from how long it takes to compile code! A DNA-based R&D pipeline just doesn’t have the speed necessary to scale the way a coding house does: once you’ve got a program working, the cost of sharing it is basically zero and the cost of starting a new project isn’t that great. That speed isn’t available to DNA companies yet.

This was a lot of words not just on 10x Genomics but on DNA-based companies in general. The pandemic-era highs may never be seen again for many of these companies, much like how some companies never again saw the highs of the Dotcom bubble. I think it’s important for investors to take a level-headed approach to DNA-based companies and not get caught up in the sci-fi hype. Anyone can sell you an idea, it takes a lot more work to make a product.

A series of proposals for testing the validity of technical analysis (TA)

I’ve said before that I think TA is astrology, and I still haven’t seen any evidence to rid me of that belief. I’ve thought about genuine scientific experiments that could be done to see if it’s true and I’m wondering if people have already done them.

See if TA-knowers all move the same way by giving a bunch of them a chart and having them predict the forward movement of the stock based on that chart. Two key ways you know astrology/fortune telling is fake are 1. that it uses weasel words and vagueness to make predictions, and 2. that the same data can cause its practitioners to make wildly different predictions. In actual science, however, any two scientists should be able to take the same data and make the same prediction: if two bowling balls of different weights are dropped from the Eiffel Tower, which hits the ground first? Any physicist can tell you the answer. Now to be clear, second opinions in medicine do exist, but these occur because we often work with incomplete information and have to use priors and estimations for the rest. But TA claims that the chart is the information, so if the information is complete then the prediction should always be the same. So if 100 TA-knowers all make the same prediction using the same chart, then perhaps we can start treating this as a complete and testable theory. If they all draw different lines on it, then it becomes clearer that we’re dealing with astrology.
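If someone ran this, scoring it would be trivial. A minimal sketch, with the 100 predictions here being invented placeholders:

```python
from collections import Counter

# Hypothetical calls ("up", "down", "sideways") from 100 chartists shown
# the same chart. These numbers are made up for illustration.
predictions = ["up"] * 41 + ["down"] * 35 + ["sideways"] * 24

counts = Counter(predictions)
consensus, votes = counts.most_common(1)[0]
print(f"Most common call: {consensus}, agreement = {votes / len(predictions):.0%}")
# Near-100% agreement would mean TA at least behaves like a well-defined,
# testable theory. A scattershot split like this one looks like astrology.
```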

Find out if the TA of ETFs follows the TA of their underlying assets. The mechanisms of ETFs ensure that their price never deviates far from the price of their underlying assets, and if both ETFs and the securities they contain obey TA, then the movement of the two should correlate. Essentially you should be able to make predictions of the movement of an ETF by performing TA on the stocks that compose it, and I’d like to know if this is true.
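Here is one way that test could be operationalized. Since there is no canonical definition of “the TA,” this sketch uses a plain moving-average crossover as a stand-in signal, and `etf_prices`, `constituent_prices`, and `weights` are placeholders you would fill with real data:

```python
import numpy as np

def ma_signal(prices, short=20, long=50):
    """+1 when the short moving average sits above the long one, else -1."""
    prices = np.asarray(prices, dtype=float)
    short_ma = np.convolve(prices, np.ones(short) / short, mode="valid")
    long_ma = np.convolve(prices, np.ones(long) / long, mode="valid")
    n = min(len(short_ma), len(long_ma))          # align the two series by end date
    return np.where(short_ma[-n:] > long_ma[-n:], 1, -1)

def signal_agreement(etf_prices, constituent_prices, weights):
    """Fraction of days the ETF's own signal matches the weighted basket's signal."""
    basket = np.average(constituent_prices, axis=0, weights=weights)
    s_etf, s_basket = ma_signal(etf_prices), ma_signal(basket)
    n = min(len(s_etf), len(s_basket))
    return float(np.mean(s_etf[-n:] == s_basket[-n:]))
```

If TA is real and ETFs track their holdings, the agreement should sit close to 1.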

Correlation analysis. Most theories of the stock market claim that the movement of a stock price is uncorrelated with its previous prices. Just because a stock is down 50% doesn’t mean it’s dead or a bargain. If I’m going to believe TA, I’d like a TA-believer to prove to me that price movement is correlated with previous prices.
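This one is a standard calculation: compute the autocorrelation of returns at a few lags and see whether anything is reliably different from zero. A minimal sketch, assuming `prices` is a list of daily closes:

```python
import numpy as np

def return_autocorrelation(prices, max_lag=10):
    """Correlation between each day's return and the return `lag` days earlier."""
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(prices) / prices[:-1]       # simple daily returns
    return {lag: float(np.corrcoef(returns[lag:], returns[:-lag])[0, 1])
            for lag in range(1, max_lag + 1)}
# Under a random walk these hover around zero; if TA is right, at least
# some lags should come out reliably non-zero.
```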

A working, mathematical definition of “resistance” and “support.” I understand that these are TA terms, but I’ve asked 5 different TA people for a true definition of them and have gotten 5 different answers. If TA really is based on math then these terms need to be mathematically defined, not emotionally defined based on how someone wants to analyze a chart at that time.

These are just a few of the things I’d like to be demonstrated before I start believing in TA.

What was the best 10-year period to invest in the S&P 500?

I’m doing a small project right now looking at whether stop losses are actually useful in investing. When FTX blew up, it was noted that the traders there didn’t believe in stop losses, for which they were ridiculed on social media. But do stop losses actually help? Or are they more likely to kick you out of a volatile-but-profitable investment than save you from an unprofitable one? Well, I can’t answer that yet, but I can answer a different question.

To start my project, I downloaded 30ish years of S&P 500 data starting September 1990 and asked a quick question: what 10-year period gave the best return if you had invested in the S&P? Once I get the baseline return down, I can add in things like stop-losses and momentum strategies to see if a savvy investor could have improved their return with simple rules. Anyway, here’s the data:

I made a small program to estimate the return if you had bought $10,000 of S&P 500 stocks and simply held them for 10 years, selling them at the end of the 10th year. From this we can see that 1990 would have been by far the best year to start, as you would have been able to sell at the peak of the Dotcom Bubble. Just a couple of years later, however, and you would have sold into the Dotcom Crash instead, drastically lowering your returns. The worst years for a 10-year buy-and-hold were 1998-2000, as you would have sold into the teeth of the Financial Crisis. These are the only years where your 10-year return would have been negative. Then we can see 2008-2009 themselves as some of the best years to start investing, since you would have bought right at the bottom and ridden strong returns into 2018-2019.
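The core calculation is nothing fancy; something like this captures it (a simplified sketch that assumes roughly 252 trading days per year and takes a plain list of daily closing prices):

```python
WINDOW = 252 * 10   # roughly ten years of trading days

def ten_year_profits(prices, initial=10_000):
    """Profit of a $10,000 buy-and-hold position for every possible start day."""
    profits = []
    for start in range(len(prices) - WINDOW):
        shares = initial / prices[start]              # buy on the start day
        profits.append(shares * prices[start + WINDOW] - initial)
    return profits
```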

I hope to update the program soon to see if momentum strategies beat buy-and-hold, but for now this gives a good picture of the historical returns of the S&P 500. The average 10-year return was 100%, but with an 80% standard deviation. The absolute worst return would have come from starting March 30th, 1999: you would have bought into the Dotcom Bubble and sold into the Financial Crisis for a net return of -48%. The best 10-year return came from starting October 11, 1990, which would have had you buy very low and sell near the tippy top of the Dotcom Bubble for a 510% return. There are some wild swings with the buy-and-hold strategy, but the average is still very positive; we’ll see later if stop-losses can beat that.

Send troops to the Fed?

Pardon me for wading into Twitter Drama, but Rohan Grey is a remarkably unserious “intellectual” and I couldn’t help myself.

Before I start, let me share a tiny story from “Zen and the Art of Motorcycle Maintenance.” This book was a thoroughly unenjoyable read for teenaged me, but it has one anecdote that still sticks with me. If memory serves, there is a university that is being threatened with losing its accreditation due to repeated failures, and the students are naturally protesting, as this would make their degrees worthless. One student talks to the narrator and claims that the university in fact can’t lose its accreditation, because if someone tried to take it “the Governor would send the national guard to protect us!”

I shouldn’t have to spell out the ridiculousness, but I want to hit word count so I will. Accreditation isn’t held in a vault; it isn’t something you can protect with guns and soldiers. Accreditation is the trust that other institutions have in you, and while some of it is legally codified, most of its power is in the uncodified trust that a society is built on. You can’t protect accreditation with guns and soldiers any more than you can protect trust or friendship.

And so it was with bewilderment that I read an assistant law professor on Twitter making the same mistake as the nameless student from the book. Rohan Grey wants to do an end-run around the debt ceiling by having the Treasury mint a one trillion dollar platinum coin and deposit it in the Federal Reserve. This coin would then pay for the USA’s financial obligations without the need to borrow money. A big (and usually ignored) problem is that the Fed would have to accept the coin, and as Josh Barro writes, the Fed has expressed the opinion that this chicanery is illegal and undermines Fed independence. (Read Barro’s article, it goes into great detail as to why this idea probably wouldn’t work.) Undeterred, Grey thinks the Fed’s opinion doesn’t matter, and that if they refuse to accept the coin then Biden should send troops to the Federal Reserve and force them to accept it.

Grey’s mistake is thinking that guns can be used to enforce trust. The Federal Reserve has the trust of the markets, and its power to move markets is based on that trust as much as anything else. The Federal Reserve trades bonds and sets rates, but those bonds and rates have value because people trust the Fed to keep its word; Jerome Powell’s speeches about the Fed’s plans have as much or more power as any action taken by the Fed. Now imagine a scenario where troops are instructed to besiege and occupy the Federal Reserve, where Powell is held at gunpoint and forced to accept a one trillion dollar deposit from the Treasury which he and the Fed have gone on record as saying is illegal. Trust in the Fed would be shattered; nothing Powell said or did would matter anymore, because the troops (and by extension the President) would be running the show. Investors would flee from US government bonds, causing yields (and thus America’s cost of borrowing) to skyrocket, because America’s currency would have been debased against the will of its central bank and would now be at the whims of the President.

And you may say “that’s fine, I like Biden as President,” but do you like DeSantis? Do you trust that DeSantis wouldn’t be willing to send his own troops to force his will on the Fed? Would you buy a 10-year government bond if there’s a chance that DeSantis or Trump will be controlling it in 2 years? Furthermore, Powell’s remarks on inflation would become worthless. Maybe Biden doesn’t like the rate hikes that Powell needs to do, or maybe when the election comes he wants to juice the economy. So what’s to stop him from leaning over and reminding Powell who’s boss? What’s to stop Trump or DeSantis from doing the same? People like Grey once griped that Trump’s complaining caused the Fed to pause rate rises in 2019 (ignoring, of course, that inflation went under the Fed’s 2% target, which should have caused them to pause rate hikes all on its own). Now Grey wants the Fed wholly subsumed by the President, so Trump would be able to do whatever he wanted.

Once you’ve sent troops to the Fed, you can’t unring that bell. Investors invest in American dollars and American bonds in large part because they trust the Federal Reserve to do its duty with regard to the currency. Shattering that trust with soldiers would shatter investor confidence in the American economy as a whole. You’d have a trillion shiny dollars, but they wouldn’t be worth a penny.

Beam Therapeutics: what’s so special about prime editing?

Beam Therapeutics is another biotech company often mentioned in the same vein as Ginkgo Bioworks, Amyris, and Twist Bioscience, and since I’ve blogged about all three of those I might as well blog about Beam. Unlike Ginkgo and Twist, Beam isn’t a shovel salesman in a gold rush; they’re actually trying to create drugs and sell them. In this case they’re trying to break into, or perhaps even create, the cutting-edge industry of medical genetics: changing people’s genes for the better. I’ll briefly discuss the science of their technology, but I feel like the challenges surrounding that technology deserve the most focus.

Beam has a novel form of CRISPR/Cas gene editing called prime editing. In both normal CRISPR/Cas and prime editing, genetic information is inserted into a living organism by way of novel DNA, guide nucleotides, and a DNA-cutting enzyme. The guide nucleotides direct the machinery to the specific part of the genome where it is needed, the DNA-cutting enzyme excises a specific segment of host DNA, and hopefully DNA repair mechanisms allow the novel DNA to be inserted in its place. These techniques always rely in part on the host’s own DNA repair mechanisms: you have to cut DNA to insert novel DNA, and that cut must then be stitched back up. Most CRISPR/Cas systems create double-stranded breaks while prime editing creates just single-stranded breaks, and this greatly eases the burden on the host DNA repair mechanisms, allowing inserts to go in smoothly and with far less likelihood of catastrophic effects. Double-stranded breaks can introduce mutations, cause cancers, or cause a cell to kill itself to save the rest of the body from its own mutations and cancers. Because Beam is using prime editing, their DNA editing should have fewer off-target effects and far fewer chances to go wrong.

So the upside for Beam is that they’re doing gene editing in what could be the safest, most effective way possible. The downside is that gene editing itself is still just half the battle.

When I look at a lot of gene editing companies, I quickly find all kinds of data on the safety of their edits, the amount of DNA they can insert or delete, and impressive diagrams about how their editing molecules work. I rarely see much info about delivery systems, and that’s because delivery is still something of an Achilles’ heel for this technology. In a lab setting you can grow any cell you want in any conditions you want, so delivering the editing machinery (the DNA, the guide nucleotides, the enzymes) is child’s play. But actual humans are not so easy: our cells are not readily accessible, and our bodies have a number of defense mechanisms that have evolved to keep things out, and that includes gene editors. To give you an idea of what these defenses are like, biology has its own gene editors in the form of retroviruses, which insert their DNA into organisms like us in order to force our bodies to produce more viral progeny, a process which often kills the host. Retroviruses package their editing machinery in a protein capsid which sometimes sits inside a lipid (aka fatty) envelope, and so the human body has a lot of tools to recognize foreign capsids and envelopes and destroy them on sight. These same processes can be used to recognize and destroy a lot of the delivery systems that could otherwise be harnessed for gene editing.

Some companies side-step delivery entirely: if it’s hard to bring the gene editing to the cells, why not just bring the cells to the gene editing? This was the approach Vertex Pharmaceuticals used in its sickle cell anemia drug: blood stem cells were extracted from patients and edited in a test tube before being reinserted into the patients to grow, divide, and start producing non-sickled red blood cells. This approach works great if you’re working on blood-based illnesses, since blood cells and blood stem cells are by far the easiest to extract and reinsert into the human body. But for other illnesses you need a delivery method which, like a virus, is able to enter the organism and change its cells’ DNA from within.

So if Beam Therapeutics wants to deliver a genetic payload using their prime editing technology, they’re going to need a delivery system which obeys the following rules:

  • It must be able to evade the immune system and any other systems which would degrade it before it finds its target cells
  • It must be able to be targeted towards certain cells so that it doesn’t have off-target effects
  • It must be able to enter targeted cells and deliver its genetic package

So let’s look at the options.

Viruses have already been mentioned, and they can be engineered in such a way as to deliver a genetic package without causing any disease. However, as mentioned, they are quickly recognized and dispatched by the immune system whenever they are found, their protein shells being easy targets for our bodies’ adaptive immune system. Normal viruses get around this by reproducing enough to outcompete the immune system that is targeting them, but we don’t want to infect patients, we just want to cure them, so using viruses that reproduce is off the table for gene editing.

A variety of purely lipid-based structures exist which can ferry a genetic package through the body. Our cell membranes are made of phospholipids, and phospholipids will naturally form compartments whenever they are immersed in water. Phospholipids also have a propensity to fuse with each other, allowing their internal compartments to be shared and anything inside them to move from one to the other. Packaging a gene editor inside phospholipids would be less likely to trigger the immune system, and they can be created in such a way that they target a particular cell type to deliver their genetic package. However, random phospholipids can be easily degraded by the body, limiting how long they can circulate to find their target cell. Furthermore, their propensity to fuse is both a blessing and a curse: it allows them to easily deliver their genetic package to targets, but also makes them just as likely to deliver it to any random cell they bump into instead. This means a lot of off-target delivery and the potential for plenty of off-target effects.

At the other end of the scale are nanoparticles made of metals or other compounds. Many methods exist to attach drugs to the outside of a nanoparticle and target that nanoparticle to a cell; however, this in turn leaves the drug free to be interacted with and targeted by the immune system. For many drugs this is fine, but prime editing uses foreign proteins, DNA, and free nucleotides, and the body is downright paranoid about finding those things hanging around, since that usually means the body has either a cancer or an infection. To that end, the body destroys them on sight and triggers an immune response, which would severely curtail any use of nanoparticles to deliver a genetic package. Nanoparticles can also be designed hollow to allow the prime editing machinery to fit snugly inside them, but this can lead to the machinery just falling out of the nanoparticle in transit and being destroyed anyway. You might say “well, why not a hollow sphere that fully surrounds the machinery so it can’t fall out?” But the machinery does need to get out eventually if it wants to edit the cell, and if it’s encased in a solid sphere of metal it can’t do that. Enzymes to breach the metal would be cool, but are impractical in this case.

Between these two extremes we have a number of structures made of lipids, proteins, polymers, or metals, and they all struggle with at least one of these points. They can’t encase the machinery, or they can’t easily deliver the machinery, or they trigger an immune response, or they degrade easily, or they often cause off-target delivery. Delivery to the target is Step 0 of both prime editing and gene editing in general, and for the most part this step is still unsolved. I’ve attended several seminars where viral packages for delivering CRISPR/Cas systems were discussed, and while these seem to be some of the most promising vectors for gene editing, they still have the problem of triggering the body’s immune system and being destroyed by it. The seminars I’ve watched all discussed mitigating that problem, but none could sidestep it entirely.

I do believe that Beam Therapeutics has technology that works; their prime editing is clearly a thing of beauty. Beam is currently working on treatments for sickle cell anemia, as is Vertex Pharmaceuticals, and as are most gene editing companies, because it’s a blood-based disease that is amenable to bringing the cells to the gene editing machinery instead of having to go the other way around. But for anything where you can’t bring the cells to the editing, Beam isn’t quite master of its own fate, because for prime editing to reach the cells of the body it will need to be delivered in some way, and currently that’s an unsolved problem. Even a system that works to deliver some packages won’t necessarily work for all of them, as size and immunity considerations change with the specific nature of the genetic package you’re delivering. I would also be worried about Beam’s cash burn: they are essentially pre-revenue and will need to do a lot of research before any of their drugs get to market or can be sold to a bigger player. I think they can survive for a long while by selling stock, since their price has held up a lot better than other biotechs I’ve blogged about, but that’s good for them and not for a shareholder. As long as interest rates keep going up, I’ll treat pre-revenue companies with a wary eye.

People buy stocks instead of ETFs because their values are different

I enjoy talking stocks, and whenever you hang around on the finance parts of the internet, you’ll inevitably run into the following sentiment:

Why are you even buying individual stocks? You should just buy a broad-market ETF. You’ll never beat the market so ETFs are the best and most reliable way to grow your money.

Bogleheads et al

I’ve written about the Efficient Market Hypothesis before and about the difficulties of stock picking. I understand and to an extent agree with the arguments that people in general cannot beat the market reliably over any significant length of time. Any good runs are transitory, purely luck-based, and eventually fall back to earth (see $ARKK 2016-2021 and then 2021-today). But that isn’t the primary value most stock pickers are going for; they’re going for potential return, not expected return.

When you buy a broad market ETF, what is your expected return? Well, the ETF tracks the whole market and the market goes up 5-10% every year, so that’s the return you can expect. Some years you’re down 20% (like 2022), some years you’re up 30% (like 2019), but on average you get a 5-10% yearly return that will slowly grow your money. Slowly is the key word: investing in the stock market probably won’t make you rich; for the average American it won’t even make them a millionaire over the course of their entire life, but it will give you a small leg up in the long run with very little risk to yourself.

So what’s the expected return for stock picking instead? Well, definitely less than 5-10%. The efficient market hypothesis and significant amounts of experimental data show that stock pickers broadly lose to the market over any significant timescale. They might be up 100% one year but are equally likely to lose it all the next. But the key here is that the expected return is not everyone’s return. The expected return is just the average of everyone’s return, and while on average people lose to the market, there are always a lucky few who beat the market, and some of them win big. There is at least one person out there who went all in on Tesla stock in 2013, sold in 2021 when Musk started acting weird, and made a truly life-changing amount of money, and everyone who stock picks hopes to be like that person. Is it likely? Of course not, but it’s possible and that’s what keeps people going.
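A toy example makes the distinction concrete (every number here is invented):

```python
# Invented numbers, purely to illustrate expected vs. potential return.
outcomes = [12_000] * 70 + [25_000] * 29 + [1_000_000]   # 100 stock pickers' results
index_result = 26_000                                     # the boring ETF holder

mean_picker = sum(outcomes) / len(outcomes)               # $25,650
print(f"Average picker: ${mean_picker:,.0f}  vs  index: ${index_result:,.0f}")
# The average picker slightly trails the index, 70 of them badly trail it,
# and exactly one got the life-changing outcome everyone was chasing.
```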

This may sound illogical to a Boglehead, and they may scoff and say the stock picker is no different than the casino gambler, but let’s try another example. What is the expected return of starting a small restaurant? Well, it takes a lot of capital investment to start a restaurant and 80% of them fail within the first 5 years of operation, so it’s safe to say that the expected return of a restaurant is actually negative. On average a person starting a restaurant will end up losing money, so are restaurateurs as illogical as stock pickers? I’d argue no; the expected return isn’t as important to them as the potential return. A restaurant is an opportunity to make a life-changing amount of money, and while it’s clearly very uncommon, it happens often enough to keep enticing people to try. The Boglehead could just as easily state that it’s more efficient for restaurateurs not to open restaurants at all and instead invest in broad market ETFs, but if no one ever took risks like that then we’d never have new businesses at all.

Big gains require big risk, and I’d argue being content with your lot and investing like a Boglehead is no more “logical” than going all in on smart but high-risk plays; it’s simply a question of values.