Biotech update: Vertex Pharmaceuticals and CTX001

I’ve said before that I don’t feel like I can reasonably invest in any biotech company since they all feel like a gamble, but for the gamblers out there I took a look at the science behind Vertex Pharmaceuticals (VRTX).

Vertex has a drug called CTX001 which has been in the news as it seeks FDA approval to treat sickle cell anemia and beta thalassemia.  Sickle cell anemia happens when the hemoglobin in your blood has a mutation that makes it fold into the wrong shape.  This makes red blood cells become sickle shaped instead of their usual donut shape, and these sickle-shaped cells get caught in the tiny capillaries of your body.  That causes damage and a lack of energy, as blood isn’t able to efficiently transfer nutrients and waste into and out of your cells.  Sickle cell anemia reduces one’s life expectancy to around 40-60 years.  Beta thalassemia is another hemoglobin disease, this time caused by reduced production of hemoglobin itself.  Less hemoglobin means fewer nutrients and less waste can be transferred by the blood, so the body can’t work as efficiently.  Beta thalassemia in its major form has a life expectancy of around 20-30 years.

Despite the fact that both diseases are caused by mutations affecting hemoglobin, the mutations are very different from each other, so it surprised me that both were being treated by a single CRISPR drug.  How CRISPR works is that a protein uses a piece of guide RNA to very specifically target itself towards an area on a gene of interest.  The protein can then cut into that gene of interest, and if a separate piece of template DNA is supplied, that DNA can be incorporated into the gene by the cell’s DNA repair machinery.  This process is somewhat random in nature: it’s hard to ensure that your piece of template DNA gets incorporated, and even harder to ensure that it is incorporated in just the right orientation, just the right position, and just the right way so as not to cause problems down the line.  Since sickle cell and beta thalassemia are caused by mutations in very different places within the hemoglobin gene, a CRISPR drug that is targeted towards the sickle cell mutation site should not be able to also hit the beta thalassemia mutation site.

But the trick is that CTX001 isn’t targeting hemoglobin, it’s targeting fetal hemoglobin.  When a baby is in the womb, it needs to take oxygen from its mother’s blood stream to survive.  If a baby’s hemoglobin were the same as its mother’s, this process would be inefficient: both the baby’s and mother’s hemoglobin would bind to oxygen equally well, and not enough oxygen would flow from the mother’s blood into the baby’s.  It would be like a tug of war where both sides are of equal strength.  However, fetal hemoglobin binds to oxygen more strongly than adult hemoglobin, and this ensures that a baby can take the oxygen it needs from its mother’s blood stream.  Fetal hemoglobin usually stops being produced around the time the baby is born, and the body switches over to purely adult hemoglobin by around six months after birth.  What CTX001 does is try to switch the production of fetal hemoglobin back on in people suffering from sickle cell anemia and beta thalassemia.  If they can produce fetal hemoglobin instead, it can compensate for the fact that their normal hemoglobin isn’t working properly, which should reduce their symptoms and prolong their lives.

How CTX001 does this is by altering the promotion of the fetal hemoglobin gene.  The promoter region of a gene is the segment that helps the gene get transcribed into new mRNA, and that mRNA then gets translated into a new protein.  The promoter of fetal hemoglobin does not usually allow the gene to be transcribed into adulthood, so no fetal hemoglobin gets made.  But altering the promotion of the gene would allow it to be transcribed, and thus translated, and so fetal hemoglobin would be produced in the body.  Now here’s where it gets a bit tricky: they aren’t actually altering the promoter region of fetal hemoglobin, but rather the promoter region of another gene called BCL11A.  I wanted to explain how promoters work, but there’s more to explain now because biology is complicated, so bear with me:

The reason the promoter region of fetal hemoglobin doesn’t normally allow transcription (and thus production of the gene) is because of a repressor called BCL11A.  BCL11A is a protein that sits on the promoter of fetal hemoglobin and refuses to budge; this prevents any other protein from accessing the fetal hemoglobin gene and thus prevents fetal hemoglobin from being transcribed.  Now BCL11A is produced by its own gene, and CTX001 alters the promoter region of BCL11A in such a way that no BCL11A can be produced.  Without BCL11A, there is nothing to repress the promotion of fetal hemoglobin.  Without the repression of fetal hemoglobin, its promoter region is accessible and it can be transcribed.  With the transcription of fetal hemoglobin, the fetal hemoglobin protein will be produced in the body.  And with the production of fetal hemoglobin, the diseases caused by malformed adult hemoglobin (sickle cell anemia and beta thalassemia) should be reduced.
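That chain of double negatives is easy to lose track of, so here’s a tiny sketch of the logic in code.  This is just my own toy model for keeping the chain straight, not anything resembling Vertex’s actual design:

```python
# Toy model of the CTX001 logic chain described above; purely illustrative.
def fetal_hemoglobin_produced(bcl11a_gene_edited: bool) -> bool:
    bcl11a_present = not bcl11a_gene_edited  # editing BCL11A's promoter knocks out BCL11A
    hbf_promoter_blocked = bcl11a_present    # BCL11A sits on the fetal hemoglobin promoter
    return not hbf_promoter_blocked          # accessible promoter -> transcription -> protein

print(fetal_hemoglobin_produced(False))  # False: BCL11A represses fetal hemoglobin as usual
print(fetal_hemoglobin_produced(True))   # True: repression lifted, fetal hemoglobin gets made
```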

But it’s still not over!  How the hell would CTX001 find every red blood cell in the body and do its thing?  It doesn’t have to!  Hematopoietic stem cells are the stem cells which produce red blood cells (and it’s red blood cells which carry the hemoglobin or fetal hemoglobin in the blood).  Hematopoietic stem cells can be extracted from the patient’s blood and then altered with CTX001 so that they will produce fetal hemoglobin.  The cells which are successfully altered can then be transferred back into the patient.  Before that happens, the patient is given busulfan, which kills off some of the stem cells still producing the malformed hemoglobin so that the new stem cells producing fetal hemoglobin can reproduce and become the majority.  The patient is then monitored for improvements in their sickle cell anemia or beta thalassemia condition.

So this process is long, involved and complicated.  Just to list all the things that could go wrong: when altering the promoter, the DNA could accidentally be mutated towards being cancerous; killing off so many stem cells using busulfan could have harsh side effects; the infused hematopoietic stem cells might not reproduce and become the majority; and even then the promoter might not be altered enough for fetal hemoglobin to become the majority of the hemoglobin in the body.  But I’m sure every step is heavily monitored by Vertex during the treatment process.  So is Vertex Pharmaceuticals a buy?  I have no idea.  If you believe the Efficient Market Hypothesis then all their upside is already priced in, but they’re in phase 3 of clinical trials, and if you’re a gambling man I see nothing wrong with their scientific thesis.  So idk, go ahead?

Biotech seems far more speculative than other tech

There’s a mantra that gets repeated by everyone around me: biotech is the next big thing.  I’m willing to believe that on average the biotech industry will probably grow faster than the market, maybe even faster than the tech industry, over the next 20 or 30 years.  What I’m less enthused by is the prospect of trying to pick and invest in the winners of that market and not get stuck holding the losers.  I feel like biotech in general will have a much larger standard deviation on its returns: a small number of companies will make out like bandits and a very, very large number of companies will make nothing.  This is generally true in most markets, but in biotech you have the added barrier of the government to think about.

When a tech company brings a new product to market, they will design it, test it, then try to sell it to consumers.  But when a biotech company brings a new product to market, they often have the added hurdle of the government.  They need to design a product, test it, ask the government for permission to sell it, and then sell it to consumers.  These consumers are usually healthcare patients because the product is usually a drug or medical device.  The government in this case is protecting us from bad products in healthcare, but in turn this puts up a barrier to entry that ensures that only a few products get through and collect all the money in the market.  There’s a large market for crappy but cheap smartphones that retail for far less than an iPhone or a flagship Android; there isn’t any market for crap drugs that only “sort of” cure your disease.

50 years ago biotech’s second biggest area was agribusiness, but today the biggest movers and shakers are all related to medicine in some way.  Everyone is working in an industry where money only comes in if you can improve the health of a patient.  Even the non-medical companies, the “shovel salesmen” of the biotech gold rush, only sell products to companies which are themselves trying to make a drug or a device that will prolong the life of a patient.  So any biotech giant I might want to invest in, be it Pfizer or Merck or Johnson and Johnson, feels like a crapshoot with the FDA.  If Pfizer’s next biggest drugs don’t get approval, Pfizer’s stock will go way down.  And if the FDA approves a “better Tylenol” for the mass market, then Johnson and Johnson could drop.  So biotech feels like I’m investing in the future of the FDA more than I am in the future of the market.

And then there’s Thermo Fisher, the biggest shovel salesman of the biotech gold rush.  They make the products used in labs all over the world; I know even my lab uses a lot of Thermo Fisher brand products.  Even here the future seems less certain than it is for, say, Amazon or Google, because all the labs which buy Thermo Fisher products are still at the whims of the FDA.  Everyone buys polypropylene tubes from Thermo Fisher, but what if the FDA decides polypropylene leaves behind microplastics which harm patients and mandates that polypropylene never be used in medical devices or drug manufacturing?  Then Thermo and every company like them would be scrambling for a substitute, and there’s no way of predicting that Thermo would come out of that mess the victor.  So shovel salesmen make for safer, but by no means safe, bets.

And finally there are the small players in biotech, the startups and mid-sized companies which hope to build the products of the future.  They are the most speculative companies of them all because they’re often pre-revenue companies hoping that whatever drug or device they own the IP for can get through the FDA’s hurdles and reach the mass market.  These hurdles are very high, and there’s no money in only getting past the first few just to fall at the last one.  So when you invest in a company like that you’re investing in a business of hope and hype, and since even the greatest experts in biotechnology can’t predict which drug or device will work for patients, there’s little chance of someone like me making all the right predictions.

So I guess biotech might be the future, but the future is too murky to invest in.  I’d keep my money in biotech ETFs and hope for the best.

Technology is supposed to be deflationary

Elon Musk and Cathie Wood are complaining about deflation again.  For the most part they’re just sad that the Fed’s actions have cut off the flow of cheap money, reducing the price of stocks and thus reducing their total wealth.  But there is a tiny kernel of truth within their whining: technology is deflationary by nature, and our monetary policy should be prepared to deal with it.  But what does it even mean for technology to be deflationary?

I’d like to go back to a post I did on dividends for an example here.  Let’s look at the Oil Shock of the 70s as a good example of an inflationary period.  The rise in the price of oil led to inflation as companies and people who still needed oil bid up its price in order to compete for what little was left to go around.  This in turn pushed inflation into other sectors: the lack of oil meant there was a lack of goods that relied on oil, so the prices of those goods were bid up as well.  If we take the example of a company which uses oil to make certain goods, how do they deal with the oil shock?

Most directly, they can continue to buy oil at a high price and raise the price of their goods to compensate.  As long as every other company in their sector is also forced to raise prices, the company will survive by pushing inflation onto their customers.  But if the competitors making the same goods are not affected by the price of oil, then this strategy won’t work and the company will just bleed market share into bankruptcy.

Alternatively, they can look for ways to reduce the amount of oil they use per unit of product.  In this way they can try to keep their prices low while their competitors’ prices are forced to rise, thereby gaining market share.
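To make that concrete, here’s a back-of-the-envelope sketch with made-up numbers (they aren’t from any real company, just an illustration of the two strategies):

```python
# Made-up numbers: a company needs oil to make one unit of its product.
oil_price_before = 20   # $ per barrel before the shock
oil_price_after = 60    # $ per barrel after the shock
barrels_per_unit = 2.0  # oil used per unit of product

cost_before = oil_price_before * barrels_per_unit        # $40 of oil baked into each unit
cost_no_innovation = oil_price_after * barrels_per_unit  # $120 per unit -> must raise prices
cost_with_innovation = oil_price_after * (barrels_per_unit / 3)  # cut oil use to a third -> $40

print(cost_before, cost_no_innovation, cost_with_innovation)  # 40.0 120.0 40.0
```

In this toy example the innovator’s costs stay flat while the laggard’s triple, which is exactly where the deflationary pressure in the next paragraph comes from.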

In a very real way, reducing the amount of oil used to create products requires some sort of innovation in technology, the creation of things like electric cars and nuclear power plants, so that less oil is being demanded and more goods are being supplied.  That decrease in demand and increase in supply causes deflation as prices drop.  Remember that this is why some neoliberals pushed back against price controls and rationing during the oil crisis: those things suppress the market forces which would otherwise cause people to invest in innovation and trigger deflation.

So today we don’t have an oil crisis, but in Europe we have a gas crisis, and European countries have also declared their intentions to accelerate the gas crisis by subsidizing demand instead of reining it in.  The problem here is that the government will pay the cost of this gas inflation, so there’s no reason for market actors like companies to change their behavior or invest in alternative technologies.  Perhaps the governments themselves will try to force investment in alternative technologies, but I’m skeptical they’ll do as well as the market would.

So what does all this mean? Well, if you believe that we’re on the cusp of a technological revolution, then it’s true that the Fed could accidentally flip us into deflation without even trying. On the other hand, one of the biggest drivers of inflation this year, energy, is being subsidized by governments with price caps or tax reductions, so companies and individuals aren’t being forced to invest in new technology in order to limit their energy use. Technology is supposed to be deflationary, but that’s no guarantee.

Small coding update: what I’ve done with Unity

A while ago I said I wanted to get back into coding.  I’ve only been doing an hour a week or so but I do have some successes.  Here’s what I’ve got and here’s what I still want to do:

I’ve got a few dispensers that I have labeled “gunpowder,” “nitroglycerin” and “TNT”.  Each dispenser will dispense particles corresponding to its name.  Gunpowder is set to explode with low force, nitroglycerin explodes with medium force, and TNT explodes with high force.  I have a button which dispenses particles from whichever dispenser I choose, then another button makes all the particles “explode.”  If only a little gunpowder is dispensed, the explosion is kind of small.  If a large amount of nitroglycerin and TNT is dispensed, it’s a big impressive explosion.  Then I have a button to erase all the particles and dispense new ones.
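For the curious, the underlying logic is roughly this.  The actual project is in Unity, so this Python sketch is just the idea rather than my real code, and the force numbers are arbitrary:

```python
# Each explosive type maps to a per-particle force; the total blast scales with what's dispensed.
EXPLOSIVE_FORCE = {"gunpowder": 1.0, "nitroglycerin": 3.0, "TNT": 5.0}  # arbitrary units

def total_blast_force(dispensed):
    """dispensed: dict mapping explosive name -> number of particles in the scene."""
    return sum(EXPLOSIVE_FORCE[name] * count for name, count in dispensed.items())

print(total_blast_force({"gunpowder": 10}))                 # 10.0 -- a small pop
print(total_blast_force({"nitroglycerin": 50, "TNT": 40}))  # 350.0 -- a big impressive boom
```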

What I still want to do are some things I think are much harder.  I want a way for the player in-game to create entirely new explosive particles with more or less explosive force.  What if I want them to invent C4, which explodes even better?  I also want a way to “centralize” the explosion.  Currently every particle moves in an entirely random direction, but really they should all explode outward from whatever point in space the explosion began at.  Once I can centralize the explosion I can make explosions that have multiple starting points, and from there I want parts of the game to let the player learn how to use explosive shape to perform certain tasks, and thereby gain the resources to build better explosives to perform new tasks.  Also, eventually I’d like to put in some amount of “control,” by which certain explosives (like gunpowder) explode easily even when you don’t want them to, while others (like C4) are very stable and don’t explode unless you really make them.
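For the “centralized” explosion, the math itself is simple even if wiring it into the game isn’t.  Here’s a sketch, again in plain Python rather than my Unity code, of pushing every particle directly away from a blast origin:

```python
import math

def blast_velocity(particle_pos, origin, force):
    """Velocity vector pointing from the blast origin out toward the particle."""
    dx, dy, dz = (p - o for p, o in zip(particle_pos, origin))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-6  # avoid division by zero at the origin
    scale = force / dist                                   # normalize the direction, then apply force
    return (dx * scale, dy * scale, dz * scale)

# A particle 2 units to the right of the blast gets pushed further right:
print(blast_velocity((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), force=5.0))  # (5.0, 0.0, 0.0)
```

Multiple starting points would then just mean summing one of these vectors per origin, and a falloff with distance could be added by dividing the force by the distance again.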

So all that’s to say I still have a long way to go.

Don’t just mindlessly avoid things that are dangerous

This post may be a little weird, but I didn’t know how to title it. I want to talk about hazards in science and how they need to be handled. The key point I want to make is that science by its nature requires us to work with obscure and sometimes dangerous chemicals, but they shouldn’t be feared or avoided; rather, we should be aware of the dangers and use those chemicals with the proper precautions.

At a previous lab I worked at, we had to wear special gloves when handling one of the chemicals we used. This chemical was toxic enough to seep through your skin and into your bones, where it would begin leaching out the calcium, and because of its formulation it would also seep through normal lab gloves. So we wore special safety gloves when handling it and took special precautions: we always wore two pairs of gloves over each other, and if we ever noticed we had spilled any we would immediately remove our gloves and start washing our hands. These precautions were the ones endorsed by the National Science Foundation and pretty much anyone who had ever worked with this chemical, and in all my time working with it we never had anyone harmed by it, thanks to our safety precautions.

At one point a visiting scientist was working in our lab alongside me, and his experiment required him to use this toxic chemical. I could tell he was nervous and unsure of himself: he was wearing two sets of gloves but didn’t want to touch the bottle in order to pour the chemical into his reaction vessel. He kept saying that he didn’t understand if he was doing it right and wanted to know if we had any special tool or instrument that would pour the chemical for him. Finally I simply took the bottle containing the chemical and poured it myself, saying to him, “you don’t lack understanding, you just lack confidence.”

I think the overcautious approach the visiting scientist had may have come from him misunderstanding the repeated emphasis on safety that we put out. Yes, we work with dangerous chemicals and we have to be safe when using them, but overestimating a danger is as inaccurate as underestimating it, and proper lab safety doesn’t mean avoiding the lab work at all costs. We use these chemicals because we have to, they’re the only ones with the right properties to work in our experiments, and so any scientist needs to have the confidence and capability to use them himself. A healthy amount of precaution is good, but if it makes you too scared to pick up a bottle then you’ve gone too far. You have to be able to read the scientific literature on a chemical and understand how dangerous it actually is, so you can use it when you need it.

I know this post was a bit rambly, but it’s something I’ve been thinking about.

Why was everyone in the 60s so high on Supersonic air travel?

I get a small sense of morbid schadenfreude reading old books on economics.  Occasionally the authors make some of the most insightful predictions I’ve ever read about the nature and direction of the economy of their future (our past), but more often they miss wildly and I get to feel superior while reading a book on the bus.  I’ve now noticed a pattern among writers from the 60s: a whole lot of people expected supersonic air travel to be the Next Big Thing.  I’ve already written about how The American Challenge predicted it as one of the most important challenges that Europe needed to invest in.  I’ve now started reading The New Industrial State by John Kenneth Galbraith, in which he singles out supersonic air travel as “an indispensable industry” of the modern economy.  As I’ve noted before, supersonic passenger planes never quite took off as advertised, but it’s a fun little exercise to look at why people might have expected them to do better than they did.

At first, supersonic travel seems like nothing less than the next logical step in human travel.  First we walked, then we invented wheels to carry our stuff, then we built ships, then railroads, then automobiles, then planes.  Each step in the evolution of human transportation seemed to bring an increase in speed and thus a huge economic advantage, so it seemed only natural that supersonic travel would follow this pattern.  But I think the constant increases in speed blinded people to the more important increases in efficiency.  Airplanes are much faster than cars and ships, yet to this day far more international trade is conducted by land and sea than by air.  In order for airplanes to compete as a mode of travel, they not only had to be faster but the gain in speed had to outweigh the increase in cost.  For moving people around, that trade-off is easy to justify, as none of us wants to sit on a boat for 4 weeks to get to our destination.  But for moving cargo the trade-off is much harder, because the cargo doesn’t care how fast it’s going and the cargo’s owner only cares how much fuel he has to spend moving it from A to B.  So speed only leads to efficiency in some cases; in others, the higher cost of fuel means more speed brings less efficiency.

The same dichotomy between speed and efficiency exists for supersonic vs subsonic planes.  The supersonic Concorde could of course do a transatlantic route in just under 3 hours, and this gain in speed was appreciated by its passengers.  But the far greater gain in efficiency came from planes like the Boeing 747 and other “Jumbo Jets” that could take hundreds of passengers across that same route using significantly less fuel per passenger.  That meant a ticket on a 747 could be a small fraction of the price of a Concorde ticket, and there just weren’t enough ultra-high-class passengers to make the Concorde cost-efficient.
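A rough, purely illustrative calculation shows why fuel per seat was the metric that mattered.  These are round numbers I made up for the sake of the argument, not actual Concorde or 747 figures:

```python
# Illustrative numbers only -- not real aircraft data.
def fuel_per_passenger(fuel_burn_kg, passengers):
    return fuel_burn_kg / passengers

concorde_seat = fuel_per_passenger(fuel_burn_kg=90_000, passengers=100)   # 900 kg per seat
jumbo_seat    = fuel_per_passenger(fuel_burn_kg=120_000, passengers=400)  # 300 kg per seat

print(concorde_seat / jumbo_seat)  # 3.0 -- the supersonic seat burns 3x the fuel in this toy example
```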

It just seems like nobody did their due diligence on a cost-benefit analysis for supersonic transportation, or instead they looked ahead with starry-eyed wonder and proclaimed that “technology” would in some way ensure that supersonic travel was made efficient enough to compete.

Science thought: all of proteomics is based on shape

You are what your proteins are.  That was the maxim of a biochemistry teacher I had: proteins are the molecules performing all your bodily functions, and any genetic trait or variation will normally not affect you unless it can in some way affect your proteins.  But proteins themselves can be difficult to wrap your head around, even for trained biochemists.

I thought about this conundrum while listening to a discussion between my peers.  A collaborator has a theory that a certain protein and a certain antibody will bind to each other, and they have demonstrated this to be true via Western Blot.  On the other hand when we image the samples using electron microscopy, we don’t see them binding.

Binding, like all protein functions, depends on the shape of the protein or more specifically a combination of shape and charge.  You may have seen gifs of a kinesin protein walking along microtubules, that only happens because kinesin has the right shape and the right charges to do so.  If kinesin was shaped more like collagen (long, thick rods) then it wouldn’t be able to move at all, and if collagen was shaped like ribosome proteins (globular and flexible) then it would never be able to be used as structural support.  Each protein can perform its job only because it is shaped in the correct way.

Shape also determines protein interactions.  You may have heard of how antibodies can bind so tightly and so specifically that they can be used to detect even tiny amounts of protein.  An antibody will detect a protein by binding to some 3D shape that makes up part of the protein.  An antibody that detects kinesin might bind to one of its “legs,” an antibody that detects collagen would have to bind to some part of its rod-like structure, and so on.  That’s important because proteins can change their shape.  If a protein is boiled or put in detergent, then its shape will disintegrate and it will become more like a floppy noodle of amino acids.  Now there are some antibodies that can only bind to a protein when it’s been disintegrated into a floppy noodle, but those same antibodies would not detect the protein when it’s in its “native” shape.  Because, as you might expect, the native shape of kinesin (two feet, able to move) looks nothing like the floppy noodle it turns into when it’s boiled and put in detergent.

So back to the mystery above: there is an antibody that binds to a certain protein in a Western blot, but we can’t make it bind in electron microscopy.  Well, Western blotting first requires boiling the protein and adding detergent to run it through a gel, while electron microscopy keeps the protein in its native shape.  It’s very likely that this antibody can only bind to the floppy noodle form of the protein (what you get after boiling and detergent) but cannot bind to the native form, and that’s why we aren’t seeing it in electron microscopy.  As always, shape is important.

Dear Scientists, publish your damn methods

Dear Scientists,

I’m a scientist myself.  I’ve written papers, I’ve published papers, I know it’s often long and boring work that isn’t as exciting as seeing good data and telling your friends about it.  I’ve sat in a room with 3 other people just to edit a single paragraph, and god it was dull.  So I can understand if writing your actual paper isn’t the rip-roaring adventure that gets you up in the morning.  

At the same time, science is only as good as the literature. One of our fundamental scientific tenets is the principle of uniformity, that is, that anyone should be able to do the same experiment and get the same answer.  If you and I get different answers when we do the experiment then something is definitely wrong, and failed replications have taught us a lot about how much bad science there is out there.  On the other hand, any failed replication will fall back on the excuse that the replicator “didn’t do the experiment right.”  The original authors will claim that something done by the replicator was not done exactly as they had done it, and that this is the source of the error.  I would fire back that it is your job as a scientific writer to give all the details necessary for a successful replication.  If there is something very minor that has to be done in a specific way in order to replicate your experiments, then you need to state that clearly in the methods section of your paper.  Anything not stated in your methods is assumed unimportant to the outcome by definition, so if it is important, put it in the methods.

Even worse than the above are the scientific papers which publish no methods to begin with!  I can’t tell you how many times I’ve been looking for the methods of a paper only to find a note saying “methods performed as previously described,” which links to another paper saying “methods performed as previously described,” which links to another paper, on and on, until I’m trying to find some paper from 1980 just to know what someone in 2021 did.  I don’t think “as previously described” is sufficient; if the methods are identical then you can just copy and paste them in as supplemental material.  It’s the 21st century, memory and bandwidth are very, very cheap, and there is no need for a restrictive word count on your methods.

But the worst of the worst, and the reason I wrote this article, is that I found a paper claiming “methods performed as previously described” which did not link or cite any paper whatsoever.  I have no way of knowing which previously described method this paper is referring to, and in fact no way of knowing whether they are making this all up!  I would go so far as to say this is scientific malpractice: the methods are totally undescribed and thus the experiment is unfalsifiable, because any attempt I made to replicate it could be dismissed as wrong when I don’t know how it was done in the first place!

So please, scientists, publish your damn methods.  Here’s an idea that I’m hoping will catch on: if you don’t have room in the body of your paper and are publishing your methods as a supplement, just copy/paste from whatever document you used to do the experiment.  Most methods are written in the past tense in a paper but in the present tense during an experiment, and the experimental protocol often includes extra information such as “make sure not to do the next step until X occurs,” information that is usually omitted from the published paper.  I would say that this information is not in fact extraneous and should be included: if there is some precise ordering of steps that needs to happen, then that information should be shared with the world.  So whatever protocol you used to do the experiment, with marginal notes and handy tips, just throw the entire thing into your supplemental information as a “methods” section and stop playing hide the pickle with your experiment by citing ever older papers.

The Short Cramer ETF and the paradox of stock picking

Tuttle Capital made waves last week by bringing out an ETF called SJIM that would let you short the stock picks of TV personality Jim Cramer.  Cramer, the longtime host of “Mad Money” on CNBC, has a prolific history of making bad calls, from “Bear Stearns is fine” to “sell Netflix” in 2012 and even “buy Netflix” in 2022. So it’s entirely unsurprising that “just do the opposite of Cramer” would gain traction as a valid investment strategy.  What’s interesting is that this strategy runs counter to the semi-strong version of the Efficient Market Hypothesis (EMH) in a way that some might not expect.  I’ve at times seen people attack Cramer based on the EMH, pointing out that even the best stock pickers rarely perform better than random chance and that therefore Cramer is by definition a waste of time.  Yet many of those same people don’t realize that if Cramer himself is a waste of time, then shorting him is a waste of money.

It comes down to what I sometimes call “the paradox of stock picking”: if you believe it’s impossible to predict the winners in the market, you must also agree it’s impossible to predict the losers.  Many people agree that you can’t know with certainty which company in the stock market will do well in the future, past performance is no guarantee of future success and all that.  What is the best electric vehicle company to invest in today?  Tesla is synonymous with EVs, but then Microsoft was synonymous with tech in 2001, and if you put all your money into Microsoft in 2001 you would have missed out on the massive gains made by Apple, Google, and others.  It’s hard to be certain that Tesla will continue to be the EV leader, or even that its current growth trajectory is sustainable, and in either of those cases there could be some other company that would make a much better EV investment.  So then let’s flip this question on its head: what is the worst EV company to invest in?  Rivian is trading at around 600 times revenue, for example (revenue 55 million, market cap 33 billion); can you guarantee that it is a bad investment?  What about Nikola?  They faked an electric truck by rolling one down a hill, are beset by scandal, and are still trading at about 80 times revenue, so are they a bad investment?  The EMH states that you cannot beat the market with fundamental analysis, so the investment opportunities of scandal-plagued Nikola and profit-less Rivian are already priced in by the market just as the growth opportunities of Tesla are already priced in.  If you thought you could with 100% certainty pick which EV company was the worst investment, or even just a below average investment, then you could make an ETF made up of every EV company except the definitely-bad one. Then your ETF would beat the EV market as a whole because it would include all the market winners while eliminating one of the market losers.  This would run directly counter to the EMH, which says you cannot beat the market.
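Here’s a toy illustration, with made-up returns rather than real ones, of why being able to spot even one loser ahead of time would break the EMH:

```python
# Hypothetical one-year returns for a made-up EV sector; none of these are real figures.
returns = {"EV_A": 0.30, "EV_B": 0.10, "EV_C": -0.05, "EV_D": -0.40}

whole_market = sum(returns.values()) / len(returns)                      # hold everything
minus_the_loser = sum(r for k, r in returns.items() if k != "EV_D") / 3  # exclude the "known" loser

print(f"whole market: {whole_market:.2%}, market minus the loser: {minus_the_loser:.2%}")
# whole market: -1.25%, market minus the loser: 11.67%
```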

So getting back to Cramer, is shorting him via an ETF a waste of money?  If you believe the semi-strong or strong versions of the EMH, then Cramer’s chance of success as a stock picker is perfectly random, no more, no less.  In order for shorting him to be a good investment, you must believe:

  • The market is not efficient and it is possible to pick winners and losers.
  • Cramer’s analysis is not merely no better than random; he is so bad that his chances of success are worse than random.
  • Cramer’s chances of success are so much worse than random that the gains from shorting him outweigh the expense ratio of the ETF.

It’s important to note here that shorting Jim Cramer puts you on the hook for his successful calls as well as his failures.  Failed predictions often generate more buzz than successes, since the schadenfreude of seeing some idiot on TV be proven wrong is a powerful emotional tool for getting people talking.  But if SJIM had come about 15 years ago and you had held it, then you would have shorted Jim Cramer on his “Bear Stearns is fine” call but also on his “buy Apple” call in 2010.  Adjusting for stock splits, Apple’s price has gone from around $5 to around $150 in that time period; is that the kind of short position you want to take?  Only time will tell if SJIM is a good investment, I guess.
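To put a rough number on that, using the split-adjusted figures above (around $5 in 2010, around $150 today):

```python
# What shorting the 2010 "buy Apple" call would have cost, using the post's rough prices.
entry_price, current_price = 5.0, 150.0
short_return = (entry_price - current_price) / entry_price  # P&L per dollar of stock shorted
print(f"{short_return:.0%}")  # -2900%: the short owes 29x the original position on this one call
```

Losses like that on his good calls are what the fund’s gains on his bad calls, minus its expense ratio, have to overcome.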

Raw Reports 7: Vince McMahon was an amazing heel

There’s a great line from a Raw in 1998: Vince McMahon had just stolen Steve Austin’s championship belt and says “the only place this belt belongs is above my mantle in one of my homes.” That single hilariously brilliant line perfectly encapsulated the evil rich bastard character that was heel Vince McMahon. And yet in early 1999 the unthinkable was happening and Vince McMahon was transitioning from heel (bad guy) to face (good guy).

Now I will admit this turn was expertly done. It was impossible for anyone to empathize with McMahon himself (a spoiled rich asshole who had victimized every face in the WWF), so instead he garnered sympathy by proxy by having his innocent daughter Stephanie get attacked by the Undertaker. This led to him seeking help from Steve Austin, and while the crowd still hated him, his contrition and humility made him at least somewhat understandable if not likable. Next, his son Shane started attacking him and victimizing face wrestlers, overtly taking the spot that Vince had once occupied as the spoiled rich asshole of the WWF. This implicitly moved Vince away from being a heel, as he was no longer doing the victimizing and was in fact being victimized himself. Finally, after Undertaker and Shane announced they had been working together the whole time, Vince came out and attacked Shane for the benefit of Steve Austin during a match Austin had with the Undertaker. By siding with the WWF’s biggest face, and attacking its two biggest heels, Vince had now fully turned from heel to face.

And yet… it was kind of crap. Making Undertaker be a “Greater Evil” over and above Vince was definitely cool, but trying to make Vince sympathetic on his own merits was just kind of boring. There was a novelty to having two sworn enemies (Austin and McMahon) have to work together, but the novelty wore off quickly when Vince came out and tried to cut a face promo saying how he knew he had been an asshole and would try to change as a human being. And you could tell the crowd wasn’t having it; they continued to chant “asshole” at him no matter what he said. Now, knowing as I do the future of the WWF, I know exactly where this storyline is going: Vince eventually reveals that he was secretly behind everything and was working with Undertaker the whole time as a heel. But I have to wonder why they ever tried to portray him as a face in the first place. Was it all for the unexpected twist that he was working with the Undertaker? Shocking though it was, it was also really, really stupid to have him sobbing about the victimization of his daughter only to be revealed a few weeks later as the architect of that same victimization. Had they actually wanted him to be a face? I can’t imagine anyone thought that was a good idea; the crowd still hated him for everything he had done, and a spoiled rich company owner just doesn’t make for a natural “good guy” character in any way, shape or form. Maybe Vince personally wanted to be portrayed as a good guy just for his ego, but then someone should have told him it just wasn’t going to happen.

Despite a few moments of humor during Vince’s aborted face run, like giving a terrible Stone Cold Stunner to his son Shane, the whole thing felt kind of boring and subpar as I was watching it. I would have preferred Vince to have still explicitly been a bad guy throughout, just a bad guy who loved his own daughter. Then he could have had humorous promos demeaning the crowd and complaining about his situation, instead of boring promos where he tried to act sympathetic while complaining about his situation. But regardless, face Vince McMahon doesn’t detract from the stellar run that Raw has put on after Wrestlemania 1999.