Stop Using Eroom's Law Wrong

By Blaise AI Team

Eroom’s Law is biotech’s favourite doom chart.

It’s also one of the most misused ratios in the industry.

The original observation is narrow: FDA approvals of new molecular entities and new biologics per inflation-adjusted $1B of industry R&D fell roughly 100-fold from ~1950 to ~2010, tracing a near-straight line on a log scale. Hence “Eroom”: Moore spelled backwards.
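
If the chart really is a straight line on a log scale, the implied rate is easy to back out. A minimal back-of-envelope sketch, assuming a constant exponential decline of ~100× over the ~60 years the chart covers (the headline figures, nothing more):

    import math

    # Headline figures from the Eroom's Law chart (approximate):
    # approvals per inflation-adjusted $1B of R&D fell ~100x between ~1950 and ~2010.
    decline_factor = 100.0
    years = 2010 - 1950  # ~60 years

    # Constant exponential decline: f(t) = f(0) * 2 ** (-t / halving_time)
    annual_factor = decline_factor ** (1.0 / years)                # ~1.08x worse per year
    halving_time = years * math.log(2) / math.log(decline_factor)  # ~9 years

    print(f"~{annual_factor:.2f}x worse per year; efficiency halves every ~{halving_time:.1f} years")

That works out to R&D efficiency halving roughly every nine years, which is the figure usually quoted alongside the chart.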

That’s a real trend. But the meme version (“drug discovery is fundamentally broken; ideas are running out; only AI can save us”) is not what the analysis supports.

Here’s what gets lost when Eroom’s Law becomes a slogan:


1) It treats “an FDA approval” like a constant unit. It isn’t.

A 1960s approval is not equivalent to a modern approval—endpoints, trial sizes, comparators, safety expectations, and regulatory standards have all shifted. Many productivity measures quietly assume equivalence anyway.

So the denominator (spend) and the numerator (approvals) are not measuring stable objects over time.

Takeaway: Eroom’s Law is partly “inflation in evidentiary burden” masquerading as “failure of invention.”


2) The output distribution is violently skewed, so averages lie.

One mRNA vaccine program can have civilization-level impact; many approvals deliver marginal benefit and marginal revenue. When value is this heavy-tailed, “drugs per $” is a crude summary statistic and trend lines can mislead.
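
To see why, try a toy simulation. Everything here is invented for illustration: per-drug “value” is drawn from a lognormal with an arbitrary sigma, not fitted to any real valuation data; the only point is the shape of the distribution.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-drug value from a heavy-tailed distribution.
    # sigma is arbitrary; the point is the skew, not the numbers.
    values = rng.lognormal(mean=0.0, sigma=2.5, size=10_000)

    top_1pct_share = np.sort(values)[-100:].sum() / values.sum()

    print(f"mean value per drug:    {values.mean():7.1f}")
    print(f"median value per drug:  {np.median(values):7.1f}")
    print(f"value held by top 1%:   {top_1pct_share:.0%}")

In a sample like this the median program is worth a tiny fraction of the mean, and a handful of programs carry most of the total. Counting approvals per dollar treats all of those programs as identical.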

Takeaway: Even if “approvals per $” falls, the value created per $ can behave very differently.


3) The headline chart isn’t “fewer drugs.” It’s “more spend per drug.”

A key confusion: the absolute number of approvals didn’t collapse; what exploded was R&D spend per approval. Approvals hovered ~20–30/year until ~2010 and roughly doubled after, while spend per drug kept rising.
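
The decomposition is trivial, but worth writing down. A sketch with made-up, order-of-magnitude numbers (only the shape matters: approvals roughly flat, total spend climbing):

    # Illustrative numbers only -- the shape of the argument, not real figures.
    # Approvals stay in a narrow band while total R&D spend grows, so
    # "spend per approval" explodes even though output never collapsed.
    eras = [
        ("~1970", 25, 5),    # (label, approvals/year, total R&D spend in $B) -- hypothetical
        ("~1990", 25, 15),   # hypothetical
        ("~2010", 25, 50),   # hypothetical
    ]

    for label, approvals, spend_billion in eras:
        per_drug = spend_billion / approvals
        print(f"{label}: {approvals} approvals on ${spend_billion}B -> ${per_drug:.1f}B per approval")

Flat numerator, growing denominator: the ratio does all the work in the scary chart.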

Takeaway: The popular retelling (“we can’t make drugs anymore”) is wrong. The story is cost structure + where we’re aiming.


4) The trend already broke around 2010 — and the reason matters.

There’s a post-2010 uptick, but it’s tightly coupled to a shift toward smaller eligible patient populations, in part due to more rare-disease focus.

Takeaway: “Breaking Eroom” can mean “approving more niche drugs for narrower cohorts,” not “solving the hard mainstream diseases.”


5) The strongest causal story is not “we ran out of targets.”

A more coherent mechanism is the rising competitive and evidentiary bar created by an expanding generic pharmacopoeia. Once effective drugs go generic and cheap, the next entrant must clear a higher hurdle (clinically and commercially), which pushes R&D toward areas that have historically been less tractable (think Alzheimer’s, metastatic solid tumors).

Takeaway: Eroom’s Law is partly a victory lap; prior successes made the next marginal gain harder to justify.


6) Compute got cheaper; outcomes didn’t — because the bottleneck isn’t compute.

Genomics, high-throughput screening, computer-aided drug design, and computational chemistry all got orders of magnitude cheaper, yet approvals per $ fell.

That doesn’t prove “science is stuck.” It points at the real choke point: the predictive validity of our screening and disease models. When models don’t rank candidates the way patients would, more shots on goal just means more beautifully optimized failures.
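
One way to make that concrete is a toy selection model. Everything below is assumed for illustration: a latent “true clinical quality” per candidate, a screen whose correlation with it stands in for predictive validity, and “success” defined as the screen’s top pick landing in the true top 1%.

    import numpy as np

    rng = np.random.default_rng(0)

    def hit_rate(n_candidates, validity, n_trials=1000):
        """P(the screen's top pick is in the true top 1% of candidates).
        validity = assumed correlation between screen score and true quality."""
        hits = 0
        for _ in range(n_trials):
            true_quality = rng.standard_normal(n_candidates)
            noise = rng.standard_normal(n_candidates)
            screen = validity * true_quality + np.sqrt(1 - validity**2) * noise
            picked = np.argmax(screen)
            hits += true_quality[picked] >= np.quantile(true_quality, 0.99)
        return hits / n_trials

    for n_candidates in (100, 10_000):       # 100x more "shots on goal"
        for validity in (0.2, 0.5, 0.9):     # better decision quality
            print(f"{n_candidates:>6} screened, validity {validity:.1f}: "
                  f"hit rate {hit_rate(n_candidates, validity):.2f}")

In this toy world, screening 100× more candidates barely moves the hit rate when validity is low, while raising validity moves it a lot. That is the throughput-versus-decision-quality distinction in one table.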

Takeaway: The industry has often optimized throughput when it needed to optimize decision quality.


What Eroom’s Law should actually be used for

Not as a gloomy law of nature. As a reminder to stop arguing about “AI vs no AI” and start arguing about:

  • Which decisions are we making too late, and paying clinical-trial prices to learn?
  • Where is predictive validity actually improving? (human genetics, clean biomarkers, tight patient stratification)
  • What metric are we trying to improve? Approvals, patients helped, QALYs, risk-adjusted NPV, time to proof of mechanism (PoM): pick one and be honest.

And yes: AI helps — but mostly incrementally and often outside the true rate-limiting steps at industry scale.