Eroom’s Law Is Real. The Slogan Version Is Wrong.
Eroom’s Law describes a genuine collapse in drug R&D productivity. But the meme version, that discovery is broken and only AI can save it, doesn’t fit the data.
Eroom’s Law is biotech’s favorite doom chart. It’s also the industry’s most misused ratio.
The original observation is specific: the number of new drugs approved per inflation-adjusted billion dollars of R&D dropped about 100-fold between 1950 and 2010. Hence “Eroom”—Moore’s Law in reverse.
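The headline ratio can be restated as a decay rate. A quick back-of-the-envelope calculation, assuming the decline were a smooth constant exponential (the real curve is lumpier), recovers the often-quoted figure that the cost per approval doubled roughly every nine years:

```python
import math

# Eroom's Law as stated: ~100x fewer approvals per inflation-adjusted
# billion dollars between 1950 and 2010, a 60-year span.
fold_change = 100
years = 2010 - 1950

# Implied constant annual decline in approvals-per-dollar.
annual_factor = fold_change ** (1 / years)   # ~1.08, i.e. ~8% worse per year

# Equivalently, inflation-adjusted cost per approval doubles every:
doubling_time = years * math.log(2) / math.log(fold_change)   # ~9 years

print(f"annual decline in approvals per dollar: {annual_factor - 1:.1%}")
print(f"cost-per-approval doubling time: {doubling_time:.1f} years")
```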
The trend is real. But the meme version—that drug discovery is fundamentally broken, ideas are drying up, and only AI can save us—doesn’t actually fit the data. Here is what gets lost when you turn a complex economic observation into a slogan.
A drug approved in 1960 isn’t the same asset as one approved today. Endpoints, trial sizes, comparators, safety expectations, and regulatory standards have all shifted massively. Yet productivity measures usually assume they are equivalent.
The denominator (spend) and the numerator (approvals) aren’t measuring stable objects over time. Eroom’s Law is partly just inflation in the evidentiary burden masquerading as a failure of invention.
One mRNA vaccine program can have a civilization-level impact; meanwhile, plenty of approvals deliver marginal benefits and marginal revenue. When value is this heavy-tailed, “drugs per dollar” is a crude summary statistic. The trendlines mislead because even if approvals per dollar falls, the value created per dollar can behave very differently.
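A toy calculation, with entirely invented numbers, makes the divergence concrete: a later era can look several times worse on approvals per dollar while creating several times more value per dollar, if one approval lands in the heavy tail.

```python
# Invented figures for two hypothetical eras of R&D spend ($bn) and
# per-drug value created ($bn). Era B has one heavy-tail outlier.
era_a = {"spend_bn": 10, "drug_values_bn": [1] * 30}          # 30 modest drugs
era_b = {"spend_bn": 50, "drug_values_bn": [1] * 24 + [500]}  # 25 drugs, 1 outlier

results = {}
for name, era in [("era A", era_a), ("era B", era_b)]:
    results[name] = {
        "approvals_per_bn": len(era["drug_values_bn"]) / era["spend_bn"],
        "value_per_bn": sum(era["drug_values_bn"]) / era["spend_bn"],
    }
    print(name, results[name])
```

Here era B is 6x worse on approvals per dollar yet about 3.5x better on value per dollar; the single tail event dominates the summary statistic.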
A common misconception is that the absolute number of approvals collapsed. It didn’t. Approvals hovered around 20–30 per year until 2010 and have roughly doubled since. What exploded was the R&D spend required to get there.
The popular retelling is that we can’t make drugs anymore. That’s wrong. The story is about cost structure and where we are aiming.
There has been an uptick in efficiency since 2010, but it’s tightly coupled to a shift toward smaller eligible patient populations, largely due to a focus on rare diseases. “Breaking Eroom” often just means approving more niche drugs for narrower cohorts, not necessarily solving the hard mainstream diseases more efficiently.
The most coherent explanation for the decline isn’t scientific stagnation; it’s the expanding generic pharmacopoeia. Once effective drugs go generic and cheap, the next entrant has to clear a much higher hurdle, both clinically and commercially.
This pushes R&D toward areas that have historically been less tractable, like Alzheimer’s or metastatic solid tumors. In this light, Eroom’s Law is partly a victory lap: prior successes made the next marginal gain harder to justify.
Genomics, high-throughput screening, and computational chemistry got orders of magnitude cheaper, yet approvals per dollar fell. This doesn’t prove science is stuck. It points to the real choke point: the predictive validity of our screening and disease models.
If your models don’t rank candidates the way human biology does, taking more shots on goal just generates more beautifully optimized failures. The industry has spent decades optimizing throughput when it needed to optimize decision quality.
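A small Monte Carlo sketch shows why throughput can’t rescue weak predictive validity. The parameters are hypothetical: rho stands for the correlation between a screen’s score and a candidate’s true clinical quality, both idealized as standard normals.

```python
import math
import random

random.seed(0)

def mean_top_pick_quality(n_candidates, rho, trials=300):
    """Average true quality of the single top-scoring candidate when the
    screen's score correlates with true quality at level rho."""
    noise_w = math.sqrt(1.0 - rho ** 2)
    total = 0.0
    for _ in range(trials):
        best_score, best_true = -math.inf, 0.0
        for _ in range(n_candidates):
            true_q = random.gauss(0, 1)
            score = rho * true_q + noise_w * random.gauss(0, 1)
            if score > best_score:
                best_score, best_true = score, true_q
        total += best_true
    return total / trials

# Ten times the throughput at weak predictive validity...
low_small = mean_top_pick_quality(100, rho=0.1)
low_big = mean_top_pick_quality(1000, rho=0.1)
# ...versus better predictive validity at the original throughput.
high_small = mean_top_pick_quality(100, rho=0.8)

print(f"rho=0.1, n=100:  {low_small:.2f}")
print(f"rho=0.1, n=1000: {low_big:.2f}")
print(f"rho=0.8, n=100:  {high_small:.2f}")
```

With a weak screen, screening ten times as many candidates barely moves the quality of the winner; improving the screen’s correlation with biology moves it far more. That is the throughput-versus-decision-quality trade in miniature.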
Stop using Eroom’s Law as a gloomy law of nature. Use it as a reminder to stop arguing about “AI vs no AI” and start arguing about where we are losing money.
We need to ask which decisions we are making too late (and paying clinical trial prices to learn). We need to identify where predictive validity is actually improving—whether that’s human genetics, clean biomarkers, or tight patient stratification.
Most importantly, we need to be honest about what metric we are actually trying to improve: approvals, patients helped, or risk-adjusted NPV. AI helps, certainly. But it usually helps incrementally, and often outside the true rate-limiting steps at industry scale.
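To make the third metric concrete, here is a minimal risk-adjusted NPV (rNPV) sketch with entirely invented cash flows and transition probabilities: each phase’s cost or payoff is weighted by the cumulative probability of ever reaching it, then discounted.

```python
# Risk-adjusted NPV sketch. All figures are illustrative, not industry data.
DISCOUNT_RATE = 0.10

# (year, cash flow in $m, probability of clearing the PREVIOUS gate)
phases = [
    (0, -50, 1.00),   # preclinical spend
    (2, -100, 0.60),  # Phase 1
    (4, -250, 0.35),  # Phase 2
    (7, -400, 0.60),  # Phase 3
    (9, 3000, 0.85),  # launch: lifetime net revenue collapsed to one number
]

rnpv = 0.0
p_reach = 1.0
for year, cash, p_advance in phases:
    p_reach *= p_advance                       # cumulative P(reach this stage)
    rnpv += p_reach * cash / (1 + DISCOUNT_RATE) ** year

print(f"risk-adjusted NPV: ${rnpv:.0f}m")
```

With these toy numbers the program comes out rNPV-negative despite a $3bn headline payoff, which is exactly the kind of fact a raw approvals count never shows.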