U.S. innovation ecosystem is envy of world. Here’s how it got started.

During World War II, government-supported research led scientists to successfully mass-produce penicillin. Here, workers at a United States Department of Agriculture research lab, ca. 1943, look for mold strains that produce the highest amounts of the antibiotic.
USDA file photo
Economist who studies technological change looks at public-private research partnership amid rising questions on federal funding
The participation of the federal government in the nation’s innovation ecosystem has been under scrutiny lately. For decades, federal funds have supported academic research, which, in turn, has boosted private development, fueling new discoveries in medicine, technology, and other fields. The Trump administration is seeking to cap reimbursement for indirect research costs for biomedical science, which could mean billions of dollars in funding cuts from the National Institutes of Health.
The issue has turned a spotlight on the nation’s public-private research partnership, which has been credited with advances in a wide array of fields and emulated around the world. The Gazette spoke with Daniel P. Gross, an associate professor of business administration at Duke University’s Fuqua School of Business and former professor at Harvard Business School. Gross, together with Bhaven Sampat from Arizona State University, authored a recent National Bureau of Economic Research working paper on the postwar expansion of biomedicine.
In this edited conversation, Gross said the partnership was a response to the urgent demands of World War II, helped the U.S. and its allies win the war, and seeded the current thriving system.
What is your view of this partnership between the federal government and academia and how did it get started?
That partnership has been in place essentially since World War II. Its roots trace back to June 1940, when a handful of leaders at U.S. universities and industrial R&D labs approached President Franklin D. Roosevelt to propose harnessing civilian scientists to develop new technology for the U.S. military, which at the time lagged well behind the technological frontier of warfare.
This was over a year before the U.S. entered the war, but it marked the beginning of an undertaking that engaged tens of thousands of scientists at firms and universities in the war effort and yielded numerous breakthroughs. That undertaking was extended and deepened throughout the Cold War and has continued growing since. The partnership has been a pillar of U.S. technological leadership over the past 80 years, in biomedicine and beyond.
At the time, the National Institutes of Health existed, but it was a shadow of its current self?
The U.S. innovation system, and particularly the biomedical innovation system, looked very different in 1940.
The three pillars of U.S. biomedicine today are universities, the life sciences industry, and the NIH. Today they work together and build on each other. But in the 1930s, they were far more primitive. Universities were less research-intensive and had very little funding. The pharmaceutical industry wasn’t well organized, and to a large degree consisted of chemical companies with a minor subsidiary drug business rather than the large, dedicated drug developers we know today.
Drug discovery then was driven more by trial-and-error empiricism than science — drugs weren’t even subject to FDA review for safety until 1938 and efficacy until 1962. And the NIH was small and only intramural — it was not yet providing extramural research funding as it does now.
And that was seen as inadequate once the war began?
The war posed a wide range of technological problems, from detecting enemy aircraft to keeping soldiers healthy. In nearly every war before World War II, infectious disease killed more soldiers than battlefield injuries. Suddenly, there was an urgent need for innovation with immediate practical payoff — but no real infrastructure for getting it done.
The war provided an impetus for organizational innovation to support technological innovation. This included a new agency to coordinate and fund wartime research, the Office of Scientific Research and Development, or OSRD. It also triggered the invention of the federal R&D contract, new patent policies, peer review procedures, and even indirect cost funding.
Most important, however, was the embrace of the idea that R&D investment was an activity for the federal government, along with a new pattern of collaboration between the government, firms, and universities.
Was it largely successful? Penicillin is a story that’s mentioned quite a bit.
Most would say yes. After all, the Allies won the war — and technology, medical and otherwise, was an important contributor to that outcome. New drugs are not necessarily the first thing you think of when you imagine military technology. Yet disease and other ailments could debilitate the military’s field forces, increasing required manpower. Tuberculosis, measles, and venereal diseases are all examples of common maladies among soldiers at the time. Malaria was prevalent in the Pacific theater and North Africa.
The broad range of fronts where this global war was fought, and the new weapons with which it was fought, certainly expanded the set of problems needing attention — including protecting soldiers from extreme environmental conditions like hot and cold temperatures or oxygen deprivation at high altitudes, disease vector-control strategies, wound and burn treatments, blood substitutes, and much more.
OSRD’s Committee on Medical Research (CMR) directed and funded hundreds of projects on these problems and made significant progress in many of them. You’re right that one of the more important and best-remembered breakthroughs was penicillin. Though penicillin was discovered in the 1920s, at the dawn of World War II there was no way to produce it in quantities sufficient even for clinical testing, let alone treatment.
CMR initially set out with two approaches to developing penicillin as a drug, not knowing which would succeed. One was to try to synthesize it. The other was to try to grow it in large quantities from the mold that produces it. Scientists initially thought the synthetic approach held more promise, but in the end it was scaled-up fermentation of natural penicillin that succeeded.
This breakthrough was transformative — not only for military health but civilian health too. The proof is in the data: Between World War I and World War II, military hospital admissions and death rates from most common infectious diseases declined by 90 to 100 percent. World War II research essentially solved the military’s problem of bacterial disease.
Perhaps even more important is that it spawned a golden age in drug development. The antibiotic revolution of the 1950s and 1960s can be directly traced to achievements in the war.
Some things became successful in the postwar period. Why did this effort have such long legs?
Across the CMR portfolio, the work undertaken to meet the urgent demands of war created a foundation upon which postwar biomedical science and technology subsequently began to grow. That foundation consisted of things like new research tools and techniques, new therapies and therapeutic candidates, new drug development platforms, newly developed capabilities at existing and emerging pharmaceutical firms — including experience in specific drug categories and more generally in science-based approaches to drug discovery, like rational drug design — and most importantly, new scientific understanding.
What about training a new generation of scientists?
It’s a great question. Many readers might think that public R&D funding primarily supports research. But scientific training is also important. The war effort engaged not only seasoned scientists but also thousands of graduate students, predoctoral researchers, and recently minted Ph.D.s. This was the case for both medical and nonmedical research: The labs doing the work were teeming with young people. Although we don’t trace the contributions of these students in biomedicine, I think it’s safe to presume that for many, it was formative.
In related work with Maria Roche, an assistant professor and former colleague at HBS, we have shown this was the case for researchers engaged in World War II radar research. More broadly, when you look at university and policy leadership across U.S. science in the first 25 years after World War II, you see OSRD alumni all over the place. The war proved to be a breeding ground for technical and administrative capacity that the U.S. harnessed afterwards.
When you talk about CMR funding, it included reimbursement for indirect costs — a subject of debate today. What was the rationale behind that then?
It’s useful to think about the context: OSRD needed to incentivize firms and universities to take on military R&D projects. Doing so required reorienting existing research efforts and displacing future ones — this was disruptive. Firms were being asked to use their own facilities, equipment, and sometimes best talent on national problems rather than commercial ones. Some were reluctant to do so without complete compensation. Medical researchers were also initially wary of public funding and bureaucratic control.
Reimbursing these R&D performers for overhead expenses, in addition to the immediate incremental costs of OSRD-contracted work, was one way OSRD incentivized participation. Ultimately, the policy goal was for OSRD research to be “no gain, no loss” for its contractors. The structure of and motivations for indirect cost recovery have evolved somewhat since then, but the basic principles trace back to that wartime policy.
Today it’s a bit different, in that we’re not building something, but we are trying to continue something that has proven to be successful?
It appears it’s been pretty productive. I wouldn’t dispute that there are opportunities to make the system more efficient, but overall, if you look at the output of this 80-year partnership between U.S. universities, federal research funders, and industry, it’s a story of success. I think we ought to be careful that, in pursuing reforms in science policy, we protect the golden goose.
The U.S. innovation system, and especially the biomedical innovation system, is the envy of the world. It has catalyzed decades of innovation that have supported national defense, health, and economic growth. To undo that would be a great loss for the U.S. and the world.