America’s sub-optimal use of applied and basic research grants is a major problem. We have a system where our applied research is poor at producing real applications, and our basic research doesn’t provide the freedom to explore new, unproven areas.
In this piece, I’ll dive into:
What the goals of applied and basic research were intended to be.
How the American research ecosystem is not living up to its goals in practice.
How our research ecosystem functioned differently in the era of inventing that drove our golden age of growth.
And how those looking to build in this space can do better now.
Applied and Basic Research: In Theory
Applied and basic research both have slightly fuzzy, general definitions.
Applied scientific research could be defined as research that works to solve practical problems that have some kind of short-term or immediate use. For this reason, applied research is generally seen as more industry applicable and the kind of research that might result in a patent in the short-term.
Basic scientific research, on the other hand, is meant to explore more broadly and build understanding in any area in which a researcher is interested. The goal of basic research is to provide a foundation of knowledge for eventual technological applications to possibly build on top of.
Vannevar Bush’s post–World War II report to the President, Science: The Endless Frontier, is the closest thing our modern research ecosystem has to a founding document. He outlines the proper roles of each bucket of research as follows:
Basic research is performed without thought of practical ends. It results in general knowledge and an understanding of nature and its laws. This general knowledge provides the means of answering a large number of important practical problems, though it may not give a complete specific answer to any one of them. The function of applied research is to provide such complete answers. The scientist doing basic research may not be at all interested in the practical applications of his work, yet the further progress of industrial development would eventually stagnate if basic scientific research were long neglected.
[...]
Basic research leads to new knowledge. It provides scientific capital. It creates the fund from which the practical applications of knowledge must be drawn. New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science.
To put this in machine learning terms, it would not be ridiculous to say that basic research “explores” while applied research “exploits.” Basic research is meant to search widely, in areas we might know little or nothing about, with the hope of uncovering new and promising areas of knowledge that can eventually be built on. Applied research looks to build on these areas once they are established by basic researchers, working on technical problems that can move industry forward.
Given that, it seems very clear that applied research should be producing more patents than basic research. Reasonable people could argue about how much more, but surely the difference should be noticeable.
What’s happening in practice
But there’s a problem. As things stand, that doesn’t seem to be what’s happening at all. Looking at the evidence, it’s possible that our applied science is no more applicable than our basic research on average!
If you’re wondering how we could possibly determine that: the most effective way to explore this question in the field of scientific grant funding and innovation is to use data from grants, papers, patents, and their citations of one another. While none of these is a perfect measure of a general concept like “applicability,” they serve as good enough proxies for our purposes here.
Patents are granted for novel, non-obvious solutions to some particular technical problem. These parameters would imply that to win a patent the inventor has devised both a new solution that works and a use case for it. Given this definition, we should expect that an applied research grant would result in patents more frequently than basic research grants for several reasons. The most notable reason is that applied research is expected to succeed more often, thus creating more of these patentable solutions. Not to mention, applied work is also meant to be done in areas with direct technical use cases and is thus more patentable than the general work done in basic research.
This is why what we find in the NIH funding data is so surprising. Its $30+ billion in yearly grants, an amount three to four times larger than the NSF’s budget, are primarily labeled as either “applied” or “basic.” And, when Li et al. looked at the data, they found that research funded by a basic grant was often no less likely to be cited by a patent than research funded by an applied grant, even in the short run (<5 years).1
That can't be right! Not if the system is functioning correctly.
So, the question is: Which area of research is to blame? Is the applied research not sufficiently applied? Is the basic research not sufficiently exploratory? Is it both?
This isn't an easy question to answer. So, I’ll share some evidence that informs my thinking on both of these areas and why it seems to me that the answer is likely both. I welcome you to come to your own opinion based on the information provided.
Is our basic research not sufficiently exploratory?
There are many data points one can use to make the argument that the US basic research ecosystem is not nearly as exploratory as it was meant to be. For the sake of brevity, I'll focus on two pieces of evidence: an anecdote about why the Prostate Cancer Foundation came to be and some interesting data from Bloom et al. on the slow rate of discovery of new research fields in recent decades.
How the Prostate Cancer Foundation came to exist
In the early 1990s, Michael Milken, the Junk Bond King, had just been released from prison after serving his sentence for securities violations. Shortly after his release, he was diagnosed with prostate cancer, which, at that time, came with a poor life expectancy. Despite the large number of men afflicted by the condition and the need for medical improvements, prostate cancer was known to be something of a backwater for cancer research. “Life expectancy was being measured in months not years,” as Milken put it. “Dismal, dismal, dismal,” a director of a cancer research center said of the field at the time. There was little grant money available for this “quiet corner [of cancer research] no one wanted to be associated with.”2 3
This was because there was a shortage of exciting research ideas with proven positive results to build on. And, without those promising results, it was hard to attract any substantial money to the field. Progress had stagnated. “People were afraid to try anything,” noted another cancer research expert, Howard Scher. This type of situation, where few results mean little funding, is still all too common.4
But, luckily for prostate cancer researchers and the one in six American men who would develop prostate cancer at the time, an extremely rich benefactor took a shine to the issue. Milken dedicated a large chunk of his time and net worth to pushing this area of research forward, funding research quickly and requiring investigators to share results before publishing.5
But the primary reason his funding scheme worked so well is that he was willing to fund the unproven, exciting ideas of the research community. While there was no ample supply of papers with positive results to build on, researchers in the field had plenty of exciting ideas that they dreamed of researching. At the time, in order to get grant funding, they felt pressured to submit their third or fourth most interesting idea because it was the one they could best sell to an NIH funding panel. They followed their incentives, and their incentives said, “pitch the idea most likely to get positive results, not the most exciting one.”
As the foundation's medical director said, “We told people to submit novel ideas, the ones they dream about at night, rather than what they think will get approved.” The creativity did not necessarily flow out of the researchers in this space at the snap of a finger. The first round of applications was about as conventional as standard NIH proposals at the time. An article in Science notes:
"In the first year, perhaps 60 of the 86 applications were basically identical, recalls Holden [the foundation’s Medical Director]. ‘Everybody wanted to do gene therapy, because that's what was in favor at the NCI in 1993.’ But over time, more diverse ideas began to flow in, including the development and use of cancer vaccines and antibody therapies, which were being tested against other cancers but hadn't yet been tried on prostate tumors."
The creativity eventually flowed out of the research community and turned prostate cancer from a backwater of cancer research into an exciting field that was increasing patient lifespans.
But this is an issue that still plagues many other fields of research. If there aren't exciting research streams with positive results to build on, then the likelihood of attracting significant funding is low. And while this might "make sense" to many reading this, this should not be internalized as acceptable. What Milken came in and started funding is what basic research funding is supposed to be for!
The prostate cancer researchers in the early 1990s were sitting on exciting, exploratory ideas. They just didn't believe it was a good use of their time to submit them to the existing funding agencies.
Milken's headfirst launch into this space should be seen as very clear evidence that, at least in the area of prostate cancer research, "basic" research was not functioning in a very "basic" or exploratory way at all.
We don’t discover fields the way we used to
Another area of evidence that seems to hint at our “basic” research not being very exploratory at all is that we don’t seem to be discovering new fields of research at nearly the same rate that we used to. This is particularly important because the beginnings of a new field often yield a lot of low-hanging fruit in terms of discovering new ideas. The longer a field is around, the harder it can become to discover new ideas. (I discuss this at length in a later piece, When do ideas get easier to find?)
While it is not possible to easily “count” new areas of research, that does not mean it’s impossible to find a reasonable way to answer the question, “are we getting worse at discovering new areas of research?” This is exactly what Bloom et al. did in their now well-known paper, Are Ideas Getting Harder to Find? In the rest of this section, I’ll walk you through their line of reasoning using the logic and figures from the paper.
If you consider what factors contribute to our “research productivity,” you would come out with something like the following:
Amount of money spent on researchers/equipment
The rate of discovery of new ideas within a field given some amount of spending, and
The rate of discovery of brand new fields (which new ideas could be generated in) given some amount of spending
Given the above, if we 1) generally knew what our overall research productivity was, 2) roughly knew the rate of discovery within a field, and 3) knew the amount we’ve been spending on research, then we could make a rough estimate of whether we were getting better or worse at discovering new fields. And this is exactly what Bloom et al. did. As it turns out, we seem to have been getting significantly worse at discovering new fields.
Our research productivity, determined by measuring how much TFP growth we generate per dollar spent on research, has gone down by roughly a factor of 32 since the 1930s. This is because our spending on research has skyrocketed to roughly 24 times what it once was, while TFP growth has declined over that same period. The first graphic shows the steadily increasing amount we’ve been spending on research inputs (the green line) alongside the decreasing growth rate we’ve been getting as our return on that investment (the blue line). The blue line in the second graphic makes clear just how much our overall research productivity has gone down as a result.
Research Inputs and TFP Growth Since the 1930s
Research Productivity and Research Inputs Since the 1930s
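To make this accounting concrete, here is a back-of-the-envelope sketch of the identity Bloom et al. work with: research productivity is TFP growth divided by research input, so the two pictures above are linked by simple arithmetic. The numbers below are purely illustrative, chosen only to match the rough magnitudes quoted in the text (inputs up ~24x, productivity down ~32x), not figures from the paper’s tables.

```python
# Identity: research productivity = (TFP growth) / (research inputs).
# Illustrative magnitudes from the text: inputs grew ~24x since the
# 1930s while research productivity fell ~32x.

inputs_1930s, inputs_today = 1.0, 24.0          # normalized research inputs
productivity_1930s = 1.0                        # normalized productivity
productivity_today = productivity_1930s / 32.0  # down by a factor of ~32

# Rearranging the identity: TFP growth = productivity * inputs.
tfp_growth_1930s = productivity_1930s * inputs_1930s
tfp_growth_today = productivity_today * inputs_today

# Despite ~24x more input, implied growth comes out *below* the 1930s level.
print(tfp_growth_today / tfp_growth_1930s)  # 0.75
```

The only point of the sketch is the ratio: the productivity decline is large enough that even a 24x increase in inputs leaves implied TFP growth below its 1930s level, which is exactly the declining blue line in the first graphic.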
Our research productivity going down by a factor of 32 since the 1930s is obviously not good. But this does not mean, by definition, that we have been getting worse at finding new areas of research. It could also be that we’ve gotten far worse at finding new ideas within an area of research but are somehow still improving at discovering new research areas. However, as the authors continue in their analysis of more than a handful of specific sub-fields, that turns out not to be the case.
For fields such as semiconductors, crop yields, medical research, and corporate research and development, they calculate the rough decrease in research productivity. In the table below, I provide their calculations of the change in research productivity for corporate R&D. The various estimates of the change in research productivity are in the “Factor decrease” column.
The reason these numbers should lead one to believe that our research ecosystem is getting worse at discovering new areas of research is a process of elimination. As laid out above, overall research productivity has fallen by more than a factor of 32 since the 1930s, and the two possible culprits are declining productivity within existing research areas and a declining rate of discovery of new research areas. Since the highest within-field factor decrease in the chart above is 17.9, far less than 32, it follows that a declining rate of discovery of new fields must also be contributing to the overall decline.
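The elimination argument is ultimately a single division. A minimal sketch, using only the two factors quoted in the text:

```python
# Overall research productivity fell by a factor of ~32 since the 1930s,
# but the largest within-field factor decrease in the table is 17.9.

overall_decline = 32.0
worst_within_field = 17.9

# Even if *every* field had declined as badly as the worst measured case,
# part of the overall decline would remain unexplained. That residual
# must come from the other culprit: slower discovery of new fields.
unexplained = overall_decline / worst_within_field
print(round(unexplained, 2))  # ~1.79x left to attribute to new-field discovery
```

This is a generous assumption in the argument’s favor: real within-field declines were mostly smaller than 17.9, so the residual attributable to new-field discovery is, if anything, larger.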
The authors conducted a similar analysis for more than a handful of other fields and found similar results. So, if you assume that these areas are at all representative of our within-field research productivity on average, then this should be seen as an indictment of the modern “basic” research ecosystem’s ability to discover new fields of research.
This, in tandem with the story of the Prostate Cancer Foundation, does not paint a rosy picture of our basic research system’s ability to fulfill its goal of creating the “fund from which the practical applications of knowledge must be drawn.”
Is our applied research not sufficiently applied?
So, our research might not be the most “exploratory,” and we might not be discovering new fields at the rate we used to.6 But that might be worth it if we were producing tons of applied research that was directly useful to existing industries. That does not seem to be the case, either.
Our applied research ecosystem seems to have seen a significant reduction in its “applications” in the period since our golden age of growth. As I discussed in depth in my previous post, this period of growth lasted from about 1920 to 1970 and was largely built on the exceptional technological innovations of the late 1800s through about 1950, since it takes about 20 years for a technology to pervade the market.
Productivity growth due to technology was massive from 1920 to 1970
The modern R&D ecosystem is largely characterized by a particular division of labor: the academics research, large corporations develop, and a smattering of startups and university tech transfer offices attempt to fill the gap. And, while this characterization is not 100% accurate in all cases, it is definitely much truer now than it was in the period of inventing that drove the golden age of growth pictured above.
This siloing of our academic researchers from concrete applications seems to have had a substantial negative impact on how the system operates. In the current system, knowledge from universities is often not produced in ways that make it optimally useful and readily usable by those who make and sell products in industry. And that is not shocking, because the science is no longer produced primarily with industry in mind. It is produced primarily for other scientists. But in the period of massive yearly growth due to technology shown above, science was different.
The era of inventing that drove our 1920 to 1970 growth was largely done by three groups:
Independent entrepreneur-inventors
Private sector affiliated university research labs
Early forms of the corporate R&D lab
Independent Entrepreneur-Inventors
The entrepreneur-inventor was predominant up until the early 1900s. These individuals would invent things with the goal of selling their ideas to large companies. In rare cases, they would start their own company based on their invention, but, in general, they sold to companies. These companies generally made little to no effort to invent internally; their R&D departments existed to vet outside inventions and decide which ones to buy rather than to do any real research themselves. These inventors could be independent, full-time inventors or, more commonly, intelligent and ambitious employees of a company whose daily work provided the inspiration for an invention.7
Edison would be an archetypal example of someone who invented full-time. In the early parts of his career, he did so on his own or working with a small group to fulfill contracts for companies like Western Union. He was so exceptionally successful in his inventing business that he was eventually able to run his own large research operation to work exclusively on inventing, either for corporate contracts or to invent a specific piece of technology to fill a hole in the market that Edison saw.8
However, Edison may have been more the exception than the rule. A more common archetype of the entrepreneur-inventor was a person with a day job and a little ambition. These individuals produced inventions for all types of large companies. Many of the inventions purchased or licensed by the railroad companies, which invented almost none of the technology they used, were provided by railroad employees who were not hired to invent at all.9 These were the types of inventors generally submitting ideas to be evaluated by the patenting departments at these companies. Even Edison Electric, the brainchild of Thomas Edison, largely built on his invention of the lightbulb as well as dozens of his other innovations that made the electrification of buildings feasible, innovated in the long run largely by acquiring individuals’ patent submissions.10
Private Sector Affiliated University Research Labs
Pre-1950, the federal government was nowhere near the behemoth university research funder that it is now (I go into this in depth in my previous post). From 1909 to 1939, federal funding made up somewhere between 4% and 7% of university revenue. Instead, universities relied heavily on state and industry funding.11
Their share of revenue from state funding in this period was closer to 20% to 30%. In return for heavy state funding, research universities developed specialties that were specific to the industrial activity of their state.12 Examples of this include the University of Oklahoma pioneering innovations in petroleum engineering such as reflection seismology and the University of Illinois producing cutting-edge research in crop production that was actionable for regular farmers.
Many of the best universities also relied heavily on industry partners and contracts for funding. This took the form both of industry-sponsored labs and studies that produced research directly related to the industry’s work and of “consulting” contracts. These consulting contracts were not seen as the sideshows to teaching and research that they are today. Rather, they were seen as opportunities for professors to produce useful and exciting research, stay sharp on how industry actually functioned so they could better train university students, and earn professors and their universities much-needed income. Another major incentive was that exciting research often required expensive equipment that was much more abundant in industrial laboratories than academic ones.
It was extremely common for professors at top universities to leave their jobs for top positions in industrial research, and for those who held top industrial research positions to be offered faculty posts. Wallace Carothers, the inventor of nylon, was drawn away from his Harvard position to DuPont. Two of MIT’s top electrical engineers, Willis Whitney and William Coolidge, left their positions to continue their lines of research at General Electric. And, in the inverse, much of MIT’s early faculty were hired directly out of industry jobs or even taught part-time while still working in industry full-time.13 14
In essence, the line between academic research and industrial research was porous. Research institutions at the time, given their different incentives, were both more able and more willing to produce work that provided direct inputs into corporate inventions. “Applied” research output, in their eyes, was not a rough idea and some specifications that a corporation would hopefully develop to the point of being market-ready. It was something new and non-obvious that was worked on until it was essentially market-ready and usable. Modern applied research has either lost this understanding or abandoned it in search of a less clear purpose.
Just this week I talked with several top academic life-science researchers in a particular disease area who said that one of the reasons their patents were not being licensed is that pharmaceutical companies would need to do a significant amount of R&D on them before the science would be ready for pre-market clinical trials. The academics have stopped conducting research on those patents because that line of work can no longer yield additional highly citable papers. But the pharmaceutical companies don’t believe there is enough potential profit to justify the extensive additional “development” required to make those patents usable. So, little is happening.
You can’t be upset at either party for following their incentives, but a well-functioning applied research ecosystem would not allow equilibria like this to persist to the extent that they do. Situations like this are sadly common in a way they never would have been in the era of private sector affiliated university labs.
The sign of a healthy area of applied research just might be one characterized by the type of revolving door between academia and industry described above. A revolving door would be an indication that similar work is happening on both sides of the door and that top people think cutting-edge, useable things are being done on both ends.
Early Forms of the Corporate R&D Lab
“The intellectual freedom and ability to share your results with the rest of the field are the major draw,” is an argument I often hear in favor of academic research over corporate R&D labs. And that is a very fair argument. In fact, corporate research labs in that golden era of productive inventing felt the same way.
They saw the ability to publish and explore new hypotheses to a reasonable extent as a major factor in being able to recruit and retain top research talent. The quality of publications coming out of these top industrial research groups kept pace with and often exceeded the quality of publications from top universities throughout the early 1900s.
Scientific Citations Per Publication by Sector
The first half of the 1900s saw the steady rise in prominence and success of the corporate research ecosystem. Internal corporate R&D grew beyond the simple purpose of vetting and acquiring individual patents from inventors, driven by the increasing complexity of implementing research in certain areas as well as antitrust constraints that prevented large corporations from buying innovative firms.
Many of these corporate labs were thought of as almost equivalent to university labs in the research they carried out. But, sitting under the umbrella of a for-profit company, they were always focused on concrete problem solving. This level of application, sometimes known as mission-oriented research, did not come at the expense of high-quality publications. Everyone knows the stories and successes of Bell Labs and people like William Shockley and his colleagues developing the transistor there in 1947, but this success was not limited to Bell Labs. DuPont’s central R&D unit, for example, was also a bastion of high-quality work that resulted in a Nobel Prize. In the 1960s, DuPont’s R&D unit published more articles in the Journal of the American Chemical Society than MIT and Caltech combined.16
The lines between academic and industrial research were blurred to the point that large companies founded scientific associations as well. The Optical Society of America was founded in 1916 by a group of employees at Eastman Kodak and the Acoustical Society of America was founded at Bell Labs in 1928, for example.17
The reign of the large corporate lab, sadly, does not seem to have been meant to last. As Arora et al. wrote,
Large corporate labs, however, are unlikely to regain the importance they once enjoyed. Research in corporations is difficult to manage profitably. Research projects have long horizons and few intermediate milestones that are meaningful to non-experts. As a result, research inside companies can only survive if insulated from the short-term performance requirements of business divisions. However, insulating research from business also has perils. Managers, haunted by the spectre of Xerox PARC and DuPont’s “Purity Hall”, fear creating research organizations disconnected from the main business of the company. Walking this tightrope has been extremely difficult. Greater product market competition, shorter technology life cycles, and more demanding investors have added to this challenge. Companies have increasingly concluded that they can do better by sourcing knowledge from outside, rather than betting on making game-changing discoveries in-house.
So, while these labs may have done a large societal good, they might not be destined to be a permanent fixture, given the difficulty of running a research lab inside a company whose main line of business and day-to-day success metrics are very different from those of the lab itself.
But the productivity-boosting scientific successes of these corporate R&D labs should highlight the desperate need for new research entities to take their place.
Better ways forward
Throughout the middle of the 1900s, universities pushed to loosen ties with corporations so they could conduct more properly exploratory basic research. This was not because they didn’t feel they were adding significant value in their work with industry, but because they felt they could add even more value if they were free to do truly exploratory work. That does not seem to be how things played out. These researchers disengaged from an exceedingly productive partnership with industry in exchange for “basic” research that is not very basic at all.
We are now in desperate need of new organizations that enable real basic research AND new ways to facilitate the kinds of applied research partnerships that characterized the research ecosystem of the early 1900s.
Producing more good basic research in the university system is a difficult structural problem, given researchers’ propensity to chase sure-fire citations and grant money. But, in general, any grant money applied toward basic research should at least encourage the kinds of behaviors that the Howard Hughes Medical Institute encourages in its grant recipients. The institute’s steady grant funding is essentially guaranteed, and recipients are encouraged to explore frontier areas that aren’t guaranteed to succeed. With this encouragement, and the financial security of not losing future grants if they underperform, HHMI researchers tend to produce far more extremely highly cited papers...as well as far more bad papers. But that’s the point of basic research. That’s what it should look like! (I go even deeper on the basic research point in a later piece.)
Organizations and initiatives like Convergent Research’s Focused Research Organizations are a great start in working towards actionable applied research, but many more initiatives that work towards goals like this are needed. There may be no one silver bullet to solve this problem, but many creative solutions and arrangements can exist, both non-profit and for-profit. In coming pieces, I hope to outline numerous examples of projects that can work to fill this void for different areas of research.
The large corporate research labs do not seem like they’ll be coming back, at least not in their old form. They went away because it was difficult to manage a complex research operation whose progress and outputs looked very different from the activities and metrics that drove the rest of the company day-to-day.
But that does not mean that their impact was not substantial and that the type of work they did can’t exist in some other form. They are the proof that if you give top research minds the right incentives and practical goals to achieve, you can get an astounding amount of productivity out of them.
Those building new organizations in the applied research space should keep this in mind.
Thanks for reading! Please reach out to me on Twitter with any questions, ideas for future pieces, or if you’d like to talk about how to build a great research organization! Please subscribe or tell a friend if you liked it. It helps me out a lot:)
Additional Citations and Footnotes
1. Refer to the paper for more granularity on this point, as they measured it several different ways with slightly different results for each methodology. The sentence, as it stands, holds for the majority of their methods.
2. Stokstad, Eric. From Junk Bond King to Cancer Crusader. Science. 1999. https://www.science.org/doi/full/10.1126/science.283.5405.1100
3. Daniels, Cora. The Man Who Changed Medicine. Fortune Magazine. 2004. https://fortune.com/2013/03/03/the-man-who-changed-medicine-fortune-2004/
4. Daniels, Cora. The Man Who Changed Medicine. Fortune Magazine. 2004. https://fortune.com/2013/03/03/the-man-who-changed-medicine-fortune-2004/
5. Stokstad, Eric. From Junk Bond King to Cancer Crusader. Science. 1999. https://www.science.org/doi/full/10.1126/science.283.5405.1100
6. This entire section derived much inspiration from Arora et al.’s 2019 paper, “The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth.” Even where I did not cite them in this section, their paper underlies much of my thinking. (https://www.nber.org/papers/w25893)
7. Arora et al. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. NBER. 2019. https://www.nber.org/papers/w25893
8. Morris, Edmund. Edison. Random House, 2019.
9. Usselman, S. (1999). Patents, engineering professionals, and the pipelines of innovation: the internalization of technical discovery by nineteenth-century American railroads. In Learning by Doing in Markets, Firms, and Countries, pages 61–102. University of Chicago Press.
10. Arora et al. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. NBER. 2019. https://www.nber.org/papers/w25893
11. Arora et al. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. NBER. 2019. https://www.nber.org/papers/w25893
12. Arora et al. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. NBER. 2019. https://www.nber.org/papers/w25893
13. Arora et al. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. NBER. 2019. https://www.nber.org/papers/w25893
14. Alexander, Philip. A Widening Sphere: Evolving Cultures at MIT. The MIT Press. 2011.
15. This graph plots the number of forward scientific citations per publication in Clarivate Web of Science, by the sector of the author’s affiliations. “Top Research Universities” refer (in alphabetic order) to UC Berkeley, Brown, Bryn Mawr, Caltech, Chicago, Clark, Columbia, Cornell, Harvard, Hopkins, Illinois, Iowa, Lafayette, MIT, Michigan, Minnesota, Missouri, Nebraska, North Carolina, NYU, Penn, Princeton, Stanford, Wisconsin, and Yale. The “Corporate” sector includes parents and subsidiaries of 200 large industrial firms included in Kandel et al. (2018). We fuzzy-match these university and firm names to the address column of Web of Science publications and count the number of forward scientific citations these publications receive until 2016.
16. Arora et al. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. NBER. 2019. https://www.nber.org/papers/w25893
17. Weart, S. R. (1979). The physics business in America, 1919–1940: A statistical reconnaissance. In The Sciences in the American Context: New Perspectives, page 321.
18. Total factor productivity is the geometrically weighted average of the ratio of real GDP to labor input and the ratio of real GDP to capital input, with respective weights of 0.7 and 0.3. Labor input consists of hours from the sources of the above citation multiplied by an index of labor quality, taken from the “educational productivity index” of Goldin-Katz (2008, Table 1.3, column 2, p. 39). The Goldin-Katz index is available for 1915–2005. Our educational index is extrapolated backward from 1915 to 1890 using the Goldin-Katz 1915–1940 growth rate, and it is extrapolated forward from 2005 to 2014 using the Goldin-Katz 1980–2005 growth rate. Capital input consists of the new capital series described in the Data Appendix of Gordon 2016, shown for 1920–1970 in figure A–1 by the line labelled “Add Government Capital.”