Innovation and the Growth of the American Economy

Warming Up to Innovation

Technological change is a central part of economic change and growth. The famous portrait by German immigrant Christian Schussele, “Men of Progress” (1862, at the National Portrait Gallery, Washington, D.C.), can be a departure point for talking about changes over time in American history, at least in perceptions about the sources of innovation and about who the innovators in our nation are.

Schussele’s painting includes familiar inventors and innovators—Samuel Colt, inventor of the revolver that bears his name; Cyrus McCormick, inventor of the mechanical reaper; Samuel F. B. Morse, the American inventor of the electromagnetic telegraph; and Elias Howe, one of the inventors of the sewing machine. In the background is Benjamin Franklin, who achieved world fame for his scientific work in electricity, his invention of the lightning rod and the “Franklin Stove,” and his leadership in establishing one of the nation’s oldest scientific societies, the “American Philosophical Society in Philadelphia for the Promotion of Useful Knowledge.”

In 2000, the American Heritage of Invention & Technology magazine ran a contemporary version of this painting, which had to be called “People of Progress” because it includes women. On the wall, the portrait of Dr. Franklin has been replaced by one of Thomas Edison, the nation’s most prolific inventor; Einstein is included, along with John von Neumann, Wilbur and Orville Wright, Henry Ford, and figures with important industrial connections. To stimulate thought and discussion on who is responsible for innovation in our society today, I like to ask students in my class, Technology in American Society, to design their own up-to-date versions of “Men of Progress.” I allow sketches of inventors and entrepreneurs or quick drawings of inventions. The results often include pharmaceuticals, especially the “Pill,” laptop computers, the cell phone, and, almost always, the iPod. Inventors and entrepreneurs invariably include Bill Gates and Steve Jobs, but exactly what roles these men played in their respective innovations and the inventions that lay behind them, or who “invented” the Pill and the cell phone, is, almost without exception, unclear in students’ minds.

At the beginning of my course, I also like to use time capsules to get students to think about what defines the society in which they live. Later in the course, I show them the contents and the design of well-known time capsules that were buried in the past, such as Westinghouse Corporation’s 1939 New York World’s Fair time capsule. Westinghouse buried another one in almost the same spot as part of its exhibition at the 1965 New York World’s Fair; both were dug up a few years ago. Time capsules are an excellent device for thinking about what’s really important in our society and culture. I use four or five earlier dates—when the Founding Fathers wrote the Constitution; 1862, when the Schussele painting was done; 1939; 1965—and today. Students fill out forms on which they list artifacts or objects that they think the society of the particular time might have selected for inclusion in a time capsule intended to be opened by our class. These are put in dated “time capsules,” brown envelopes, which we then open and discuss when we reach that time chronologically in the course. This is just another exercise by which I challenge students to think across time about material culture, technological and social change, and economic growth. Occasionally I get to see lights go on in their minds.

In part because of global climate change, we have many challenges to confront. One of the most symbolic is finding a replacement for our almost universal trope for a bright idea, an invention, a “flash of genius,” or a Eureka moment: the incandescent light bulb. Thomas Edison’s incandescent light bulb is being regulated out of existence because of its gross inefficiency; Australia has already taken this step. Will the compact fluorescent bulb or its successor, the light-emitting diode, evoke the same image of invention or a brilliant idea in our minds as the ubiquitous incandescent bulb? Or will something else take its place? If so, will it be the Post-it note? Pose these questions to your students. The image of a brilliant idea or invention for my students is, almost without exception, the iPod, as rendered by Apple’s graphic artists.

These, then, are some of the ways that you can get your students warmed up to inventors, invention, innovation, and innovators. By doing so, you can then move on to what is really a fundamentally important, related matter: innovation and economic growth.

The Study of Innovation

In what follows I want to trace an arc of how we in the United States have historically perceived who in our society is responsible for innovation. This perception has changed over time, in part because of some very fundamental changes in our economy, our information technology and the costs of information, our society’s perceptions of competition and monopoly in business, and a variety of other factors. One has to ask the following questions: What is innovation? Where does it come from? What are its principal drivers? Why is it important? Who is responsible for making it happen? The answers to these questions have changed over time.

The person who has brought more attention to the importance of innovation than anyone else is Joseph Schumpeter (1883-1950), an Austrian-American economist who coined the term “Creative Destruction” to describe one of innovation’s roles in the economy. He has not always been well known but increasingly is becoming so because of the importance of innovation in the American and global economy. Thomas K. McCraw’s Prophet of Innovation: Joseph Schumpeter and Creative Destruction (Belknap Press, 2007) provides a wonderful account of his contributions to economics, as well as a sympathetic account of this great thinker’s tragic life.

Schumpeter lived a tortured personal life, but he revealed to us the importance of innovation in the growth of economies. In three works in particular, he refined his thoughts on this matter: Chapter 2, “The Fundamental Phenomenon of Economic Development,” in his Theory of Economic Development (1911); an article, “The Instability of Capitalism,” Economic Journal, 1928; and Chapter 7, “The Process of Creative Destruction,” in Capitalism, Socialism, and Democracy (1943). In these works he focuses on entrepreneurship and innovation: he identifies who the entrepreneur is and what the entrepreneur’s function is in the growth of economies. The entrepreneur is responsible for innovation, which Schumpeter reduces to some form of new combination. This broad idea is fundamental in Schumpeterian thought, and it has become the dominant view among economists concerned with economic growth today.

In Theory of Economic Development Schumpeter says that his concept of innovation covers five cases:

      1. The introduction of a new good—that is, one with which consumers are not yet familiar—or of a new quality of a good.
      2. The introduction of a new method of production, that is one not yet tested by experience in the branch of manufacture concerned, which need by no means be founded upon a discovery scientifically new and can also exist in a new way of handling a commodity commercially.
      3. The opening of a new market, that is, a market into which the particular branch of manufacture of the country in question has not previously entered, whether or not this market has existed before.
      4. The conquest of a new source of supply of raw materials or half-manufactured goods, again irrespective of whether this source already exists or whether it has first to be created.
      5. The carrying out of the new organization of any industry, like the creation of a monopoly position (for example through trustification) or the breaking up of a monopoly position.

Schumpeter used innumerable variations of these cases in all his subsequent work. In his 1939 book on business cycles, he defines innovation as, “in short, any doing things differently in the realm of economic life.” This is my favorite of his entire repertoire because it is so simple to remember. It embraces not only innovation as we’ve been discussing it here but also institutional innovation, organizational innovation, and innovation within one’s own life—any time one does something differently and derives benefits from the change.

The entrepreneur is the innovator in Schumpeter’s conception. His original word for the entrepreneur was der Unternehmer, literally “undertaker”—not in the sense of mortician but in the sense of the French verb entreprendre, to undertake. Schumpeter identifies the entrepreneur as the person who makes new combinations and carries them out. Entrepreneurs are change agents; they create the basis for economic growth. Schumpeter makes an important distinction in his 1939 definition between inventors and entrepreneurs. In some respects, Schumpeter plays down the significance of the inventor, although he says that inventors can be entrepreneurs as well. But mere invention is insufficient for economic growth. The entrepreneur is the person who transforms inventions and pushes them into what is usually a highly resistant marketplace or organization. The entrepreneur is the person who goes against the stream, often in the face of ridicule, resistance, and rejection, introducing change that brings about economic growth. Entrepreneurs are a very special type. Schumpeter emphasizes this point again and again in his works. Fortunately, with so many successful entrepreneurs living now in our “YouTube Generation,” we can catch a glimpse of what Schumpeter had in mind when he described the entrepreneur and the “willpower” necessary to succeed. See, for instance, this short clip of Jeff Hawkins, one of the founders of Handspring (and earlier Palm), on the passion required of the successful entrepreneur: https://entr200.wiki.zoho.com/Introduction-to-Entrepreneurship.html. [This video snippet comes to us through the “Introduction to Entrepreneurship” web pages of Purdue University’s Entrepreneurship Certificate Program Wiki.]

In his 1928 article, “The Instability of Capitalism,” Schumpeter talks about two forms of capitalism. One is competitive capitalism, with its perfect competition, and the other is capitalism under what he calls “trustification.” Competitive capitalism, he says, is inherently unstable, and innovation under it is confined largely to efforts to lower the cost of manufacturing products or delivering services. Yet, beginning in the late 19th century, there emerged a form of capitalism that he calls “trustified capitalism.” Under trustified capitalism we find a very different form of innovation, and the innovators are fundamentally different. As early as 1928, Schumpeter began to recognize that the West had been moving away from competitive capitalism into trustified capitalism. While the entrepreneurial function had not changed materially, the nature of the entrepreneur, the person responsible for what he called “the entrepreneurial function,” had. In competitive capitalism, it was individuals, often proprietors or employees of existing companies, who made innovations. But under trustified capitalism, innovation occurs in emergent organizations that we would today identify as industrial R&D laboratories within large, diversified corporations. Schumpeter worried about what would happen under trustified capitalism and whether capitalism itself would be stable. From the vantage point of 1950—not very long after his classic work, Capitalism, Socialism, and Democracy, appeared—these corporate R&D laboratories looked like a juggernaut in the economy of the United States. Corporate industrial research and development programs appeared to be the major sources of technological change and hence of economic growth, despite their parent firms being largely monopolies in their respective industries. Schumpeter had a love/hate relationship with these trusts: he saw that they had solved the problem of innovation, and yet they were monopolies.

Fortunately for Schumpeter—and for societies living under trustified capitalism—there was an overriding process, which he called “creative destruction.” He talks about the “perennial gale of creative destruction.” Various other people who have written about innovation talk about radical innovation, market-altering innovation, game-changing innovation, breakthrough innovation—indeed, there is a whole lexicon devoted to terms that are essentially trying to describe the kind of innovation that Schumpeter envisaged as being behind the perennial gale of creative destruction. Such innovation basically casts out the old order and establishes a new one when it comes to technologies and industries.

Why Does Innovation Matter?

So why is innovation important? One way to answer this question is to go back to a classic paper published in 1957 by Robert Solow, an economist at MIT, entitled “Technical Change and the Aggregate Production Function” (Review of Economics and Statistics). Solow was one of the first major economists of the postwar period to examine technological change seriously. In this paper, he studied the sources of productivity growth over U.S. history and concluded that when he accounted for all the increases in land, labor, and capital inputs, only about 40 percent of the nation’s productivity growth could be explained by these conventional economic input factors. Less than half of the productivity growth in American history could be accounted for through normal means—i.e., the means employed under competitive capitalism in Schumpeterian terms. The other 50-60 percent of productivity growth has come to be known as the “Solow residual.” Solow argued that technological change essentially constituted this residual. Technological change—distinct from simple increased inputs of land, labor, and capital—thus was a principal source of economic growth. This phenomenon in economic growth had not been formally recognized by any economist to date, though had he been living in 1957, Schumpeter surely would not have found Solow’s conclusions surprising. Schumpeter would have said, “Yes, this residual is a measure of the product of the perennial gale of creative destruction, which stems fundamentally from innovation.” Solow won the Nobel Memorial Prize in Economics in 1987 for this work, which has become a basic building block of economists’ work in growth theory ever since.
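For readers who want the arithmetic behind the residual, here is a minimal sketch of the growth-accounting logic, assuming constant returns to scale, competitive factor payments, and a technology term A that raises the productivity of all inputs; the functional form and the illustrative numbers below are standard textbook devices, not figures from Solow’s paper. Output is written as

\[
Y = A\,F(K, L)
\quad\Longrightarrow\quad
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L},
\]

where \(\alpha\) is capital’s share of national income. Output, capital, and labor can all be measured, so the technology term is computed as whatever is left over:

\[
\underbrace{\frac{\dot{A}}{A}}_{\text{the residual}}
\;=\; \frac{\dot{Y}}{Y} \;-\; \alpha\,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{L}}{L}.
\]

To illustrate: if output grows 3 percent a year, capital 4 percent, and labor 1 percent, with \(\alpha = 0.3\), then measured inputs explain only \(0.3(4) + 0.7(1) = 1.9\) percentage points of growth, leaving a residual of 1.1 points, more than a third of measured growth attributed to “technical change” rather than to added inputs.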

Where Does Innovation Come From?

Given that we now have some measure of the importance of innovation, let us turn to the question of where innovation comes from and who is responsible for it. We begin by going back to the “founding fathers” of the United States to see what they thought on these questions. The Constitutional Convention of 1787 explicitly discussed them. The outcome of that debate is embedded directly in Article I, Section 8, Clause 8 of the Constitution, in what is known as the copyright and patent clause. Look at the language carefully, especially where the comma is placed. The provision gives the federal government the power “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” (As Leo Marx has pointed out in a fine article about the word “technology” and its origins, “useful arts” in the 18th-century lexicon meant what we would regard today as “technology,” although our use of the word is far more embracing and abstract.) The copyright and patent clause is the only such statement about progress in science and technology in the Constitution.

There was extensive discussion at the Constitutional Convention about this clause. The founding fathers had an aversion to the word “patent,” which does not appear in the Constitution, because they detested the exclusive privileges inherent in the very word. South Carolina’s Charles Pinckney, for example, wanted the federal power “to establish seminaries for the promotion of literature and arts and sciences, to grant charters of incorporation, to grant patents for useful inventions, and to establish public institutions, rewards, and immunities for the promotion of agriculture, commerce, trades, and manufactures.” James Madison wanted the Constitution to specify the establishment of a university as the means to promote progress in science and technology. Benjamin Franklin wanted a series of internal improvements to promote science and technology. But when the debate was over, the only provision that stayed in the Constitution was the patent and copyright provision, which is reserved for individual inventors and authors. We might therefore term what is in the Constitution a very strict construction of the federal government’s responsibility for innovation in the American economy. Innovation was seen as the work of individual inventors, whom the federal government could reward with a patent (in exchange, remember, for public disclosure of the invention).

Over time, of course, the federal government’s involvement in the promotion of science and technology has changed, owing to various particular historical circumstances. You can work with your students to try to understand why and when that responsibility has changed. During the Cold War, if society had a problem, most American citizens, including students, immediately thought that the federal government, particularly the military, would be the best place to go to get a solution. That’s because of the way the federal government had assumed such a massive role in technological innovation during World War II and in the early and most dangerous years of the Cold War. Indeed, the federal government played a wide-ranging, diverse, and paramount role in the “innovation system” of the Cold War era. Today, however, a score of years after the Berlin Wall fell and the Cold War terminated without a formal armistice, most of my students immediately think of going to the market, either as a buyer or even more likely as a small-scale, entrepreneurial provider, when a new technology is needed to solve a problem.

On the eve of World War II—and one can easily verify this by re-visiting the 1939 New York World’s Fair, the “World of Tomorrow” Exhibition—most citizens, including students, would have looked to the highly centralized, well-organized industrial R&D laboratories of the large firms in highly concentrated industries, such as those of AT&T, General Electric, General Motors, Westinghouse, DuPont, and Eastman Kodak. Between roughly 1850 and 1900, Americans would have summoned the creative genius of independent, heroic inventors—the Edisons, the Bells, the Sperrys, the Morses, the “Men of Progress” with whom I began.

At least in terms of the public’s common perceptions, in 1850 most innovation was done principally through market relationships. If you were a manufacturer or even a service provider and had a problem, you would likely buy an invention to solve it (thereby innovating, in Schumpeterian terms), or, more likely, some independent inventor would approach you and offer to sell you an invention that would improve upon what you were doing. A century later in U.S. history, in the middle third of the 20th century, most Americans thought that corporate R&D labs generated almost all innovation. This was the age of the man in the white coat.

Today, almost a decade into the 21st century, we are moving swiftly back toward the more market-oriented view of where innovations come from. Universities, start-up companies, contract engineering and research organizations, and independent inventors and engineers are viewed as sources, eager to supply innovations—for a price.

The above-described three stages are caricatures of the “conventional wisdom.” In our own time, statistics show that industry has been spending far more money on internal R&D than at any time in history, and much more than government, on either a relative or an absolute basis. Somewhere between 70 and 75 percent of all trained scientists and engineers work in industry. But the perception is that we’re moving more toward market relationships. Procter & Gamble has ballyhooed its new innovation strategy—so-called “open innovation”—and billed it as a “game changer.” Professors from leading business schools across the developed world are selling the idea of “open innovation systems” as the “new paradigm” of innovation. Such promotion reflects real, fundamental changes that have taken place over time in the nature of markets, information technologies, public policies, and professional norms and ideals among scientists and engineers, among other things. These changes have led many corporations to dis-integrate their R&D laboratories and especially to move more and more of their R&D away from fundamental and pioneering research toward the development of ideas and inventions gained through market relationships. Indeed, some firms have entirely closed down their once-renowned research laboratories and have put parts or all of their patent portfolios up for sale on their corporate websites.

In the 19th century, as I’ve noted, innovation was largely a product of market relationships. Then some firms, as they got bigger, particularly large science-based firms, began to move toward vertical integration, not just in manufacturing and marketing but also in research and development. That is, they founded organizations inside their firms specifically charged with responsibility for innovation, rather than continuing to rely principally on the market. They did this to lower the cost of innovation and especially to reduce uncertainties with regard to innovation, and they also did it in response to federal law, in particular the Sherman Antitrust Act of 1890 and the beginning of its enforcement in the early 1900s with the famed trust-busting of the Theodore Roosevelt administration. With increased enforcement of antitrust, these corporate research laboratories emerged, gained strength, and reached their pinnacle roughly in the two or three decades after World War II. Of course, much of the research done by some corporations during this era was funded by the federal government, especially the military. National security in both war and Cold War had given the federal government an unprecedented role in funding research, development, and innovation. With decreasing federal and internal commitments to basic research, many of these showplaces of R&D began to disintegrate around 1975, and this disintegration has been a slow and often painful process.

Why did they begin to fall apart? As noted, some of it has to do with information technology, and a lot has to do with public policy. In 1980, Congress passed and President Jimmy Carter signed the Bayh-Dole Act, which allowed universities to patent and take to market innovations that came out of government-funded research and development. This act and several related acts that followed over the next four years were a response to the competitiveness challenge that Japan posed to the United States at the time. In 1984, U.S. antitrust law was overhauled under the Reagan administration; in particular, corporations were for the first time given permission to conduct joint R&D on a “pre-competitive” basis. Previously, corporations would not join together and share information in this way, though the federal government had often encouraged them to do so. These were just some of the very important public policy changes that occurred in response to perceived threats to the nation’s economic security. When all of the changes are netted out, we see after 1975 a shifting ecology of innovation in which corporate research has played an increasingly diminished role.

At no time in American history have Americans fully agreed on who is responsible for innovation in their society. The Constitutional Convention debate was merely a prelude to debates that took place throughout the 19th century and beyond: with the rise of antitrust and its later overhaul; with World War II and the Cold War; and with competitiveness threats from Japan, now China, and globalization generally. The debate will continue.

Particularly with the emergence of the Kauffman Foundation, the Lemelson Foundation, and other foundations and organizations that promote innovation, there is concern that the U.S. is not being sufficiently innovative. On this view, responsibility for innovation rests, at the most fundamental level, with individuals, and we (through both public and private means) need to train our students to be innovative, to equip them with the tools to be innovative, and to ensure that the institutions of the market provide the right incentives to those so inspired. In this sense, the Kauffman Foundation is a reflection of our own times. My prediction as a historian is that this view will evolve and that this phase of widespread concern about individualized innovation will pass; we could swing back to a much more corporate-centered or government-oriented view of who is responsible for innovation. For the moment, however, let us all become more familiar with the work of Joseph Schumpeter, the Kauffman Foundation, and other individuals and organizations that focus our attention on the role of innovation in our society. After all, there is near-universal agreement that innovation is the principal source of economic growth and welfare.