An apparent contradiction continues to bedevil the American economy. Business has been investing heavily in all forms of information technology (IT)—computers and related equipment, networking—in order to increase efficiency. Yet the official data on productivity continue to show relatively poor performance gains.

Moreover, as we saw in an earlier IRConcepts (Winter 1998), the United States has been enjoying improved economic growth, with both unemployment and inflation declining to lows not seen in decades, in defiance of economic theories. A logical explanation would be heightened productivity. Again, the data fail to reveal that this is the case.

To shed some light on the mystery, this issue of IRConcepts examines the subject of productivity, avoiding, to the extent possible, technical statistical concepts. The following pages define productivity and demonstrate its importance, then describe its measurement, historical performance, and variation over time and by economic sectors. Along the way, we search for clues that might help solve the mystery.

We hope to end the investigation by putting the evidence on the table and saying, “It’s obvious, my dear Watson.” So, without further ado, the game’s afoot.

A society can increase its total output by using more inputs. A larger labor force, for example, can lead to an increase in gross domestic product (GDP). In such a case, the pie would be bigger, but not its shares, since there would then be additional workers among whom it must be divided. To achieve larger slices (i.e., a rising standard of living), output must rise more than inputs. We call output per unit of input productivity. If, over time, we achieve greater output with the same resource input, then there would be more units of output to be distributed.

Productivity thus determines changes in living standards. Real wages (i.e., purchasing power) cannot rise faster than output per man-hour. If nominal wages go up more than productivity, labor costs rise and prices are pushed up, wiping out that gain in terms of real wages. Indeed, the government’s guideposts for noninflationary wage and price behavior, in effect from 1962 to 1965, called for wage rate increases, including fringe benefits, to be equal to the trend rate of productivity change in the economy at large.
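The arithmetic behind that statement can be made concrete with a small illustration. The numbers below are hypothetical, and full pass-through of higher labor costs into prices is assumed; under those assumptions, real wages end up rising only as fast as productivity.

```python
# Hypothetical illustration: nominal wages outrun productivity, unit labor
# costs rise, prices follow, and the real wage gain shrinks back to the
# productivity gain.

wage_growth = 0.05          # nominal wages up 5 percent (hypothetical)
productivity_growth = 0.02  # output per man-hour up 2 percent (hypothetical)

# Unit labor costs rise by the excess of wage growth over productivity growth.
unit_labor_cost_growth = (1 + wage_growth) / (1 + productivity_growth) - 1

# Assume the higher labor costs are passed through fully to prices.
price_increase = unit_labor_cost_growth

real_wage_growth = (1 + wage_growth) / (1 + price_increase) - 1

print(f"unit labor costs: +{unit_labor_cost_growth:.2%}")  # about +2.9%
print(f"real wages:       +{real_wage_growth:.2%}")        # +2.0%, the productivity gain
```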

Differences in rates of productivity improvement, however, have a tremendous impact on real wages over time. A 2.5 percent rate of growth would produce a doubling in 28 years, but a 1 percent rate would require 70 years for real wages to double. (At a 2.5 percent rate, real wages would increase nearly sixfold in those 70 years.)
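The compounding behind those figures is easy to check; here is a minimal sketch of the arithmetic.

```python
import math

def years_to_double(rate):
    """Years for real wages to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

def growth_factor(rate, years):
    """Cumulative growth factor after a given number of years."""
    return (1 + rate) ** years

print(round(years_to_double(0.025)))       # 28 years at 2.5 percent
print(round(years_to_double(0.01)))        # 70 years at 1 percent
print(round(growth_factor(0.025, 70), 1))  # about 5.6, i.e., nearly sixfold
```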

It is feasible to measure output per unit of each type of input—land, labor, and capital—and come up with a measure of productivity for each of them or for all of them combined. John W. Kendrick pioneered the measurement of output against a combination of the capital and labor inputs, and the Bureau of Labor Statistics (BLS) calculates indexes of multifactor productivity for major (i.e., two-digit) manufacturing sectors. According to the BLS, “These indexes, also called ‘KLEMS’ multifactor measures, compare changes in output to changes in a composite of all the inputs used in production—capital, labor, energy inputs, nonenergy material inputs, and business services.”
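The idea can be illustrated with a back-of-the-envelope calculation. Everything in the sketch below (the growth rates and the cost shares) is hypothetical, and the simple share-weighted difference stands in for the more elaborate index formulas the BLS actually uses.

```python
# Hypothetical KLEMS-style illustration: output growth is compared against a
# cost-share-weighted composite of all inputs, not against labor alone. The
# growth rates and cost shares here are invented for illustration.

output_growth = 0.030  # annual growth of sectoral output (hypothetical)

# Input growth rates and cost shares: capital, labor, energy, materials, services.
inputs = {
    "capital":   (0.040, 0.30),
    "labor":     (0.010, 0.40),
    "energy":    (0.020, 0.05),
    "materials": (0.025, 0.20),
    "services":  (0.030, 0.05),
}

composite_input_growth = sum(g * share for g, share in inputs.values())
multifactor_productivity_growth = output_growth - composite_input_growth

print(f"composite input growth:   {composite_input_growth:.2%}")           # 2.35%
print(f"multifactor productivity: {multifactor_productivity_growth:.2%}")  # 0.65%
```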

The data show a general slowing of productivity growth starting in the 1970s. In recent times, however, manufacturing productivity growth has regained some, but not all, of the pace of the early postwar period.

Our major concern here is with labor productivity, that is, output per man-hour, an area in which the BLS has been gathering data in one form or another for a century. The BLS derives its indexes of labor input from production-worker hours, the number of nonproduction workers, and an estimate of average annual hours paid for nonproduction workers. Production-worker hours include all hours paid for; overtime and other premium-pay hours are counted on the basis of actual time spent at the plant. For trade and service industries, the BLS derives estimates of all-person hours by summing the aggregate hours for paid employees and the estimated aggregate hours for partners, proprietors, and unpaid family workers. Output for the economy and for many industries is based on dollar value deflated for changes in prices. For some individual industries, such as electric and gas utilities, physical output is measured.

Productivity data are available for the total private economy, the nonfarm private economy, major sectors of the economy (e.g., manufacturing), and a group of individual industries. By dividing the output figures by the man-hour ones, the BLS comes up with output per man-hour. BLS analysts then construct an index with a particular year chosen as the base and assign it the value of 100. Percentage changes from year to year are then recorded relative to the base. The current base year, for example, is 1992 (1992 = 100). In 1998, output per hour of all persons in the business sector was reported to be 107.7, which means that labor productivity had risen 7.7 percent over the six years, or at a rate slightly more than 1 percent a year.
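Turning those index values back into an annual rate is a short calculation; here is a sketch using the figures just cited.

```python
# 1992 = 100 (base year); 1998 index of output per hour of all persons in the
# business sector = 107.7, as cited in the text.

base_value = 100.0
value_1998 = 107.7
years = 1998 - 1992

cumulative_gain = value_1998 / base_value - 1
annual_rate = (value_1998 / base_value) ** (1 / years) - 1

print(f"cumulative gain: {cumulative_gain:.1%}")  # 7.7%
print(f"annual rate:     {annual_rate:.2%}")      # about 1.2% a year
```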

Since the 1940s, the BLS has expanded the program to cover 178 industries, at various levels of aggregation, in manufacturing and nonmanufacturing. In the period 1973–1994, data for 107 separate industries in the manufacturing sector showed that 92 percent of them had achieved productivity gains. The household audio and video equipment industry had the highest rate of increase—10.3 percent per year. At the other end of the scale, fabricated pipes and fittings had a 1.2 percent per year decrease in output per hour.

In 1994–1995, the BLS revised industry productivity measures to reflect generally accepted innovations and refinements in the economic theory of production and costs, as well as innovations in statistical theory. This resulted in only minor changes in long-run results. In nearly 90 percent of the industries, the average annual growth rates of output per hour for 1973–1990 were revised by 0.5 of a percentage point or less, and the changes were not predominantly positive or negative.

Collecting these data presents conceptual and practical problems. Hence, the data represent estimates of changes in output per man-hour over time. For example, it is extremely difficult to measure output of government services, so to a very large degree no productivity growth is imputed to the government sector. But, as we shall see presently, it is also not easy to measure the output of private sector services. Indeed, the major complaints of those in the “missing productivity” camp revolve around that issue.

The rate of change in output per man-hour varies from year to year, but the long-run trend shows labor productivity rising at an annual rate of 2.3 percent from 1870 to 1970. In the 1970s, however, a new trend unfolded—a measly increase of 1 percent a year. When the rate first dropped (circa 1974), economists found factors to explain the change. By now, however, those factors have run their course and the rate has not picked up. This puzzling development is at the very heart of the case for missing productivity. We shall resume our probe shortly, but let us continue to focus on the data.

The historical record reveals that movements in productivity have tended to follow the pattern of the business cycle. At the peak of an economic boom, productivity increases become smaller: Plants approach full utilization of capacity, production bottlenecks appear, and marginal workers and machinery are employed. An economic downturn affects productivity even more adversely, as companies cut back production faster than they can reduce the workforce. But, as the economy picks up once more, increased production results in fuller utilization of plant capacity and the labor force. Consequently, productivity tends to increase at a rapid rate.

Now we face another conundrum. In the current economic expansion, which began in 1992, output per man-hour did not show a spurt in its early years, but did in its later ones. From the less than 1 percent rate of the first half of the 1990s, labor productivity in the business sector jumped 2.7 percent in 1996, 1.5 percent in 1997, and 2.4 percent in 1998. This further deviation from historical trends adds fuel to the argument that the data are flawed.

Although labor productivity figures relate output to one input, they do not measure the specific contribution of that factor of production. Rather, they reflect the joint effect of a number of interrelated influences, such as changes in technology, capital investment per worker, level of output, utilization of capacity, layout and flow of material, managerial skill, and skills and effort of the workforce. Moreover, a whole set of attitudes—the idea of “progress,” the value we place on competition in a free market, getting ahead, and so on—provides cultural support for ongoing productivity improvement. These attitudes give rise to a spirit of entrepreneurship, risk taking, and innovation.

By combining these factors, we can conclude that over the long run, there are three major sources of labor productivity improvement in the economy: the quality of the labor input, the quantity of the capital input, and innovations that permit more efficient use of the labor input. Edward F. Denison, examining the period 1929–1982, concluded that improved labor quality accounted for 20 percent of the increase in labor productivity. Improved quality of capital accounted for 28 percent of the increase and more efficient labor utilization accounted for 53 percent (the major portion of which was due to technological innovation).

Obviously, a better-trained, more highly skilled worker can produce more units of output within a given time period than can a less skilled, poorly trained one. Thus, the increasing number of years of schooling that workers undergo and the vast training and development programs that industry conducts have been extremely important in increasing labor efficiency. The declining percentage of unskilled laborers and the increasing percentage of professionals (engineers, scientists, and computer experts) provide one indication of the improved quality of the labor force. Scientists can make new discoveries, engineers/systems analysts can convert them into new methods of operation, and well-educated workers can operate the new, sophisticated systems. A higher-quality labor force also tends to be healthier, the result of improved living conditions, medical advances, and proper nutrition.

The United States invests heavily in its human resources and is the world leader in average years of schooling and money spent on education. Of our total GDP, 6.6 percent is currently going for education (not including physical plant investments), compared with a 5.6 percent average for 12 other advanced nations. Eighty percent of young people today go beyond high school for additional education and training. This is reflected in the changing composition of the labor force—30 years ago, 13.6 percent were in professional and technical jobs, compared with 17.1 percent today. Meanwhile, the percentage of operatives and laborers has declined from 18.9 to 13.5 percent. Indeed, the growing demand for people with higher skills has raised occupational wage differentials, spurring young people to pursue college and postgraduate education.

The importance of the quantity of the capital-input contribution should be apparent. A carpenter using an electric saw can cut more boards of wood in an hour than if he used a handsaw. Indeed, investment is considered the engine of economic growth, and the history of the Industrial Revolution can be written in terms of ongoing increases in capital investment. We have now shifted to an Information Revolution, but this also involves capital equipment—computers that process data, do calculations, operate machinery, or control processes. In the 1990s, U.S. firms invested over half a trillion dollars in new computers and related equipment.

Despite the emphasis on information technology, official data report the United States lagging behind other rich nations, at least in relative terms, in investment outlays: in this decade, the data show the United States allocating 17 percent of GDP to investment, compared with 20 percent in West European countries and 30 percent in Japan. The problem is partially definitional. Spending on software is treated as a cost of production, but since software lasts a long time, it should be counted as a capital expenditure.

A recent study by Milka Kirova of St. Louis University and Robert Lipsey of the National Bureau of Economic Research points out, however, that nominal outlays as a percentage of GDP do not take account of the fact that capital goods prices are lower in the United States. They have a further criticism: The economic definition of “investment” covers all spending that is designed to raise future output, but the official data count only spending on physical capital (i.e., plant and equipment). Outlays on education, except that for buildings and equipment, are omitted, as are research and development expenditures (2.7 percent of our GDP but only 2.1 percent in other advanced nations).

By recalculating investment to include these categories, plus consumer durable goods and some military spending, Kirova and Lipsey conclude that U.S. total investment is actually slightly above that of the other 12 advanced nations.

Innovations enhance productivity by permitting more efficient use of labor. These include better plant layout, more efficient equipment, improvements in transportation and communication, and advances in managerial control systems. As the United States became an industrial power, economies of large-scale production (e.g., the mass production techniques of the automobile industry) became a major form of more efficient utilization. The higher labor productivity in large firms seemed to indicate significant economies of scale.

In recent years, however, traditional economies of scale have declined in importance. New technology, particularly computer-based processes, has led to more flexible production methods, allowing quicker responses to market shifts. Moreover, companies have adopted innovations in organizational structure. The old vertical structure of decision making has been flattened and layers of management peeled away. Many companies have also made greater use of new knowledge from the organizational behavior field. They have reorganized work to expand the scope of jobs, trained workers to be more versatile, and involved them in decision making with respect to their jobs. Companies that have empowered their workforces report significant gains in efficiency as a result.

The shift of redundant labor out of agriculture, particularly small inefficient farms, into more productive nonfarm employment provided another means of using labor more efficiently. The process of economic development has been one of moving from labor-intensive activities to more capital-intensive ones, with a concomitant improvement in national productivity. This process continues. Our labor-intensive, low-productivity clothing industry continues to decline because it cannot compete with cheap imports from low-wage, less developed countries—at the same time that our capital-intensive, high-productivity machinery industries expand by exporting to less developed nations.

Unfortunately, our overview makes the data more perplexing. The quality of the labor input continues to improve, the quantity of the capital input grows, and we have had a burst of technological innovation. Yet the productivity that should have resulted is missing. Perhaps a more intense focus on the recent period would uncover some clues.

Break in the Long-Run Trend in the 1970s
While changes in productivity vary over time, the fall in the rate in the 1970s was dramatic. Multifactor productivity in the economy grew 0.3 percent annually from 1973 to 1994, compared with an annual average of 2.2 percent over the preceding 25 years. In manufacturing, it fell from 1.8 percent in 1949–1973 to 0.8 percent in 1973–1992. The United States enjoyed a 3 percent growth rate in output per man-hour in the post–World War II period, 1947 to 1967. In the 1970s, it nose-dived to 1 percent a year.

The decline in productivity improvement had severe consequences: loss of world competitiveness for American products, plant closings, worker layoffs, and a much slower rise in living standards. Meanwhile, consumers continued their efforts to achieve improved living standards in line with the former rate of productivity growth, and beyond the capacity of the economy to satisfy. This fueled inflation to double-digit levels. As meager productivity gains persisted, the American ethos of forward movement, of each generation being better off than its parents, started to erode.

If productivity is not improving, then there must be something askew in those factors that cause productivity to move upward. The quality of the labor force, then, emerges as the first suspect.

During the 1970s, a greater proportion of women and teenagers entered the workforce—individuals with less training and experience. Women, moreover, were still going into the less productive occupations. Finally, while the labor force continued to become better educated, marginal returns from the increased education were declining.

A shift in industry mix—a larger share of the labor force employed in slower-productivity-growth service sectors—emerged as a second suspect. But manufacturing productivity also plummeted.

Some saw the quadrupling of the price of oil in 1973 by the Organization of Petroleum Exporting Countries (OPEC) as a culprit. The relative rise in the price of energy led many plants to shift to less energy-intensive processes, which also happened to be less labor productive.

Others said that there was a lowered rate of capital investment. While there was no clear evidence to support this claim, there was a difference in how the money was being spent. More of the nation’s capital investment was going into things that did not increase productivity. For example, a spate of government regulations with respect to pollution control and occupational safety and health in the 1970s absorbed resources that would have been used to increase efficiency. A $1 million outlay on a scrubber contributed to a healthier environment but did not increase output per man-hour. A decline in research and development expenditures may also have resulted in less technological progress.

Can we say that our bloodhounds have ferreted out the causes of the recent drop in productivity growth? It would seem that way—except for one important fact. The usual suspects have a perfect alibi: While they may have been responsible for prior losses in productivity growth, they were no longer hanging around by the 1980s.

Consider the labor force issue. The number of young workers, aged 16 to 24, increased by five million between 1971 and 1982 as the baby boomers grew up, but then fell by six million from 1982 to 1993 as the baby-bust generation matured. Young workers fell from 22.3 percent of the labor force in 1971 to 16.5 percent in 1994, and those 65 and older also declined. The result has been a rise in the prime working-age group, those 25 through 54, considered the most productive. This group has increased from 60.5 percent of the labor force in 1971 to 71.7 percent in 1994. Similarly, the increase in female labor force participation has slowed. Moreover, women now go into high-productivity occupations and industries at a much higher rate than in the past.

Likewise, the investment explanations no longer hold: Capital expenditures as a proportion of GDP have risen since the 1970s. Also, the heaviest expenditures on environmental protection and occupational safety and health are far behind us, so a greater share of recent investment has gone for productivity-enhancing plant and equipment. Moreover, the real cost of energy has dropped dramatically and no longer induces industry to shift to less labor-productive processes.

Deeper forces are at work. American industry’s declining competitiveness in the 1970s exerted a shock effect. Companies recognized that products were poorly designed, quality had deteriorated, and efficiency had lagged (growth in output per man-hour in manufacturing had plummeted to 1.4 percent per year), with the result that product prices were too high. As the 1980s opened, manufacturing restructured, introducing new computerized technology, closing obsolete facilities, flattening organizational structures, and slimming its workforce. The results were singularly successful, with American manufacturing becoming world class once again.

Manufacturing productivity began to rise, but the data failed to show a pickup in economywide productivity. This implies that productivity really is missing.

Evidence Strengthening the Case for Purloined Productivity
Let us turn from hypotheses to hard evidence supporting the case for missing productivity. The evidence shows that innovations, which usually lead to a more efficient utilization of the labor factor in production, have been rising. Patent applications have surged in the 1990s and are running about 50 percent above the 1980s figures. Business outlays on research and development have also been mounting.

Edward Yardeni, chief economist of Deutsche Morgan Grenfell, shows that, historically, growth in firms’ real sales per employee rose in line with the increase in productivity. For more than a decade, however, the two have diverged. In 1994, real sales per employee rose 4.8 percent, but recorded productivity rose only 0.6 percent. And in 1995, real sales leaped 9.3 percent, but nonfarm productivity was reported to be up a minuscule 0.3 percent.

Another historical correlation has been torn asunder: productivity rates in services traditionally paralleled those in manufacturing. In the 1980s, however, the two diverged. Manufacturing productivity has been rising at a 3.5 percent rate in the 1990s (4.4 percent in the past three years), but productivity in services has been flat. This is very perplexing, since 80 percent of the investment in computers has been in the service sector, a fact that has led many economists, including Alan Greenspan, to argue that the huge outlays must have had some effect.

A 1998 Conference Board study, “Computers, Productivity and Growth,” highlights the divergence by examining and comparing productivity results in the 1990s in computer-intensive and non-computer-intensive industries, services as well as manufacturing. In manufacturing, the outcome was as expected—the most computer-intensive sectors showed labor productivity growth rates of 5.7 percent per year, compared with only 2.6 percent in other manufacturing sectors. Since the two groups did not have very different productivity growth rates prior to 1973, before the explosion of computer power, the differences in the 1990s offer strong evidence that computers have had a significant effect on productivity growth.

In sharp contrast, the computer-intensive service sectors failed to show comparable productivity improvement. In fact, there was virtually no difference in rates between the computer-intensive sectors (0.9 percent) and the others (0.8 percent), with both showing gains of less than 1 percent. The difference between manufacturing and services is hard to fathom. The implication drawn from the study is that there is a ballooning understatement of real output and productivity growth in the service sectors.

A number of critics lay blame for the missing productivity on the official data on national output. For example, Robert Gordon of Northwestern University, a member of the Boskin Commission on the Consumer Price Index, faults the CPI. By overstating the rate of price increase, the CPI understates the real growth of output and thereby masks the actual rate of productivity improvement. Indeed, the 1998 revisions in calculating the CPI may help explain why official data more recently have shown higher productivity growth.

A louder chorus claims that poor data for the services sector explain the “productivity paradox.” Michael J. Mandel of Business Week, among others, cites the inability of statistical measures to keep pace with the rapid changes in the fast-growing high-tech industries. He singles out two industries—medical care and banking—for mismeasurements that understate output. According to the official data, productivity in consumer banking services has been rising at a 0.2 percent rate in the 1990s, compared with 3.3 percent in the 1980s. Why the drop? The Commerce Department’s Bureau of Economic Analysis (BEA) measures banking output by hours worked by bank employees. So, when banks added staff in the 1980s, that showed up as higher output. And since banks have reduced staff in the 1990s, a fall in output is reported. This is an absurdity: an actual improvement in efficiency (an explosion in checking and credit card transactions handled by fewer employees) shows up in the data as reduced efficiency. Economists estimate that the undercounting in financial services alone lowers reported productivity gains by about 0.3 percentage points.

We find similar statistical problems in health care, a sector where output reportedly grew more slowly in the 1990s than in the 1980s. In this case, output is measured by such things as the number of procedures performed or the occupancy rates for hospital beds. Some partial statistical adjustments were introduced in 1997, but the data continue to fail to record the true output of the health care system, which should reflect the quality of care people receive.

There are further examples of incredible findings—the data indicate that food consumption per person has fallen 4 percent in the 1990s. Yes, we have become more weight conscious, but not to that extent! The answer to this riddle probably lies in changes in how Americans shop, which have not been captured by the bean counters.

Before we close the case, we must look at the views of those who deny that there is any productivity paradox and hence any mystery. According to information technology expert and former Xerox vice president Paul Strassmann, industry’s vast spending on computers has been a sheer waste. In his book The Squandered Computer, Strassmann charges that businesses have not subjected technology decisions to correct investment criteria. Moreover, after they installed the systems, they used them improperly. He supports his argument with data indicating a lack of correlation between IT spending and profitability in any industry.

With respect to why computers seem to be more effective in manufacturing than in services, some economists cast doubt on the manufacturing productivity figures, alleging that labor input is understated. They claim that the large number of temporary employees hired by manufacturing companies from temporary help agencies should properly be allocated to manufacturing employment. Stephen S. Roach of Morgan Stanley & Co. has recalculated the data in this way. His adjusted figures lower the growth of output per man-hour in manufacturing by 0.5 percentage point and raise that in services by 0.1 percentage point. Roach, moreover, charges that recent productivity gains will be short-lived, having been achieved through “restructuring strategies that put extraordinary pressures on the workforce.”

Others argue that since productivity has always moved in parallel with gross domestic product, the lower rates of productivity gain in recent decades are due to lower rates of economic growth. They cite as evidence the fact that in the last couple of years, high growth in GDP has been accompanied by a spurt in reported productivity. This argument works up to a point: The U.S. economy grew at a 3.7 percent rate from 1947 to 1973, and this was accompanied by productivity gains of about 3 percent. And when the growth rate plunged to about half of what it had been from 1973 to 1982, the rate of productivity improvement fell to just over 1 percent. The argument is shattered, however, by the fact that economic growth rose at a much higher rate (3 percent from 1982 to 1992) without an accompanying rise in productivity. The drop in unemployment in those years means that part of the GDP growth came from a larger labor input, but it cannot account for all of it. Indeed, this whole argument suffers from the fact that the GDP and productivity data are statistically interrelated, and if output is understated, then the improvement in productivity is reported as lower.

A third defense of the official data views the apparent mystery as simply representing a time lag. Nineteenth-century technological advances took many years to produce actual productivity gains, and the same may be true today with respect to information technology. Innovation is always fraught with mistakes, but the bugs are worked out eventually. This is a more optimistic view, since it says that we shall yet see productivity advances from IT. The official data’s recent showing of a jump in output per man-hour at this late stage of an economic expansion lends some credence to this position.

Others say we should stop faulting the data and start trying to understand why we have had such poor productivity. According to Professor Gordon, a combination of weak unions and a low minimum wage has allowed real wages at the bottom of the job ladder to deteriorate, with the result that American employers can enjoy relatively cheap labor. Consequently, they have less incentive to seek greater efficiency. (This is a variation of the claim offered decades ago that unions encourage productivity improvement without actually intending to do so. By their pressure on wages, unions force managements to seek ways of operating more efficiently in order to hold down labor costs.) Relatively low wages may be good for employment—we have more workers in restaurants and more checkout attendants in supermarkets than Europe does—but they also mean that we have lower productivity. This situation has been changing. With the labor shortages of the past few years, pay in the service industries has been skyrocketing while that in goods production has been lagging far behind. Yet the data show no jump in service sector productivity.

We should like to say, “It’s obvious, my dear Watson. Everything points to faulty data as the villain.” The evidence, however, is not overwhelming. It may simply be that we are seeing a long time lag between heavy investment in IT and its payoff. The data now show productivity rising at a faster rate, in line with more rapid economic growth. Or it may be that recent revisions in the way the BLS and BEA measure things are beginning to provide a better picture of reality. We suspect that it is the latter, but we do not have conclusive proof. Thus, the missing productivity remains an open case.

There is a much more fundamental issue involved with respect to productivity: Can we maintain the current official rate of productivity improvement? For the past three years, output per man-hour in the business sector has risen at an average of 2.2 percent a year, almost on par with the historical long-run average of 2.3 percent. If, when the current economic expansion slows down, productivity improvement slips back to its former rate of 1 percent, we could reencounter all the problems of compressed living standards, higher unemployment, and renewed inflation.

On the other hand, if the data heretofore understated productivity in services but now better reflect the switch to an information society, and if we can maintain that productivity improvement, the future will be a brighter one, with a healthy, competitive American business community. Combined with labor force growth, the higher rate of productivity improvement means that long-term economic growth can be closer to 3 percent, rather than the 2 percent currently assumed. This would go a long way toward solving our problems of caring for a larger aged population (current projections of Social Security difficulties by 2032 are based on an assumption of only 1.7 percent growth), and would allow us to enjoy high employment, low inflation, and enhanced living standards.

As we went to press, the BLS’s February Monthly Labor Review published three articles on the problems of productivity measurement. The articles in effect concede that the data understate the actual rate of productivity change, but do not grant that the magnitude is large. While the BLS data show output per man-hour rising at better than 3 percent a year prior to 1973, they indicate an increase of only slightly more than 1 percent a year since then. Yet the BLS admits that the slowdown occurred in a period when industrial technology advanced considerably and the economy prospered, with rapid growth of corporate profits, phenomena generally associated with increases in the rate of productivity growth.

The basic problem appears to be undercounting in the service sector, particularly in those industries where it is hard to measure output. The recent period has seen a divergence between manufacturing productivity, which has returned to something near its earlier growth rate, and services, which have shown a sharp deceleration. Indeed, focusing on overall multifactor productivity, the BLS finds that since 1979, manufacturing accounts for all of its modest growth. This would imply that the other sectors have had negative productivity growth over the two decades, which is most implausible.

Disagreeing with the Boskin Commission’s view that the CPI grossly overstates inflation, particularly in view of improvements introduced into the CPI in the past couple of years, the BLS sees the major culprit as output measurement in those areas in which price data are lacking. In some of the difficult areas, the Bureau of Economic Analysis, which gathers output data, has been forced to rely partially on input information to determine output, and this means that measured output will be correlated with the growth rate of inputs. This creates an inherent bias. As Edward R. Dean of the BLS’s Office of Productivity and Technology explains, “Extrapolation of output by use of labor input and deflation of current-dollars series by labor cost indexes come close to embodying an assumption of zero labor productivity growth.” Aware of the limitations of output data, the BEA is working to develop missing price indexes, which should provide a more accurate picture of output. Indeed, all government statistical agencies recognize the need to improve service sector data, and are working diligently to that end.
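Dean’s point can be made concrete with a toy example. The figures below are hypothetical; the sketch shows only that when an industry’s real output is extrapolated from its labor input, measured output per hour cannot grow, whatever is actually happening.

```python
# Hypothetical industry over four years. "True" output rises faster than
# hours, but the measured series is extrapolated from hours, so measured
# output per hour is flat by construction.

hours = [100.0, 103.0, 106.0, 110.0]        # index of hours worked
true_output = [100.0, 106.0, 113.0, 121.0]  # what output "really" did

measured_output = list(hours)  # output assumed to move with labor input

true_productivity = [o / h for o, h in zip(true_output, hours)]
measured_productivity = [o / h for o, h in zip(measured_output, hours)]

print([round(p, 3) for p in true_productivity])      # rises each year
print([round(p, 3) for p in measured_productivity])  # stuck at 1.0
```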

This is the most gratifying news. Although we may still not be able to accurately measure the extent to which productivity improvement has been understated, we do know that faulty data have been a factor. (Productivity really has been missing.) More importantly, the improved data are beginning to disclose a more accurate picture, one that helps to explain the recent rise in reported productivity improvement. Over time, these data should become even more reliable and, we believe, will show that productivity, while not resuming its former 3 percent rate of growth, will be significantly above the measly 1 percent reported during the last couple of decades.