The Solow Productivity Paradox:
What Do Computers Do to Productivity?
Jack E. Triplett(1)
Brookings Institution
"You can see the computer age everywhere but in the productivity
statistics."
Robert Solow (1987)
Solow's aphorism, now more than ten years old, is often quoted. Is there
a paradox? And if so, what can be said about it? This paper reviews and
assesses the most common "explanations" for the paradox. It contains separate
sections evaluating each of the following positions.
(1) You don't see computers "everywhere," in a meaningful
economic sense. Computers and information processing equipment are
a relatively small share of GDP and of the capital stock.
(2) You only think you see computers everywhere. Government
hedonic price indexes for computers fall "too fast," according to this
position, and therefore measured real computer output growth is also "too
fast."
(3) You may not see computers everywhere, but in the industrial sectors
where you most see them, output is poorly measured. Examples are finance
and insurance, which are heavy users of information technology and where
even the concept of output is poorly specified.
(4) Whether or not you see computers everywhere, some of what they
do is not counted in economic statistics. Examples are consumption
on the job, convenience, better user-interface, and so forth.
(5) You don't see computers in the productivity statistics yet,
but wait a bit and you will. This is the analogy with the diffusion
of electricity, the idea that the productivity implications of a new technology
are only visible with a long lag.
(6) You see computers everywhere but in the productivity statistics
because computers are not as productive as you think. Here, there are
many anecdotes, such as failed computer system design projects, but there
are also assertions from computer science that computer and software design
has taken a wrong turn.
(7) There is no paradox: Some economists are counting innovations
and new products on an arithmetic scale when they should count on a logarithmic
scale.
Background
On its face, the computer productivity paradox concerns the question:
Why isn't U.S. output growing faster as we invest more in computers? But
Solow's aphorism gains its resonance from a different, though related,
question: Will the growing investment in computers and information technology
reverse the post-1973 productivity slowdown? From 1948 to 1973, multi-factor
productivity increased 1.9 percent per year in the U.S., and labor productivity
grew at the rate of 2.9 percent; after 1973, these productivity growth
rates were 0.2 percent and 1.1 percent.(2)
Similar slowdowns have been observed in most of the industrialized economies
of the OECD.
Another part of the context is the mechanism for diffusion of technical
change in the economy. In a view held by many economists, productivity
improvements are carried into the workplace through investment in new machinery.
On this view, any technical change we are now experiencing must be embodied
in the economy's investment in information technology, because that is
the kind of machinery investment that is growing. Investment in information
processing equipment accounted for about 34 percent of producer durable
equipment in 1997, which is more than the share of industrial machinery
(22 percent).(3)
There is substantial debate on this "new machinery" view. It must obviously
be true at some level that new technology implies new machines. But it
is not obvious that new machines are the entire engine for improving productivity.
In fact, if one correctly accounts for the enhanced productiveness of new
machines (by making a quality adjustment to the data on capital inputs)
then improved machinery will not, by itself, raise multi-factor
productivity, though it should increase ordinary labor productivity.
The computer-productivity paradox also resonates because we have become,
it is often said (but not often quantified), an information economy. It
is often said that quality change is a much larger proportion of final
output today than it was in the past, and that quality change, more customized
products, and the growth of services--as business inputs, as elements of
consumer demand, and as contributors to U.S. exports--all mean that information
is a much more important contributor to the production process than it
used to be. If it is true that the use of information as a productive input
is growing, or that information has become a more productive input than
it was in the past, then this heightened role for information heightens
as well the importance of information technology in a modern economy.
Thus, the context in which the Solow productivity paradox is interesting
revolves around a number of unresolved economic issues and questions. There
is the post-1973 productivity slowdown, a puzzle that has so far resisted
all attempts at solution. There is the supposed recent shift from a goods
economy to a services economy (actually, this is not all that recent; even
in 1940, more than half of U.S. employment was outside the traditional
goods-producing sectors(4)). There is the
shift to an "information economy" from whatever characterized the economy
before (surely not absence of information, but perhaps information was
less abundant, because it was more costly). None of these economic shifts
is very well understood. Understanding them is important for a wide range
of economic policy issues, ranging from the role of education and training
in the economy, to the role of investment (and therefore of incentives
to and taxation on investment), to the determinants of economic growth,
to forecasting the future trends of income distribution, and so forth.
For each of these issues, the contribution of computers and information
technology is thought to be key. For example, Krueger (1993) found that
workers who use computers have higher earnings than workers who do not,
suggesting that the adverse shifts in income and earnings distributions
in the United States in recent years are connected with the growth of computers.
Again, there is debate on this view: Computers sometimes substitute against
human capital, as they do against other inputs, reducing the demand for
skill in jobs such as, say, bank tellers.
One should note a strong dissenting view against coupling the Solow
paradox with some of these other issues. Griliches (1997), for example,
has stated:
"But then we're still stuck with the problem about the productivity
slowdown, or paradox, which is a problem, but not a computer problem. Is
the slowdown real or not? Or is it all a measurement issue? And more important,
is it permanent, or is it transitory? Here the paradox is really not so
much in terms of computers, but in terms of what is happening to science,
what is happening to inventiveness, what is happening to other activities."
The following numbered sections review seven positions on the computer
productivity paradox.
I. You don't see computers "everywhere," in a meaningful
economic sense.
In this view, what matters is the share of computers in the capital
stock and in the input of capital services. These shares are small. An
input with a very small share cannot make a large contribution to economic
growth, and so we should not expect to see a major impact on growth from
investment in computers. (In the remainder of this paper, I use the terms
"computers" and "computer equipment"--computers plus peripheral equipment--interchangeably;
the term "information processing equipment" is a broader category that
contains computer equipment as one of its components--see note 3.)
The most comprehensive explorations are Oliner and Sichel (1994), and
Jorgenson and Stiroh (1995). Both sets of authors calculate the growth
accounting equation:
(1) d_t(Y) = s_c d_t(K_c) + s_nc d_t(K_nc) + s_L d_t(L) + d_t(p)
where d_t(Y) = dY/dt is the rate of growth of output; d_t(K_c), d_t(K_nc),
and d_t(L) are the rates of growth of the inputs: K_c, computer capital
(properly, computer capital services), K_nc, non-computer capital (services),
and L, labor; s_i is the share of input i; and d_t(p) is the growth of
multifactor productivity.
This equation says that the rate of growth of output (d_t(Y)) equals the
share-weighted growth in inputs (for example, s_c d_t(K_c) is the rate of
growth of computer capital, weighted by the share of computer capital in
total cost), plus the rate of growth of multifactor productivity.
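The arithmetic behind equation (1) can be sketched briefly. All of the shares and growth rates below are hypothetical, chosen only to mimic the orders of magnitude discussed in the text, not taken from table 1:

```python
# Growth accounting per equation (1): output growth equals
# share-weighted input growth plus multifactor productivity growth.
# All figures below are hypothetical illustrations.

shares = {"computer_capital": 0.01,   # s_c: a very small cost share
          "other_capital":    0.29,   # s_nc
          "labor":            0.70}   # s_L

growth = {"computer_capital": 0.25,   # d_t(K_c): extremely rapid growth
          "other_capital":    0.03,
          "labor":            0.015}

# Contribution of each input = share * growth rate
contributions = {k: shares[k] * growth[k] for k in shares}

mfp_growth = 0.002                    # d_t(p), the multifactor productivity residual
output_growth = sum(contributions.values()) + mfp_growth

# Even 25 percent annual growth of computer capital contributes only
# 0.01 * 0.25 = 0.0025, a quarter of a percentage point of growth.
print(round(contributions["computer_capital"], 4))
print(round(output_growth, 4))
```

The point the calculation makes is the one in the text: with a share of one or two percent, no plausible rate of input growth yields a large contribution to output growth.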
Jorgenson and Stiroh (1995) estimate the share of capital services provided
by computer equipment capital, using the capital accounting framework developed
by Jorgenson (1980, 1989); Oliner and Sichel (1994) use computer equipment's
income share. As table 1 shows, the results of both papers are compatible.
Computer equipment made a relatively small contribution to economic growth,
even during the period of the 1980's when computer technology became so
widely diffused throughout the economy. In the growth accounting framework
of equation (1), even very rapid rates of input growth--and the growth
of computing equipment has been rapid indeed--make only relatively small
contributions to growth when the share of this equipment is small. As table
2 shows, computer equipment still accounts for only around 2 percent or
less of the physical capital stock(5), and
under 2 percent of capital services.
Oliner and Sichel enlarge the definition of computers to encompass all
of information processing equipment (their table 10, page 305) and also
computing software and computer-using labor (their table 9, page 303).
The result remains unchanged. On any of these three definitions--computer
equipment, information processing equipment, or the combination of computing
hardware, software, and labor--the shares remain small (see table 2), and
so does the growth contribution of information technology.
To check the reasonableness of their results, Oliner and Sichel (1994)
simulate results for the assumption that computers earn supernormal returns
(use of equation (1) implies that computers earn the same rate of return
as earned on other capital equipment). Romer (1986), Brynjolfsson and Hitt
(1996) and Lichtenberg (1993) all argued or implied that computers yield
higher returns than investment in other capital. These alternative simulations
raise the contribution of computing equipment to growth (from around 0.2
in table 1 to 0.3 or 0.4), but all of them confront the same problem: The
share of computing equipment is simply too small for any reasonable return
to computer investment to result in a large contribution to economic growth.
Growth accounting exercises calculate the computer's contribution to
growth, not its contribution to multifactor productivity.
Growth accounting answers the question: "Why is growth not higher?" The
paradox says: "Why is productivity not higher?" As equation 1 shows, multifactor
productivity's contribution to economic growth is separate from the contribution
of any input, including the input of computers. If one interprets the productivity
paradox as applying to multifactor productivity, growth accounting exercises
do not shed very much light on it.(6)
In the growth accounting framework, then, computer growth is simply
the response of input demand to the great fall in the price of computers.
Indeed, Jorgenson, in conference presentations, has emphasized this exact
point, as has Stiroh (1998). The enormous price decline in computing power
has led to its substitution, in a standard production analysis framework,
against all other inputs, including other kinds of investment. On this
view, the economic impact of the computer is not a productivity story at
all.
One reservation about this input substitution view arises because computer
output (and therefore computer capital input) is estimated by deflation,
using hedonic computer price indexes. Price and quantity are not independently
estimated. Some have argued that the computer price declines in government
statistics are overstated (see the next section); if price declines are
overstated, the growth of computer inputs is also overstated, and there
is less substitution than the data suggest.
A second reservation arises because many economists seem to think that
the amount of innovation they see in the economy--the number and pervasiveness
of new products, embodying new methods of production, and new technological
feats--is more than one could reasonably expect just from input substitution.
On this view, there must also be a mismeasurement story, and therefore
a missing productivity story, regardless of the validity of the input substitution
story. This view is discussed in section VII, below.
A third reservation is that aggregate labor productivity is also low,
not just multifactor productivity. If computers just substituted against
other inputs, then labor productivity should grow (because of increased
capital per worker), even though multifactor productivity does not. Stiroh
(1998) shows just that at the industry level: More intensive computer usage
raises industry labor productivity through input substitution, but it does
not raise industry multifactor productivity. At the aggregate level, the
share of computers is too small to make a major impact on either output
growth or labor productivity.
Flamm (1997) has in effect (though not explicitly) reinterpreted the
Solow productivity paradox as a semiconductor paradox: You see the semiconductor
age everywhere (and not just in the computer industry). Price indexes
for semiconductors have dropped even more rapidly than computer prices
(see the discussion in section II), and semiconductors go into other kinds
of machinery (antilock brakes and "intelligent" suspension systems on automobiles,
for example). Flamm calculates the consumer surplus from declining semiconductor
prices at around 8 percent of annual GDP growth, which cumulates to a huge
number over the 50-year history of semiconductors. For the analysis of
the productivity paradox, Flamm's results do not permit distinguishing
the part of the demand for semiconductors that arises because of input
substitution (the substitution of computers and other semiconductor-using
equipment against inputs that do not use semiconductors), and the part
of semiconductor demand that arises because they improve the productivity
of using industries (if indeed they do affect productivity). However, Flamm
estimates the output growth elasticity of demand for semiconductors at
roughly eight times their price elasticity of demand, and his percentage
point estimate of semiconductor contribution to GDP is around 0.2 for recent
years, a number that is, perhaps fortuitously, similar to the growth accounting
calculations for computers.
In summary, computers make a small contribution to growth because they
account for only a small share of capital input. Does the same small share
suggest that they likewise cannot have an impact on productivity? Perhaps.
But the paradox remains a popular topic for other reasons, which are discussed
in the following sections.
II. You only think you see computers everywhere.
The contention that computer price indexes fall too fast (and therefore
computer deflated output rises too rapidly) has several lines of logic,
which are not particularly connected.
Denison (1989) raised two different arguments against the BEA hedonic
computer price indexes. He contended, first, that the decline in the computer
price indexes was unprecedented, and for this reason suspect. We can now
put this aside. The U.S. price indexes for computers have been replicated
for other countries, with similar rapidly-falling results (see, for example,
Moreau, 1996 for France). Hedonic price indexes for semiconductors fall
even more rapidly than for computers.(7)
Trajtenberg (1996) shows that hedonic price indexes for CAT scanners also
have computer-like declines, and Raff and Trajtenberg (1997) show similar
large declines in hedonic price indexes for automobiles in the early years
of the century. Most of these price declines have been missed by conventional
economic statistics (automobiles did not get into U.S. government price
statistics until the third and fourth decades of the century, and one cannot
even determine from government statistics how much high-tech scanning equipment
hospitals buy). The computer price declines seemed to Denison unprecedented
because similar price declines had not been published.
Denison's second argument (one he would levy against all of the above
indexes) was that hedonic price indexes are conceptually inappropriate
for national accounts. Denison thought that hedonic price indexes measure
uniquely willingness to pay for quality improvements (the demand side of
the market) and not the cost of producing improved quality (the supply
side). However, in Triplett (1983, 1989) I showed this is incorrect, even
when it is relevant, because hedonic measures can be given both supply-side
and demand-side interpretations. Denison also suggested that demand-side
and supply-side measures would diverge in the case of computers, but there
is no evidence for such divergence (I discuss this in Triplett, 1989).
So far as I know, there is little current support for Denison's position
that hedonic indexes are conceptually inappropriate, so it is not necessary
to consider these arguments more fully here.
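As background for the arguments that follow, a hedonic price index of the time-dummy variety regresses log price on characteristics plus period dummies; the dummy coefficients trace the quality-adjusted price path. A minimal sketch of the method follows. The data, coefficients, and the 20 percent simulated decline are all fabricated for illustration and are not drawn from the BEA indexes:

```python
import numpy as np

# Hypothetical data: computer prices in two years, with speed and
# memory doubling in year 2. Time-dummy hedonic regression:
#   ln P = a + b*ln(speed) + c*ln(memory) + d*year2
# The coefficient d estimates the quality-adjusted log price change.

rng = np.random.default_rng(0)
n = 200
year2 = np.repeat([0.0, 1.0], n)          # 0 = year 1, 1 = year 2
speed = np.where(year2 == 1, 2.0, 1.0) * rng.uniform(1, 2, 2 * n)
memory = np.where(year2 == 1, 2.0, 1.0) * rng.uniform(1, 2, 2 * n)

# Simulated hedonic surface: quality-adjusted prices fall ~20% in year 2
ln_p = (0.5 * np.log(speed) + 0.5 * np.log(memory)
        - 0.2 * year2 + rng.normal(0, 0.02, 2 * n))

X = np.column_stack([np.ones(2 * n), np.log(speed), np.log(memory), year2])
beta, *_ = np.linalg.lstsq(X, ln_p, rcond=None)

# beta[3] recovers roughly -0.2: a ~20% quality-adjusted price decline,
# even though observed (unadjusted) prices need not have fallen at all.
print(round(beta[3], 2))
```

The regression separates the pure price change from the price change attributable to improved characteristics, which is the substance of the quality-adjustment debates discussed below.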
A second major line of reasoning on hedonic computer price indexes points
to what is actually done with the personal computers that sit on so many
of our desks. Many users have noted something like the following: "I used
perhaps a quarter of the capacity of my old computer. Now I have a new
one for which I use perhaps a tenth of its capacity. Where is the gain?"
McCarthy (1997, paragraphs 4 and 15) expresses a similar view:
"...The theoretical increase in [computers'] potential output, as measured
by the increases in their input characteristics, is unlikely to ever be
realized in practice.... Also, the increasing size and complexity of operating
systems and software are likely to be resulting in increasing relative
inefficiencies between the hardware and software.... The greater complexity
means that some part of the increased computer speed is diverted from the
task of processing to handling the software itself."
In other words, ever faster and more powerful personal computers, with
ever larger memory sizes, wind up being used to type letters, and the letters
are not typed appreciably faster. Is that not evidence that the computer
price indexes are falling too fast?
I do not think it is evidence. Typing a letter uses computer hardware,
computer software, and the input from the person (increasingly, not a secretary)
who types it. The technical bottleneck is often the human input. But this
hardly justifies revising upward the price index for the computer: The
computer is purchased, the capacity is paid for, and any assertion that
the purchaser could have made do just as well with an earlier vintage is,
even if proven, not relevant. And indeed, it is also not proven: Increased
computer capacity has been employed in an effort to make computing more
efficient and user-friendly, not just faster (but see sections IV and VI).
A third issue has emerged in the work of McCarthy (1997), a paper that
has excited quite a bit of comment in other OECD countries that are
considering following the U.S. lead on hedonic price indexes for computers.
McCarthy observes that price indexes for software typically do not exist,
and speculates that software prices decline less rapidly than computer
prices. Actually, price indexes for word processing packages, spreadsheets,
and database software have been estimated (Gandal, 1994, Oliner and Sichel,
1994, and Harhoff and Moch, 1997). This research confirms McCarthy's speculation:
Software prices have been declining steadily, but not at computer-like
rates.
McCarthy then contends that because software is often bundled with computers
the slower price decline of software must mean that computer price indexes
are biased downward. "The overall quality of a computer package (hardware
and all the associated software) has not been rising as rapidly as that
of the hardware input characteristics on which the hedonic estimates of
quality improvement are based. As a result, the quality adjustments being
used in the estimation of the price deflators for computer investment are
being overstated which leads to the price falls in computer investment
also being overstated" (McCarthy, 1997, paragraph 18).
The issue can be addressed more cogently if McCarthy's argument is re-stated
as follows. A computer price index can be thought of as a price index
for computer characteristics. Suppose, for the sake of the exposition,
that hedonic functions are linear, and that the price index is also linear
(a Laspeyres index).(8) Then if there are
three characteristics bundled into a personal computer--computer speed
(s), computer memory (m), and computer software (z)--we have:
(2) I_c = a I_s + b I_m + c I_z
where a, b, and c are weights. The proper price index for computers
(I_c) is a weighted average of price indexes for all three characteristics,
or components, that are bundled into the computer transaction. However,
the third characteristic, computer software that is bundled into the computer
without a separate charge, is omitted from the computer price index. Because
its price index declines less than those of the other two characteristics
(I_z > I_s, I_m), a computer price index based on the two hardware
characteristics will fall too fast. The same argument applies, in a
modified form, if a price index for computer hardware is used in national
accounts to deflate both computer hardware and software (perhaps because
no separate software index is available).
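The direction of bias under this "price index for characteristics" construction can be checked with a small numerical sketch. The price relatives and weights below are hypothetical, chosen only to make the comparison transparent:

```python
# Equation (2): I_c = a*I_s + b*I_m + c*I_z. If the slowly falling
# software index I_z is omitted and its weight re-spread over hardware,
# the resulting index falls too fast. Hypothetical one-period price
# relatives: hardware components halve, software falls only 10 percent.

I_s, I_m, I_z = 0.50, 0.50, 0.90   # price relatives: speed, memory, software
a, b, c = 0.4, 0.4, 0.2            # hypothetical weights summing to one

full_index = a * I_s + b * I_m + c * I_z        # the proper bundle index
hardware_only = (a * I_s + b * I_m) / (a + b)   # renormalized hardware weights

# The hardware-only index (0.50) falls faster than the full bundle
# index (0.58): under this construction, omitting software would bias
# the index downward, as McCarthy's argument implies.
print(round(full_index, 2), round(hardware_only, 2))
```

This confirms that *if* indexes were built this way, McCarthy's conclusion would follow; the next paragraphs explain why the actual calculation differs.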
If hedonic computer price indexes were actually constructed according
to the "price index for characteristics" method given by equation (2),
I would agree with McCarthy that they would be downward biased. The bias
would be the same if we calculated real computer investment growth as the
weighted average of growth rates of hardware characteristics (which is
the form in which McCarthy cast his illustration).
But considering the actual calculations, omission of software biases
the computer price indexes upward, which is the opposite direction
from McCarthy's contention. Computer price indexes are actually calculated
by quality adjusting observed computer prices for the value of changes
in hardware characteristics. We observe prices of two different computers,
P_c1 and P_c2, where each computer consists of a different
bundle of speed, memory, and software. The hedonic regression coefficients
on computer hardware (speed and memory) are used to adjust the price difference
between the two computers for changes in the computer's hardware (that
is, its speed and its included memory). We have, then:
(3) (P_c1)* = P_c1 (h_s [s_2/s_1] + h_m [m_2/m_1])
where the term on the left-hand side is the quality-adjusted price of
computer 1, and on the right-hand side, h_i is the hedonic
"price" for characteristic i, and s and m are, respectively,
speed and memory, subscripted for computer 1 and computer 2. The price
index uses this quality-adjusted price in:
(4) I_c = P_c2 / (P_c1)*
Equation (4) contains no adjustment for the quantity of bundled
software (i.e., h_z [z_2/z_1]). If more software,
or improved software, is included in the bundle, the quality adjustment
in equation (3)--(h_s [s_2/s_1] + h_m [m_2/m_1])--is too small, not too
large, because the improvement in software receives no adjustment. The
adjusted price, (P_c1)*, is too low (not too high), which means that the
computer price index falls too slowly--it is biased upward, not
downward, contrary to McCarthy's contention.(9)
Whether software prices are declining faster than hardware prices, and
whether the quantity of software (bundled with the hardware) grows less
rapidly than the rate of improvement in hardware characteristics like speed
and memory: neither of these is the issue. The price index for the computer-software
bundle does not decline fast enough because no adjustment is made for the
value of the increased quantity of software included in the bundle. Its
quantity is implicitly treated as zero.
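Putting equations (3) and (4) in numbers shows the direction of the bias directly. All of the values below are hypothetical, chosen only for a transparent illustration:

```python
# Equations (3) and (4) in numbers. All values are hypothetical and
# chosen only to show the direction of the omitted-software bias.

P_c1, P_c2 = 2000.0, 2000.0     # observed prices of the two computers
h_s, h_m, h_z = 0.4, 0.4, 0.2   # hedonic "prices" for speed, memory, software
s_ratio, m_ratio = 2.0, 2.0     # s2/s1 and m2/m1: hardware doubles
z_ratio = 1.5                   # z2/z1: bundled software improves 50 percent

# Equation (3) as actually calculated: hardware terms only
P_c1_star = P_c1 * (h_s * s_ratio + h_m * m_ratio)      # 3200.0

# Equation (4): index based on the hardware-only adjustment
I_hardware_only = P_c2 / P_c1_star                      # 0.625

# With the omitted software term restored, the adjustment is larger,
# (P_c1)* is higher, and the index falls further:
P_c1_full = P_c1 * (h_s * s_ratio + h_m * m_ratio + h_z * z_ratio)
I_full = P_c2 / P_c1_full                               # about 0.526

# The hardware-only index falls too slowly relative to the full
# adjustment: it is biased upward, as the text argues.
print(round(I_hardware_only, 3), round(I_full, 3))
```

The comparison makes the text's point concrete: leaving the software term out of equation (3) understates the quality adjustment and therefore overstates, not understates, the computer price index.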
In conclusion, neither evidence nor reasoning indicates a serious downward
bias to the computer price indexes. My own view on this matter agrees with
Griliches (1994, page 6), who in discussing BEA computer price indexes,
wrote:
" There was nothing wrong with the price index itself.
It was, indeed, a major advance...but...it was a unique adjustment. No
other high-tech product had received parallel treatment...."(10)
III. You may not see computers "everywhere," but in the industrial
sectors where you most see them, output is poorly measured.
Griliches (1994) noted that more than 70% of private sector U.S. computer
investment was concentrated in wholesale and retail trade, finance insurance
and real estate, and services (divisions F, G, H, and I of the 1987 Standard
Industrial Classification System).(11)
These are exactly the sectors of the economy where output is least well
measured, and where in some cases even the concept of output is not well
defined (finance, for example, or insurance, or consulting economists,
as an example from the services division of the SIC).
Why has this [computer investment] not translated itself into visible
productivity gains? The major answer to this puzzle is very simple: ...This
investment has gone into our 'unmeasurable sectors,' and thus its productivity
effects, which are likely to be quite real, are largely invisible in the
data (Griliches 1994, page 11).
That there are serious measurement problems in all of these areas is
well established. The volume edited by Griliches (1992) is a fairly recent
example of a long history of attempts to sharpen measurement methods and
concepts for services. Triplett (1992) presents an additional review of
the conceptual issues in measuring banking output, and Sherwood (forthcoming)
discusses the insurance measurement problem.
It is also the case that services account for a large part of output.
Services that directly affect the calculation of GDP are those in personal
consumption expenditures (PCE) and in net exports (and of course the output
of the entire government sector is notoriously mismeasured).(12)
Consumption of non-housing services accounts for about 43 percent of personal
consumption expenditures, or 29 percent of GDP, and net exports of services are
about 1.3 percent of GDP.
The productivity numbers are not calculated for total GDP. One widely-used
BLS productivity calculation refers to the private business economy. It
is difficult to break out an explicit services component for that aggregate.
However, government compensation and capital consumption and owner-occupied
housing are clearly excluded from private business; and one can remove
these components from GDP to form a rough approximation to the private
business (farm and non-farm) economy (see table 3). PCE non-housing services
plus net export of services amounts to about 43 % of final private sector
non-housing demand.
Thus, services make up a large proportion of the aggregate productivity
ratio and they are poorly measured. Of course, services include many--such
as household utilities, bus transportation, barber and beauty shops and
so forth--that have probably not benefitted appreciably from output-enhancing
productivity improvements caused by computers. Nevertheless, a relatively
small amount of mismeasurement in some of the larger services categories
would impact the productivity statistics substantially. If the sign of
the measurement error goes in the right direction, mismeasured services
could go a long way to resolve the computer productivity paradox.
What of the sign of the measurement error in services output? Even though
some sector is measured badly, we cannot know the sign of the error for
sure. "Mismeasurement" does not always means upward bias in the
price indexes and downward bias in the output and productivity measures.
Banking, for example, is measured badly: The output measure in national
accounts makes questionable economic sense. A considerable amount of research
has accumulated on measuring banking output in alternative ways (I reviewed
much of it in Triplett, 1992, but Berger and Mester, 1997, and Fixler and
Zieschang, 1997, are more recent contributions). These alternative measures
of banking output make more sense to me than either of the measures that
are used in U.S. government statistics.(13)
But they do not seem to imply a higher rate of growth of banking output
and productivity. For example, Berger and Mester (1997) report that multi-factor
productivity in banking fell during a period when the BLS banking labor
productivity measure was rising sharply.
The alternative banking measures, like the government ones, can be criticized
because they omit things such as the increased convenience that automatic
teller machines have provided for banking customers. For this and other
reasons, Bresnahan (1986) shows that the downstream influence of information
technology on banking is substantial. Bresnahan (in private discussions)
has pointed out that the innovation that made the ATM practical was devising
methods to cut down fraud. But Berger and Humphrey (1996) show that the
effect of the ATM on banking cost has been perverse: the ATM costs about
half as much per transaction as a human teller, but ATM transactions are
smaller, and about twice as many occur for the same volume of transactions.
If the ATM has had little significant impact on banking cost, then all
of the ATM's improvement in banking productivity must come from consumer
valuation, at the margin, for increased convenience (and freedom from fear
about fraud). But since the ATM service is typically not charged for, one
must estimate consumer surplus and add it into the banking output measure,
to get an estimate of the contribution of technology to banking output
and productivity.
Would adding an allowance to banking output for the convenience of ATM's
yield a large upward adjustment? Frei and Harker (1997) reported that one
large bank, which aggressively tried to reduce customer access to human
tellers (to save cost), very quickly lost a substantial amount of its customer
base. Bank customers want human tellers, too. Although the availability
of ATM's is certainly an advantage for banking customers, beyond some point
of utilization the value of the ATM falls below the value of the human
teller.
Adding a valuation for ATMs would probably increase the measured rate
of growth of banking output and therefore increase banking productivity.
Improved measurement of banking and financial output might therefore help
to resolve the paradox. But, as the foregoing suggests, estimation is complicated,
and the magnitudes are certainly not at all clear.
Some economists have approached the measurement problem in services
by examining circumstantial, as it were, evidence of anomalous behavior
of the statistics in some of these badly measured areas. For example, Stiroh
(1998) extends Jorgenson and Stiroh's (1995) methodology to analyze the
contribution to growth of computers at the sectoral level. He identifies,
from among 35 industrial sectors, the most computer intensive sectors.
His computer-using services sectors are Griliches' poorly measured ones--wholesale
and retail trade, finance, insurance and real estate, and services (SIC
division I).
Stiroh finds that noncomputer input growth decreased as the use of computer
capital services increased in these computer intensive sectors. Cheaper
computers substituted for other inputs, including labor. But measured output
growth rates increased less rapidly as well: "For all computer-using sectors...the
average growth rate of multifactor productivity fell while [computer] capital
grew" (Stiroh, 1998). An inverse correlation between computer investment
and multifactor productivity growth does seem anomalous. See also Morrison
and Berndt (1991) for a compatible result. Either computers are not productive,
or output growth is undercounted. This anomaly is consistent with the "badly
measured services" hypothesis. However, it also emerges in Stiroh's results
for computer-intensive manufacturing industries, such as stone, clay and
glass, where output measurement problems are, if not absent, not well publicized.
Prescott (1997) noted that prices of consumption services that he regards
as "badly defined" (personal business, which includes finance and insurance
from Griliches' list, plus owner-occupied housing, medical care, and education)
rose 64 percent between 1985 and 1995, while "reasonably-well defined services"
(the others) rose only 40 percent. He felt this implied measurement error
in the former prices.(14) The evidence
of price divergence is not compelling in itself (no economic principle
suggests that prices should always move together--it is commonplace in
price index theory that relative prices do diverge). But if the price indexes
are overstated, then deflated output growth and multifactor productivity
growth are both understated.
The Boskin Commission estimated that the CPI (which provides deflators
for many components of PCE) was overstated in recent years by 1.1 percentage points per year,
of which approximately 0.4 percentage points was mismeasurement of prices
for consumer services. Most of that would translate into error in deflated
output of services in the productivity measures.(15)
For measurement error to explain the slowdown in economic growth, productivity,
or real consumption requires either that measurement error increased after
1973, or that the shares of the badly measured sectors increased. There
is little evidence for the former, and although services have increased,
their shares have not grown by as much as productivity declined. Moreover,
increasing measurement error, if it did increase over time, must have occurred
gradually; the productivity slowdown, on the other hand, was abrupt.
I doubt that increasing mismeasurement of services consumption
can explain the post-1973 slowdown of real per capita consumption, and
therefore of productivity. Mismeasurement might, however, account for the loss
of some of the computer's contribution to growth in the past two decades
or so.
Overall, mismeasurement of services probably has the right sign to resolve
the paradox. But does the mismeasurement hypothesis have enough strength
to resolve the paradox? My own guess is that it does not.
IV. Whether or not you see computers everywhere, some of what they
do is not counted in economic statistics.
"Windows fills the screen with lots of fun little boxes and pictures.
DOS is for people who never put bumper stickers on their cars." (Windows
for Dummies, page 12)
Following the passage quoted, the Windows for Dummies manual points
out, correctly, that pictures require much more computing power, so using
Windows 95 requires a relatively powerful computer. An enormous amount
of recent computer and software development has been directed toward making
computers easier to use.
Where do we count the value of increased convenience and better user
interface in economic statistics? If they are productive, if pictures and
icons result in more work being done, then the improvement will show up
in the productivity figures, or at least in the labor productivity data.
On the other hand, if the pictures are just more "fun," then I suppose
that new software incorporating screen graphics, "point and click" controls,
and so forth has created more consumption on the job, compared with earlier
software. Consumption on the job is not counted anywhere in economic statistics.
If more advanced computer software contributes partly to output and partly
to making the workers more content when they are working, some of that
gain will be lost in economic statistics.
Even conceding that a little fun has value, I suspect the technologists
have oversold these "fun little boxes and pictures." Whether the newest
developments in software and computers have in fact made computers that
much more user-friendly is an unresolved issue. Whether the benefits are
worth all the changeover costs is another unsettled issue (see section VI).
But if the software designers have met their goal, if
modern computers and software are more user-friendly and more flexible,
and if the computer power on your desk has been directed toward
that end, we would not capture much of the improved interface in economic
statistics.
The computer facilitates the reorganization of economic activity, and
the gains from reorganization also may not show up in economic statistics.
The following example (but not the analysis) comes from Steiner (1995).
Consider a not-so-hypothetical toy company that once manufactured toys
in the United States. The computer, along with faster and cheaper telecommunications
through the Internet, has made it possible to operate a toy business in
a globally integrated way. Today, the company's head office (in the U.S.)
determines what toys are likely to sell in the United States, designs the
toys, and plans the marketing campaign and the distribution of the toys.
But it contracts all toy manufacturing to companies in Asia, which may
have no ownership affiliation with the U.S. company. When the
toys are completed, they are shipped directly from the Asian manufacturer
to large U.S. toy retailers; thus, this U.S. toy company has no
substantial U.S. wholesale arm, either. The billing and financial transactions
are handled in some offshore financial center, perhaps in the Bahamas.
The computer and advanced information technology have made it possible
for this company to locate the activities of manufacturing, distribution,
financial record-keeping and so forth in different parts of the world where
costs are lowest.
From the standpoint of the stockholders and company management, the
computer has permitted vast increases in the profitability of this company.
But where do these gains show up in U.S. productivity statistics?
In this case, the computer may have increased the productivity of Asian
toy manufacturers, of Liberian shipping companies, and of Caribbean banking
and payments establishments, by giving them better access to American markets
and American distribution. The only activity left in the United States
is the toy company's head office. What is the measure of "output" of a
head office?
If the impact of the computer on the toy company's profitability does contribute
to U.S. productivity, calculating the computer's productivity effect requires
determining ways to account for the design, marketing, distribution, and
coordinating activities of the U.S. head office. Those are services activities
where the outputs are presently imperfectly measured.(16)
V. You don't see computers in the productivity statistics yet,
but wait a bit and you will.(17)
David (1990) has drawn an analogy between the diffusion of electricity
and computers. David links electricity and computers because both "form
the nodal elements" of networks and "occupy key positions in a web of strongly
complementary technical relationships." Because of their network parallels,
David predicts that computer diffusion and the effects of computers on
productivity will follow the same protracted course as electricity:
"Factory electrification did not...have an impact on productivity growth
in manufacturing before the early 1920's. At that time only slightly more
than half of factory mechanical drive capacity had been electrified....
This was four decades after the first central power station opened for
business" (David, 1990, page 357).
This idea has received very widespread diffusion in the popular press.
Whether or not the computer's productive potential has yet to be realized
fully (see section VI), I doubt that electricity provides an instructive
analogy. Mokyr (1997) warns us that: "Historical analogies often mislead
as much as they instruct and in technological progress, where change is
unpredictable, cumulative, and irreversible, the analogies [are] more dangerous
than anywhere." The networking properties of computers and electricity
may or may not be analogous, but the computer differs fundamentally from
electricity in its price behavior, and therefore in its diffusion pattern.
More than four decades have passed since the introduction of the commercial
computer. The price of computing power is now less than one-half of
one-tenth of 1 percent (0.0005) of what it was at its introduction
(see table 4). In about 45 years, the price of computing power has declined
more than two-thousandfold.
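As a quick check on the arithmetic (a sketch using only the round figures quoted above):

```python
# Check the computing-price arithmetic quoted above: a fall to 0.0005
# of the introductory price over roughly 45 years.
ratio = 0.0005   # final price as a fraction of the introductory price
years = 45

fold_decline = 1 / ratio                      # how many fold the price fell
annual_factor = ratio ** (1 / years)          # price multiplier per year
annual_decline_pct = (1 - annual_factor) * 100

print(f"fold decline: {fold_decline:.0f}")                        # 2000
print(f"implied annual decline: {annual_decline_pct:.1f}% per year")
```

A 2,000-fold fall over 45 years works out to an average price decline of roughly 15 percent per year, sustained over the entire period.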
No remotely comparable price decreases accompanied the dawning of the
electrical age. David reports that electricity prices only began to fall
in the fourth decade of electric power; and although Nordhaus (1997) estimates
that the per lumen price of lighting dropped by more than 85 percent between
1883 and 1920, two-thirds of that is attributable to improved efficiency
of the light bulb, rather than to electric power generation. Sichel (1997)
presents an alternative estimate. Gordon's (1990) price indexes for electricity
generation equipment only extend to 1947, but there is little in that history
to suggest price declines even remotely in the same league as those for
computers.
Because their price histories are so different, the diffusions of electric
power and computing power have fundamentally different--not similar--patterns.
In the diffusion of any innovation, one can distinguish two sources of
demand for it. The innovation may supplant an earlier technology for achieving
existing outcomes--new ways of doing what had been done before. An innovation
may also facilitate doing new things.
The introduction of electricity did not initially affect what had been
done before by water power or steam power. The manufacturing plant that
had been located by the stream and that transformed water power to mechanical
energy directly did not convert to electricity. It did not convert because
water power remained cheaper (electricity transformed water power twice,
first into electrical energy and then into mechanical energy).(18)
Electricity made it possible to locate manufacturing plants away from the
stream side. That is, the diffusion process for electricity was initially
the diffusion to new ways of doing things. Only after a long lag did electricity
generation affect the things that had been done before with water or steam
power.
In the computer diffusion process, the initial applications supplanted
older technologies for computing.(19) Water
and steam power long survived the introduction of electricity; but old,
pre-computer age devices for doing calculations disappeared long ago. Do
our research assistants still use Marchant calculators? (Or even know what
they are?) The vast and continuous decline in computing prices has long
since been factored into the decision to replace the computational analogy
to the old mill by the stream--electric calculators, punched-card sorters,
and the like--with modern computers.
In electricity, extensions to new applications preceded the displacement
of old methods because the price of electricity did not make the old methods
immediately obsolete. In the computer diffusion process, the displacement
of old methods came first, because old calculating machines were rapidly
made obsolete by the rapidly-falling price of computing power.
Although some new applications of computing power are quantum improvements
in capabilities, price effects matter here as well. In adopting computerized
methods, users implement the high-valued applications first. As computing power
becomes ever cheaper, the incremental computerizations are lower valued:
New applications are low-value applications at the margin, not high-value
ones. This principle is suggested by utilization rates. When I was a graduate
student, I took my cards to the computer center and waited for the computer;
the computer was expensive, and I was cheap. Now, the computer on my desk
waits for me. And it is not so much that I have gotten more expensive,
it is instead that the computer has become so very cheap that it can be
used for activities that are themselves of not particularly high value.(20)
The price histories of electric power and computing power during their
respective first four decades differ by at least a thousandfold. What is
known about the differences in the diffusion processes for electric power
and computing power is consistent with that thousandfold price difference.
Indeed, it is inconceivable that it would be otherwise. Accordingly, I
do not believe that the diffusion story for electric power, as outlined
by David, matches very well the diffusion history--and prospects--of computing
power.
VI. You see computers everywhere but in the productivity statistics
because they're not as productive as you think.
Dilbert (cartoon of 5/5/97) claimed that the "total time that humans
have waited for Web pages to load...cancels out all the productivity gains
of the information age." Dilbert is certainly not the only curmudgeon who
has questioned whether the spread of information technology has brought
with it benefits that are consistent with either the amount of computer
investment or the vast increase in computer speed.
It is commonplace that the history of the computer is the constant replacement
of one technology with a newer one. The down side of rapid technological
advance is the breathtaking rate of obsolescence that has caused the scrapping
of earlier waves of investment well before the machines are worn out. Had
these machines not been discarded, the flow of computer machine services
today would be larger--but probably not that much larger. A personal computer
with an 8086 chip attained 0.33 MIPS (a measure of speed) in 1978; a Pentium-based
computer reached 150 MIPS in 1994, and more than 200 today. Had
we saved all of the 8086 machines ever built, they would not add that much
to today's total stock of installed MIPS.
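The point can be put in rough numbers (a back-of-the-envelope sketch using only the MIPS figures quoted in the text):

```python
# Rough arithmetic for the obsolescence point: how many 1978-vintage
# 8086 machines (0.33 MIPS each) does one modern 200-MIPS machine replace?
mips_8086 = 0.33
mips_modern = 200.0

machines_equivalent = mips_modern / mips_8086
print(f"one modern machine = {machines_equivalent:.0f} 8086 machines")
```

On these figures, one current machine supplies the computing power of roughly six hundred 8086-class machines, so even the entire surviving stock of 8086s would barely register in today's installed MIPS.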
Nevertheless, no matter how little they are worth today, real resources
went into the production of those 8086 machines in 1978-82 when they were
state of the art. There is no return today for the substantial resources
given up to investment in computers in the relatively recent past.
It is not only the hardware. Stories of very expensive "computer systems
redesign" projects are legion. They usually emerge as newspaper anecdotes
when there is a very expensive project, or some abject failure of a redesign
project, usually after a massive cost overrun. Examples are a $3+ billion
Internal Revenue Service failure several years ago, and a Medicare record
system project that was criticized more recently. In some organizations,
it almost appears that the completion of a computer systems redesign project
brings with it the realization (or claim) that it is outdated and needs
to be replaced with a new system. The Wall Street Journal (April 30, 1998,
page 1) reported that "42% of corporate information-technology projects
were abandoned before completion" and "roughly 50% of all technology projects
fail to meet chief executives' expectations." The "year 2000 problem" could
be added to the list, though that seems more a managerial problem, or the
result of software lasting far longer than its designers intended, than
an inherent computer/software problem.
At the personal computing level, there is the constant churning of standardized
personal computer operating systems, spreadsheet and word processing packages,
and so forth. Even if every new upgrade were a substantial improvement
for all the users, there is still the cost of the conversion: Many persons
within the computer industry and without have asked whether conversion
costs are adequately considered in the upgrade cycle. Do most users really
need to be on the frontier? But the upgrading process goes on.
Raff and Trajtenberg (1997) show that the quality-adjusted price decline
for automobiles in the early part of their history was comparable to computers.
But for a large number of car buyers, the Model T proved good enough; they
did not need to be on the technological frontier, even if some other buyers
wanted the best that could be obtained. The failure of the Model T to emerge
in the computer market may be evidence of technicians run amok, but it
also may reflect fundamental differences in the computer market and in
the car market. If you bought a used Model T you could still drive it on
the same highway that the new one used (in the case of the computer you
can't drive the old machine on the new highway). And you could always find
someone to fix it. The computer repair industry has shown nowhere near
the growth of the computer making industry.
Is all of this upgrading productive or is it wasteful? Informed opinion
is divided. One may do the same word processing tasks with new technology
and with old. What is the value of the marginal improvements in, say, convenience
and speed? They may not, as some users assert, be worth all that much--but
their cost is also not high, in the newest technology. Graphics and icons,
for example, take a lot of computer capacity; but because machine capacity
is cheap in the newest technology, the incremental cost of providing graphics
and icons is low, so they are provided in the newest software. From the
technologists' view, they can at small cost give users a little
animated icon to show when a page is printed, instead of a mundane "job
printed" signal, so why not do it? And they can also give software users
a tremendous range of menu choices at small cost, so why not do it?
The curmudgeon points to the end result of adding all these features:
A far faster computer, with far greater memory capacity, that executes
many of its jobs more slowly than the older, slower machine. A 386 machine,
with an earlier operating system and earlier version of a word processing
package, may be faster for some operations than a Pentium with the latest
operating system and word processing upgrade. To get the gains offered
by the newest word processing upgrade requires a considerable cost in upgraded
operating system and upgraded machine, plus giving up something that the
old system provided--the opposite way of looking at it from the software
designers' technologist view.
Menu choices, too, have costs. My newest e-mail upgrade has far more
choices than the old, but some of the things I did before now require more
key-strokes, and the system is much slower in executing the commands that
the earlier version performed with alacrity. That is just a computer application
of the general economic principle that, while increased opportunities for
choice are desirable, making choices is costly, so I do not want to be
forced continually to choose from a wider menu.
Some computer professionals also question the direction recent software
design has been taking. Michael Dertouzos was quoted (New York Times,
June 24, 1997, section C, page 1): "Calling today's machine 'user friendly'
because of its endless choice of fonts and screen patterns is tantamount
to dressing a chimpanzee in a green hospital gown and earnestly parading
it as a surgeon."
Then too, the user is conscious, if the software designer is not, that
changeover itself is costly. It is not just the acquisition cost of a new
package, nor the cost of mounting and debugging it, it is also the substantial
cost to the users in unlearning old commands and learning new ones.(21)
The time cost of users is undoubtedly a far greater component of the cost
of upgrading systems than any of the direct costs associated with the changes.
Typically, only the direct costs are recorded in organizations' ledgers,
but the "down time" associated with changes takes its substantial toll
on productivity (Blinder and Quandt, 1997, also emphasize the costs of
learning and obsolescence as a drag on realizing the productive potential
of computers).
Computer industry spokespersons are fond of the analogy that says computer
industry technology has given consumers something like a Rolls-Royce that
goes 200 miles per hour, gives 500 miles per gallon, and costs $100. The
curmudgeon on computer and software progress hears a different story:
"We have the software equivalent of a new toll road for you to drive,
but you must buy our new Rolls-Royce equivalent computer to use it. And
you can't drive on the old highway, which was already paid for, because
we don't maintain it anymore."
Nevertheless, people do not, by and large, forgo the newest improvements
and retain the old technology, or at least not for very long. It is troubling
to appeal to some sort of market failure (the well-publicized Microsoft
antitrust case to the contrary) as the reason. If there is something to
this "wastefulness" explanation for the computer paradox, it is a managerial
and decision-making failure on the part of users of computer equipment
and software.
If past decisions on computers have an element of inefficiency, what
of the future? One way of looking at it is to say that when we finally
learn to use our computers, the future promises to fulfill the hopes so
often disappointed in the past. Computers are productive; it is
only that we humans have not used them productively, and they will
improve productivity in the future, even if they haven't in the past. That
says that the true, potential return to computers is much greater than
the return that has been measured so far. If the true or potential return
is much greater, then the economy ought to invest far more in computers
than it has.
But if past decisions on computers have been incorrect or inappropriate,
that may also suggest that we have already invested too much in computers.
Computers are less productive than they were thought to be when decisions
were made to "computerize." That bodes less well for the future. As with
most of the debate on this topic, research knowledge at present does not
go much beyond the insights of Dilbert.
One final point should be made. One of the computers' accomplishments
may be to cut the cost of various kinds of rent-seeking behavior, and to
facilitate rivalrous oligopoly behavior, market sharing strategies, and
so forth. The computer has made it possible to execute far more stock market
transactions, for example. Bresnahan, Milgrom and Paul (1992) explored
the value of enhanced information in the stock market. They concluded that
improved information did not contribute to productivity because information
really just affected who received the gains, it did not increase the social
gain from stock market activity. That suggests the importance of distinguishing
the computer's effects on individuals or on firms from its effects on the
economy--some gains to individuals or to firms are at the expense of other
individuals or firms, so there is no net effect at the economy-wide level.
VII. There is no paradox: Some economists are counting innovations
and new products on an arithmetic scale when they should use a logarithmic
scale.
For many economists, and especially business economists, the preceding
discussion will not be satisfactory. They believe there is a paradox because
they believe they see more technical changes, more new products, more changes
in consumer service, in methods of delivery, and in other innovative areas
than is consistent with government productivity numbers. We are a "new
economy," in this view, inundated with an unprecedented flow of innovations
and new products, and none of this flow of the new is reflected in the
productivity numbers.
This new economy view is repeated in the newspapers, in business publications
and places such as Federal Reserve Bank reviews, and we hear it at conferences.
It once was true, the story goes, that products were standardized and therefore
easy to measure. Today, we are told, there is an unprecedented stream
of new products, quality improvements, and customized products to meet
market niches; product cycles are shortening to an unprecedented
degree; new services from industries such as banking and finance are being
introduced with a rapidity that is unprecedented historically; and the
Chairman of the Federal Reserve Board has been quoted to the effect that
the unprecedented current level of technological innovation is
a once-in-a-century phenomenon that will yield an enormous upward surge
in productivity.
In the new economy view, the productivity paradox is not really a computer
paradox. Rather, people are stacking up and cumulating anecdotes, whether
from within their own companies or from what they read in the newspapers
or hear other people saying. Those cumulated anecdotes do not seem consistent
with the modest rise in the aggregate productivity numbers. From this point
of view, it is not so much a belief that the computer has increased productivity,
but rather a belief that productivity has improved, based on other evidence.
Indeed, the sentence in Solow (1987) immediately preceding the widely-quoted
aphorism makes the same point: "[The authors] are somewhat embarrassed
by the fact that what everyone feels to have been a technological
revolution, a drastic change in our productive lives, has been accompanied
everywhere...by a slowing-down of productivity growth, not by a step up"
(emphasis supplied).
Thus, the computer was a signal, or perhaps a symbol, for all this innovation
and new-product productivity that people thought was happening. The computer provided
a rationale that explained why this perception that the rate of technical
change was accelerating in the economy was correct, and why the productivity
statistics were wrong. For that reason, economists took the paradox seriously.
It wasn't so much whether computers were seen everywhere, or not seen everywhere,
or whether they were themselves productive, or whether some of their uses
were wasteful, or the other considerations discussed in sections I-VI of
this paper. It was, rather, that the computer gave plausibility to all
the new things economists thought they saw in an anecdotal way but which
did not show up in the aggregate productivity numbers.
Those anecdotes about new products, new services, new methods of distribution
and new technologies are no doubt valid observations. Although no one knows
how to count the number of these "new" things, I would not seriously dispute
the proposition that there is more that is new today than there was at
some time in the past. Yet, these anecdotes wholly lack historical perspective,
and for that reason are misleading as evidence on productivity.
To have an impact on productivity, the rate of new product and
new technology introductions must be greater than in the past. A simple
numerical example makes the point. Suppose all productivity improvements
come from the development of new products. Suppose, further, that in some
initial period 100 products existed and that ten percent of the products
were new. In the following period, there must be 11 new products just to
keep the rate of productivity growth constant, and in the period after
that 12 new products are required. At the end of 10 years, a constant productivity
rate requires 26 new products per year, and after 20 years, 67 new products,
and so on, as the arithmetic of compound increases shows. As the economy
grows, an ever larger number of new products is required just to keep the
productivity growth rate constant.
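The arithmetic of the example can be checked directly (a sketch assuming, as in the text, 100 initial products, 10 percent of them new, and the product count compounding at 10 percent per year):

```python
# Compound arithmetic of the new-products example: to keep 10 percent
# of products "new" each year, the required number of new products
# must itself grow 10 percent per year.
initial_products = 100
share_new = 0.10

def new_products_needed(year):
    # Products after `year` years of 10% growth, times the 10% share.
    return share_new * initial_products * (1 + share_new) ** year

for year in (1, 2, 10, 20):
    print(year, round(new_products_needed(year)))
```

The required count of new products rises from 11 in the first year to about 26 after a decade and about 67 after two decades, just to hold the productivity growth rate constant.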
Most of the anecdotes that have been advanced as evidence for the "new
economy" amount to an assertion that there are a greater number
of "new" things, which is not necessarily a greater rate. As an example,
many economists have cited the number of products carried in a modern grocery
store as evidence of increased consumer choice, of marketing innovations,
and so forth.(22) Diewert and Fox (1997,
Table 5) report that in 1994 there were more than twice as many products
in the average grocery store as in 1972 (19,000, compared with 9,000).
But the 1948-72 increase (from 2,200 in 1948 to 9,000) was more than fourfold,
and the intervals 1948-72 and 1972-94 are roughly equal. Thus it is true
that in 1994 there were many more products in grocery stores than there
were two decades before; but the rate of increase has fallen.
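The comparison can be made precise with annualized growth rates (a sketch using the product counts cited from Diewert and Fox):

```python
import math

# Annualized growth rates of grocery-store product counts cited from
# Diewert and Fox (1997): 2,200 products (1948), 9,000 (1972), 19,000 (1994).
def annual_growth_pct(start, end, years):
    return 100 * math.log(end / start) / years

early = annual_growth_pct(2200, 9000, 1972 - 1948)
late = annual_growth_pct(9000, 19000, 1994 - 1972)
print(f"1948-72: {early:.1f}% per year")   # about 5.9% per year
print(f"1972-94: {late:.1f}% per year")    # about 3.4% per year
```

The product count kept rising, but the annual rate of increase fell by roughly half between the two intervals.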
Some other illustrations enhance the point. The Boskin Commission cited
welfare gains from the increased availability of imported fine wines, and
so forth. Because of the great reduction in transportation costs, we now
get Australian wine in the United States at low prices (as low, in my experience,
as in Australia). That is certainly an increase in the number of commodities
available, and an increase in welfare. But is the increase in tradeable
commodities a larger proportionate increment to choice and to consumption
opportunities than the increments that occurred in the past?
Diewert (1993) cites an example, taken from Alfred Marshall, of a new
product in the 19th century: Decreased transportation costs, owing to railroads,
made fresh fish from the sea available in the interior of England for the
first time in the second half of the 19th century. Mokyr (1997) observed
that: "Nothing like the [present] unprecedented increase in the quality
and variety of consumer goods can be observed in Britain during the Industrial
Revolution. The working class still spent most of its income on food, drink,
and housing." Considering the very small number of consumption goods then
available to the average worker, and even allowing for the fact that the
fresh fish were undoubtedly initially consumed mostly by the middle class,
was the introduction of fresh fish a smaller proportionate increase in
the number of new commodities than is the availability of Australian wine
and similar goods a century later? I suspect the best answer to this question
is: we do not know. But we also have looked at the decade of the 1990's
with far too short a historical perspective.
In developing a related point, Mokyr (1997) refers to "the huge improvements
in communications in the 19th century due to the telegraph, which for the
first time allowed information to travel at a rate faster than people....
The penny post, invented...in the 1840s, did an enormous amount for communications
-- compared to what was before. Its marginal contribution was certainly
not less than Netscape's."
One could go on. My numerical example, above, implied that each new
product had the same significance as before. In fact, new products of the
1990's must equal the significance of automobiles and appliances in the
1920's and 1930's (home air conditioning first became available in the
early 1930's, for example), and of television and other communications
improvements in the 1940's and 1950's (mobile telephones, for example,
were introduced in the 1940's). If the average significance of new products
in the 1990's is not as great as for individual new products from the past,
then the number of them must be greater still to justify the new economy
view of the paradox.
The same proposition holds for quality change. It is amazing to see
quality improvements to automobiles in the 1990's, great as they have been,
held up as part of the unprecedented improvement story, or--as in a press
account I read recently--quality change in automobiles given as an example
of the new economy, contrasted with a ton of steel in the old. Actually,
the first thing wrong with that contrast is that quality change in a ton
of steel has been formidable. Second, quality change in autos is a very
old problem in economic statistics, it did not emerge in the 1990's as
a characteristic of the new economy. Hedonic price index methodology was
developed in the 1930's to deal with quality change in automobiles (Court,
1939). The study by Raff and Trajtenberg (1997) suggests that the rate
of quality improvement in automobiles was greater in the first decade of
the twentieth century than in its last decade. Again, much of what has
been said about the new economy is true; what has been lacking is a proper
historical appreciation for the magnitudes and significance of new product
introductions and quality change in the past.
I believe that the number of new products and "new things" is greater
than before. But that is not the question. The proper question is: Is the
rate of improvement, the rate of introduction of new things, unprecedented
historically? I do not believe we know the answer to that question. If
the number of "new things" is a measure of productivity improvement, then
we have to have an increase in the rate of introduction of new things,
not just an increase in the number. Most of the anecdotes that have been
cited for the "new economy" suggest that many economists have been looking
at the wrong question: at the number of new things rather than the rate.
Thus, the paradox has gained acceptability partly because some economists
have mistakenly been counting innovations on an arithmetic scale, and--finding
more of them--have thought they had evidence confirming the paradox. They
ought to be looking at a logarithmic scale, a scale that says you must
turn out ever greater numbers of "new things" to keep the current rate
of "new things" up to the rates of the past.
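The point can be put in a two-line arithmetic sketch (the base count and growth rate here are invented for illustration): holding the rate of new-product introduction constant forces the count per decade to grow geometrically, so the raw increments keep widening even though nothing is accelerating.

```python
# Invented illustration: to hold the *rate* of new-product introduction
# constant at, say, 50 percent per decade, the *number* of new products
# must grow geometrically -- a straight line only on a logarithmic scale.
base = 100    # new products in a base decade (assumed)
rate = 0.5    # constant proportional growth per decade (assumed)
counts = [round(base * (1 + rate) ** d) for d in range(5)]
print(counts)                  # 100, 150, 225, 338, 506
# On an arithmetic scale the required increments keep widening:
increments = [b - a for a, b in zip(counts, counts[1:])]
print(increments)              # 50, 75, 113, 168
```

Counting the raw number of new things (the increments) will always find "more than before" even when the underlying rate is unchanged, which is exactly the mistake described above.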
We look at the new products and new technical changes at the end of
the 20th century, and we are tremendously impressed by them. We should
be. It is clear that those new products are increasing welfare, and the technical
innovations are contributing to output. But are they increasing at an increasing
rate? Is the number of new products increasing more rapidly on a logarithmic
scale? That is not clear at all. For the "new things" to improve productivity,
they must be increasing at an increasing rate. I think it safe to assert
that the empirical work in economic history that would confirm the increasing
rate hypothesis has not been carried out.
References
Baily, Martin N., and Robert J. Gordon. 1988. The productivity slowdown,
measurement issues, and the explosion of computer power. Brookings Papers on
Economic Activity (2):347-420.
Berger, Allen N., and David B. Humphrey. 1992. Measurement and efficiency
issues in commercial banking. In Output Measurement in the Service Sectors,
Zvi Griliches, ed. National Bureau of Economic Research Studies in Income and Wealth
Vol. 56. Chicago: University of Chicago Press.
Berger, Allen N., and Loretta J. Mester. 1997. Efficiency and productivity
trends in the U.S. commercial banking industry: A comparison of the 1980s and the 1990s.
Centre for the Study of Living Standards Conference on Service Sector Productivity
and the Productivity Paradox. Manuscript (April 11-12). http://www.csls.ca/conf-pap/mest.pdf.
Berndt, Ernst R., and Jack E. Triplett, eds. 1990. Fifty Years of
Economic Measurement: The Jubilee of the Conference on Research in Income and Wealth. National
Bureau of Economic Research, Studies in Income and Wealth vol. 54. Chicago: University
of Chicago Press.
Blinder, Alan S., and Richard E. Quandt. 1997. The computer and the
economy: Will information technology ever produce the productivity gains that were
predicted? The Atlantic Monthly 280(6) (December): 26-32.
Bresnahan, Timothy F. 1986. Measuring the spillovers from technical
advance: Mainframe computers in financial services. American Economic Review 76(4):
742-755.
Bresnahan, Timothy F., Paul Milgrom, and Jonathan Paul. 1992. The real
output of the stock exchange. In Output Measurement in the Service Sectors, Zvi Griliches,
ed. National Bureau of Economic Research Studies in Income and Wealth Vol. 56. Chicago:
University of Chicago Press.
Brynjolfsson, Erik, and Lorin Hitt. 1996. Paradox lost? Firm-level evidence
on the returns to information systems spending. Management Science 42(4) (April):
541.
Court, Andrew T. 1939. Hedonic price indexes with automotive examples.
In The Dynamics of Automobile Demand, pp. 99-117. New York: General Motors Corporation.
David, Paul A. 1990. The dynamo and the computer: An historical perspective
on the modern productivity paradox. American Economic Review 80 (May): 355-61.
Denison, Edward F. 1989. Estimates of Productivity Change by Industry:
An Evaluation and an Alternative. Washington, DC: Brookings Institution.
Diewert, W. Erwin. 1993. The early history of price index research.
In Contributions to Economic Analysis: Essays in Index Number Theory Vol. 1, W. Erwin
Diewert and Alice O. Nakamura, eds. New York: North-Holland.
Diewert, W. Erwin, and Kevin Fox. 1997. Can measurement error explain
the productivity paradox? Centre for the Study of Living Standards Conference on Service
Sector Productivity and the Productivity Paradox. Manuscript (April 11-12).
http://www.csls.ca/conf-pap/diewert2.pdf.
Dulberger, Ellen R. 1993. Sources of price decline in computer processors:
Selected electronic components. In Price Measurements and Their Uses, Murray F. Foss,
Marilyn E. Manser, and Allan H. Young, eds. National Bureau of Economic Research, Studies
in Income and Wealth Vol. 57. Chicago: University of Chicago Press.
Federal Register. 1997. Part II, Office of Management and
Budget: 1997 North American
Industry Classification System--1987 Standard Industrial Classification
Replacement; Notice Vol. 62, No. 68 (April 9).
Fixler, Dennis, and Kim Zieschang. 1997. The productivity of the banking
sector: Integrating financial and production approaches to measuring financial service output.
Centre for the Study of Living Standards Conference on Service Sector Productivity
and the Productivity Paradox. Manuscript (April 11-12). http://www.csls.ca/conf-pap/fixler2.pdf.
Flamm, Kenneth. 1993. Measurement of DRAM prices: Technology and market
structure. In Price Measurements and Their Uses, Murray F. Foss, Marilyn E. Manser,
and Allan H. Young, eds. National Bureau of Economic Research Studies in Income and Wealth
Vol. 57. Chicago: University of Chicago Press.
Flamm, Kenneth. 1997. More for Less: The Economic Impact of Semiconductors.
San Jose, CA: Semiconductor Industry Association.
Frei, Frances X., and Patrick T. Harker. 1997. Innovation in retail banking.
National Academy of Science, National Research Council's Board on Science, Technology,
and Economic Policy, Conference on America's Industrial Resurgence: Sources and Prospects.
Draft manuscript. (December 8-9). http://www2.nas.edu/step/2296.html
Gandal, Neil. 1994. Hedonic price indexes for spreadsheets and an empirical
test for network externalities. RAND Journal of Economics 25.
Gillingham, Robert. 1983. Measuring the cost of shelter for homeowners:
Theoretical and empirical considerations. Review of Economics and Statistics
65(2) (May): 254-265.
Gordon, Robert J. 1990. The Measurement of Durable Goods Prices.
Chicago: University of Chicago Press.
Griliches, Zvi, ed. 1992. Output Measurement in the Service Sectors.
National Bureau of Economic Research Studies in Income and Wealth Vol. 56. Chicago:
University of Chicago Press.
------. 1994. Productivity, R&D, and the data constraint. American
Economic Review 84(1) (March): 1-23.
------. 1997. Plenary Session: Perspectives on the Productivity Paradox.
Centre for the Study of Living Standards, Conference on Service Sector Productivity and the
Productivity Paradox, Ottawa. Transcription. (April 11-12). http://www.csls.ca/conf-pap/Conf-fin.pdf.
Grimm, Bruce T. 1998. Price indexes for selected semiconductors, 1974-96.
Survey of Current Business 78 (2) (February): 8-24.
Harhoff, Dietmar and Dietmar Moch. 1997. Price indexes for PC database
software and the value of code compatibility. Research Policy 24(4-5) (December):
509-520.
Harper, Michael J., Ernst R. Berndt and David O. Wood. 1989. Rates of
return and capital aggregation using alternative rental prices. In Dale W. Jorgenson and
Ralph Landau, eds., Technology and Capital Formation. Cambridge, MA: MIT Press.
Jorgenson, Dale W. 1980. Accounting for capital. In Capital, Efficiency
and Growth, George M. von Furstenberg, ed. Cambridge: Ballinger.
Jorgenson, Dale W. 1989. Capital as a factor of production. In Technology
and Capital Formation, Dale W. Jorgenson and Ralph Landau, eds. Cambridge,
MA: MIT Press.
Jorgenson, Dale W., and Kevin Stiroh. 1995. Computers and growth. Economics
of Innovation and New Technology 3(3-4): 295-316.
Krueger, Alan B. 1993. How computers have changed the
wage structure: Evidence from micro data 1984-1989. Quarterly Journal
of Economics CVIII: 33-60.
Lichtenberg, Frank R. 1993. The output contributions of computer equipment
and personnel: A firm-level analysis. National Bureau of Economic Research working paper
4540 (November).
Longley, James W. 1967. An appraisal of least squares programs for the
electronic computer from the point of view of the user. Journal of the American Statistical
Association 62(319) (September): 819-841.
McCarthy, Paul. 1997. Computer prices: How good is the quality adjustment?
Capital Stock Conference, Organisation for Economic Co-operation and Development. Canberra
(March 10-14). http://www.oecd.org/std/capstock97/oecd3.pdf.
Mokyr, Joel. 1997. Are we living in the middle of an industrial revolution?
Federal Reserve Bank of Kansas City Economic Review 82(2): 31-43.
Moreau, Antoine. 1996. Methodology of the price index for microcomputers
and printers in France. In OECD Proceedings: Industry Productivity, International
Comparison and Measurement Issues. Paris: Organisation for Economic Co-operation
and Development.
Morrison, Catherine J., and Ernst R. Berndt. 1991. Assessing the productivity
of information technology equipment in U.S. manufacturing industries. National Bureau
of Economic Research working paper 3582 (January).
Nordhaus, William D. 1994. Do real-output and real-wage measures capture
reality? The history of lighting suggests not. In The Economics of New Goods,
Timothy F. Bresnahan and Robert J. Gordon, eds. National Bureau of Economic Research, Studies
in Income and Wealth Vol. 58. Chicago: University of Chicago Press.
Oliner, Stephen D. 1993. Constant-quality price change, depreciation,
and retirement of mainframe computers. In Price Measurements and Their Uses, Murray
F. Foss, Marilyn E.
Manser, and Allan H. Young, eds. National Bureau of Economic Research,
Studies in Income and Wealth Vol. 57. Chicago: University of Chicago Press.
Oliner, Stephen D., and Daniel E. Sichel. 1994. Computers and output
growth revisited: How big is the puzzle? Brookings Papers on Economic Activity (2):
273-317.
Prescott, Edward C. 1997. On defining real consumption. Federal Reserve
Bank of St. Louis Review 79(3) (May/June): 47-53.
Raff, Daniel M. G., and Manuel Trajtenberg. 1997. Quality-adjusted prices
for the American automobile industry: 1906-1940. In The Economics of New Goods,
Timothy F. Bresnahan and Robert J. Gordon, eds. National Bureau of Economic Research Studies
in Income and Wealth Vol. 58. Chicago: University of Chicago Press.
Romer, Paul M. 1986. Increasing returns and long-run growth. Journal of
Political Economy 94(5) (October): 1002-1037.
Sherwood, Mark. Forthcoming. Output of the property and casualty insurance
industry. Canadian Business Economics.
Sichel, Daniel E. 1997. The Computer Revolution: An Economic Perspective.
Washington, DC: Brookings Institution Press.
Solow, Robert M. 1987. We'd better watch out. New York Times Book
Review (July 12): 36.
Standard Industrial Classification System. 1987. Executive Office
of the President, Office of Management and Budget. Springfield, VA: National Technical Information
Service.
Steiner, Robert L. 1995. Caveat! Some unrecognized pitfalls in census
economic data and the input-output accounts. Review of Industrial Organization 10(6)
(December): 689-710.
Stiroh, Kevin J. 1998. Computers, productivity and input substitution.
Economic Inquiry. 36(2) (April): 175-191.
Trajtenberg, Manuel. 1990. Economic Analysis of Product Innovation--The
Case of CT Scanners. Cambridge, MA: Harvard University Press.
Triplett, Jack E. 1983. Concepts of quality in input and output price
measures: A resolution of the user value-resource cost debate. In The U.S. National Income
and Product Accounts: Selected Topics, Murray F. Foss, ed. National Bureau of Economic
Research, Studies in Income and Wealth, Vol. 47. Chicago: University of Chicago Press.
Triplett, Jack E. 1989. Price and technological change in a capital
good: A survey of research on computers. In Technology and Capital Formation, Dale
W. Jorgenson and Ralph Landau, eds. Cambridge, MA: MIT Press.
Triplett, Jack E. 1992. Banking output. In The New Palgrave Dictionary
of Money and Finance Vol. 1, Peter Newman, Murray Milgate, and John Eatwell,
eds. New York: Stockton Press.
Triplett, Jack E. 1996. High-tech industry productivity and hedonic
price indices. In OECD Proceedings: Industry Productivity, International Comparison and
Measurement Issues. Paris: Organisation for Economic Co-operation and Development.
Triplett, Jack E. 1997. Measuring consumption: The post-1973 slowdown
and the research issues. Federal Reserve Bank of St. Louis Review 79(3) (May/June):
9-42.
U.S. Department of Commerce, Bureau of the Census. 1944. Statistical
Abstract of the United States 1943. Washington: U.S. Government Printing Office.
U.S. Department of Commerce, Bureau of Economic Analysis. 1990. Improving
the quality of economic statistics. Survey of Current Business 70(2) (February):
2.
U.S. Department of Commerce, Bureau of Economic Analysis. 1998. Survey
of Current Business 78(3) (March): Table 5.4, p. D-13.
U.S. Department of Labor, Bureau of Labor Statistics. 1998a. Multifactor
productivity, major sector multifactor productivity index. http://146.142.4.24/cgi-bin/dsrv?mp.
U.S. Department of Labor, Bureau of Labor Statistics. 1998b. Multifactor
productivity trends, 1995 and 1996: Private business, private nonfarm business, and manufacturing.
Press release (May 6).
Wyckoff, Andrew W. 1995. The impact of computer prices on international
comparisons of labour productivity. Economics of Innovation and New Technology 3.
Table 1
Contributions of Computers, Information Equipment and Software to Economic Growth

                                             Oliner and Sichel (1994)a   Jorgenson and Stiroh (1995)d
                                             1970-79      1980-92        1979-85   1985-90   1990-96
Output growth rate (average annual rate)       3.42         2.27           2.35      3.09      2.36
Contributions of:
  Computing equipment                          0.09         0.21           0.15      0.14      0.12
  Information processing equipment             0.25b        0.35b          n.a.      n.a.      n.a.
  Computing hardware, software and
    labor, combined (1987-93)                  n.a.         0.40c          n.a.      n.a.      n.a.

Notes:
a) Oliner and Sichel (1994), Table 3, page 285, unless otherwise noted.
b) Oliner and Sichel (1994), Table 10, page 305.
c) Oliner and Sichel (1994), Table 9, page 303; note that the time
period differs from the other two lines.
d) Jorgenson and Stiroh (1995): updated tables supplied by the authors.
Table 2
Computer, Information Equipment, and Software Shares
(Data for 1993, in percent)

                                        Oliner and Sichel (1994)     Jorgenson and Stiroh (1995)
                                        Capital Stock   Income       Capital Stock   Capital Services
                                        Shares          Shares       Share           Share
Computing equipment                       2.0a           0.9b          0.5e            1.8e
Information processing equipment         11.7a           3.5c          n.a.            n.a.
Computing hardware, software and
  labor, combined                         n.a.           2.7d          n.a.            n.a.

Notes:
a) Oliner and Sichel (1994), Table 2, page 279: share of the wealth
capital stock (see text).
b) Oliner and Sichel (1994), Table 10, page 305.
c) Oliner and Sichel (1994), Table 10, page 305.
d) Oliner and Sichel (1994), page 297.
e) Updated tables provided by the authors: share of the productive
capital stock (see text), which also includes land and consumer durables.
Table 3
Final-demand Services as a Proportion of Private Non-housing Purchases
(1996, in billions of dollars)

                                                          Billions   Percent
1. Gross domestic product, less government and housing     5,442.1    100.0
2. PCE non-housing services                                2,251.2     41.4
3. Net exports of services                                    96.6      1.8
4. Final-demand services (line 2 plus line 3)              2,347.8     43.1

Source: Survey of Current Business, December 1997, NIPA Tables 1.1 and 2.2.
Table 4
Computer Equipment Price Indexes
(1992 = 100)

         Mainframes       PCs     Computer Equipment
1958      142,773.6
1972        3,750.4
1982          382.5      578.5         404.9
1987          144.9      217.6         170.4
1992          100.0      100.0         100.0
1996           49.1       37.9          45.5
1997           42.1       25.2          34.6
Footnotes
1. This paper is based on remarks originally presented
at the Conference on Service Sector Productivity and the Productivity Paradox,
Ottawa, April 11-12, 1997. The draft was written while the author was Chief
Economist, Bureau of Economic Analysis, and was presented in preliminary
form at the January, 1998 Chicago meetings of the American Economic Association,
in a session titled "Is Technological Change Speeding Up or Slowing Down?"
I am greatly indebted to Claudia Goldin for conversations on some relevant
points of economic history. Copies of the paper can be obtained from the
author at: Brookings Institution, 1775 Massachusetts Ave., NW, Washington,
D.C. 20036, phone 202-797-6134, or e-mail: JTRIPLETT@BROOK.EDU.
2. U.S. Department of Labor (1998a, 1998b).
3. In 1977, the shares were 22 percent for information
equipment and 26 percent for industrial equipment (unpublished detailed data
found at the BEA Stat-USA websites, http://www.stat-usa.gov/BEN/ebb2/bea/aatitlot.prn
and http://www.stat-usa.gov/BEN/ebb2/bea/uadata.exe). "Information processing
and related equipment," in the BEA data, includes the categories Office,
computing and accounting machinery (which in turn includes computers and
peripheral equipment), Communication equipment, Instruments, and Photocopy
and related equipment. Computer equipment in 1997 amounted to about 40
percent of Information processing equipment, and about 14 percent of Producer
durable equipment investment.
4. The goods-producing industries, Agriculture, forestry
and fishing, Mining, Manufacturing, and Construction, accounted for 49
percent of employed persons, and 49 percent of the "experienced labor force,"
in the 1940 Census (Statistical Abstract of the United States, 1944, table
no. 128, pp. 116-118).
5. Oliner and Sichel (1994) compute computing equipment's
share of the wealth capital stock (2.0 percent), which is higher
than Jorgenson and Stiroh's share of the productive capital stock
(0.5 percent), partly because Jorgenson and Stiroh's capital stock includes
land and consumer durables. Note that the capital stock share of computers
is much smaller than their investment share; computers are very short-lived
investments.
6. Alternatively, if one thought the Solow paradox
referred to labor productivity, then growth in computer input affects
labor productivity, even if it does not affect multifactor productivity.
It seems to me, as it has seemed to others (see David, 1990, for example),
that Solow must have been talking about multifactor productivity, and not
labor productivity. In any case, labor productivity also slowed after 1973.
7. The major empirical work on semiconductor price
indexes is Flamm (1993, 1997), Dulberger (1993), and Grimm (1998). Triplett
(1996) compares computer, semiconductor, and semiconductor manufacturing
equipment price indexes.
8. Without these two simplifying expositional assumptions,
the price index becomes a very complicated construction, as I indicated
in Triplett (1989), and it unduly complicates the exposition for no gain
for present purposes. Empirically, however, the measurement is sensitive
to both assumptions.
9. Assuming that the exclusion of software from the
hedonic regression does not bias the coefficients of the included variables.
The bias to the price index from omitted variables might go either way,
depending on the unknown correlation between included and excluded variables,
and on the unobserved movement in the excluded variable.
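The omitted-variable logic in this footnote can be made concrete with a minimal numerical sketch (all numbers and the "software" characteristic are invented; this applies the textbook omitted-variable-bias formula, not results from any actual hedonic study):

```python
# Invented hedonic surface: price = 10*speed + 4*software. If the
# regression omits "software," the bias in the speed coefficient equals
# the omitted coefficient times the auxiliary slope of software on speed,
# so its direction depends on the sign of their correlation.
speed    = [1.0, 2.0, 3.0, 4.0, 5.0]    # included characteristic
software = [0.5, 1.0, 1.5, 2.0, 2.5]    # omitted, positively correlated
b_speed, b_soft = 10.0, 4.0             # "true" hedonic coefficients
price = [b_speed * s + b_soft * w for s, w in zip(speed, software)]

def slope(xs, ys):
    """Simple-regression slope of ys on xs, via the centered formula."""
    xb, yb = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((xi - xb) * (yi - yb) for xi, yi in zip(xs, ys))
            / sum((xi - xb) ** 2 for xi in xs))

est  = slope(speed, price)              # regression that omits software
bias = b_soft * slope(speed, software)  # omitted coefficient x auxiliary slope
print(est, b_speed + bias)              # both 12.0: biased upward here
```

With a negatively correlated omitted characteristic the same formula flips the sign of the bias, which is the indeterminacy the footnote describes.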
10. In the interval following BEA's introduction
of the hedonic computer equipment price indexes in 1985, they were not
extended owing to a combination of factors. (a) Shortage of resources within
BEA. Though there was something to this, the "Boskin Initiative" to improve
economic statistics came along soon after (1989), and there were no hedonic
projects in the Boskin Initiative (and very few resources for price index
improvements) (U.S. Department of Commerce, 1990). (b) Lack of appreciation,
perhaps, by decision-makers of the significance of what was done, and overreaction
to somewhat mild outside criticism and more forceful, though indirect,
criticism from within the U.S. statistical system. Though it made sense
to let the dust settle a bit after the introduction of the computer indexes,
this was undoubtedly the most far-reaching innovation, internationally,
in national accounting in the decade of the 1980's (for some of its international
implications, see Wyckoff, 1995).
11. In the revised BEA capital stock data (supplied
by Shelby Herman of BEA), these sectors account for 72.3 percent of computer
capital stock in the benchmark year 1992.
12. Most services in SIC division I are intermediate
products (such as the services of consulting economists) that do not enter
final GDP.
13. The BLS output measure used in the banking industry
productivity measure is a substantially different definition from the BEA
banking measure used in calculating components of GDP. See Triplett (1992).
14. Prescott (1997) includes owner-occupied housing
services in his "badly defined" category, on the grounds that a user cost
measure of housing services is theoretically preferable to owners' equivalent
rent, which is the measure now used in the national accounts and the CPI.
Because the standard Jorgenson (1989) expression for the user cost of capital
has the rental value on the left side of the equation, Prescott's point
cannot be a theoretical one because rental and user-cost measures should
in theory be the same. He must, rather, implicitly be asserting that user
cost estimates work better empirically in the case of owner-occupied housing
than the use of rental foregone. This empirical issue has been explored
extensively in the literature, and the evidence goes against Prescott's
assertion. See Gillingham (1983) for the case of owner-occupied housing
and Harper, Berndt, and Wood (1989) for analysis of comparable problems
in the estimation of user cost for other capital goods. I do not mean to
suggest that there are no problems with measuring the cost of owner-occupied
housing, just that Prescott's reasoning does not seem consistent with the
empirical work that has been done on this topic.
15. I reviewed the Boskin Commission bias estimates,
and their implications for the measurement of real PCE (and therefore productivity),
in Triplett (1997).
16. And in any case, there is no present convention
for imputing them to head offices.
In the 1987 SIC, a company head office or management office is designated
an "auxiliary." The employment and expenses of auxiliary offices are lumped
into the data for the industry that the head office manages. So, if this
toy company still manufactured toys in the U.S., the costs of the
head office would be put into the toy manufacturing industry, on the assumption
that whatever the head office does, it contributes its services to the
manufacturing establishments of the company. No imputation for the services
of the head office would have been made.
In a globalized world, and indeed in a world in which the head office
may manage establishments belonging to many different industries, putting
the costs of head offices into manufacturing industries no longer makes
economic sense. In the new North American Industry Classification System
(NAICS), head offices are put in a separate industry and grouped in a sector
with other economic units (like holding companies) that have no natural
output units (Federal Register, 1997). Where those units provide
services to U.S. manufacturing establishments, one could impute the head
office's management services to the costs of the manufacturing units. But
in the global toy manufacturing world, it is not clear that the head office
is providing services to the Asian manufacturing plants, or to anyone else.
17. Parts of this section are adapted from my "comment"
on Oliner and Sichel (1994).
18. David (1990, p. 357), notes the "unprofitability
of replacing still serviceable manufacturing plants embodying production
technologies that used mechanical power derived from water and steam."
He remarks that "applications of electric power awaited the further physical
depreciation of durable factory structures...." That manufacturers waited
for water-powered equipment to wear out before replacing it with electric
is eloquent testimony to the powerful impact of prices and obsolescence
on computer diffusion: The evidence suggests that computers do not deteriorate
appreciably in use (Oliner, 1993), but how many computers from the first
decade or two of the computer age are still in service? Computing power
and electric power have different, not similar, histories.
19. As an illustration, Longley (1967) showed that
matrix inversion algorithms for early computer regression programs were
patterned on short-cut methods used on mechanical
calculators and therefore contained inversion errors that affected regression
coefficients at the first or second significant digits. The designers of
faster and cheaper methods to displace old ones did not initially take
advantage of the computer's speed to improve the accuracy of the calculations;
they just "computerized" exactly what had been done before.
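The flavor of the problem can be reproduced without Longley's data. In the sketch below (data invented, not the Longley dataset), the desk-calculator "shortcut" formula for a regression slope cancels catastrophically when the regressor values are large and nearly equal, while the centered formula that does not mimic the shortcut recovers the answer:

```python
# Invented data illustrating Longley's point. True relation: y = 2x + 3
# exactly, but the regressors are large and nearly equal, so the
# uncentered "shortcut" sums cancel catastrophically in floating point.
x = [100_000_000 + i for i in range(16)]   # large, nearly equal regressors
y = [2.0 * xi + 3.0 for xi in x]
n = len(x)

# Desk-calculator shortcut: slope = (n*Sxy - Sx*Sy) / (n*Sxx - Sx**2)
Sx, Sy = sum(x), sum(y)
Sxy = sum(xi * yi for xi, yi in zip(x, y))
Sxx = sum(xi * xi for xi in x)
shortcut = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)

# Centered computation: subtract the means before accumulating.
xbar, ybar = Sx / n, Sy / n
centered = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))

print(shortcut, centered)   # centered slope is exactly 2.0;
                            # the shortcut slope is visibly wrong
```

The shortcut was cheap on a mechanical calculator (one pass, no means needed); on a computer the centered pass costs essentially nothing, which is exactly the accuracy opportunity the early programs failed to take.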
20. That computers are less than fully utilized
is sometimes cited as an inefficiency that is somehow related to the paradox.
It is not inefficient to let utilization of the lower cost input adjust
to economize on the use of the higher cost one. That modern computers stand
idle much of the time is just another indication that they are cheap--or,
to put it another way, it is another piece of information that confirms
the rapidly falling price index for computers (see section II, above).
21. It is hard to avoid wondering if much of this
cost could not have been avoided, had the retention of old icons and symbols
been made an objective of the upgrade design. The analogy to the typewriter's
QWERTY keyboard (retained on the computer) is apt: Why is there no similar
inertia in changing software commands and icons?
22. Reservations might be expressed about this interpretation
of the number of products in supermarkets.