PURPOSE AND ORIGIN
The Cowles Foundation for Research in Economics at Yale University, established as an
activity of the Department of Economics in 1955, is intended to sponsor and encourage the
development and application of quantitative methods in economics and related social
sciences. The Cowles Foundation continues the work of the Cowles Commission for Research
in Economics, founded in 1932 by Alfred Cowles at Colorado Springs, Colorado. The
Commission moved to Chicago in 1939 and was affiliated with the University of Chicago
until 1955. At that time, the professional research staff of the Commission accepted
appointments at Yale and, along with other members of the Yale Department of Economics,
formed the research staff of the newly established Cowles Foundation. The members of the
professional staff typically have faculty appointments and teaching responsibilities in
the Department of Economics or other departments at Yale University.
RESEARCH ACTIVITIES
INTRODUCTION
The Cowles Commission for Research in Economics was founded approximately forty years
ago by Alfred Cowles, in collaboration with a group of economists and mathematicians
concerned with the application of quantitative techniques to economics and the related
social sciences. This methodological interest was continued with remarkable persistence
during the early phase at Colorado Springs, then at the University of Chicago, and since
1955 at Yale.
One of the major interests at Colorado Springs was in the analysis of economic data by
statistical methods of greater power and refinement than those previously used in
economics. This was motivated largely by a desire to understand the chaotic behavior of
certain aspects of the American economy, the stock market in particular,
during the Depression years. The interest in statistical methodology was continued during
the Chicago period with a growing appreciation of the unique character and difficulties of
statistical problems arising in economics. An important use of this work was made in the
description of the dynamic characteristics of the U.S. economy by a system of
statistically estimated equations.
At the same time, the econometric work at Chicago was accompanied by the development of
a second group of interests, explicitly mathematical but not related to econometric
estimation. The activity analysis formulation of production and its relationship to the
expanding body of techniques in linear programming became a major focus of research. The
Walrasian model of competitive behavior was examined with a new generality and precision,
in the midst of an increased concern with the study of interdependent economic units, and
in the context of a modern reformulation of welfare theory.
The move to Yale in 1955 coincided with a renewed emphasis on empirical applications in
a variety of fields. The description of economic growth, the behavior of financial
intermediaries, and the embedding of monetary theory in a general equilibrium formulation
of asset markets were studied both theoretically and with a concern for the implications
of the theory for economic policy. Earlier work on activity analysis and the general
equilibrium model was extended, as was early work on social choice in non-market contexts
such as voting. Analysis of the optimization of resource allocation was extended to
consider optimization over time. Along with the profession at large, we have engaged in
the development of analytical methods oriented to contemporary social and economic
problems, in particular the specifics of income distribution, the economics of exhaustible
resources, and the dynamics of inflation.
For the purposes of this report it is convenient to categorize the research activities
undertaken at Cowles during the last three years in the following way:
A. Descriptive and Optimal Growth Theory
B. General Equilibrium Analysis and Game Theory
C. Microeconomics of Information
D. Macroeconomics and Monetary Economics: Theory and Policy
E. Econometrics
F. Public Sector
A. Descriptive and Optimal Growth Theory
In a lecture given at Stockholm, Sweden, on December 11, 1975 (CFDP 421), Koopmans has summarized his
research over the last 25 years on the optimal allocation of resources. The lecture,
entitled "Concepts of Optimality and Their Uses," consists of three parts. The
first part deals with the early developments in mathematical programming and activity (or
process) analysis up to about 1960. It reviews the parallel contributions, made in part
independently and in part in interaction, by Dantzig, Kantorovich, Koopmans, and others. It
also links these developments with earlier ideas in economics and in mathematics.
The second part deals with the best allocation of resources over time, and draws its
illustrations mostly from Koopmans' own contributions to that field since about 1955. It
also contains some observations on the choice of an optimality criterion when population
growth is directly or indirectly affected by policy, observations previously made
only in a discussion at the World Congress of the Econometric Society in Toronto, August
20-26, 1975. The third and briefest part indicates how the two strands of thought,
process analysis and optimal growth theory, find joint application in the growing field of
"development programming."
During the period of this report, Koopmans has continued his exploration
of the concept of an invariant capital stock described in the previous report. This is a
capital stock of a size and composition such that, in equilibrium, maximization of
discounted utilities from future consumption requires preserving that stock, as long as the given
technology, resource base and consumers' preferences are expected to remain constant for
the indefinite future. In a paper (CFDP 408),
presented at a Conference of the International Economic Association on "The
Microeconomic Foundation of Macroeconomics," held in s'Agaro, Spain, in April 1975,
he worked out an example involving one capital good, one resource (labor), two consumption
goods, and three production processes. It was shown that an interesting anomaly may arise
if the consumption good that is least capital-intensive in its production is at the same
time an inferior good, that is, one whose consumption decreases as real income increases
while relative prices remain the same. The possible anomaly is depicted in Figure 1. For
intertemporal consumers' preferences represented by an annual discount rate for utility
between 0 and some largest value delta-bar, the capital stock z-bar capable of employing
all labor in producing only the superior good (and reproducing that stock) is an invariant
capital stock. But, for all discount rates above some smallest rate delta-underbar, which is
located somewhere between 0 and delta-bar, the smaller capital stock z-underbar needed to
employ all labor in producing only the inferior good (and in reproducing that stock) is
also an invariant stock. For any delta between delta-underbar and delta-bar, there is a
third invariant capital stock of an intermediate size z(delta) that depends on delta. This
stock is sufficient to reproduce itself and to produce a combination of the two
consumption goods. It was proved by Iwai that the intermediate stock z(delta) is unstable
(for all delta between delta-underbar and delta-bar) in the sense that, starting from a
slightly larger initial stock, optimization over time will require the capital stock to
increase and ultimately to approach z-bar, while a slightly smaller initial stock will
lead to a decrease down to z-underbar. Both z-bar and z-underbar are stable for all such
delta, z-bar for all lower discount rates as well, and z-underbar for all higher ones. This is
illustrated in Figure 1. The example raises the question whether this type of instability
can occur with many more capital goods and consumption goods.
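The instability just described can be mimicked in a minimal numerical sketch (a hypothetical one-dimensional policy map with invented numbers, not Koopmans' or Iwai's model): three stationary stocks, of which the intermediate one repels nearby paths.

    # Toy illustration: a capital-accumulation map with three stationary
    # stocks z_lo < z_mid < z_hi.  The middle one is unstable: paths that
    # start just above it climb to z_hi, paths just below it fall to z_lo.
    def next_stock(z, z_lo=1.0, z_mid=2.0, z_hi=3.0, speed=0.05):
        """Hypothetical policy map with fixed points at z_lo, z_mid, z_hi."""
        return z + speed * (z - z_lo) * (z - z_mid) * (z_hi - z)

    for z0 in (2.01, 1.99):              # start just above / just below z_mid
        z = z0
        for _ in range(2000):
            z = next_stock(z)
        print(f"initial stock {z0} -> long-run stock {z:.3f}")
    # initial stock 2.01 -> 3.000 (z_hi); initial stock 1.99 -> 1.000 (z_lo)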
During the calendar year 1974, Koopmans was on leave from Yale at the recently created
International Institute for Applied Systems Analysis in Laxenburg, near Vienna, Austria.
While there, he gave some lectures on the ideas of optimal growth theory as they bear on
long run problems of energy, of ecology and of water resources. He also served as leader
of the methodology project for part of the period. In the summer of 1975, he returned for
a few weeks as chairman of a two-week workshop on the Analysis and Computation of
Equilibria and Regions of Stability with Applications in Chemistry, Climatology, Ecology
and Economics.
Some of the questions that Koopmans considered at IIASA are related to more empirically
oriented work by Nordhaus. A major effort Nordhaus has been engaged in over the last three
years is modelling energy and natural resource systems. The first work in the area was a
theoretical investigation of problems in resource markets ("Markets and Appropriable
Resources"), published in abbreviated form in Energy: Demand, Conservation and
Institutional Problems (M.S. Macrakis, ed.). This examined the allocation of
appropriable exhaustible resources, and concluded that there could be inefficiencies in
their allocation in the absence of a full set of futures markets. The study suggested that
there are no general results, however, and that a determination of whether the rate of
exhaustion is too high or too low can only come from economic analysis of individual
markets.
Given this conclusion, Nordhaus turned to an examination of the energy market as the
best case study of the problem of exhaustion of natural resources. In CFP 401, he presented the results of a
preliminary empirical study of the efficiency of the allocation of energy resources. The
study is based on a model of energy use which takes account of the costs and availability
of alternative energy sources, both now and in the future, as well as demand functions for
different energy demand categories. Using linear programming, the model derives the
efficient path for allocation of energy resources over the indefinite future; this is the
path that would emerge through time in a free competitive market. The technological
assumptions in the model are based almost completely on econometric and engineering
estimates, and are as realistic as possible.
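A minimal sketch of this kind of linear program, with invented numbers rather than the model's econometric and engineering estimates: demand must be met in each period at minimum discounted cost from a cheap but exhaustible resource and a costly, effectively unlimited backstop.

    # Stylized energy-allocation LP (hypothetical numbers): choose cheap
    # extraction x_t and backstop output y_t to meet demand at minimum
    # discounted cost, subject to the exhaustible-resource constraint.
    import numpy as np
    from scipy.optimize import linprog

    T, demand, stock = 10, 1.0, 4.0          # periods, demand per period, resource stock
    c_cheap, c_backstop, beta = 1.0, 5.0, 0.95

    disc = beta ** np.arange(T)
    c = np.concatenate([c_cheap * disc, c_backstop * disc])   # discounted costs

    A_ub = np.zeros((T + 1, 2 * T))
    for t in range(T):
        A_ub[t, t] = A_ub[t, T + t] = -1.0   # -(x_t + y_t) <= -demand
    A_ub[T, :T] = 1.0                        # total cheap extraction <= stock
    b_ub = np.concatenate([-demand * np.ones(T), [stock]])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    x, y = res.x[:T], res.x[T:]
    print("cheap resource:", np.round(x, 2))  # used first, then exhausted
    print("backstop:      ", np.round(y, 2))  # takes over afterwards

In such a program the shadow price on the stock constraint plays the role of the scarcity royalty whose rising path is described in the results below.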
The results are as follows: Some types of energy are virtually free gifts of nature to
mankind, involving very low labor and capital costs, but these are limited in supply and
not renewable. In an efficient allocation, such low-cost sources of energy are used first;
as they are exhausted, the price of energy rises. In a competitive market, the owner of a
low-cost energy source such as a rich field of oil, balances the decision to sell at
today's price (and invest the proceeds) against the alternative of keeping his product in
the ground and waiting for prices to rise. This assessment determines prices and
quantities at all points in time, and generates a rising trend in prices and royalties to
the owners of energy resources.
As the fuels of today move up in price, alternative sources of energy, tomorrow's
fuels, become profitable alternatives. The world economy gradually makes transitions
from the lowest-cost sources to the next least expensive fuel and ultimately to
technologies that require much capital and labor but are less dependent on scarce,
depletable natural resources. The calculations of the model predict a movement from
today's heavy reliance on petroleum and natural gas to deep-mined coal, gasified and
liquefied coal, shale oil, and nuclear power during the century ahead. In the model it is
assumed that a "backstop technology" exists, or will come into being, that
provides a virtually infinite energy source but at a relatively high price because of high
capital costs. In the limit of the calculations, reached in the twenty-second century,
this "backstop technology" supplies almost all of the world's energy needs.
The basic model assumes a world with free international trade and competition in
energy. For comparison, Nordhaus explored the case in which the United States achieves
complete self-sufficiency in energy sources.
The total cost of meeting energy demand over the next twenty years is about fifty
percent higher in this case than with free trade, an added average annual cost of
energy of $16 billion.
Since the initial model was published, several extensions and applications have been
made. First, the model was used in a more general examination of the role of resources as
a retarding factor in economic growth (CFP
406, and "Energy and Economic Growth," forthcoming). These further studies
speculate on the role of resources in general and energy in particular in the process of
economic growth over the next several decades.
More technical applications were pursued by Nordhaus while he also spent a year
(1974-75) at IIASA. These were a more detailed examination of the nuclear fuel cycle
("Notes on Inclusion of Nuclear Fuel Cycles," unpublished) and examination of
the link between energy systems and climate ("Can We Control Carbon Dioxide?",
IIASA working paper, 1975). The former study was undertaken in collaboration with Prof. A.
Suzuki of Tokyo University while both authors were at IIASA. This study integrates a more
realistic fuel cycle into the original model. The Suzuki-Nordhaus model includes five
of the major alternative fuel cycles in the available technology. One of the important
results of this analysis will be estimation of a price path for energy resources produced
by man (plutonium and U-233). The second application of the model was to consider the
interaction of the energy system with climate through the role of atmospheric carbon
dioxide.
One final spinoff of the original work was a project investigating energy demand
functions. Because Nordhaus felt the original assumptions were somewhat crude, he
initiated, at IIASA, a project of estimating energy demand functions for a number of
countries. The project consisted of gathering data for different fuels in three sectors
and ten large OECD countries. Preliminary results were presented in CFDP 405.
Beyond the formal publications over the last three years, Koopmans and Nordhaus have
participated in a number of workshops and conferences, and (together with Heal) twice
given a Workshop on Economic Models of Resources at Yale University. This workshop has
been primarily oriented toward research, and a number of excellent student papers have
emerged.
Further work on resources was done by Heal during his visit to Cowles in the Fall of
1975. This followed up some of his earlier theoretical work on the economics of
exhaustible resources (some of which was done during an earlier visit) with an empirical
study of the determinants of resource prices. In recent years, there have been many
analyses of the rate of resource depletion, some concerned with analyzing the optimal
depletion rate and others focussed on analyzing the market determined rate. In CFDP 407, Heal applied these theories to
an empirical study of three crude commodity markets: copper, zinc, and lead. After
extensive examination of different specifications, he concluded that these three resource
prices are systematically related to interest rates, output, and their own past behavior.
A puzzling finding is that resource demand seems to depend not only on the rate of change
of the resource price, but also on its level. This is inconsistent with earlier work on
resource pricing.
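The flavor of such a specification can be conveyed by a short sketch with synthetic data (invented coefficients and series, not Heal's data or results): the resource price regressed on the interest rate, output, and its own lagged value.

    # Synthetic illustration of a resource-price equation of the kind
    # described: p_t on the interest rate, output, and the lagged price.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    r = 0.05 + 0.01 * rng.standard_normal(T)      # interest rate
    y = 1.0 + 0.1 * rng.standard_normal(T)        # output index
    p = np.empty(T)
    p[0] = 1.0
    for t in range(1, T):                         # invented data-generating process
        p[t] = 0.2 + 2.0 * r[t] + 0.3 * y[t] + 0.5 * p[t - 1] \
               + 0.02 * rng.standard_normal()

    X = np.column_stack([np.ones(T - 1), r[1:], y[1:], p[:-1]])
    coef, *_ = np.linalg.lstsq(X, p[1:], rcond=None)
    print(dict(zip(["const", "interest", "output", "lagged price"], coef.round(2))))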
B. General Equilibrium Analysis and Game Theory
A number of members of the Cowles staff are actively engaged in the study of general
equilibrium models. Some of this work is concerned with analysis of the Walrasian model
and its generalizations, including research on efficient means of computing competitive
equilibria and analysis of purely competitive economies where the number of consumers is
large. Other work is concerned with game theoretic formulations of general equilibria.
Extensions of these lines of research are currently underway dealing with such difficult
problems as indivisibilities, increasing returns to scale, aggregative properties of
competitive equilibria and explicit incorporation of price adjustment mechanisms.
Continuing beyond the development of computational methods described in the Cowles
Foundation monograph The Computation of Economic Equilibria by Scarf with
Hansen, Scarf and others have further investigated computational algorithms. A major
mathematical paper on computation by Scarf and Eaves (of Stanford University, who visited
Cowles in 1974-75) demonstrates the use of piecewise linear techniques and shows that
virtually all of the fixed point computational methods which have been developed over the
past decade can be placed in this framework. It is expected that these techniques will
prove to be of considerable importance in the future for the numerical solution of large
economic models.
In this paper, "The Solution of Systems of Piecewise Linear Equations" (CFDP 390), Eaves and Scarf studied the
solution of systems of piecewise linear equations involving one more variable than the
number of equations. It considers a set P in (n+1) dimensional space which is the union of
a finite number of convex polyhedra, each of which has a non-empty interior, and no two of
which have interior points in common. A mapping F is given, which takes the point of P
into a Euclidean space of one lower dimension. The mapping is completely general aside
from the conditions that it be continuous in P and linear in each piece of linearity. The
system of piecewise linear equations referred to in the title of the paper arises from the
study of those points in P which map into a preassigned point c in n-dimensional
space.
It is demonstrated that if c is a "regular" value of the
mapping, then its inverse image is a finite union of paths and loops. (A path is a curve,
linear in each piece of linearity of P, which touches the boundary of P in precisely two
points; a loop is a piecewise linear curve with no boundary.) Figure 2 illustrates a
solution set which is composed of a single loop and two paths.
As an illustration of these elementary geometrical ideas, consider a
mapping of the unit simplex [0,1] into itself. Eaves and Scarf introduce the product of
this simplex with another unit interval (in this case forming a square) which is then
decomposed into a number of small triangles. The method proceeds by constructing a simple
mapping of the top of the square into itself, with a unique fixed point, combining this
mapping linearly with the given mapping on the bottom of the square and tracing out the
set of fixed points as one moves from the top to the bottom. See Figure 3. In addition to
providing a computational method, these arguments can be used to give a proof of Brouwer's
theorem, for an arbitrary mapping of an n-dimensional simplex into itself, similar to the
proof given by Hirsch in 1963.
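A one-dimensional numerical rendering of this deformation idea (an illustrative sketch, not the Eaves-Scarf piecewise-linear algorithm itself) can make the construction concrete: start from a trivial map with a known fixed point and follow the fixed-point set as the map is deformed into the one of interest.

    # Homotopy sketch: trace fixed points of H(x,t) = (1-t)f(x) + t*c
    # as t runs from 1 (trivial map, fixed point c) down to 0 (f itself).
    import numpy as np

    def f(x):                         # a continuous map of [0,1] into itself
        return 0.5 + 0.3 * np.cos(np.pi * x)

    c = 0.1                           # the trivial map g(x) = c
    x = c
    for t in np.linspace(1.0, 0.0, 2001):
        # fixed points of H(.,t) solve (1-t)*f(x) + t*c - x = 0;
        # re-solve on a fine grid near the previously traced point
        grid = np.clip(x + np.linspace(-0.01, 0.01, 401), 0.0, 1.0)
        h = (1 - t) * f(grid) + t * c - grid
        x = grid[np.argmin(np.abs(h))]

    print("fixed point of f:", round(x, 4), " f(x) =", round(f(x), 4))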
The geometrical setting of the paper leads naturally to an index theory,
analogous to that arising in the study of differentiable manifolds, which is an
important tool in the analysis of the monotonicity of computational procedures, and the
uniqueness of solutions to systems of equations.
In CFDP 389, Eaves proposed an
algorithm for solution of the classical model of exchange in which each consumer has a
linear utility function. Although the solution to this problem can be obtained by use of
the fixed point methods, Eaves' algorithm takes advantage of the special structure of the
problem and is considerably more efficient.
During the summer of 1973 Mycielski of the Institute of Theoretical Physics, University
of Warsaw, visited the Cowles Foundation. He collaborated with Scarf on the development of
computational techniques for the determination of equilibrium exchange rates in a general
model of international trade.
During the last year or so, Scarf has also been concerned with the
application of fixed point methods to the solution of economic problems involving
indivisibilities in production. The basic idea is to associate a piecewise linear manifold
with the discrete set of production plans arising from an activity analysis model with
integral activity levels. See Figure 4. The problem of maximizing output, subject to
constraints on the availability of factors, can then be solved by associating with each
vertex of this manifold an integer label depending on the specific constraints violated at
this vertex and then searching for a simplex in the manifold, all of whose labels are
distinct.
This approach is quite general in the sense that an arbitrary discrete programming
problem can be cast in this form. There are, however, considerable difficulties in
practice, since the sequence of small steps required to implement the algorithm may be of
substantial complexity in themselves. Scarf's recent research has been devoted to an
examination of those discrete activity analysis models for which these small movements can
be carried out easily.
An alternative to Scarf's method for computing fixed points is the Global Newton method
developed by Smale during his visit to the Cowles Foundation and by Kellogg, Li and Yorke
of the University of Maryland. These methods construct a differentiable curve starting at
the boundary of the unit simplex and terminating at a fixed point. The process of
following the curve may be cast into the form of a set of differential equations which are
immediately seen to be equivalent to Newton's method in the vicinity of the fixed point.
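In one dimension the connection to Newton's method is easy to exhibit (a toy sketch of the Newton-flow idea, not Smale's construction itself): to find a fixed point of g, set f(x) = g(x) - x and follow the differential equation f'(x) dx/dt = -f(x), whose Euler discretization is a damped Newton step.

    # Newton-flow sketch: damped Newton steps from the boundary of [0,1].
    import numpy as np

    def g(x):                          # a continuous map of [0,1] into itself
        return 0.5 + 0.3 * np.cos(np.pi * x)

    def f(x):
        return g(x) - x

    def fprime(x, h=1e-6):             # numerical derivative of f
        return (f(x + h) - f(x - h)) / (2 * h)

    x, dt = 0.0, 0.1                   # start at the boundary of the simplex
    for _ in range(200):               # Euler steps along f'(x) dx/dt = -f(x)
        x -= dt * f(x) / fprime(x)

    print("fixed point:", round(x, 6), " g(x) =", round(g(x), 6))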
Yet another alternative to fixed point methods for the computation of equilibria is
being explored by Mantel. The investigation centers on the search for social welfare
functions which can be used in order to obtain a competitive allocation as a solution to a
maximization problem. Special cases are known where this is possible and where the
equilibrium prices emerge as the Lagrangian multipliers associated with the resource
constraints. One such special case is that of homothetic preferences, either identical
across consumers or different but with a constant relative income distribution. Mantel shows that the
linear utility case analyzed by Eaves implies that the welfare function is concave so that
the computation of the equilibrium reduces to a concave programming problem. He has also
found a social welfare function for the general pure trade model which satisfies the
condition that it be monotone in the individual utilities so that it can be defined
without previously solving the equilibrium equations for the economy. In contrast to the
special case of homotheticity of preferences, however, this welfare function depends on
information about all the tastes as well as endowments in the economy. This function will
be non-concave unless the competitive allocation for the economy is unique.
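For the homothetic special case mentioned above, the computation can be sketched concretely with two Cobb-Douglas consumers (a standard Negishi-style iteration in the spirit of this approach, with invented endowments and tastes, not Mantel's construction): maximize a weighted sum of utilities, read prices off the Lagrange multipliers of the resource constraints, and adjust the welfare weight until each consumer's budget balances.

    # Negishi-style sketch for two Cobb-Douglas consumers (hypothetical).
    import numpy as np

    a = np.array([0.7, 0.4])           # share of good 1 in each consumer's utility
    w = np.array([[1.0, 0.0],          # endowments: consumer 1 owns good 1,
                  [0.0, 1.0]])         #             consumer 2 owns good 2
    W = w.sum(axis=0)                  # aggregate endowment

    alpha = 0.5                        # welfare weight on consumer 1
    for _ in range(200):
        weights = np.array([alpha, 1 - alpha])
        # Lagrange multipliers of the two resource constraints = prices
        p = np.array([(weights * a).sum() / W[0],
                      (weights * (1 - a)).sum() / W[1]])
        shares = np.column_stack([a, 1 - a])
        x = weights[:, None] * shares / p          # welfare-maximizing allocation
        deficit = x[0] @ p - w[0] @ p              # consumer 1's budget gap
        alpha -= 0.1 * deficit                     # lower the weight if over-spending

    print("prices:", p.round(3))
    print("allocation:", x.round(3))               # rows: consumers; cols: goods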
This computational approach of Mantel's obviously involves analysis of aggregation
problems. In closely related work, he has been exploring the decomposition properties of
aggregate excess demand functions and of market demand functions. The usual assumptions on
the preference maximizing behavior of consumers subject to a budget constraint imply that
individual excess demand functions are essentially characterized by Walras' law,
homogeneity of degree zero in prices and lower boundedness. It has been conjectured that
these properties are inherited by the aggregate excess demand function. A sequence of
papers by various authors permits one to assert that this conjecture has been proven. This
work is summarized in Mantel's CFDP 409.
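For reference, the three properties in question can be written compactly (a standard rendering, with p the price vector and z the excess demand function):

    \[
      p \cdot z(p) = 0 \quad \text{(Walras' law)}, \qquad
      z(\lambda p) = z(p) \ \text{for all } \lambda > 0 \quad \text{(homogeneity of degree zero)},
    \]

with, in addition, each component of z bounded from below.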
Perhaps a more important part of the investigation, however, refers to market demand
functions. Mantel shows by means of examples that some restrictions can be inferred from
microeconomic theory in addition to the characterizations mentioned above. He also shows
that some of the results that apply to excess demand functions do not carry over to market
demand functions. A surprising result is that there does exist a characterization of the
Jacobian of a differentiable demand function. This theorem, a converse of a theorem by
Diewert, will appear in Chapter 6 of Frontiers in Quantitative Economics III (M.
Intriligator, ed.).
In work on Walrasian general equilibrium, mathematical economists have given several
formulations to the naive notion of a competitive economy as one in which individual
economic agents have a negligible effect on the outcome of the economic process. These
formulations are referred to in the preceding Report of Research. All of them involve the
concept of "largeness" in some sense. Earlier work has shown that if the concept
of the number of traders in an economy being "large" is formalized through (a)
the application of nonstandard analysis, (b) use of the notion of a continuum of traders,
or (c) analysis of a sequence of replications of a finite economy, then it is true that
the equilibrium concept of the "core" is equivalent to the concept of
competitive equilibrium in the sense that competitive allocations of goods to traders are
identical to points in the cores for market games. Brown considered the relationship
between nonstandard economies and economies with a continuum of agents in a paper
"The Core of a Purely Competitive Economy" which was presented in 1974 at a
Symposium on Computation of Equilibria organized by the Computation Centre of the Polish
Academy of Sciences. In that paper, he constructed a nonstandard economy from a continuum
economy such that (1) an allocation is in the core of the continuum economy if and only if
the nonstandard representation of the allocation is in the core of the nonstandard economy
and (2) the core of the continuum economy is non-empty if and only if the core of the
nonstandard economy is non-empty.
Another important equilibrium concept is the Shapley value. Brown, in a paper with
Peter Loeb (of the University of Illinois) shows that it is also equivalent to the notions
of the core and competitive equilibria (CFDP
406). Brown and Loeb apply the technique of nonstandard analysis to an exchange
economy. In independent work, Dubey (in his doctoral dissertation at Cornell University)
also showed the equivalence of the Shapley value and competitive equilibria. Dubey used
the continuum model of Aumann and Shapley and was able to introduce production.
Other aspects of mathematical analysis of Walrasian general equilibrium rely heavily on
assumptions of smoothness or differentiability. Recently, Debreu has shown that the rate
at which the core converges to the set of competitive allocations when a standard economy
is replicated is, under a particular assumption, inversely proportional to the number of
replications. Grodal has shown that this result can be derived from the properties of a
continuum economy viewed as a limit of the replicated economies. Brown is using the
assumption that traders have utility functions that are differentiably convex in an
attempt to extend the DebreuGrodal result to nonstandard economies.
Use of differentiability conditions to consider price adjustment
processes in exchange models and models with production was an important part of Smale's
work at Cowles. In one part of CFDP 378,
Smale analyzes the conditions for a Walrasian price equilibrium, in a pure exchange model,
to be "catastrophic" in the sense that it is discontinuous in the endowment
allocations. He considers the question: for what combinations of initial endowments, final
allocations and associated equilibrium prices will it be the case that a small change in
the initial endowments could produce a large jump in prices. In Figure 5, the dot is an
example of a catastrophic point. Smale suggests some perspective on circumstances
associated with such points. One circumstance is when the difference between the initial
endowments and the final allocation becomes large. Another is when the curvature of the
indifference surfaces becomes large.
An essential feature of the Walrasian models and their extensions discussed thus far
(with the exception of the work of Scarf on indivisibilities) is the assumption of linear
or convex production technologies. The efficient allocation of resources in the presence
of increasing returns to scale in production is a problem which Brown and Heal are
currently exploring. The basic idea which they are extending can be illustrated in the
following way: If a firm has a nonconvex production possibility set, then it will have
efficient production programs which are not supported by any linear price system. If the
firm is a price-taker, but faces nonlinear prices, then efficient points which lie in a
region of non-convexity can be supported. In the figure below, which shows a two-commodity
nonconvex production possibility set, y* is one such point. It cannot be supported
by any linear price system but can be supported by a nonlinear price schedule in which the
relative prices at y* are equal to the marginal rate of transformation at y*,
i.e. the broken line is an iso-revenue curve of one such price schedule.
Another type of nonconvexity is considered by Starr in a paper with Heller (of the
University of Pennsylvania), "Equilibrium with Non-convex Transactions Costs:
Monetary and Nonmonetary Economies." In this model, the transactions demand for money
is associated with a set-up cost on transactions between money and other assets. This
contradicts the usual convexity requirements of general equilibrium theory. The resolution
is by the technique of using large numbers to smooth out in the aggregate the
discontinuities in individual behavior. The existence of approximate equilibria in a
monetary economy is demonstrated.
Shubik's central concern has been the application of a number of
different game-theoretic solution concepts to problems in general equilibrium theory.
There are many results in game theory which are clear and well defined for side-payment
games but are not so easily dealt with when no-side-payment games are considered. As the
economy as a whole is best modelled as a no-side-payment game, it is natural to ask if the
results establishing the relationships between the set of competitive equilibria and the
core for side-payment games also hold for no-side-payment games. In an as yet unpublished
paper, Shapley and Shubik have defined the inner core of a no-side-payment game.
This is the set of imputations within the core defined as follows: with any point p in the
core of the no-side-payment game, associate a side-payment game constructed by passing a
tangent hyperplane through that point and assuming that side-payments can be made among
members of coalitions at the rates given by the direction cosines of the tangent plane;
p belongs to the inner core if it is also in the core of that side-payment game. Not all
points in the no-side-payment core have this
property. It has been shown that the competitive equilibria of any exchange economy
associated with the same no-side-payment market game are in its inner core.
A paradox appears to be present with this result inasmuch as the competitive equilibria
and the core are ordinally defined solutions whereas the inner core definition makes use
of a cardinal utility. This paradox is resolved by showing that there exists a
transformation of the utility functions which will bring any point in the core into the
inner core. This result is related to the recent work of Debreu in establishing the
conditions under which preferences can be represented by a concave utility function.
Gordon Bradley (formerly of the Department of Administrative Science at Yale) and
Shubik have provided an (unfortunately) negative answer to the hope that there might be a
way to construct an intrinsically transferable utility measure by finding transformations
to flatten the Pareto optimal surface. In CFP
417, they ask the following simple question: Given n individuals and m prospects over
which each individual has a strong preference ordering, how many prospects are required
before it is not possible to find a set of order-preserving transformations which place
the set of Pareto optimal prospects on a hyperplane? The answer is 6 for n = 3 and 4 for
n ≥ 4.
Noncooperative equilibrium notions, formulated in economics by Cournot and generalized
by Nash, are also a major interest of Shubik. These solution concepts seem to be the
natural ones to use in the analysis of problems in oligopoly theory. They have also been
applied by Dubey and Shubik to the analysis of markets where firms can choose to enter,
i.e. to become active, or to exit. Finally, the notion of noncooperative equilibria seems
to have a natural application to games where agents bid for goods with money and where
information and trust are less than complete. In a game where agents bid for goods with
commodity money, Shubik and Shapley were able to establish the existence of a
noncooperative equilibrium. Shubik's extensions of this model to incorporate fiat money
and financial institutions are discussed in the following section.
C. Microeconomics of Information
Although the market mechanism is informationally efficient by comparison to a process
of complete central planning, the conventional analysis of competitive equilibria
nonetheless assumes that agents have a great deal of information about the alternatives
available to them. In a world of heterogeneous products and services and where agents
differ in many characteristics, it seems desirable to relax these assumptions. Early work
in this area was done by Akerlof, Arrow and Spence. In closely related work, Stiglitz (CFDP 354, published in the American
Economic Review, June, 1975) examined a model in which workers differed in
productivity but employers could not readily identify these differences, either before or
after hiring, in the absence of screening. Stiglitz noted that given certain screening
costs, there could exist multiple equilibria in one of which there might be no screening
and in another of which there might be full screening. It is quite possible that the
equilibria with screening will not be Pareto optimal.
In CFDP 375, Stiglitz examines a
reverse situation, one in which workers do not differ in productivity but in
which a distribution of unequal wages may exist. The type of model he considers can be
seen in this simple example: Assume that the quit rate of individuals is affected by the
wage distribution. Note also that the wage paid by a firm with given training costs will
be determined by the quit rate function. Then wage rates and quit rates both depend on the
distribution of training costs and must be such, in equilibrium, that firms just break
even.
Wilson has been concerned with an analysis of self-selection models. The fundamental
problem is perhaps best illustrated in the context of an insurance market (CFDP 432). Firms are assumed to be unable
to differentiate among consumers on the basis of their probability of having an accident.
It can be shown, however, that consumers of different risk classes will tend to have
different preference orderings over the set of insurance policies. Therefore, firms may
have an incentive to structure the menu of insurance policies so that different risk
classes purchase different policies. The most striking result of this analysis is that
equilibria will not necessarily exist under the assumption that firms behave as if they
expect no response on the part of other firms.
This result led Wilson to search for simple expectation rules for which there is a
stationary equilibrium. Under very strict assumptions relating the preference ordering of
consumers to their risk class (or profitability to firms) such a rule has been found.
Under those assumptions, if firms expect other firms to withdraw unprofitable policies and
copy profitable policies, an equilibrium will exist. This result has been demonstrated in
an abstract model which captures many of the essential features of other
"self-selection" models such as Spence's signalling models, Akerlof's assembly
line problem, and several other models of labor and loan markets. Wilson has also
demonstrated that these equilibria need not be Pareto optimal even with the information
constraints taken into account.
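The self-selection structure can be sketched numerically for the two-type insurance case (hypothetical parameters; a Rothschild-Stiglitz-style separating menu, offered as an illustration of the structure analyzed, not Wilson's model): the high-risk type receives full insurance at its own fair odds, and the low-risk contract is rationed just enough that high risks do not prefer it.

    # Separating menu sketch for two unobservable risk types.
    import numpy as np
    from scipy.optimize import brentq

    w, loss = 10.0, 6.0                # initial wealth and size of the loss
    pH, pL = 0.5, 0.2                  # accident probabilities of the two types

    def u(c):
        return np.log(c)

    def EU(p, payout, premium):        # expected utility under a contract
        return p * u(w - loss + payout - premium) + (1 - p) * u(w - premium)

    EU_H_full = u(w - pH * loss)       # full insurance at H's fair premium

    # low-risk payout D at L's fair premium pL*D, chosen so the high-risk
    # type is exactly indifferent between the two contracts
    D = brentq(lambda D: EU(pH, D, pL * D) - EU_H_full, 1e-9, loss)
    print(f"low-risk coverage: {D:.2f} of a loss of {loss:.2f} (partial only)")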
Starr, during the period of this report, analyzed the role of money as a medium of
exchange in reducing the level of information needed for trade to take place. In two
papers on this topic, Ostroy-Starr, "Money and the Decentralization of
Exchange," and Starr, "Decentralized Non-Monetary Trade," a model is set
forth which emphasizes the bilateral nature of barter and the requirements for information
and organization to achieve an equilibrium allocation even when it is assumed that
equilibrium prices are already established. It is then shown that the use of monetary
trade allows a significant reduction in the needed trading time or required level of
organization and information.
In the past three years, Shubik's work has concentrated heavily on micro-economic
aspects of money and financial institutions studied by means of models of a closed economy
solved as a noncooperative game. The first satisfactory model was obtained in connection
with an oligopoly problem, as was noted in the previous section. The thrust of the work
since then has been primarily on the monetary and financial aspects of the models. This
work was divided into several parts. They can best be described as: (1) A critique of
general equilibrium theory; (2) An outline of the methods to be employed in modelling
financial and information conditions; (3) Models of market structure describing price
formation and bidding or trade conditions; (4) Models of markets with exogenous
uncertainty; (5) Markets with production as well as trade; (6) Dynamics; and (7) The role
of the "float" and bankruptcy conditions.
It is Shubik's belief that the process models and the noncooperative game analysis
applied to them provide a more promising approach to the development of microeconomic
monetary theory than do direct modifications of general equilibrium theory. The reasons
for this belief and the critique of general equilibrium theory appear in two papers, CFP 432 and CFDP 417, the latter of which will be published in Économie Appliquée.
The approach suggested as an alternative to the general equilibrium models is to
construct explicit market mechanisms specifying the process of trade in detail, using the
extensive form representation of a game followed by its strategic form representation.
Details such as the sequencing of financial and marketing moves are brought into focus
using these methods. These are described (as previously noted) in CFDP 330 which appears in the
International Journal of Game Theory and in CFDP 377, "Mathematical Models for a Theory of Money and
Financial Institutions," which has appeared as a chapter in a book edited by Day and
Groves entitled Adaptive Economic Models.
In his original paper (CFP 391)
Shubik formulated a market in which traders were required to offer all of their
nonmonetary possessions for sale. This is somewhat restrictive and unrealistic. In an as
yet unpublished work, Shapley and Shubik considered several variants of this model. Shubik
formulated a double auction market (CFDP
368) as an alternative market clearing mechanism. Dubey and Shubik have completed the
analysis of a model in which individuals may both bid and decide what to offer to the
market. This model was originally formulated jointly by Shapley and Shubik. There is an
indeterminacy in this model which results in there being a large class of noncooperative
equilibria. This indeterminacy disappears if an extra condition is introduced. The
appropriate condition appears to be to minimize cash flow (CFDP 414). A further paper by Shubik considers in general the number
of types of markets there may be when individuals bid simultaneously and when the
mechanism generates a single price for each commodity in a "reasonable" way (CFDP 416).
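The price-formation rule in these bid-offer markets is simple enough to state in a few lines (a minimal sketch with invented numbers): each trader sends a money bid and a quantity offer to the trading post for a commodity, and the single price is total bids divided by total offers.

    # Trading-post sketch: one commodity, three traders (hypothetical data).
    import numpy as np

    bids   = np.array([3.0, 1.0, 2.0])   # money bid by each trader
    offers = np.array([0.0, 4.0, 2.0])   # quantity offered by each trader

    price = bids.sum() / offers.sum()    # one price per commodity
    goods_received = bids / price        # bidders' shares of the offered goods
    money_received = price * offers      # sellers' revenues

    print("price:", price)
    print("goods received:", goods_received)   # sums to total offers
    print("money received:", money_received)   # sums to total bids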
Dubey and Shubik have considered economic models of trade with exogenous uncertainty
and nonsymmetric information about the outcomes of the random variables. This work is
related to the treatment of uncertainty by Arrow and Debreu and the treatment of
nonsymmetric information conditions by Radner. Dubey and Shubik (CFDP 410R) obtain a price system which
reflects the lack of symmetry in information and suggest a general way in which to define
Pareto optimality under nonsymmetric information conditions.
In attempting to construct a process model of the economy to be studied as a
noncooperative game where production is present it is necessary to consider a multistage
process. In particular firms must obtain raw materials or other inputs before they can
produce. Dubey and Shubik (CFDP 429)
have been able to construct a multistage model with traders and managers of the firms, and
with trade in shares of the firms as well as in raw materials and final goods. They have
been able to establish the link between the noncooperative equilibrium points in this
model and the competitive equilibria in the Walrasian model.
Shubik has collaborated with Whitt (of Yale's School of Organization and Management)
and with Evers (during his visit at Cowles) in considering infinite horizon models of
exchange with money. Shubik and Whitt (CFP
389) analyzed trade with fiat money and a single commodity. Evers and Shubik (CFDP 431) considered a competitive
infinite horizon economy with trade in money.
Two key items in understanding the functioning of a monetary economy are the float,
i.e. the amount of money in transit which "greases the system," and the
bankruptcy conditions which indicate the penalties to be assessed if individuals fail to
meet their obligations. Shubik has considered these phenomena in several papers (CFDP 394, CFDP 395 and CFDP 417).
It is suggested that the optimal bankruptcy rule needed in order to design a
noncooperative game using fiat money, which will give noncooperative equilibria
appropriately related to the competitive equilibria, must be related to the Lagrangian
multipliers obtained from solving the Walrasian system for its competitive equilibria. It
is further suggested that bank and fiat money play different roles, the first in financing
intertemporal trade and the second in covering the float.
D. Macroeconomics and Monetary Economics: Theory and Policy
Research in macroeconomics, including its monetary aspects, has been pursued along both
theoretical and empirical paths. On the theoretical side, work has been guided by two
major objectives. One is to develop models that are consistent in linking short- and
long-run phenomena and in accounting for changes in stocks of real and financial wealth as
well as all flows of income and spending. A second objective is to define more
persuasively and precisely the short-run responses of microeconomic units to imperfectly
foreseen contingencies, or shocks to the system, in order to understand better such
aggregate phenomena as persistent unemployment or inflationary bias in the economy.
Empirical investigation has been guided by these same objectives. Investigations have
included specification and estimation of models, of the entire economy and of particular
sectors, which satisfy the consistency requirements mentioned above, as well as
investigation of the behavior of specific variables such as wholesale prices and the cost
of capital.
One of the issues in current macroeconomic theory is whether fiscal stimulus without
printing money is effective in increasing aggregate demand, or, to put the question another
way, in increasing the velocity of money. The possibility that whatever short run impact
pure fiscal stimulus may have is reversed by the longer-run monetary effects of
accumulated public debt was addressed in CFDP
384 by Tobin and Buiter (formerly a graduate student at Yale and subsequently at
Princeton University). Their answer is negative: if fiscal policy is expansionary or
inflationary in the short run, it has qualitatively the same effects in the long run. The
analysis involves a dynamic extension of traditional "IS-LM" macroeconomic
models to account for the shifts in these loci as stocks of assets change.
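The flavor of the exercise can be given by a toy linear version (an invented model and coefficients, not the Tobin-Buiter system): aggregate demand responds to government spending and to the stock of bonds in private wealth, bond-financed deficits make that stock evolve, and the question is whether the long-run effect of higher spending keeps the sign of the short-run effect.

    # Toy bond-financed fiscal dynamics (hypothetical linear model).
    import numpy as np

    def simulate(g, T=400):
        A, alpha, beta = 2.0, 1.5, 0.3    # demand: intercept, fiscal, wealth effect
        i, tau = 0.05, 0.25               # interest on bonds, tax rate
        B, path = 0.0, []
        for _ in range(T):
            y = A + alpha * g + beta * B  # reduced-form IS-LM with wealth effect
            B = B + g + i * B - tau * y   # bond-financed deficit accumulates
            path.append(y)
        return np.array(path)

    lo, hi = simulate(0.5), simulate(0.6)
    print("short-run output gain:", round(hi[0] - lo[0], 3))
    print("long-run output gain: ", round(hi[-1] - lo[-1], 3))  # same sign here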
In closely related work, Buiter and Smith are developing a compact "IS-LM"
model, in the spirit of Keynes-Hicks-Patinkin, that is contemporaneously and
sequentially consistent in accounting for all flows of funds. This is intended to remedy
shortcomings of textbook models regarding the effects of monetary and fiscal policies,
in particular to remedy the inadequate analysis of the question of short- and
long-run "crowding out" to which the Tobin-Buiter paper was also addressed.
A related question is whether aggregate demand, as determined by some version of the
IS-LM model, will necessarily be brought into equality with aggregate supply, as
determined by labor market behavior and production relations, through the adjustment of
prices and money wages. One line of inquiry on this topic is reported by Tobin in CFP 428. He notes that Keynes aspired to
explain persistent involuntary unemployment as an equilibrium phenomenon but that it is
difficult, from a theoretical point of view, to swallow the Keynesian notion that
persistent excess supply of labor will fail to induce wage adjustments leading to
increased employment. Tobin therefore begins with the assumption that wages are flexible
and that only full employment equilibrium exists. He then proceeds to show that the
dynamics of wage and price adjustment and of the generation of expectations during these
adjustments may well make the equilibrium unstable, globally if not locally. The result is
that unemployment may arise which is not eradicable except by policy measures, even though
the unemployment is not a feature of equilibrium.
Many macroeconomic phenomena, both static and dynamic, are best understood as the
aggregative outcomes of continuous readjustments of individual households, firms, and
markets to stochastic disturbances. Tobin, Brainard and Iwai have been seeking to model
precisely the responses of economic units to imperfectly foreseen shocks, and the
system-wide consequences of such shocks.
The concept of stochastic macro-equilibrium, in which shifting micro-economic
disequilibria exist within fairly stable aggregates, was described in Tobin's presidential
address (CFP 361) to the American
Economic Association. In such an equilibrium, both job vacancies and unemployment persist;
likewise, overall balance of supply and demand for goods and services is consistent with
excess demand in some markets and excess supply in others. In Tobin's address, he argued
that inflationary bias results from stochastic shifts in demand, even in the absence of
aggregate excess demand, because wages and prices in individual sectors are more
responsive to excess demand than to excess supply. Consequently it takes excess supply in
aggregate to maintain zero inflation, or any steady rate. Tobin further argues that the
corresponding unemployment rate has no normative or "natural" significance.
Tobin has constructed a computer simulation model for illustrating these points. The
model focuses on disequilibrium in labor markets and provides a framework for
investigating structural changes affecting the speed of adjustment of wages and of
movement between labor markets. Lepper has used a variant of this model to examine the
effects of long-term labor contracts, and of cost-of-living clauses in such contracts, on
the inflation bias of the simulated economy and on the allocational loss of simultaneous
vacancies and unemployment.
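The core mechanism lends itself to a compact simulation sketch (an illustrative toy with invented parameters, not the Tobin or Lepper model): demand is shuffled across many labor markets with zero aggregate excess demand, yet because wages respond more strongly to excess demand than to excess supply, the average wage drifts upward.

    # Stochastic macro-equilibrium sketch: asymmetric wage response.
    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 100, 200                      # sector labor markets, periods
    up, down = 0.8, 0.3                  # wage response to excess demand / supply

    log_wage = np.zeros(N)
    for _ in range(T):
        excess = rng.standard_normal(N)
        excess -= excess.mean()          # zero excess demand in the aggregate
        log_wage += np.where(excess > 0, up, down) * excess

    print("average wage inflation per period:",
          round(log_wage.mean() / T, 4))  # positive: the inflationary bias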
Does rationality of expectations and behavior imply a unique natural rate of
unemployment, so that there is no durable tradeoff between output and inflation? This is
the conclusion of simple aggregative models, and of some disaggregated models, notably
those of Lucas. Brainard (together with F.T. Dolbear of Brandeis) has constructed a
disaggregated model, similar in spirit to Tobin's, which can be used to investigate this
issue. In disaggregated models the "rationality" of individual agents by itself
is not sufficient to establish the existence of a natural rate. Brainard and Dolbear find,
for example, that if price adjustments are more sluggish downward than upward, a variety
of levels of utilization of the economy may be consistent with non-acceleration of
inflation, even if all agents accurately anticipate the rate of inflation. Whether or not
there is a "natural rate" depends on subtle features of the way the price
adjustments to real disequilibria are affected by inflationary expectations.
The subjects so far discussed are standard issues of macroeconomic theory which
disaggregated models can illuminate. In addition there are other important questions, some
suggested by recent world events, which cannot be analyzed at all without explicit
disaggregation.
One set of questions relates to the effects of large disturbances to the demand or
supply of particular commodities, for example oil, which require changes in relative
prices for restoration of equilibrium. Tracing the inflationary and allocational
consequences of such shocks requires the use of a model which articulates the market
mechanisms by which prices, consumption and production of other commodities are affected.
The Tobin and Brainard-Dolbear models can be used for this purpose.
In an economy with downward price rigidities, the relative price adjustments required
to restore equilibrium when the economy is subjected to such shocks are difficult to
achieve without inflation. Hence the tradeoff between lost output and inflation is quite
different from the one associated with an economy-wide inflationary or deflationary gap.
Brainard and Dolbear have used their model to illustrate how this tradeoff differs, and to
investigate the sensitivity of the difference to variations in behavioral parameters,
e.g., the costs, and consequent speeds, of price adjustments in particular markets.
Iwai's exploration of disequilibrium dynamics is similar in spirit to Tobin's and
Brainard's work but places greater emphasis on mathematical modelling of the behavior of
individual agents. He is considering, in particular, the adjustment of wages and prices in
an economy of monopolistically competitive firms. These firms are subject to stochastic
shocks and set prices and wages in accordance with subjective expectations. The supply of
labor to a single firm is a function of the firm's wage offer relative to that of other
firms, and the demand for the firm's product is a function of the firm's announced price
relative to other prices. Iwai has developed and analyzed a number of models within this
general framework (CFP 415, CFDP 369, CFDP 385, CFDP 386,
CFDP 411, CFDP 423).
In the earlier papers, Iwai examined the conditions for an equilibrium in which firms'
expectations are mutually consistent and self-fulfilling. This was shown to require
"Say's condition" that aggregate demand and aggregate supply balance. Iwai
distinguishes two forms of disequilibrium. The first and trivial form of disequilibrium is
caused by disturbances of expectations while Say's condition continues to hold. The
natural-rate theory of unemployment and, more generally, Walrasian equilibrium theory have
been confined to this case. In this case, the analysis of disequilibrium is reduced to the
analysis of processes by which economic agents revise their expectations. The second and
more fundamental type of disequilibrium occurs with the disturbance of Say's condition,
which automatically disturbs a majority of firms' expectations. When, for example,
aggregate demand exceeds aggregate supply, most firms try to raise their prices and wages
relative to others' and end up with expectations betrayed. Their revisions of expectations
further raise prices and wages and generate a cumulative inflation process in which
expectations continuously lag actual events. During this cumulative process unemployment
tends to be lower than its natural rate. Since this process continues as long as aggregate
demand exceeds aggregate supply, the stability of long-run equilibrium hinges upon the
central question of whether or not cumulative increases in prices and wages can themselves
restore Say's condition. This is the same question that Tobin explored, in the aggregate,
in CFP 428 discussed above.
In later papers, Iwai incorporates a real cost associated with firms' adjustment of
money wages. He is able, with some further assumptions, to prove convergence to a
stochastic macro-equilibrium where firms' expectations are fulfilled on average. This
equilibrium is a stochastic steady state in which labor demands and supplies at individual
firms constantly fluctuate, sometimes yielding excess demand, sometimes excess supply.
Aggregate unemployment in this model is governed by the dispersion of labor market
disequilibria among firms, and its level will exceed the natural rate of unemployment
pertaining in the absence of adjustment costs. Furthermore, if the costs of reducing money
wages exceed the cost of raising them, then aggregate unemployment will be inversely
correlated with the average rate of increase of money wages. The method used to prove the
existence of the stochastic macro-equilibrium employs both random walk theory and the
theory of renewal processes.
Price determination as well as wage determination plays an important part, of course,
in the inflation process. Nordhaus has been engaged in several research projects in this
area. The longest project is a continuation of a long-term investigation, in collaboration
with Wynne Godley and others at the Department of Applied Economics in Cambridge, England,
of the process of price setting and the transmission of inflation in United Kingdom
manufacturing. Their first published article came out in 1972 (CFP 371), examining manufacturing as a
whole; since then, they have disaggregated their analysis to examine seven individual
manufacturing industries. Among the questions the study examines are the following: First,
is the "normal pricing hypothesis" an accurate description of the price
formation process in U.K. manufacturing; in particular, does inflation respond
the pressure of demand as well as to "normal" or cyclically corrected costs? In
the industries examined so far, they have found, perhaps surprisingly, that the normal
pricing model is an excellent description of price formation. Second, to what extent are
changes in corporation taxes and investment allowances passed through into prices, i.e.,
shifted forward? This part of the study has been especially difficult because it requires
building a new data base as well as developing a methodology for analyzing shifting in a
markup model. Third, to what extent have the variety of incomes policies tried in the U.K.
during the period under study affected pricing behavior? They have constructed an explicit
model of the functioning of incomes-price policies (rather than the usual dummy variable
approach) and have constructed an index of the strength of these policies. Finally, they
have constructed a model to measure the importance of world manufacturing prices in
influencing the domestic British price level. From preliminary results, it appears that
the effect of world prices is much less than had previously been thought. In addition,
Nordhaus is using some of the ideas introduced in the U.K. work to study price behavior in
the United States. As a preliminary step, in CFDP 415 presented to the American Economic Association in 1975, he
has outlined a simple model of a dual economy, part of the economy functioning along
the lines of the normal price model, part along the lines of auction markets.
A somewhat different strand of work in inflation theory and policy concerned the
structure of price indices. Nordhaus and John Shoven (of Stanford University) undertook a
careful examination of the structure of the United States Wholesale Price Index which has
been reported in "Techniques for Decomposing Inflation" (forthcoming in a
Universities-NBER conference volume). The study examined the weighting system and
analytical basis of the WPI and concluded that, in light of modern developments, it was
seriously deficient. A new index was proposed and calculated over selected recent periods,
and it was shown that the official index could differ by as much as fifty percent from
the theoretically more sound index during a period of rapid commodity inflation.
Empirical work on the relationship of output and labor input was done in the fall of
1974 by Sims, who was then visiting Cowles. The particular question with which he was
concerned is the paradox of short-run increasing returns to labor that frequently appears in
econometric studies which use quarterly data and treat the quantity of labor demanded as a
distributed-lag function of output in a single-equation model. Sims' results, based on
monthly data for production workers in manufacturing, shrink this paradox in two ways.
First, his estimates show that the response of man hours to a change in output is
essentially complete within six months and that the total response is fully proportionate.
Second, the theoretical discussion shows that, if the formation of expectations is treated
realistically, the sum of coefficients of estimated lag distributions of labor on output
would not correspond to the static optimum response of employment to output.
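In schematic terms (a paraphrase of the standard single-equation setup described above, not Sims' exact specification), the paradox and its resolution can be written as:

```latex
% Labor input as a distributed lag of output:
\ln n_t \;=\; \alpha + \sum_{k=0}^{K} \beta_k \,\ln y_{t-k} + u_t ,
% with apparent short-run increasing returns to labor when the estimated
% long-run response is less than proportional, \sum_k \hat\beta_k < 1.
% Sims' monthly estimates give a response essentially complete within six
% months with \sum_k \hat\beta_k \approx 1, i.e. a fully proportionate response.
```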
Theorists working in macroeconomics have had a strong incentive for interest in
consumer theory. The life-cycle and permanent income hypotheses of household consumption
behavior originated with macroeconomists and have been the subject of continuing research
interest at Cowles (e.g., see Report of Research
Activities, 1970-73). Extension of this interest to the expenditure behavior of
perpetual institutions is a newer phenomenon. Work on this topic was begun at Cowles by
Donald Nichols (who visited here from the University of Wisconsin in 1971-72) in
cooperation with Tobin and others. The practical issue is to design a rule for annual
expenditure from endowment, as at a university like Yale, which (a) is neutral as between
generations of faculty and students (as the trustees of immortal institutions desire), (b)
does not compel abrupt changes in expenditure levels, and (c) faces the difficulty of
distinguishing between temporary and permanent changes in the return (dividends, interest,
and capital gains or losses) earned on the endowment. A paper by Tobin, "What Is
Permanent Endowment Income," was given at the American Economic Association meetings
in 1973.
In the field of macroeconomic theory, Brainard (with R. Cooper) presented a review
paper, "Empirical Monetary Macroeconomics: What Have We Learned in the Last 25
Years?" (CFP 427), at the American
Economic Association meetings in 1974. Recent Cowles research in this field has involved
extensive empirical analysis. Brainard and Tobin have investigated the way in which the
stock market's valuation of a corporation depends on the firm's characteristics (CFDP 427). The paper includes a discussion
of the rationale for using "q" as a measure of the incentive for investment
rather than using bond or stock market yields, and attempts to obtain a measure of
"q" (the ratio of stock market valuation to replacement cost) purified of
compositional changes. The first step in the computation is to relate
"q" to "fundamental" characteristics of corporations: past growth, and
current level of earnings on real investment, cyclical sensitivity of these earnings,
coverage of debt charges and of dividends, volatility of earnings and their covariance
with other corporations, dividend pay-out policy and the stability of dividends. This is
done by a series of cross-section regressions for fifteen years; changing coefficients of
the several characteristics are thus estimated. These are of interest in themselves and
permit the computation of the value of "q" for a representative firm for the
fifteen-year period (1960-1974).
Another major part of monetary research at Cowles is concerned with the specification
and estimation of a flow-of-funds model for the U.S. economy. Earlier work on this
project, partly in collaboration with researchers at MIT, the University of
Pennsylvania, American University, and elsewhere, was described in the Report of Research Activities, 1970-73. During
the past three years, this collaborative work has continued, involving Brainard, Smith and
Tobin. The monetary sector on which work has been focused thus far has several
distinguishing features. One is that the model specifies each sector's real and financial
transactions in an integrated and consistent way. This flow-of-funds approach is a major
departure from the usual reliance on a collection of seemingly unrelated quasi-reduced
form equations.
A second distinguishing feature of the model will be its explicit treatment of
disequilibrium markets which are cleared by non-price mechanisms. Brainard and Smith
describe this approach in "Estimation of the Savings Sector in a Disequilibrium
Model" (American Economic Association meeting, 1974) and present an estimation of the
savings and loan and mutual savings bank sectors which allows for the possibility of
credit rationing in the mortgage market. They found that these sectors were apparently
never far from their notional demand schedules, which is consistent either with there not
being major disequilibria in the mortgage market or with these sectors not absorbing any
of the market disequilibria when they do arise. It is hoped that consideration of the
demand equations for other sectors will allow them to distinguish between these
alternative explanations.
A third distinguishing feature is the liberal use of a priori information in estimating
the parameters of the model. There has been a growing recognition that there is not nearly
enough independent variation in aggregate time series data alone to yield parameter
estimates which will give reliable predictions in a variety of forecasting situations.
While the inadequate effective dimensionality of the data makes it easy to find models
which fit a particular historical period quite well, it also makes it difficult for models
estimated from such data to forecast satisfactorily during other periods in which the
intercorrelations among the explanatory variables are unlike those for the sample period.
In a variety of papers, Smith has formally analyzed the effects of high
intercorrelations on forecasting accuracy and criticized some of the popular responses
(such as pretesting, stepwise regression, principal components, and ridge regression)
which impose parameter restrictions based upon the characteristics of the data rather than
the nature of the parameters. A paper by Brainard and Smith (CFDP 382) used the savings and loan and
mutual savings bank sectors to illustrate the practical value of a priori information as
opposed to ad hoc pseudo information. They found that the data were indeed very receptive
to prior information, in that it was relatively easy to pull the peculiar estimates that
resulted from using only the data into reasonable regions. The use of a mixed estimation
technique improved the model's out-of-sample forecasting accuracy; and, surprisingly, using
their prior means as exact restrictions (with only the intercept terms estimated from the
data) gave predictions which were as good as or better than estimates drawn solely from
the data. Buttressed by these results, they are applying this estimation strategy to other
sectors.
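One standard way to make such prior information operational, in the spirit of the mixed estimation mentioned above, is to append the prior means as weighted pseudo-observations. The sketch below is a generic Theil-Goldberger illustration with invented numbers, not the Brainard-Smith code:

```python
# Generic Theil-Goldberger mixed estimation: stochastic prior restrictions
# r = R*beta + v are stacked under the data y = X*beta + e and the system is
# estimated by (weighted) least squares. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
T, k = 60, 3
X = rng.normal(size=(T, k))
beta_true = np.array([0.5, -0.2, 1.0])
y = X @ beta_true + rng.normal(0.0, 1.0, T)

R = np.eye(k)                      # prior: one restriction per coefficient
r = np.array([0.4, -0.3, 0.9])     # prior means
sigma, tau = 1.0, 0.5              # error s.d. and prior s.d. (assumed known)

# Scaling the prior rows by sigma/tau equalizes error variances across the
# two blocks, so plain least squares on the stacked system is GLS.
X_aug = np.vstack([X, (sigma / tau) * R])
y_aug = np.concatenate([y, (sigma / tau) * r])
beta_mixed, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
print("mixed estimates:", beta_mixed.round(3))
```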
Econometric work on British "building societies," institutions similar in
many respects to U.S. savings and loan associations, is reported in CFDP 398 by Hendry (of the London School
of Economics and a visitor at Cowles in the fall of 1975) and Gordon Anderson (Southampton
University). This paper contains a small dynamic simultaneous equations model of this
sector which formulates the primary objectives of these institutions as relending for
mortgages a relatively constant fraction of their expected total deposits, satisfying
"reasonable" mortgage applications and maintaining their long-run reserve ratio.
Further specification of the dynamic adjustments of the building societies permits the
authors to derive a completely specified model of short-run disequilibria. Statistical
tests of the various implicit hypotheses were then performed, yielding suggestions of
appropriate ways of revising the formulation.
A complete and closed short-term macroeconometric model has also been estimated at
Cowles by Fair. The theoretical foundations for the model were developed while Fair was at
Princeton and are published in A Model of Macroeconomic Activity, Volume I: The
Theoretical Model. In this model, he integrates the behavior of financial institutions
("banks"), firms and households at the microeconomic level in a recursive model
in which prices are set by monopolistically competitive banks and firms which also set
maximum quantities they will sell or buy at these prices. This information passes to
households which then decide simultaneously how much labor to supply, how many goods to
buy and how many financial assets to acquire, subject to their flow-of-funds constraints.
The choice variables of each of the sectors are determined by maximization of utility or
profits. Aggregate flow-of-funds constraints are observed at all times and financial
earnings, including capital gains, are taken into account in households' flow-of-funds
constraints. In contrast to the flow-of-funds modelling by Brainard, Smith and Tobin,
however, more emphasis is placed on disaggregated detail in the real sectors (price
setting, production, hiring and investment decisions by firms) and the financial sector is
more highly aggregated.
The empirical or econometric model, published as Volume II, is motivated by
characteristics of the theoretical model. The empirical model accounts explicitly for
disequilibrium effects. For example, the relations explaining consumption behavior and
labor force participation include "constraint" variables incorporating the
possibility that firms may not choose to employ, at the posted relative prices (which
are also included in the relations), the full amount of labor households wish to supply.
Similarly, the relations pertaining to firms' price and wage setting include a constraint
variable to incorporate the effect of labor market tightness, and a "credit
rationing" variable to reflect various aspects of household and firm behavior. The
model is dynamic in several respects: stocks of real and financial assets are augmented by
investment flows, and lagged values of variables appear frequently in order to capture
gradual adjustments of expectations and behavior to actual events. Finally, it is true of
the empirical model, as it was of the theoretical model, that the flows-of-funds comprise
a completely closed system.
It is instructive to consider properties of the model that relate to several issues in
macroeconomics. One such issue is the relationship of the rate of inflation to the
unemployment rate. The specification of the model suggests that one is unlikely to observe
a stable Phillips curve. Wage and price changes are affected by variables such as tax
rates and import prices in addition to a variable measuring labor market tightness.
Similarly, since unemployment is determined residually as the difference between
employment and the labor force, it is influenced by all the factors determining
households' labor supply decisions (including the level of transfer payments from the
government and the marginal personal tax rate) as well as the factors determining firms'
employment decisions (including lagged as well as contemporaneous output). Hence, it is
unlikely that there would be a stable relationship between the unemployment rate and real
output. For analogous reasons, one could not necessarily expect the relationship between
aggregate demand and the rate of inflation to be stable.
Issues concerning stabilization policy are addressed by a number of simulations
presented in the book. On the controversial question of "crowding out," the
evidence from Fair's model is that a bond-financed increase in the real value of
government purchases is expansionary but considerably less so than if the increase in
purchases is financed through the monetary system. Fair has subsequently used the model
for optimal control analysis of policy issues.
In addition to the work reported above Tobin has presented lectures and papers
addressed to problems of current policy. Tobin's Janeway lectures at Princeton, "The
New Economics One Decade Older," were published in 1974. At the 1973 Economic Outlook
Conference, he presented an analysis of current inflation, distinguishing structural
sources from excess demand. In early 1974 he presented at the Federal Reserve Consultants
meeting, and published in Brookings Papers on Economic Activity, an analysis of the
monetary requirements for avoiding serious recession in 1974 and of the recessionary
implications of monetarist recommendations at that time ("Monetary Policy in 1974 and
Beyond"). "Monetary Policy, Inflation and Unemployment" is a general
expository article by Tobin on the relationship of fiscal and monetary policies to
inflation and unemployment, attempting to reconcile "Keynesian" and
"monetarist" approaches and to show the crucial importance of distinguishing
short- and long-run effects.
E. Econometrics
Applied econometric work by staff members and visitors at the Cowles Foundation is
described in the appropriate substantive sections of this report. Research in econometric
methodology is discussed here.
In a series of papers completed while Hendry was at Cowles, Hendry (CFDP 399), Hendry and Srba (CFDP 400), and Hendry and Anderson (CFDP 398) studied the consequences
of model misspecification for estimation. Since economic theory often provides only
tentative or conflicting specification of a relationship to be estimated, misspecification
is likely to be present in most empirical applications. This is particularly likely for
the dynamic specification of the model. Hence the distributions of conventionally used
econometric estimators will not be those found under the usual assumption that the
specification is correct; many conventionally appropriate procedures will be inconsistent.
In CFDP 399, Hendry analyzed the effects of misspecification on members of the class of
Generalized Instrumental Variables Estimators (GIVE) including Ordinary and Two Stage
Least Squares (OLS and 2SLS). For simultaneous and dynamic models the misspecifications
analyzed include ignoring serial correlation of the disturbances and the use of
instruments which are correlated with the disturbances. Large sample limiting
distributions are found, and their accuracy in explaining small sample outcomes is checked
by Monte Carlo experiments. Close agreement is found for both first and second moments of
OLS and 2SLS, indicating the usefulness of asymptotic approximations in small samples.
Some remarks on earlier findings of Maddala and Rao in CFDP 302 are also made.
In CFDP 400, a Monte Carlo approach is taken to studying the finite-sample behavior of
the Autoregressive Least Squares (ALS) and Autoregressive Instrumental Variables (AIV)
estimators in a dynamic simultaneous model, where inappropriate use of Ordinary Least
Squares (OLS) or Two Stage Least Squares (2SLS) is especially likely to result in
misleading inferences about dynamics. Hendry and Srba study the efficiency of
"control variables" for Monte Carlo work and find substantial gains. (A control
variable is one whose moments can be derived analytically and which is positively
correlated with the stochastic variable of interest, typically an estimator. Use of such a
control variable in Monte Carlo estimates can reduce the variance.) Concerning the
estimators, they find that AIV is optimal in large samples with substantial
autocorrelation of the disturbances; 2SLS is optimal in large samples with low
autocorrelation; ALS is optimal for small samples and high autocorrelation; and OLS is
best for small samples with low autocorrelation.
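The mechanics of a control variable can be shown in a few lines. This sketch is a generic textbook example, not the Hendry-Srba experiments: it estimates E[exp(U)] for U uniform on [0, 1], using U itself, whose mean 1/2 is known analytically, as the control.

```python
# Control-variate Monte Carlo: the estimate of E[exp(U)] is corrected by a
# correlated variable (U itself) whose mean is known exactly. Generic example,
# not the Hendry-Srba experiments.
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(size=10_000)

target = np.exp(u)       # E[exp(U)] = e - 1, the quantity being estimated
control = u              # E[U] = 1/2, known analytically

C = np.cov(target, control)
b = C[0, 1] / C[1, 1]    # variance-minimizing coefficient

crude = target.mean()
cv = crude - b * (control.mean() - 0.5)
print(f"crude MC: {crude:.5f}   control variate: {cv:.5f}   true: {np.e - 1:.5f}")
```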
In CFDP 398 mentioned in Section D, Hendry and Anderson develop and apply a sequential
procedure for testing statistically the dynamic, error process, and economic theory
components of the full specification of a model of building society behavior in the United
Kingdom.
In CFDP 404 Peck investigates
another problem of misspecification and strategy in a dynamic single-equation regression.
He considers the procedure of first testing for serially correlated errors and then,
depending on the outcome of the test, adopting an appropriate estimator and reestimating
the equation; the procedure as a whole is examined for its effects on the final estimates. This test and
reestimate process can be viewed as a preliminary test estimator with respect to a
nuisance parameter. The three components of a strategy, the test employed, the
significance level chosen, and the alternative estimator used if correlation is found, are
studied by Monte Carlo methods. Peck finds that the maximum likelihood estimator is
usually superior, that the test should usually be performed at a significance level
algebraically much higher than customary, and that the theoretically inappropriate
Durbin-Watson test is acceptable when used at these high levels.
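A minimal sketch of such a pretest strategy, assuming a simple AR(1) alternative (the data are simulated and the Durbin-Watson cutoff below is an invented stand-in for a proper critical value, not Peck's design):

```python
# Sketch of a preliminary-test strategy: test the OLS residuals for serial
# correlation, then keep OLS or switch to an AR(1)-error estimator (GLSAR).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(3)
T = 100
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):                 # AR(1) disturbances, rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
dw = durbin_watson(ols.resid)

# Peck's finding suggests switching on weak evidence, i.e. a generous cutoff.
fit = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10) if dw < 1.8 else ols
print(f"DW = {dw:.2f}, slope estimate = {fit.params[1]:.3f}")
```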
In work begun elsewhere ("On the Robust Estimation of Econometric Models," Annals
of Economic and Social Measurement, October, 1974), Fair estimated a large nonlinear
econometric model using a number of robust estimation procedures including the Least
Absolute Error (LAE) estimator. This estimator is less affected by large disturbances than
conventional estimators minimizing the sum of squared errors. Fair and Peck, in "A
Note on an Iterative Technique for Absolute Deviations Curve Fitting," have
considered some issues of the computation of LAE estimators as an iterated weighted least
squares estimator.
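The iterated weighted least squares idea can be sketched as follows (a generic illustration with simulated fat-tailed errors, not Fair and Peck's implementation): each observation is reweighted by the reciprocal of its current absolute residual, so that weighted least squares approximates the least-absolute-error criterion.

```python
# Least-absolute-error regression by iterated weighted least squares: weights
# 1/|e_i| from the previous iterate turn the WLS criterion into an
# approximation of sum |e_i|. Generic sketch, not Fair and Peck's code.
import numpy as np

rng = np.random.default_rng(4)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=T)  # fat-tailed errors

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(y - X @ beta), 1e-8)  # guard tiny residuals
    XtW = X.T * w                                     # X' W
    beta = np.linalg.solve(XtW @ X, XtW @ y)          # weighted normal equations
print("LAE-type estimates:", beta.round(3))
```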
Work by Sims ("Output and Labor Input in Manufacturing," Brookings Papers
on Economic Activity, 1974:3), mentioned in Section D, uses methods previously
developed by him to analyze problems in the estimation of the relationship between output
and labor input in manufacturing. Considerations leading to poor estimates of the dynamic
structures of that relationship are discussed and tests of exogeneity are performed.
Smith (CFDP 383) and Campbell and
Smith (CFDP 402) consider the problem
of multicollinearity as an obstacle to empirical determination of the correct
specification of a relationship. A number of possible and actual strategies used by
researchers to overcome the problems of near and perfect multicollinearity are discussed.
The strategies include reducing the number of exogenous variables by arbitrary constraints
or by preliminary test procedures and the use of Bayesian and quasi-Bayesian methods to
include weak prior knowledge in the estimation process. It is argued that pretest
procedures are not a substitute for economic theory in formulating the model. Particular
attention is paid to the consequences of these procedures for forecasting. While a priori
knowledge, properly applied, is found useful, it is argued in CFDP 402 that
one method which implicitly uses prior information, ridge regression, is typically
motivated by the characteristics of the data rather than by a priori knowledge of
the parameters. In fact, the restrictions are often placed on transformed data where it is
extremely difficult to interpret the prior information which is being incorporated.
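The point about ridge regression can be made concrete. In the sketch below (illustrative, with invented data), the ridge estimator (Z'Z + kI)^(-1) Z'y is computed on standardized data for several values of k; the shrinkage it imposes is a "prior" whose content depends on the standardization rather than on economic knowledge of the parameters.

```python
# Ridge regression as implicit prior information: (Z'Z + kI)^(-1) Z'y on
# standardized data equals a Bayesian posterior mean under a zero-mean
# spherical prior on the standardized coefficients. Invented data.
import numpy as np

rng = np.random.default_rng(5)
T, p = 80, 4
X = rng.normal(size=(T, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=T)   # nearly collinear pair
y = X @ np.array([1.0, 1.0, 0.5, -0.5]) + rng.normal(0.0, 1.0, T)

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # the usual transformation
yc = y - y.mean()

for k in (0.0, 1.0, 10.0):
    b = np.linalg.solve(Z.T @ Z + k * np.eye(p), Z.T @ yc)
    print(f"k = {k:5.1f}   coefficients: {b.round(2)}")
# Larger k shrinks every standardized coefficient toward zero -- a "prior"
# whose content follows from the standardization, not from economic knowledge.
```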
A further paper by Smith, CFDP 381,
considers difficulties with the coefficient of multiple determination, R², as a measure of predictive precision or as a decision
tool for improving the accuracy of the estimated coefficients through the deletion of
variables whose coefficients are statistically insignificant.
A paper by Sargan, CFDP 370, extends
available results on the existence of finite-sample moments for estimators in systems of
equations. Sargan establishes that for a slightly modified form of Three Stage Least
Squares (3SLS), the order of the highest moment which exists of the estimator for the
coefficients of any equation in the model is the same as for the 2SLS estimator, i.e., the
number of overidentifying restrictions. The conditions found for existence of moments are
generally sufficient for Monte Carlo work and for allowing Nagar approximations (which are
asymptotic in sample size) to the finite-sample moments to be developed. Similarly, these
results can be used to justify other limiting approximations to finite-sample moments of
the estimator, such as the Kadane approximations developed for k-class estimators (CFP 364) in which the error variance tends
to zero.
Peck has analyzed the finite-sample properties of instrumental variables estimators for
a dynamic equation using small variance asymptotic approximations. Approximate expressions
for bias and mean squared error are found for an arbitrary error covariance matrix without
the necessity of stability assumptions. This work (CFDP 433) extends earlier efforts reported in CFDP 325.
Fair has proposed a computationally feasible method for estimating large nonlinear
simultaneous equations models by full information maximum likelihood (FIML) and has
obtained these estimates for a subset of the parameters in his macroeconometric model
discussed in Section D. He has also proposed a computationally feasible method (called
FDYN) of obtaining estimates of such models based on minimizing a generalized variance of
dynamic simulation errors.
In a short note, Mirer and Peck explored issues related to the combined use of
simulation and regression procedures as proposed by B. Bergmann. Also, in work discussed
preliminarily in the previous Report, Peck demonstrated that, in the New
Jersey-Pennsylvania Graduated Work Incentive Experiment, there was bias in findings
on labor supply, due to attrition from the experimental population.
In 1973-74, Hannan (Australian National University) visited the Cowles Foundation
and the Department of Statistics. During his visit Hannan pursued research on a variety of
topics in the field of time series analysis. One was the study of the estimation of ARMAX
(Autoregressive Moving-Average systems with exogenous variables) systems. These systems,
of great importance in economics, are fully equivalent to the stationary state-space
systems that are important in systems analysis. In a paper written jointly with W.
Dunsmuir ("Vector Linear Time-Series Models" in Advances in Applied
Probability, vol. 8), Hannan gave a definitive analysis of the asymptotic properties
of an estimator for ARMA systems. In order to estimate such a system, the calculations
have to be iterative, and in another paper ("The Estimation of ARMA Models," The
Annals of Statistics, vol. 3, no. 4), he proved the consistency of an initiating
estimator. Another topic Hannan studied was the application of time series techniques to
the measurement of properties of wave forms propagated through space. An article on this
topic ("Time Series Analysis") appeared in the Institute of Electrical and
Electronics Engineers Transactions on Automatic Control (Vol. AC-19, No. 6).
These techniques are relevant to economic problems involving distributed lags where the
lags are unknown.
F. The Public Sector
Research at Cowles on the economics of the public sector has continued along three
lines: voting and political mechanisms for social choice; the interrelationship of legal
policy and economic theory; and issues related to public expenditure, taxation and income
redistribution.
1. Voting and Political Mechanisms for Social Choice. Kramer has been involved
during the past three years in a number of theoretical analyses of political mechanisms
for resolving differences in preferences and reaching collective decisions. "Formal
Theory," a paper done with a student, Joseph Hertzberg (published as Chapter 7 in
Volume 7 of Handbook of Political Science, F.I. Greenstein and N.W. Polsby, eds.),
is a non-technical survey of political science literature concerned with modelling
political institutions. In "Theories of Political Processes" (presented at the
Econometric Society, Third World Congress, 1975, and to appear in Frontiers in
Quantitative Economics III, M.D. Intriligator, ed.) Kramer presents a more rigorous
overview of recent work on the modelling of political processes in political science and
economics and discusses its relation to the social choice literature.
A major project of Kramer's, begun during his stay at the Center for Advanced Study in
the Behavioral Sciences in 197374, is a model of electoral competition. This extends
the original analysis of Downs and Hotelling which showed that under certain conditions,
an equilibrium will exist in an electoral "market" in which two political
parties compete for votes by offering rival programs or governmental policies to the
electorate. This equilibrium is of intrinsic interest, and has potential application to
the economics of the public sector. For example, these electorate models might be
exploited to construct an endogenous public sector in a general equilibrium framework; and
the efficiency and equity characteristics of the allocations resulting from this voting
mechanism might be compared to those resulting from alternative mechanisms, such as a
private market, or the various iterative procedures for centrally planned goods allocation
proposed by Dreze, Malinvaud, Groves and Ledyard, and others. The Downs-Hotelling
equilibrium is too restrictive for these purposes, however, for it exists only when the
underlying policy space is essentially one-dimensional. There have been many subsequent
attempts to extend their analysis to more realistic multi-dimensional situations, but none
has succeeded in giving a useful general characterization of the behavior of a competitive
electoral system in such situations.
The approach taken in Kramer's CFDP 396,
which draws in part on insights from work in economics by Smale (a visitor to the Cowles
Foundation during the Fall of 1974), is to imbed the problem in a more dynamic context.
The competition for votes is assumed to extend across an indefinite series of elections,
and, over time, a sequence of policies is generated, which depends on the policy choices
of the parties and the outcomes of the intervening elections. These sequences, or
trajectories, are shown to converge on a relatively small subset of the feasible policies,
and to remain in the vicinity of that set. This set, which can be explicitly
characterized, thus constitutes a sort of "dynamic equilibrium" which gives a
useful characterization of the behavior of a competitive electoral system under quite
general conditions. Moreover, the equilibrium set also has an interesting social choice
interpretation, for it turns out that there exists an essentially Arrovian social ordering
over the alternatives, whose maximal elements are precisely the points contained in the
equilibrium set. The ordering itself seems an interesting and plausible one from a
normative point of view. It has been given a precise axiomatic characterization by Douglas
Blair, in a Yale Economics Ph.D. thesis (Brown and Kramer, advisers).
Kramer has also been working on a theory of multi-party electoral competition, a
subject on which there are few useful results. One serious complication, which does not
arise in the two-party context, is the problem of strategic voting, since, when there are
three or more parties or alternatives to vote on, some voters or groups will generally
have incentives to misrepresent their true preferences and vote "strategically."
These strategic distortions are generally too complex to be usefully characterized and
pose a serious obstacle to the analysis of multi-party competition. In "A Theorem on
Proportional Representation" (unpublished, Center for Advanced Study in the
Behavioral Sciences Paper, 1974), however, Kramer shows that (under the classic
Downs-Hotelling assumptions) strategic voting will not arise under a proportional
representation electoral rule, in the sense that there are no individual or collective
incentives towards insincere voting. Kramer has also been working on a more general model
of electoral competition in which parties are assumed to have policy as well as electoral
objectives. These premises seem to lead to a much richer and more complex set of models,
the implications of which he is still exploring.
Parallel with Kramer's work on analytical modelling of electoral competition is
axiomatic work on the problem of group decision making and social choice. This work, by
several different investigators at the Cowles Foundation, has involved a number of
different techniques and approaches. One of these is game-theoretic, and involves the
analysis of power indices as measures of the a priori distribution of power among
"players" of a game when voting is employed for decision making. Such measures
have found considerable application in political science and to practical policy questions
of apportionment and electoral reform. The Shapley-Shubik power index has been
extensively employed in such questions, as has the somewhat different index proposed by
Banzhaf. Dubey, working partially with Shapley, has developed an axiomatization of these two
indices which provides an interpretation of their differences ("On the Uniqueness of
the Shapley Value," forthcoming in the International Journal of Game Theory
and "Some Properties of the Banzhaf Power Index," with Shapley, forthcoming as a
Rand report). This work suggests the existence of other indices that are intuitively
acceptable as a priori evaluations of games, some of which may be better suited to certain
applications. An axiomatic characterization of these other indices as prior probabilistic
weightings of marginal contributions to a winning coalition appears in Blair's Ph.D.
thesis. In further unpublished work, Dubey has extended the work of Shapley, Blair and
Owen to describe this class of generalized values of games.
The game theoretic approach to collective choice has also been pursued by Shubik in
collaboration with Trotter (formerly of the School of Organization and Management at Yale) and
a graduate student, van der Heyden. They have started to explore a class of games called
"budget allocation games." In these games, n individuals are required to vote
the potential inclusion of m items within a budget which may be constrained to prevent the
players from accepting all m items. They have been able to show that if this is modelled
as a game with side payments, there is no logrolling scheme which enables the individuals
to find an equilibrium price for their votes. Shubik and van der Heyden have extended
these results to no-side-payment games with the aid of a result of Shapley.
Kramer has also been exploring related issues, dealing with procedural and agenda rules
used by committees, as evolved under parliamentary practice and codified in the Rules of
Order. The analysis of "Due Process and the Rules of Order" (presented at the
1973 Meeting of the American Society for Political and Legal Philosophy, New Orleans,
December 1973) and "Some Procedural Aspects of Majority Rule" (to appear in NOMOS:
Due Process, J.R. Pennock and J.W. Chapman, eds.), drawing in part on game-theoretic
concepts and results of Farquharson, shows that the Rules of Order, in contrast to the
many other possible rules that might be used, do have intrinsic advantages as social
decision mechanisms, and minimize incentives for certain types of strategic distortions.
Brown also has been doing axiomatic work on social choice theory. One objective of this
work is to demonstrate the existence of social welfare functions that, in a qualitative
sense, approximately satisfy the conditions set forth by Arrow in his classic work on this
subject. As is well known, the only social welfare function satisfying all Arrow's
conditions is dictatorial. In CFP 419,
Brown shows that there exist social welfare functions which "approximately"
satisfy Arrow's conditions in an explicitly defined sense. In these social welfare
functions, the dictator who emerged in Arrow's formulation appears in the weaker form of a
"veto player." That is, instead of there being one individual whose preferences
are always ratified in the social welfare function, independent of the preferences of the
other individuals, there is one individual with veto power but whose preferences must be
ratified by at least one other member of society. Brown's social welfare functions, which
weaken the Arrow requirement that such functions be complete and transitive and, instead,
require acyclicity, can be ordered with respect to their social decisiveness. That is,
sigma is at least as socially decisive as mu if, for every profile of preferences and for
all x and y, whenever x is socially preferred to y under mu, x is also socially preferred
to y under sigma.
In CFDP 391, Brown extends his
earlier work on acyclic voting rules. In CFP
431 (discussed in the last Report of Research) he showed that if there are at least as
many alternatives as there are individuals, and a few other mild restrictions are placed
on the voting rule, then voting cycles can be prevented only by requiring that at least
one individual accede to every social choice. In the more recent paper, he defines and
characterizes a comparable class of acyclic voting rules over a set of alternatives
smaller than the number of voters. This class is defined in terms of the intersections of
winning coalitions; mathematically, this is an application of lattice theory which Brown
has also used in other work on social choice. An interesting example of such an acyclic
aggregation rule, due to Craven, is the rule that x is preferred to y if and only if the
proportion of individuals who prefer x to y strictly exceeds (m-1)/m, where m is the
number of alternatives. Brown is able to show that Craven rules are only a subset of
acyclic aggregation rules. He characterizes rules within the broader set with respect to
the degree of "domination" of the rule (heuristically, the extent to which the
same few individuals always determine the outcome of the social choice), the decisiveness
of the rule (cycling being an acute case of social indecision) and whether or not the rule
is anonymous (the outcome of a choice depends on the number of affirmative votes, not on
who these voters are). In another paper (CFDP
393), Brown has used lattice theory and model theory to investigate properties of
individual preference orderings that are preserved under aggregation procedures satisfying
the ethical and institutional conditions suggested by Arrow.
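A small computation makes the Craven threshold described above concrete (the profile below is invented for illustration and is not from Brown's paper): with m = 3 alternatives, a pair is socially ranked only if strictly more than two-thirds of voters agree on it, and the pairs that clear the bar form no cycle.

```python
# Craven's rule on a small invented profile: x is socially preferred to y iff
# the share of voters ranking x above y strictly exceeds (m - 1)/m.
from itertools import permutations

alts = ("x", "y", "z")
m = len(alts)
profile = [("x", "y", "z")] * 5 + [("y", "z", "x")] * 3 + [("z", "x", "y")] * 3
n = len(profile)

def prefers(order, a, b):
    return order.index(a) < order.index(b)

social = {(a, b) for a in alts for b in alts if a != b
          and sum(prefers(o, a, b) for o in profile) / n > (m - 1) / m}

# Only x over y (8/11) and y over z (8/11) clear the 2/3 bar; z over x (6/11)
# does not, so no three-cycle can form.
cycle = any({(a, b), (b, c), (c, a)} <= social
            for a, b, c in permutations(alts, 3))
print("social preferences:", sorted(social), "| cycle:", cycle)
```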
Empirical research on voting has also continued at Cowles. Much of this work is an
outgrowth of the original work by Kramer of some time ago (CFP 344) on the effect of economic conditions on the outcome of
elections. Recently, it was argued in an article by Arcelus and Meltzer (American
Political Science Review, December 1975) that earlier findings of such an effect by
Kramer and others are largely illusory because these studies did not take account of
differential effects on voter turnout. Kramer and a student, Saul Goodman, in a
"Comment on Arcelus and Meltzer, `The Effect of Aggregate Economic Conditions on
Congressional Elections'," (American Political Science Review, December 1975)
point out some serious problems of specification and interpretation in the
Arcelus-Meltzer analysis, and re-analyze their data to show that economic conditions
are indeed an important electoral influence. This finding is substantiated in work on U.S.
presidential elections by Bruno Frey and Fritz Schneider, of the University of Konstanz,
done during their visit to the Cowles Foundation in the fall of 1975.
Independent work on the effect of economic events on votes for president has been done
by Fair (CFDP 418). He uses the same
sample period of U.S. data as Kramer (1896-1972) but focuses exclusively on votes in
presidential elections. His specification is more general than Kramer's in that he allows
voters to remember the performance of a party when it was previously in office and he
considers the possibility that economic performance during a larger part of a presidential
term than the year of the election itself may be relevant to voters. Despite these
generalizations, he finds that, although economic events do have an important effect on
the presidential vote, voters are quite myopic. The rate at which they discount the
relevance of past events is so high that they consider neither the economic performance of
the opposition party during its last term in office, nor the performance of the incumbent
party in years prior to the election year. Of the three measures of economic performance
that were tested (growth in per capita real GNP, the unemployment rate, and the rate of
price inflation) the rate of growth of real income was found to be clearly the most
salient.
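Schematically, the finding can be summarized by a vote equation of the following form (a stylized rendering for illustration; Fair's actual specification includes incumbency terms and a discount rate estimated from the data):

```latex
% Stylized vote equation implied by the myopia finding (not Fair's exact form):
V_t \;=\; \alpha \;+\; \beta \sum_{j \ge 0} \rho^{\,j} g_{t-j} \;+\; u_t ,
% where V_t is the incumbent party's vote share, g_{t-j} the growth rate of
% per capita real GNP j years before the election, and rho the weight voters
% give to past performance. The estimated rho is so close to zero that only
% the election-year term beta * g_t matters in practice.
```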
The results just reported have striking implications for the behavior of a party trying
to retain power. Earlier work by Nordhaus (CFP
425) had noted that even if voters have an aversion to inflation as well as to slow
growth of income, a flatter Phillips curve in the short run than in the long run will
induce governments to pursue a cyclic and, on average, pro-inflation policy. Fair's
conclusion that voters do not respond independently to inflation, however, implies that
the only constraint on government's pursuit of expansion is the basic structure of the
economy. Fair has done extensive work developing a computationally feasible method of
solving optimal control problems for macroeconometric models. Application of these
techniques to his own model of the U.S. economy (described in Section D) indicates that
single-minded pursuit of electoral victory would call for generating a recession that
would reach its trough some time during the first three-quarters of the year preceding the
election; this would then permit a maximum growth rate of real GNP of about 20 per cent to
be achieved in the election year (CFDP 397).
A new way of evaluating the economic performance of an administration is also suggested
by the optimal control theory approach. Fair developed such a measure of economic
performance (CFDP 420) which takes into
account both the existence of exogenous shocks and lagged influences over which a given
administration has no control, and the effects that a given administration's policies may
leave behind after it has left office. If a loss function is postulated (e.g., loss
is a quadratic function of the deviations of real GNP and the rate of inflation from
respective target values), Fair then suggests that an appropriate measure of an
administration's misbehavior, M, would be the actual loss in the administration's term
less the loss that would have occurred if the administration had optimally determined the
variables under its control, plus the expected loss to the following administration
resulting from the future effects of non-optimal actions of the administration in
question. Fair has calculated approximations to such a measure for five administrations
(the first Eisenhower administration through the first Nixon administration) using two
different loss functions and assuming that the real value of government purchases is the
administration's control variable while monetary policy is managed to maintain a target
interest rate.
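In symbols, the measure can be written schematically (the notation here is invented for exposition, not Fair's):

```latex
% Schematic form of the misbehavior measure M described in the text:
M \;=\; L_{\text{actual}} \;-\; L_{\text{optimal}} \;+\; E\!\left[\Delta L_{\text{successor}}\right],
% where L_actual is the loss realized during the administration's term,
% L_optimal the loss had the administration set its control variables
% optimally, and E[Delta L_successor] the expected extra loss imposed on the
% following administration by the non-optimal actions actually taken.
```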
2. Legal Policy and Economic Theory. In his paper "Law and Economic
Theory," CFP 424, Klevorick gave
his views, as a participant in the law and economics enterprise, of the types of
contributions economic theory can make to law: to legal decision making, to the
study and development of legal doctrine, and to the study and analysis of legal structure.
The first kind of contribution an economic theorist can make in law arises when
economic concepts become important in understanding some aspect of a particular legal
case. The second involves instances where the entire structure of the problem area has
economic roots. The objectives and design of the institutions and doctrine are explicitly
stated in economic terms, and the economist is called upon to evaluate and give advice
about the best ways to achieve the specified objectives.
The third role Klevorick sees for the economic theorist in the joint enterprise of law
and economics envisions the economist or economic theorist as the propounder of a new
vocabulary, a new analytical structure for viewing a traditional legal problem. In
contrast to the economist's approach in the first two categories of interaction, in this
third role he no longer takes the problem as framed by the lawyer. Rather he takes the
general problem area with which the lawyer is concerned (say, torts, property, or
procedure) and poses in his own terms, that is, in economic terms, the problem
he sees the legal structure or legal doctrine confronting. He provides, thereby, a
different way of looking at the legal issue which yields alternative explanations of how
current law came to be what it is and proposals for new law. Klevorick goes on to
examine the kinds of problems which confront the economist when he presents a new
vocabulary or a new structure for analyzing a legal problem.
The facilities and processes governments provide for resolving legal disputes
constitute an important public service only recently analyzed by economists. For the
resolution of some of these disputes, society turns to a body of laymen: a jury. In
considering the jury as a conflict-resolving instrument, several interrelated questions
arise concerning the jury's size, the way its members are selected, and the voting rule it
uses in reaching its decision. In his paper, "Jury Size and Composition: An Economic
Approach" (presented at International Economic Association Conference on the
Economics of Public Services, 1974), Klevorick presented a theoretical structure to help
address these questions. The model, which uses a statistical decision-theoretic framework,
is then used to examine the specific issue of how "representative" a jury should
be. The paper suggests and explicates the analogy between the selection of a jury and the
selection of a portfolio of assets by an investor. Pursuing this analogy, with the
consequent delineation of the similarity between representativeness of a jury and
diversification of an investment portfolio, Klevorick draws upon portfolio selection
theory to suggest the kinds of circumstances under which representativeness would make the
jury a more effective fact-finding body and the types of situations in which
representativeness would not serve that end.
Together with Michael Rothschild of Princeton University, Klevorick has also developed
a simple, testable model of the jury decision process. They view a jury's deliberation as
a continuous-time, birth-and-death process whose state at any point in time is the number
of persons voting for acquittal at that time. The critical assumption in the model
concerns the transition probabilities from one state to another. If a transition occurs at
time t, the probability that the acquittal vote increases by one is assumed to be equal to
the fraction of the jury voting for acquittal at that time; the probability that the
number of votes for conviction increases by one is equal to the fraction of the jury
voting for conviction at that time. By assumption, the probability that more than one
juror's vote is switched at any instant in time is negligibly small. This specification is
the simplest one which captures the idea that the momentum of the majority increases with
its size. The model can be used to determine the effect on the expected deliberation time
of changing from a unanimous jury standard to a non-unanimous decision standard. The model
also makes it possible to calculate the percentage of cases which would be decided
differently if the decision standard were changed. For example, changing from unanimity to
a 10-2 standard results in a substantial percentage reduction in expected jury
deliberation time but has very little effect on the verdicts reached. It is, of course,
critically important to test the theory which underlies such predictions. Fortunately, the
theory can be tested in a way which does not compromise the secrecy of the jury.
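The model is simple enough to simulate directly. The sketch below (illustrative parameters; not Klevorick and Rothschild's calibration) starts each jury at a 7-5 first ballot for acquittal and compares expected deliberation length and verdicts under unanimity and a 10-2 standard:

```python
# Birth-and-death model of jury deliberation: when a vote switches, the
# acquittal count rises by one with probability equal to the current acquittal
# share, and falls by one otherwise. Illustrative; not the authors' calibration.
import random

rng = random.Random(6)

def deliberate(start_acquit, needed, size=12):
    """Simulate one deliberation; return (transitions, verdict)."""
    a = start_acquit
    steps = 0
    # Stop as soon as either side has the votes the standard requires.
    while a < needed and (size - a) < needed:
        a += 1 if rng.random() < a / size else -1
        steps += 1
    return steps, ("acquit" if a >= needed else "convict")

for needed, label in [(12, "unanimity"), (10, "10-2 standard")]:
    runs = [deliberate(7, needed) for _ in range(5000)]  # 7-5 first ballot
    mean_steps = sum(s for s, _ in runs) / len(runs)
    acquittals = sum(v == "acquit" for _, v in runs) / len(runs)
    print(f"{label:>13}: mean transitions {mean_steps:7.1f}, "
          f"acquittal share {acquittals:.2f}")
```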
Peck has also been involved in an application of econometric theory to a legal problem.
A statistical debate on the deterrent effect of capital punishment was published in the
December 1975 issue of the Yale Law Journal. Peck has evaluated this debate in an
article "The Deterrent Effect of Capital Punishment: Ehrlich and His Critics" (Yale
Law Journal, February 1976). The debate included disagreements over statistical
methodology (use of regression analysis vs. paired-state or matching methods),
econometric specification (e.g., linear or logarithmic functional form), and
choice of data (aggregate or disaggregate). Peck suggests ways of resolving some of
these disputes through application of more sophisticated techniques, and identifies
technical weaknesses in the statistical support actually demonstrated for the conclusions
the debaters drew.
Klevorick has continued his work on public utility regulation (discussed in the last Report of Research). One paper he has prepared
on this subject is "An Excess-Profits-Taxation Approach to Public Utility
Regulation" (presented at the Econometric Society Meetings, 1974). It analyzes a
proposal for regulatory reform, advocated most recently by Posner, which would substitute
an excess-profits tax on public utilities for the current form of rate regulation.
Regulatory commissions' responsibilities would be sharply curtailed. They would continue
to set the "fair rates of return" for regulated firms and to establish the value
of the firms' rate bases. A regulated firm, however, would then be treated in the same way
as any other firm except that if its net revenue exceeded the fair rate of return on its
rate base, this excess profit would be taxable at a rate higher than the ordinary rate on
corporate profits. In the model Klevorick considers, the firm at any point in time uses
labor and its current stocks of capital and knowledge (or technology) to produce its
(single) output. The labor input is perfectly variable while the stock of capital and the
stock of knowledge can be increased but not decreased over time. The firm's rate base
consists only of its physical capital which grows with each unit of capital investment.
The addition to the stock of knowledge resulting from a unit of investment in research is
uncertain and is assumed to be governed by a known stationary stochastic process. The
firm, operating under an excess-profits-taxation system of regulation, is assumed to
maximize the expected discounted present value of its net cash flow. Dynamic programming
techniques are used to characterize its optimal capital investment and research policies.
One interesting result is that for a wide class of production functions and demand
conditions, an increase in the excess-profits tax rate will lead to an increase,
rather than a decrease, in the amount of research and development the firm undertakes.
Klevorick is also engaged in other research on the process of public utility regulation
focusing, in particular, on the interaction between the regulated firm and the regulatory
agency.
3. Taxation, Income Redistribution and Public Expenditures. As noted in the Report of Research, 1970-73, the negative
income tax is perhaps the most widely discussed of redistribution schemes and one that has
been of considerable interest at Cowles over several years. Within the period of this
report, Brainard and Tobin (with Shoven, now at Stanford, and Bulow, a student) carried out
a calculation of the effects on various groups of the population of several proposed
packages of tax and welfare "reform." These calculations, which were presented
at the 1974 meetings of the American Economic Association and Econometric Society, all
pertained to "reforms" incorporating "cashable tax credits" or
negative income tax principles for integrating income assistance with the regular personal
income tax. A number of features of the current tax and transfer payment system were
considered (e.g., eligibility of mortgage interest payments as an income tax
deduction, the "effective tax rate" resulting from eligibility limitations in
the welfare and food stamp programs, and the treatment of family units by income tax
exemptions), and alternatives were considered which would be more smoothly integrated into
the income tax structure and would intrude fewer tax incentives into individuals' choices.
Lepper has continued her work on the issues of horizontal and vertical equity in the
supply of public services across communities differing in the size and composition of the
tax base. The last research report mentioned her preliminary analysis of data on local
expenditures for public primary and secondary education in the towns of Connecticut. This
econometric analysis has been extensively revised and the results reported in CFDP 376, which emphasizes possible
horizontal inequities that might arise from application of some proposed equalization
formulae. The possible flaw she finds in these formulae is that, while they do compensate
for differences in the size of the total tax base per pupil, they ignore such differences
between central cities and suburbs as the distribution of the tax base between residential
and business property, and the incidence of poverty.
GUESTS
The Cowles Foundation is pleased to have as guests scholars and advanced students from
other research centers in this country and abroad. Their presence contributes stimulation
and criticism to the work of the staff and aids in spreading the results of its research.
The Foundation has accorded office, library, and other research facilities to the
following guests who were in residence for various periods of time during the past three
years.
JOSEPH J. M. EVERS, Tilburg School of Economics
BRUNO FREY, Universität Konstanz
VICTOR A. GINSBURGH, Université Catholique de Louvain
V. L. MAKAROV, Academy of Sciences of the USSR
JERZY MYCIELSKI, Institute of Theoretical Physics, University of Warsaw
STEPHEN SMALE, University of California, Berkeley
EDUARDAS VILKAS, Vilnius University, Institute of Physics and Mathematics, Lithuanian
Academy of Sciences
CONSULTANTS
The following scholars, not directly affiliated with the Cowles Foundation during the
period of this report, collaborated actively in Cowles research or published Cowles
Monographs containing work conceived and initiated at Cowles.
DONALD D. HESTER (University of Wisconsin)
JAMES L. PIERCE (Board of Governors of the Federal Reserve System)
ABRAHAM ROBINSON (Yale University)
LLOYD S. SHAPLEY (Rand Corporation)
JOHN SHOVEN (Stanford University)
SEMINARS
In addition to periodic Cowles Foundation staff meetings, at which members of the staff
discuss research in progress or nearing completion, the Foundation also sponsors a series
of Cowles Foundation Seminars conducted occasionally by staff but most frequently by
colleagues from other universities or elsewhere at Yale. These speakers usually discuss
recent results of their research on quantitative subjects and methods. All interested
members of the
Yale community are invited to these Cowles Foundation Seminars, which are frequently
addressed to the general economist, including interested graduate students. The following
seminars occurred during the past three years.
1973
October 17: AXEL LEIJONHUFVUD, UCLA, "Informal Talk on Macroeconomics"
October 19: TONY ATKINSON, Essex University, England, "The Distribution of Wealth"
November 2: GARY CHAMBERLAIN, Harvard University, "Returns to Schooling and Ability as an Unobserved Component"
November 9: ARTHUR M. OKUN, The Brookings Institution, "Perspectives on the 1973 Inflation"
December 14: E.J. HANNAN, Yale University and the Australian National University, "On Measuring Leads and Lags"
1974
January 17: RAY FAIR, Princeton, "General Disequilibrium Model of Macroeconomic Activity"
February 22: ANNE KRUEGER, Massachusetts Institute of Technology, "The Political Economy of the Rent-Seeking Society"
March 1: JERRY GREEN, Harvard University, "Insurance and the Economics of Liability Law"
March 8: LLOYD S. SHAPLEY, The Rand Corporation, "Noncooperative Models of General Equilibrium"
April 5: J.D. SARGAN, London School of Economics and Yale University, "Data Mining and Model Specification"
April 12: ARNOLD HARBERGER, Princeton University, "Distributional Weights in Cost Benefit Analysis"
May 6: GEORGE STIGLER, University of Chicago, "The Theory of Enforcement"
May 10: WILLIAM J. FELLNER, Council of Economic Advisers, "On Current Economic Policy"
May 24: JOHN WILLIAMSON, International Monetary Fund, "The Impact of Increased Exchange Rate Flexibility on International Liquidity"
May 31: RICHARD NELSON, Yale University, "Factor Price Changes and Factor Substitution in an Evolutionary Model of Economic Growth"
June 7: S. DECANIO, W. PARKER, and C. VANN WOODWARD, round-table on Fogel and Engerman, Time on the Cross
June 21: JEROME STEIN, Brown University, "Inside the Monetarist's Black Box"
1975
September 19: BRUNO S. FREY, Universität Konstanz, "Modeling Politico-Economic Interdependence"
October 3: RAY FAIR, Yale University, "On Controlling the Economy to Win Elections"
November 7: BARRY SALTZMAN, Yale University, "The Theory and Practice of Modelling the Climate"
November 14: WILLIAM D. NORDHAUS, Yale University, "Can We Control Carbon Dioxide?"
November 21: WILLIAM J. BAUMOL, Princeton University and New York University, "The Weak Invisible Hand and the Multi-Product Monopoly"
December 3: D. HENDRY, T.C. KOOPMANS, and G. ORCUTT, Yale University, "Is There a Use for Theory in Econometric Modelling?"
December 11: M.W. HIRSCH, University of California and Harvard University, "A Global Newton Method for Solving General Systems of Equations"
1976
January 9: JEAN WAELBROECK, University of Brussels, CORE, and World Bank, Washington, "The Price of Energy and Potential Growth"
February 13: PETER DIAMOND, Massachusetts Institute of Technology, "Reforming Social Security"
March 26: RUDIGER DORNBUSCH, Massachusetts Institute of Technology, "Exchange Rates in the Short Run"
April 2: ROBIN MARRIS, University of Maryland, "The Public Goods Paradigm"
April 23: ROBERT MERTON, Massachusetts Institute of Technology, "The Pricing of Contingent Claims and Its Relationship to Option Pricing"
May 14: KARL SHELL, University of Pennsylvania, "The Hamiltonian Approach to Economic Dynamics"
May 18: ROBERT J. AUMANN, Hebrew University and Stanford University, "Power and Taxes in a Multicommodity Economy"
May 28: DONALD J. BROWN, Yale University, "Existence of a Market Equilibrium in an Economy with Increasing Returns to Scale"
FINANCING AND OPERATION
Since the Cowles Foundation was founded, gifts from Alfred Cowles and members of his
family have provided the cornerstone of its financial support. In 1970, the Cowles family
started an endowment at Yale to provide permanent support of the Cowles Foundation. In
June, 1974, the entire principal of the Cowles Commission was added to this endowment. The
income from the endowment replaces the income previously received in the form of gifts.
This income is supplemented by income from the smaller Marcus Goodbody Foundation
endowment. In addition, Yale University provides the use of the building at 30 Hillhouse
Avenue and supports the Foundation's research and administration through paying or
guaranteeing the salary of the Director and half of the salaries of two other Cowles
professors. These three sources of financial support provide dependable discretionary
funds permitting a degree of intellectual and administrative flexibility which is
essential to the successful operation of an organization engaged in basic research.
During the period of this report, the Cowles Foundation was also fortunate in receiving
a substantial amount of external support in the form of large, institutional grants from
the National Science Foundation and the Ford Foundation. The continuing, institutional
grant from the National Science Foundation was for the period 1973-76 and replaced
the previous institutional award which had covered the 1968-73 period. The Ford
Foundation grant provided support both for the general program of the Cowles Foundation
and for a visitors program to facilitate visits especially by Eastern European scholars
and scholars from other disciplines. This grant was for the period 1968-76. Funding
also continued to be received from the Office of Naval Research which has financed work at
Cowles on operations research and game theory since the late 1940's.
The major part of Cowles Foundation expenditures is accounted for by salaries (and
associated fringe benefits). The rest of the budget consists of office and library
expenses for materials, the cost of duplicating and distributing Cowles Foundation Papers
and Discussion Papers, computing services and travel to professional meetings and
conferences and overhead expenses charged by the University against grants and contracts.
The pattern of Cowles Foundation income and expenditures in recent years is outlined in
the table.
[Table: Cowles Foundation income and expenditures, recent years]
During the period of this report, the research staff of the Cowles Foundation included
18 or 19 members in faculty ranks (including visiting faculty and one to three staff
members on leave). This size has changed very little over the last decade. Excluding
visiting appointments, the staff included seven or eight tenured faculty in the
Departments of Economics and Political Science and the Schools of Law and of Organization
and Management. Non-tenured staff numbered eight to ten. Both permanent and younger staff
devoted one-quarter to one-half of their professional effort during the academic year and
up to two full months in the summer to their research and to seminars and discussions with
their colleagues.
Research at Cowles is facilitated by a small library in the building which makes
materials readily available to the staff and supplements the technical economics and
statistics collections of other libraries on the Yale campus (it is open during the week
to all faculty and students associated with Yale). The collection includes about 6,000
books and Government documents, 178 journals, reprints from 22 research organizations, and
a rotating collection of recent unpublished working papers. The collection is oriented
towards the research needs of the staff and emphasizes economic theory and monetary
theory, mathematics and mathematical economics, statistical and econometric studies and
methods and, recently, energy and natural resources.
The research staff was also supported by the services of five secretaries and a
manuscript typist under the supervision of Miss Althea Strauss, administrative assistant
at Cowles since the Foundation was established at Yale. The end of the period of this
report marked the end of her full-time services to Cowles as ill health forced her
retirement. Her efficient and loyal service is remembered with appreciation by all who are
or have been associated with the Cowles Foundation.
PUBLICATIONS AND PAPERS
MONOGRAPHS
See complete LISTING OF MONOGRAPHS (available for download)
COWLES FOUNDATION PAPERS
See complete LISTING OF COWLES FOUNDATION PAPERS
DISCUSSION PAPERS
See complete LISTING OF COWLES FOUNDATION DISCUSSION PAPERS
OTHER PUBLICATIONS AND PAPERS
BRAINARD, WILLIAM C.
- "Tax Reform and Income Redistribution: Issues and Alternatives," presented at
ASSA meetings in New York, December 1973 (with J. Tobin, J. Shoven, J. Bulow).
- "Estimation of the Savings Sector in a Disequilibrium Model" (with G. Smith),
presented at the North American Meeting of the Econometric Society, San Francisco,
December 1974.
- "Some Results of the American Economic Association Readership Survey,"
presented at the December 1975 meetings of the American Economic Association.
BROWN, DONALD J.
- "The Core of a Purely Competitive Economy," presented at the 1974 conference
on Nonstandard Analysis at Oberwolfach, Germany.
FAIR, RAY C.
- A Model of Macroeconomic Activity, Volume II: The Empirical Model, Ballinger Publishing Company, 1976.
- "A Note on an Iterative Technique for Absolute Deviations Curve Fitting" (with
J.K. Peck), mimeograph, November 1974.
KOOPMANS, TJALLING C.
- "Ways of Looking at Future Economic Growth, Resource and Energy Use," in Energy:
Demand, Conservation and Institutional Problems, M. S. Macrakis, ed., MIT Press, 1974.
- "Proof for a Case Where Discounting Advances the Doomsday," IIASA Research
Report RR-74-1 to appear in Review of Economic Studies.
- "Analytical Aspects of Policy Studies," presented at IIASA conference, May
1976.
- "Economics of Exhaustible Resources," to appear in Frontiers of
Quantitative Economics, Volume III. North-Holland Publishing Company.
KRAMER, GERALD H.
- "Theories of Political Process," to appear in Frontiers of Quantitative
Economics, Volume III, North-Holland Publishing Company.
- "Commentary," to appear with reprint of "Sophisticated Voting over
Multi-Dimensional Choice Spaces," in Social Science Yearbook, Volume 5,
Munich-Verlag.
- "Comment on Arcelus and Meltzer, 'The Effect of Aggregate Economic Conditions on
Congressional Elections'" with Saul Goodman, American Political Science Review,
December 1975.
LEPPER, SUSAN J.
- "Wage Indexing: Boon or Boom?" paper presented at the Econometric Society
Meetings, December 1974.
NORDHAUS, WILLIAM D.
- "Technique for Decomposing Inflation" (with J. Shoven), Stanford University
Memorandum No. 181, to appear in Volumes on Income and Wealth.
- "Energy and Economic Growth," prepared for the North American Study Group
"The Middle East and the Crisis in Relations Among the Industrialized States: The
International Economic and Political Spinoff of the Energy Crisis."
- "The 1974 Report of the President's Council of Economic Advisers: Energy in the
Economic Report," The American Economic Review, September 1974.
- "World Modelling from the Bottom Up," IIASA Research Memorandum RM-75-10,
Austria, March 1975.
- "Can We Control Carbon Dioxide?" IIASA Working Paper.
- "Mental Maps: Without Spaghetti They are Baloney," IIASA Working Paper
WP-75-44, Austria, April 1975.
- "The Demand for Energy: An International Perspective," prepared for Workshop
on Energy Demand, May 1975.
- "Proceedings of the Workshop on Energy Demand," IIASA Cp-76-1, May 1975.
- "The Effect of Incomes Policies" (with W.A.H. Godley and K.J. Coutts).
- "Short-Run Shifting of Corporation Taxes" (with W.A.H. Godley and K.J.
Coutts).
PECK, JON K.
- "A Note on an Iterative Technique for Absolute Deviations Curve Fitting" (with
R.C. Fair), mimeograph, November 1974.
SCARF, HERBERT E.
- "The 1975 Nobel Prize in Economics: Resource Allocation," Science,
November 1975, Volume 190, pp. 649, 710712.
SHUBIK, MARTIN
- Games for Society, Business and War, Amsterdam: Elsevier, 1975.
- The Uses and Methods of Gaming, New York: Elsevier, 1975.
SMITH, GARY
- "Comments on the FRB-Model," in Brookings Model: Perspective and Recent
Development, Gary Fromm, Lawrence Klein, North-Holland, 1975, pp. 568572.
- "A Model of Interrelated Financial Markets," with William C. Brainard,
presented at the Econometric Society Meetings, December 1974.
TOBIN, JAMES
- "Can We Live with Inflation? Can We Live Without It?" in the Economic Outlook
for 1974, Annual University of Michigan Conference on the Economic Outlook, November
15–16, 1973, pp. 71–80.
- The New Economics One Decade Older, Princeton, NJ: Princeton University Press, 1974.
- "What is Permanent Endowment Income?" American Economic Review, Proceedings,
Vol. LXIV, No. 2, May 1974, pp. 427–32.
- Review of Economics and the Public Purpose, by J.K. Galbraith, Yale Law
Journal, Vol. 83, No. 6, May 1974, pp. 1291–1303.
- "Monetary Policy in 1974 and Beyond," Brookings Papers on Economic Activity,
1:1974, pp. 219–232.
- "Notes on the Economic Theory of Expulsion and Expropriation," Journal of
Development Economics, 1 (1974), pp. 7–18.
- "Monetary Policy, Inflation, and Unemployment," paper presented at the
Conference Board Conference on Answers to Inflation and Recession: Economic Policies for a
Modern Society, April 8–9, 1975.
- "The World Economy in Retreat: The United States," First Chicago Report,
May 1975 (First National Bank of Chicago Conference on World Economic Stabilization).
- Essays in Economics: Consumption and Econometrics, Vol. II, North-Holland Publishing Company, 1975.
- "Discussion" of "Some Reflections on Describing Structures of Financial
Sectors" by Albert Ando and Franco Modigliani, in: The Brookings Model:
Perspective and Recent Developments, Gary Fromm and Lawrence R. Klein, eds.,
Amsterdam: North-Holland Publishing Company, 1975, pp. 656–67.