PURPOSE AND ORIGIN
The Cowles Foundation for Research in Economics at Yale University, established as an
activity of the Department of Economics in 1955, has as its purpose the conduct of
research in economics, finance, commerce, and industry, including problems of the
organization of these activities. The Cowles Foundation seeks to foster the development of
logical, mathematical, statistical, computational and other information processing methods
of analysis for application in economics and related social sciences. The professional
research staff are, as a rule, faculty members with appointments and teaching
responsibilities in the Department of Economics or other departments.
The Cowles Foundation continues the work of the Cowles Commission for Research in
Economics, founded in 1932 by Alfred Cowles at Colorado Springs, Colorado. The Commission
moved to Chicago in 1939 and was affiliated with the University of Chicago until 1955. In
1955 the professional research staff of the Commission accepted appointments at Yale and,
along with other members of the Yale Department of Economics, formed the research staff of
the newly established Cowles Foundation.
RESEARCH ACTIVITIES
1. Introduction
The Cowles Commission for Research in Economics was founded approximately forty years
ago by Alfred Cowles, in collaboration with a group of economists and mathematicians
concerned with the application of quantitative techniques to economics and the related
social sciences. This methodological interest was continued with remarkable persistence
during the early phase at Colorado Springs, then at the University of Chicago, and since
1955 at Yale.
One of the major interests at Colorado Springs was in the analysis of economic data by
statistical methods of greater power and refinement than those previously used in
economics. This was motivated largely by a desire to understand the chaotic behavior of
certain aspects of the American economy, the stock market in particular,
during the Depression years. The interest in statistical methodology was continued during
the Chicago period with a growing appreciation of the unique character and difficulties of
statistical problems arising in economics. An important use of this work was made in the
description of the dynamic characteristics of the U.S. economy by a system of
statistically estimated equations.
At the same time, the econometric work at Chicago was accompanied by the development of
a second group of interests, also explicitly mathematical but more closely connected
with economic theory. The activity analysis formulation of production and its relationship
to the expanding body of techniques in linear programming became a major focus of
research. The Walrasian model of competitive behavior was examined with a new generality
and precision, in the midst of an increased concern with the study of interdependent
economic units, and in the context of a modern reformulation of welfare theory.
The move to Yale in 1955 coincided with a renewed emphasis on empirical applications in
a variety of fields. The problems of economic growth, the behavior of financial
intermediaries, and the embedding of monetary theory in a general equilibrium formulation
of asset markets were studied both theoretically and with a concern for the implications
of the theory for economic policy. Earlier work on activity analysis and the general
equilibrium model was extended with a view to eventual applications to the comparative
study of economic systems and to economic planning at a national level. Along with the
profession at large, we have engaged in the development of analytical methods oriented to
contemporary social and economic problems, in particular the specifics of income
distribution, the economics of exhaustible resources and other limitations on the growth
of economic welfare.
For the purposes of this report it is convenient to categorize the research activities
undertaken at Cowles during the last three years in the following way:
Descriptive and Optimal Growth Theory
Game Theory and Equilibrium Analysis
Rational Behavior under Risk and the Economics of Information
Macroeconomics and Monetary Theory
Econometrics
Public Sector
2. Descriptive and Optimal Growth Theory
During the last several years Koopmans has continued his work on the extension of the
theory of optimal growth to models with many goods and in particular, the inclusion of
both capital goods and natural resources. The last three-year report described his study
of a stationary optimal growth path in such a model. Such a path starts from an initial
capital stock so constituted that optimization into the future implies its reproduction
with the same composition and level at the end of each future period. The condition to be
met by such an "invariant" capital stock is that the shadow prices for all
capital goods, taken at the beginning and the end of each period of one year, say, are
proportional, with a factor of proportionality equal to the factor whereby the utility
derived from future consumption is discounted annually. The validity of this condition was
proven in the period of this report and compared with related work by Sutherland ("On
Optimal Development Programs when Future Utility is Discounted," Ph.D. Dissertation,
Brown University, 1967).
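In symbols (the notation here is supplied for exposition and is not taken from the paper), let p_t denote the vector of shadow prices of the capital goods at the beginning of period t, and let the annual utility discount factor be alpha. The invariance condition then reads:

```latex
% Notation assumed for exposition: p_t = shadow-price vector of the
% capital goods at the start of period t; \alpha = annual discount
% factor applied to utility derived from future consumption.
p_{t+1} \;=\; \alpha\, p_t , \qquad 0 < \alpha < 1 ,
```

so that the shadow prices at the end of each period are proportional to those at its beginning, with the utility discount factor as the factor of proportionality.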
An invariant optimal capital stock may be specified as a fixed point of a particular
continuous mapping. This led Terje Hansen to develop, in the specific terms of this
problem, an algorithm in the class of fixed-point algorithms originated by Scarf and
developed by Scarf and Hansen. CFP 375
contains both Koopmans' theoretical constructions and results from Hansen's algorithm,
with an application to a constructed example. To facilitate computation of an invariant
capital stock, the technology was assumed to be based on a finite set of processes with
given ratios of inputs to outputs.
In related work, Stiglitz has investigated (CFDP 306) the transition between steady states in multi-sector linear
models with a single primary factor. Stiglitz first establishes the circumstances under
which the transition between the steady states of two different technologies can be made
without unemployment of any resource along the way. Under these circumstances he shows
that the rate of return for the transition is equal to the rate of interest at the point
at which the two technologies are equally profitable.
In the past, much of growth theory has been concerned with models in which all, or all
but one, of the factors of production are reproducible. In the last few years several
members of the staff have become interested in the analysis of the role of
non-reproducible factors, in particular, energy and natural resources, in the growth
process. In "World Dynamics, Measurement without Data," Nordhaus analyzed the
growth theoretic structure of World Dynamics. His paper noted that the theoretical
assumptions of Forrester (World Dynamics, Cambridge, MA: Wright-Allen Press, Inc.,
1971) and Meadows et al. (The Limits to Growth, New York: Universe Books,
1972), were quite arbitrary, even contrary to established empirical regularities, and
further, that their predictions about the future are very sensitive to specification of
the form of the functional relations.
Partly as an outgrowth of his service on a committee of the National Academies of
Science and of Engineering to plan studies in the field of energy, Koopmans has been
attempting to apply optimization models to the problems of exhaustible resources in
general, and of energy resources in particular. In CFDP 356, he contrasts the effect of discounting of future utilities
on the optimal path in the classical Ramsey model of aggregate capital with that in a
model containing a single exhaustible resource.
Future research planned by Koopmans (partly in collaboration with Alan S. Manne of
Stanford University) goes in the direction of construction and analysis of models of
natural resource use, especially energy resources, and the introduction of uncertainty
about future technologies in these models. A general problem also arising in this context
is that of specifying terminal capital stock conditions in development programming.
Finally a long-run aim of Koopmans' work is to find a theoretical and empirical basis for
aggregative production functions (with labor, capital and resources as inputs) in less
aggregative process analysis models of the productive system.
In a paper currently in progress Nordhaus focuses on the problem of the pricing of
appropriable natural resources. The question which is raised in this paper is whether the
market mechanism can be relied on to generate the proper scarcity prices for resources.
After an examination of the shortcomings of current resource markets, in particular the
absence of futures and insurance markets, he concludes that we may well be
skeptical about the correctness of market prices for such commodities. This work is still
in progress and will require considerable further exploration. Two related problems which
Nordhaus hopes to spend time on in the near future are the following: First, it appears
that without futures markets the market for resources is dynamically unstable. If this is
so, what are its implications? Second, it has often been proposed by non-economists that a
separate discount rate be applied to unique, non-reproducible resources or to very
important investments, particularly research. Nordhaus will be concerned with the
appropriateness of separate discount rates in the absence of perfect futures markets. In
addition, Nordhaus intends to consider some of the international aspects of resource
scarcity, in particular the impact on our balance of payments and the interaction of this
with political considerations.
It seems apparent that detailed specification of an energy or resource sector will
require inputs from individuals with a variety of technical knowledge and skills. At the
joint initiative of Koopmans and Alan S. Manne, the Cowles Foundation was host to a small
informal conference (held on November 17-18, 1972) of economists and operations
researchers interested in modeling and projecting the energy sector of the economy. A
sequel to this conference was a seminar held at Stanford on July 23-26, 1973. Also,
as a part of their teaching activities in the spring semester of 1973 Nordhaus and
Koopmans established a workshop on economic models of the resource sector in the hope that
this may stimulate the writing of dissertations in this area.
Given the amount of research in growth theory and the substantial empirical work
attempting to find the determinants of growth, it is somewhat surprising that there has
not been a greater concern with the conceptual relationship between economic growth and
the growth of economic welfare. In "Is Growth Obsolete?" Tobin and Nordhaus address
the question: Do existing measures of the economic progress of nations, namely GNP or NNP,
provide adequate statistical tools for the measurement of economic welfare? The immediate
answer is negative, for GNP is an index of production rather than consumption, and
economic welfare is concerned with consumption. Tobin and Nordhaus therefore made an
exploratory attempt to determine whether an index better designed to measure economic
welfare could be constructed. Such a measure was derived in the paper and designated a
Measure of Economic Welfare (MEW). Of particular interest was how the MEW behaved compared
to NNP: It was shown that the MEW grew more slowly than the NNP for the United States over
the period 1929 to 1965. They also examined in this paper the role of natural resources in
determining future growth patterns. And finally, they turned to the question of the effect
of reduced, or even zero population growth on the level of per capita consumption. Using a
life cycle model they estimated that the growth in per capita consumption stemming from a
reduction in population growth to a zero level would be on the order of ten per cent.
The problem of optimal economic growth can be formulated as a non-linear programming
problem. However, the present state of convex programming theory for infinite horizon
models is not entirely satisfactory. Roughly speaking, classical duality principles can be
shown to apply to finite subsections of an optimal trajectory and this avoids
inefficiencies over any finite horizon. But it has never been completely clear how to
avoid the kind of non-optimality which results from piling up too much "left
over" capital forever. In Weitzman's paper (CFDP 317) a rigorous treatment of the subject is undertaken. Under a
set of general axioms, a certain limiting "transversality condition," in
conjunction with other duality conditions, is shown to be necessary and sufficient for
optimality over an infinite horizon.
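In the notation standard for such models (assumed here; Weitzman's own axioms are more general), with k_t the capital stock and p_t the associated dual prices, the limiting transversality condition takes the familiar form:

```latex
% Assumed standard notation: k_t = capital stock in period t,
% p_t = associated dual (shadow) prices.
\lim_{t \to \infty} p_t \cdot k_t \;=\; 0 ,
```

which is precisely the requirement that no positively valued capital be "left over" forever.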
A technical point in the theory of optimization, also pertinent to intertemporal
optimization, is whether the assumption of "quasi-concavity" of an objective
function is really weaker than that of "concavity." While in general this is the
case, it need not be so if the objective function falls in a special class frequently
assumed in optimal growth theory. Koopmans clarified this question in a not yet
distributed paper with the title, "If f(x) + g(y) is Quasi-convex, at least One of
f(x), g(y) is Convex." This paper was presented in July 1972 at a Symposium on
Mathematical Methods of Economics, organized by the Mathematical Institute of the Academy
of Sciences, Warsaw, Poland.
3. Game Theory and General Equilibrium Analysis
The study of the general Walrasian model of economic equilibrium has been a continuing
interest at the Cowles Foundation for a number of years. This report will describe four
directions of research relating to this topic, which have recently been undertaken. They
are, in order of presentation, the application of nonstandard analysis to the study of
economic equilibria with an infinite number of agents; the development of techniques for
the numerical solution of general equilibrium models; the relationship between game theory
and the Walrasian model; and the incorporation of monetary considerations in a general
equilibrium framework.
1. Let us use the term "standard economy" to refer to a model of exchange
with a finite number of traders each of whom has an initial endowment and a preference
relation. If one assumes that each trader's preference relation is complete, continuous,
transitive, monotonic, and convex and that each trader is positively endowed in each
commodity, a competitive equilibrium can then be shown to exist. Having shown existence,
we would next like to know if competitive equilibria are unique and if they are a
continuous function of the defining data: the set of preferences and endowments. The
best results to date pertaining to continuity and uniqueness of the competitive
correspondence of a standard exchange economy can be found in Debreu ["Neighboring
Economic Agents," in La Decision, Colloques Internationaux du C.N.R.S. No.
171, Paris, 1969, and "Economies with a Finite Set of Equilibria," Econometrica,
38 (1970), 387-392].
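In notation supplied here for concreteness, let z(p) denote the aggregate excess demand of the economy at prices p (demand minus endowment, summed over traders). A competitive equilibrium is then a price vector p* satisfying:

```latex
% z(p) = aggregate excess demand at prices p (notation assumed).
z(p^{*}) \;\le\; 0 , \qquad p^{*} \cdot z(p^{*}) \;=\; 0 , \qquad p^{*} \;\ge\; 0 ,
```

that is, every market clears, with a zero price permitted for any good in excess supply.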
Standard exchange economies provide an admirable mathematical formulation of the
economist's notions of conflicting tastes and limited or constrained opportunities. In one
respect, however, they are inadequate for describing a major feature of perfect
competition. The assumption, which underlies the bulk of neoclassical economics, is that
each economic agent has a negligible influence in determining the market clearing prices.
Both the continuous and the nonstandard exchange economies are attempts at modeling not
only conflicting tastes and limited opportunities, but also the economic negligibility of
individual traders, by assuming an infinite number of agents. In the continuous model
agents are identified with the points on the real line. In contrast the nonstandard model
makes use of the mathematical technique introduced by Professor Abraham Robinson which
permits the arithmetic manipulation of infinitesimals and infinitely large numbers.
Continuous exchange economies were first introduced by Aumann in his important paper of
1964 ("Markets with a Continuum of Traders," Econometrica, 32, 1964). In
that paper he proved Edgeworth's conjecture that in a perfectly competitive economy, every
core allocation is a competitive equilibrium, a result which has come to be known as the
equivalence theorem. In a subsequent paper he provided an independent demonstration of the
existence of a competitive equilibrium in a continuous exchange economy.
Kannai and Hildenbrand have defined limiting processes which allow the infinite
continuous exchange economies of Aumann to be interpreted as limits of generalized
sequences of standard exchange economies. These limiting processes allow us to interpret
properties of continuous economies as approximate properties of standard economies. For
example, Edgeworth's conjecture becomes the statement that in very large economies every
core allocation is almost a competitive equilibrium. Since the Aumann existence proof does
not assume convexity of preferences, one can show that in very large economies, even
without assuming convex preferences, there exist quasi-competitive equilibria. These are
striking results and justify the amount of mathematical machinery needed to prove them.
Nonstandard exchange economies were first defined by Brown and Robinson in CFDP 308 in which they proved the
equivalence of the core and set of competitive equilibria. Recently Brown (CFDP 342) has shown the existence of a
nonstandard competitive equilibrium under assumptions analogous to those of Schmeidler,
i.e., nonconvex, continuous monotonic partial orders as preferences.
The primary motivation for introducing nonstandard exchange economies as an alternative
way of modeling perfectly competitive markets was to provide a more direct link between
the idealized infinite economy and the large standard economies than exists in the
continuum approach. The principal goal has been to obtain limit theorems and asymptotic
properties of large standard exchange economies, as is done for example in CFDP 326.
Having demonstrated the equivalence and existence theorems for non-standard exchange
economies, the immediate research problem areas are:
(a) Continuity of the core correspondence of a nonstandard exchange economy.
(b) Differentiability of the core correspondence of a nonstandard exchange economy.
The work of Debreu and Stephen Smale indicates that this is the way to approach the
uniqueness question for nonstandard economies.
(c) Nonstandard representations of continuous economies.
Brown has obtained some results concerning (c), which are contained in a preliminary
paper. The major conclusion is that the nonstandard equivalence and existence theorems
imply the continuous equivalence and existence theorems, an indication of the richness of
the nonstandard approach.
2. The second major area to be discussed in this section concerns the construction of
numerical methods for the approximation of fixed points of a continuous mapping.
Fixed-point theorems were first introduced into mathematical economics by von Neumann in
1937 in order to demonstrate the existence of equilibrium prices and capital proportions
in his disaggregated model of an expanding economy. Since that time these techniques have
become part of the standard equipment of the economist concerned with the simultaneous
equations and inequalities of the Walrasian model of general equilibrium. Along with the
activity analysis model of production and associated considerations of convexity,
fixed-point methods have enlarged the mathematical tools available to the present
generation of economists.
On the production side of the economy, linear programming has provided a superior
alternative to the calculus in discussing the relationship between competitive prices and
the choice of efficient productive techniques. The simplex method, the major tool for the
solution of linear programming problems, has also suggested important analogies between
numerical methods and the use of information concerning prices in guiding decentralized
economic decisions. But important as these theoretical insights may be, linear programming
would hardly have achieved its current prominence had it not been for its ability to
provide a remarkably effective computational procedure for the explicit solution of a wide
variety of problems with considerable practical importance.
Fixed-point methods, on the other hand, have typically been used by economists in order
to demonstrate the existence of a solution to economic models describing the interaction
between a variety of consumers and producing units. When applied to the Walrasian model,
the goal has been to demonstrate the consistency of underlying behavioral assumptions,
rather than using this type of formulation for the evaluation of economic policy.
Our ability to employ the general equilibrium model for the purpose of policy analysis
has been enhanced by the development of a series of effective computational algorithms for
the solution of fixed-point problems. The original work by Scarf and Hansen on this topic
is alluded to in the previous three-year report, along with contributions by Harold Kuhn
and Curtis Eaves. These methods are described in detail in the Cowles Foundation monograph
The Computation of Economic Equilibria, by Scarf with the collaboration of Terje Hansen.
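The flavor of such a computation can be conveyed by a minimal sketch. The following searches a fine grid on the price simplex of a small two-good exchange economy for the price at which excess demand vanishes; this brute-force grid walk is a stand-in for the simplicial subdivisions used by the Scarf-Hansen algorithms, and the Cobb-Douglas traders' shares and endowments are hypothetical, not taken from the report.

```python
# Brute-force stand-in for a simplicial fixed-point method: walk a fine
# grid on the price simplex and keep the price at which excess demand
# for good 1 is closest to zero.  Hypothetical two-trader economy.

def excess_demand(p1, agents):
    """Aggregate excess demand for good 1 at prices (p1, 1 - p1)."""
    p2 = 1.0 - p1
    z1 = 0.0
    for share, (w1, w2) in agents:   # share = Cobb-Douglas weight on good 1
        income = p1 * w1 + p2 * w2
        z1 += share * income / p1 - w1
    return z1

def find_equilibrium(agents, grid=10000):
    """Grid point minimizing |excess demand| for good 1; by Walras' law
    the market for good 2 then clears as well."""
    best_p, best_gap = None, float("inf")
    for k in range(1, grid):
        p1 = k / grid
        gap = abs(excess_demand(p1, agents))
        if gap < best_gap:
            best_p, best_gap = p1, gap
    return best_p

agents = [(0.3, (1.0, 0.0)), (0.6, (0.0, 1.0))]   # hypothetical data
p1 = find_equilibrium(agents)                      # analytically, p1 = 6/13
```

Scarf's actual method replaces the exhaustive grid walk by a systematic path through a simplicial subdivision, which is what makes the approach practical in many dimensions.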
During the period covered by this three-year report, work has continued at the Cowles
Foundation and elsewhere in several directions: the further refinement of methodology
itself, a search for potential applications beyond those previously discussed, and a
series of specific numerical implementations. Perhaps the most significant development in
the first of these directions is the recent work of Curtis Eaves ("Homotopies for
Computation of Fixed Points," Math. Programming, 3 (1972), 1-22) of
Stanford University, who visited the Cowles Foundation in the summer of 1973. Instead of
considering a single continuous mapping of the price simplex into itself, as is
customary in studying the Walrasian model, Eaves works with a continuous family of
mappings indexed by a parameter ranging, say, between zero and one. Such a family of
mappings might naturally arise if a particular parameter of the general equilibrium model
(a tax rate, for example) were varied continuously over an interval. One is then
concerned with the way in which the entire family of price equilibria depends upon the
parameter in question.
The most satisfying situation would be that in which a unique equilibrium exists for
each value of the parameter, varying continuously with the parameter. Unfortunately the
possibility of multiple equilibria may cause a more complex type of dependence to emerge,
as the following figure illustrates.
[Figure not reproduced: equilibrium prices as a function of the parameter, illustrating multiple equilibria.]
An earlier theorem of Browder ("On Continuity of Fixed Points Under Deformations
of Continuous Mappings," Summa Brasil. Math., 1960) tells us that it is
possible to select a subset of equilibria, including at least one equilibrium for each
parameter value, which forms a continuous path. The path may be forced to turn backwards
on itself, but it is possible to connect the equilibria corresponding to any pair of
parameter values in a continuous fashion. What Eaves has shown is that the numerical
algorithms previously mentioned may be modified so as to approximate the entire path in a
single computation. Since many applications of fixed-point methods involve a policy
instrument which varies over some interval, Eaves' method may turn out to have substantial
practical importance.
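The flavor of the parametric approach can be conveyed by a crude numerical analogue; the sketch below simply re-solves the model at a grid of parameter values by bisection, rather than tracing the path in a single simplicial computation as Eaves' homotopy method does. The economy is a hypothetical two-good, two-trader Cobb-Douglas exchange economy, with one trader's expenditure share playing the role of the varying parameter.

```python
# Crude numerical analogue of following a path of equilibria as a model
# parameter varies (repeated re-solution, not Eaves' simplicial homotopy).
# Hypothetical economy: trader 2's expenditure share a2 on good 1 is the
# parameter being varied.

def excess_demand(p1, a2):
    """Excess demand for good 1; trader 1 has share 0.3 and endowment (1, 0),
    trader 2 has share a2 and endowment (0, 1)."""
    p2 = 1.0 - p1
    return 0.3 + a2 * p2 / p1 - 1.0

def solve(a2, lo=1e-6, hi=1.0 - 1e-6, tol=1e-10):
    # Excess demand is decreasing in p1 here, so bisection suffices.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid, a2) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def trace_path(params):
    """Equilibrium price of good 1 at each parameter value; the analytic
    answer in this economy is p1 = a2 / (a2 + 0.7)."""
    return [(a2, solve(a2)) for a2 in params]

path = trace_path([0.4 + 0.05 * k for k in range(5)])  # a2 from 0.40 to 0.60
```

In this example the path is single-valued and monotone; the interesting cases for Eaves' method are precisely those with multiple equilibria, where the path may bend back on itself and pointwise re-solution can jump between branches.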
Earlier work on the approximation of fixed points of a continuous mapping typically
required the degree of approximation to be specified in advance. If a higher degree of
accuracy was required, it was necessary to initiate the algorithm again with no use made
of the earlier, rough approximation. Eaves' work also contains a procedure which overcomes
this numerical difficulty and permits a continual refinement of accuracy. This technique
involves a number of ideas which were independently introduced by Orin Merrill in his
doctoral thesis presented to the University of Michigan in 1971 ("Applications and
Extensions of an Algorithm that Computes Fixed Points of Certain Non-Empty Convex Upper
Semi-Continuous Point to Set Mappings," Technical Report 717, Department of
Industrial Engineering, University of Michigan). In another doctoral thesis, by Michael H.
Wagner ("Constructive Fixed Point Theory and Duality in Non-linear Programming,"
Technical Report No. 67, Operations Research Center, Massachusetts Institute of
Technology), similar considerations are applied to the solution of nonlinear programming
problems.
Fixed-point algorithms are not required for the solution of nonlinear programming
problems, since a variety of techniques which exploit the conventional convexity
assumptions have been available for a number of years. Nevertheless it is conceivable that
fixed-point methods may compete successfully with standard approaches, such as Newton's
method, particularly in a search for local maxima of nonconvex programming problems.
Further exploration of this possibility is being undertaken.
The application of fixed-point methods to the determination of an optimal invariant
capital stock is described earlier in this report. From a purely methodological point of
view this work of Hansen and Koopmans (CFP
375) may be seen as a problem of optimal control theory. As such, there is an
interesting parallel to the recent work of Allgower and Jeppson who apply fixed-point
methods to the numerical solution of nonlinear differential equations (see, for example,
Jeppson, M.M., "A Search for the Fixed Points of a Continuous Mapping," in
Mathematical Topics in Economic Theory, ed. by Richard Day (Philadelphia, SIAM), 1972, pp.
122-129, and related work by E.I. Allgower and Jeppson), admittedly a topic far
removed from the economic considerations which motivated the early development of these
techniques.
A number of studies have recently been initiated in which the numerical calculations of
equilibrium prices have been undertaken in order to evaluate the consequences of a
specific policy change. In these instances the computer is used either to cope with a
level of disaggregation which rules out the possibility of diagrammatic analysis, or
because the contemplated policy change is sufficiently large so as to cast suspicion on a
purely local analysis. For example, in 1971, Marcus Miller and John Spencer presented an
initial attempt to analyze the economic consequences of the United Kingdom's joining the
European Economic Community, based upon a general equilibrium model involving four
countries (U.K., EEC, Australia-New Zealand, and the United States), eight final
commodities (two per country) and two factors per country. Another application, to the
Hungarian economy, was presented at the meeting of the Econometric Society held at
Budapest in September 1972. In their paper entitled, "Experiences in the Application
of Scarf's Method to General Equilibrium in the Hungarian National Economy," Kondor,
Simon and Gabor construct a model based on an underlying Leontief technology, augmented by
import and export sectors, and with the explicit introduction of demand functions for
final goods. In order to determine equilibrium prices and activity levels, they made use
of a computational variant based on the observation that the more nearly the technology is
given by a pure input-output table, the more closely the equilibrium prices will be
determined by the production side alone.
As discussed below in the Public Sector section, John Shoven and John Whalley have used
the Scarf algorithm to examine the distortionary impact of the taxation of income from
capital in the United States (CFDP 328).
In another work (forthcoming in the Review of Economic Studies) Shoven and Whalley
examine the general problem of incorporating ad valorem taxes in a general
equilibrium model and discuss the modifications of the basic computational procedures
caused by this extension.
3. The basic mathematical idea underlying this variety of numerical techniques and
applications was originally introduced by Scarf in 1967 in the proof of a general theorem
providing a set of sufficient conditions that an n-person game have a nonempty core. As
may be demonstrated quite easily, these conditions are satisfied by market models in which
the customary convexity assumptions are placed on individual preferences and on the
aggregate production set. In the paper "On Cores and Indivisibilities," by Scarf and
Lloyd Shapley of the RAND Corporation, a simple example is given of an exchange economy
with indivisible commodities which also satisfies this set of sufficient conditions for
the core to be non-empty. The example may be described as follows: Let each of n consumers
own a specific indivisible good, say, a house, and let each consumer have an arbitrary
ordinal ranking of the collection of houses. Then there exists a permutation of the houses
such that no coalition could have done better for all of its members by an alternative
permutation of the houses which they initially owned.
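A permutation with this property can be constructed by the simple "top trading cycles" procedure: each owner points at the owner of his favourite remaining house, the owners on the resulting cycles exchange houses and withdraw, and the process repeats. The sketch below uses hypothetical rankings; houses are numbered by their initial owners.

```python
# Top-trading-cycles sketch (hypothetical data): house j is the house
# initially owned by consumer j; prefs[i] ranks the houses, best first.

def top_trading_cycles(prefs):
    """Return a core permutation: assignment[i] = house allotted to owner i."""
    assignment = {}
    remaining = set(prefs)
    while remaining:
        # Each remaining owner points at the owner of his favourite
        # remaining house; following the pointers must eventually cycle.
        point = {i: next(h for h in prefs[i] if h in remaining)
                 for i in remaining}
        seen, i = [], next(iter(remaining))
        while i not in seen:
            seen.append(i)
            i = point[i]
        cycle = seen[seen.index(i):]
        for j in cycle:               # everyone on the cycle trades and leaves
            assignment[j] = point[j]
        remaining -= set(cycle)
    return assignment

prefs = {0: [1, 2, 0], 1: [0, 1, 2], 2: [1, 0, 2]}   # hypothetical rankings
allocation = top_trading_cycles(prefs)
# here owners 0 and 1 swap houses, and owner 2 keeps his own
```

No coalition can improve on the resulting allocation by re-trading its own endowments, which is exactly the core property the example establishes.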
Scarf also collaborated with Gerard Debreu on a paper which appeared in the volume Decision
and Organization honoring Jacob Marschak (CFP 369). The paper places in a modern context Edgeworth's original
argument for the convergence of the core to the set of competitive equilibria. Shubik was
the first economist to recognize that Edgeworth's discussion of the contract curve in 1881
could be seen as a game theoretic result identifying the core of a replicated economy with
its set of competitive equilibria. More recently, Shapley and Shubik have been able to
establish the convergence of the value solution of an economy under replication to
its competitive equilibria. In addition, they have been able to show, under somewhat more
restrictive conditions, that a similar result obtains for the bargaining set.
The game theoretic solution concepts mentioned above are all cooperative in nature and
can be interpreted in terms of the bargaining power of groups, equity or fair division
procedures, and bilateral bargaining among individuals threatening to defect from
coalitions. It is remarkable that such different behavioral assumptions seem to lead, in
the limit, to identical conclusions. An interesting negative result was obtained by
Shapley and Shubik when they established that there exists a market for which no von
Neumann-Morgenstern stable set solution exists. This has raised questions as
to the appropriateness of the stable set solution for economic problems.
4. In the work noted above, one solution concept noticeably missing from the study of
markets under replication is the noncooperative equilibrium. In distinction to cooperative
game theory, this particular concept leads quite naturally to a dynamic formulation.
Furthermore, it does not make use of the assumption concerning Pareto optimality that is
typically employed in all cooperative solutions.
Shubik has recently succeeded in formulating a model of a closed trading economy in
terms of a strategic game which can be played noncooperatively (CFDP 324 and a subsequent series of papers
on "Theory of Money and Financial Institutions"). In order to do so, it was
natural to treat one of the commodities in a nonsymmetric manner; this special commodity
could be regarded as a commodity money used to pay for purchases. Upon replicating this
noncooperative market game Shubik observed that the limiting noncooperative equilibrium
points were not necessarily the same as the set of competitive equilibria. A cash flow
constraint had been introduced by the requirement that a certain commodity be used in
payment. If the supply of this commodity were not appropriate, the competitive solution
would not be obtained. In order to relax this constraint, it is necessary to model a loan
market with an explicit bankruptcy law.
In terms of the replicated noncooperative game it then becomes possible to define what
is meant by an optimal bankruptcy law. It is a rule which has the property that when the
game is played noncooperatively, the rule is sufficiently severe that strategic bankruptcy
is never profitable when compared with strategies not involving bankruptcy. The law must
be sufficiently lenient that in the limit the cash flow constraint for every individual is
completely relaxed; thus, an individual will be able to borrow without fear up to an
amount which in the limit will be his ex post budget constraint rather than his cash
constraint.
There are many extensions of this basic model of the noncooperative economy which
Shubik plans to pursue over the next few years. For example, the models studied so far
typically have no exogenous uncertainty and stipulate a finite time horizon. Both the
introduction of uncertainty and the accommodation of an infinite horizon require
modifications of the bankruptcy law. Shubik and Whitt have recently given a solution to a
dynamic infinite horizon model with fiat money and a single commodity using an extension
of the concept of a perfect equilibrium point (CFP 389). Another area in which work will be done is in the modeling
of the loan market, distinguishing among various types of banking functions.
Much of the work described above is closely related to the economics of information. In
particular, an implicit assumption is made concerning the nature of the information
structure as the number of traders in a market is increased. It is not unreasonable to
consider different limiting states of information as numbers grow. For example, one might
consider a market in which individuals always move simultaneously and forget all previous
history. This can be regarded as an extremely low information state and can be contrasted
with a game in which all moves are sequential with perfect information. Thus it may be of
interest to examine the sensitivity of noncooperative equilibria to variations in the
information structure. An example of such a sensitivity analysis has been given in
"Information Duopoly and Competitive Markets" (CFDP 347). Shubik expects to do further work on the economics of
information in conjunction with the work on money and financial institutions.
Ross Starr has also been concerned with the integration of monetary theory and the
theory of value. Starr's approach is the inclusion of monetary variables in a mathematical
general equilibrium model. In CFP 365 a
family of classical contentions on the relation of money and barter exchange is
investigated. It is shown there that when the restriction of "double coincidence of
wants" is placed on bilateral trade, a barter economy will in general not be able to
achieve a competitive equilibrium allocation. On the other hand, a monetary economy, with
the double coincidence condition suitably reinterpreted, will be able to do so. The
classical view of the superiority of monetary over barter exchange is verified in this
simple model.
CFDP 300 investigates the existence
of equilibrium and the demand for media of exchange as it may vary with transactions costs
and initial endowments. CFDP 310
proposes a solution to a longstanding and troublesome technical problem in this field. A monetary
economy may be demonetized, in a fashion that is difficult to treat mathematically, if the
value in exchange (price) of money becomes zero. This can be a serious theoretical problem
if the currency is unbacked. It is occasionally noted that a government can guarantee that
this currency will be accepted in payment of taxes. It is shown in CFDP 310 that this guarantee can be made
sufficient to keep the value of the currency above zero.
In "Money and the Decentralization of Exchange" (CFDP 349) Starr and Joseph Ostroy of UCLA
investigate the role of money in facilitating the process of exchange. It is shown that a
barter economy can achieve an equilibrium allocation by use of a great deal of
coordination and information transmission. In a monetary economy achievement of an
equilibrium allocation requires dramatically less information and coordination. This study
succeeds in formalizing and analyzing the often recurring and seldom analyzed concept of
"difficulties of barter" and advantages of monetary exchange. Thus, CFDP 349 generates an analytic basis for
the transactions demand for money.
4. Rational Behavior under Risk and the Economics of Information
During the most recent period, research on behavior under uncertainty has continued in
the areas discussed in the last three-year report: the development of basic concepts,
extensions of portfolio models, and the development of general equilibrium models
incorporating considerations of risk. Work on the economics of information has also been
initiated during this period.
1. Behavior under Risk. In work reported earlier (CFP 341), Michael Rothschild of
Princeton and Stiglitz examined several possible interpretations of the intuitive idea
that one distribution is riskier than another. They found that a number of apparently
different definitions were equivalent to one based on the notion of a
"mean-preserving spread," which may be stated in terms of the indefinite
integrals of differences between cumulative distribution functions. Further, they
demonstrated that in applications where one is willing to assume that individuals and
firms are risk averse, nothing is gained by a more restrictive definition, e.g., using
variance as a measure of risk. In current research, Peter Diamond of MIT and Stiglitz have
extended this concept to that of "expected utility-preserving spread." They have
established a general theorem specifying those situations in which it is possible to
determine the qualitative responses of households and firms when mean-utility-preserving
increases in risk occur.
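The integral condition can be conveyed by a small numerical sketch. The following illustration is not drawn from CFP 341 itself; it simply checks, for assumed cumulative distribution functions F and G sampled on a grid, that G is a mean-preserving spread of F: the two have equal means and the running integral of G - F is nonnegative everywhere, returning to zero at the top of the support.

```python
import numpy as np

# Illustrative check of the mean-preserving-spread criterion on CDFs.
# G spreads F when: (i) both have the same mean, and (ii) the partial
# integral T(y) of (G - F) up to y is nonnegative for every y.

def is_mean_preserving_spread(F, G, grid, tol=1e-8):
    """F, G: CDF values sampled on a uniform grid covering both supports."""
    dx = grid[1] - grid[0]
    T = np.cumsum(G - F) * dx       # T(y): running integral of (G - F)
    equal_means = abs(T[-1]) < tol  # total integral equals difference in means
    nonneg = np.all(T >= -tol)      # the "spread" condition
    return bool(equal_means and nonneg)

grid = np.linspace(-4.0, 4.0, 8001)
F = (grid >= 0.0).astype(float)                  # point mass at 0 (mean 0)
G = 0.5 * (grid >= -1.0) + 0.5 * (grid >= 1.0)   # -1 or +1, each prob. 1/2

print(is_mean_preserving_spread(F, G, grid))  # True: G spreads F about the mean
print(is_mean_preserving_spread(G, F, grid))  # False: F is not a spread of G
```

The two-point lottery has the same mean as the sure outcome but piles probability into the tails, which is exactly what the nonnegative running integral detects.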
In other work on the basic concepts used in analyzing behavior under uncertainty,
"A Note on 'The Ordering of Portfolios in Terms of Mean and Variance',"
Klevorick investigated how wide the class of expected utility functions is that can serve
simultaneously for both the normal family of distribution functions and the
two-point even-chance family of distribution functions. (The latter family consists of all
lotteries offering outcomes a and b, each with probability 1/2, when a
and b are allowed to take on any real value.) He demonstrated that the class is
quite narrow, consisting only of the family of expected utility functions derived by
taking the expectation of cubic utility-of-income functions. The result is of interest
because in an earlier paper, John Chipman had demonstrated that a great variety of
expected utility functions could serve for the normal family alone and for the two-point
even-chance family alone.
Most of the early portfolio models took a narrow view of the allocation problem facing
an individual. Typically, the consumption-saving decision was taken as given, and
transactions costs and the uncertainty in labor income and consumption needs were ignored.
The last three-year report summarized some of the continuing efforts at Cowles and
elsewhere to understand the importance of these limitations and to build models which
avoid them. For example, Stiglitz (CFP 330)
has analyzed the term structure of interest rates in a utility-maximizing model involving
consumption during each of several periods. In the face of an uncertain future short-term
interest rate, the consumer is given the choice of buying short or long-term bonds in
order to finance future consumption. In a related Ph.D. thesis (supervised by Tobin and
Stiglitz), J. Schaeffer is investigating optimal portfolio behavior in the presence of
transactions costs in the purchase and sale of assets and uncertainty about future labor
income. In order to focus on the implications of these conditions for liquidity preference,
Schaeffer assumes perfect knowledge about future asset prices.
In most portfolio analyses the joint distribution of returns on physical assets is
taken as given. A direction in which future work is planned is in the analysis of the
economic determination of the structure of risk. It seems desirable to incorporate in this
analysis the assumption that the investor simultaneously takes into account the risk in
the return to investments and the related uncertainty about final product prices. Stiglitz
has begun by exploring a simple two-commodity model in which the output of one commodity
is stochastically related to the input of the primary factor. The consequences of this
technological uncertainty for the riskiness of investment in the two sectors are not
obvious. The uncertainty about the output of one commodity creates uncertainty in the
relative prices, and hence in the return to investment in both sectors. In a simple
example, Stiglitz has established that when investors take into account the implications
of the variation in relative prices of their consumption possibilities, which investment
appears riskier depends on the elasticity of substitution of demand for the two
commodities. Indeed, for an elasticity less than unity the investor acts as if the sector
without technological uncertainty is in fact the riskier investment.
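The mechanism can be suggested by a standard revenue-elasticity identity (a sketch under assumed notation, not Stiglitz's own derivation). If demand for the stochastically produced commodity has price elasticity \eta, then along the demand curve

```latex
\frac{d\ln p}{d\ln q} \;=\; -\frac{1}{\eta},
\qquad\text{so}\qquad
\frac{d\ln (pq)}{d\ln q} \;=\; 1-\frac{1}{\eta}.
```

For \eta < 1 the revenue of the technologically risky sector therefore moves inversely with its own output, while the induced swings in the relative price are borne in part by investors in the other sector; this is one way to see how the sector without technological uncertainty can come to look like the riskier investment.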
One of the most important analytical devices available to economists for the analysis
of uncertainty is the concept of a conditional commodity introduced by Arrow and Debreu
almost twenty years ago. According to this point of view a commodity may be characterized
not only by its physical attributes and the date and location of its availability but also
by the uncertain event, or state of nature, in which it is delivered. Thus what is ordinarily called an umbrella
may be viewed as a vector of commodities contingent upon the state of the weather and
other relevant states of nature. Using this device one may immediately apply economic
arguments developed for the treatment of a certain world to the analysis of uncertainty.
This point of view has proved to be very fruitful in clarifying the conceptual problems
involved in a variety of applications. Examples are the determination of the appropriate
social discount for risk, the evaluation of firms with different financial structures and
the analysis of insurance markets. Indeed, the framework has been useful in understanding
the consequences of limiting the markets for contingent commodities themselves.
In the Arrow-Debreu framework, social risk may be defined as variation in the
social endowment across states of nature, that is, uncertainty about the aggregate
consumption opportunities of society. The consequence of a given amount of social risk for
the welfare of the individuals comprising society depends crucially on the financial
arrangements society makes for its distribution. If financial markets are perfect, the
"distribution of risk" would be Pareto optimal. In that case, prices, or
"risk premia," would be associated with each competitive equilibrium allocation
indicating the terms on which individuals would be willing to trade certain income for
risky outcomes. In the absence of perfect markets, in particular in the absence of a
complete set of contingent commodity markets, risk premia do not provide a direct
indication of the presence and magnitude of social risk. In CFP 350, Brainard and Dolbear discuss the conceptual problems of
measuring social risk in such a situation and attempt to make a rough estimate of the
extent to which, for particular sectors of the economy, private and social risk differ.
They tentatively conclude that there is a substantial difference between private and
social risk and that even in the case of the returns to capital traded in the stock market
there would be a substantial reduction in private risk if there existed a way to diversify
the returns to capital with the returns to other factors. Brainard and Dolbear then
discuss the extent to which the markets for composite commodities (stocks, bonds,
labor, etc.) are able to provide opportunities for the efficient distribution of
risk. Such an analysis is useful both in providing an insight into the reasons particular
types of financial claims exist in the real world and in suggesting areas in which
particular types of risk-spreading assets are missing.
Another example in which the absence of a complete set of contingent commodity markets
plays a central role is Stiglitz's analysis of a primitive economy which depends on
sharecropping to diversify agricultural risk (CFDP 353). In a model of such an economy Stiglitz investigates the
extent to which restricting the type of allowable contracts interferes with the efficient
allocation of risk and affects the distribution of income.
In order to obtain specific results about the effects of risk on resource allocation it
is sometimes useful to place restrictions either on preferences or on the distribution of
returns. Assuming that a typical investor's expected utility depends only on the mean and
variance of returns, Sharpe and Lintner have provided a simple expression for the value of
a firm whose assets yield risky returns. In CFP
362, Stiglitz uses the Sharpe-Lintner formula to analyze the investment decisions
of firms. If a firm purchases assets so as to maximize its stock market value as given by
the Sharpe-Lintner formula, the allocation of investment will not be optimal. Since
the equities of the various firms provide differentiated types of risk to the investor,
firms have some monopoly power. This leads to a systematic underinvestment in the more
risky activities.
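For reference, a common textbook statement of the Sharpe-Lintner valuation formula (the paper's own notation may differ) prices firm j's uncertain end-of-period return \tilde X_j as

```latex
V_j \;=\; \frac{1}{1+r}\Bigl[E(\tilde X_j)-\lambda\,\operatorname{cov}(\tilde X_j,\tilde X_M)\Bigr],
\qquad
\lambda \;=\; \frac{E(\tilde X_M)-(1+r)V_M}{\operatorname{var}(\tilde X_M)},
```

where r is the riskless rate of interest and \tilde X_M, V_M are the return and value of the market as a whole. The covariance term is the source of the differentiated risk characteristics of the various equities.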
A related set of concerns has been the determination of the optimal financial structure
of the firm under uncertainty. Some of this work, which relaxed the assumptions necessary
to demonstrate the validity of the Modigliani-Miller theorem, was described in the
last three-year report. In this work, as well as in the multiperiod extension due to
Stiglitz, it appeared necessary to maintain the unattractive assumption of a universal
belief in the impossibility of bankruptcy. Stiglitz has begun an investigation in which
individuals have different expectations about the returns to investments, including
bankruptcy. Early results suggest that it will be optimal for firms to borrow to such an
extent that, in the judgment of the lender, there will be a non-zero probability of
bankruptcy. Stiglitz also proposes to investigate the implications of divergent
expectations and bankruptcy on the determination of equilibrium prices and the allocation
of investment.
The after-tax returns of shareholders are extremely sensitive to the form in which
corporate profits are disbursed. The structure of taxes may therefore have a substantial
impact on corporate financial policy. In the absence of uncertainty, the optimal
debt-equity ratio for individuals in a high personal income tax bracket requires financing of
investment first through retained earnings, and subsequently through debt. The cost of
capital is merely the market rate of interest. Stiglitz has also analyzed the cost of
capital in the presence of uncertainty, assuming firms are constrained to avoid bankruptcy
as well as a number of other simplifying assumptions. The impact of taxes on the financial
structure of the firm, taking into account the choice of corporate form, the possibility
of bankruptcy, and the related questions of control is a fundamental and difficult
problem, deserving continued interest.
2. Risk and the Economics of Information. The inadequacy of the
assumption of perfect information commonly made in competitive models has long been
recognized. Information, like economic goods in general, is scarce, and its production
requires the input of scarce resources. Recent and continuing research at Cowles has been
concerned with the extent to which private markets provide incentives for the production
of the Pareto optimal amount of information, and the implications of imperfect information
for the behavior of markets themselves.
Some types of information (for example, information about new processes or
commodities) are in principle appropriable. In work reported in the last three-year
report, Nordhaus analyzed the optimal life of patents. Since typically there is a trade-off
between the life of the patent and the social benefit derived from an invention, optimal
patent systems only allow partial appropriation of the benefits by the inventor. This
would seem to create a presumption that in the absence of subsidy there will be an
under-supply of inventive activity. In recent preliminary work, Stiglitz has found that
this presumption may not be correct. A distortion exists since the patent system rewards
an inventor for being first in making a discovery rather than for the amount by which he
has advanced the time of its arrival. The implications of this distortion are investigated
in a simple model in which a firm can influence the probability of making a discovery in a
given time interval by varying the factor inputs. Equilibrium in the model has too many
firms doing research, each doing it at too fast a rate. One may view this result
intuitively as firms or individuals drawing upon an unpriced "pool" of potential
inventions; an externality exists which is analogous to the resource stock externalities
which occur in fisheries and some extractive industries.
Many kinds of information, such as that concerning the distribution of prices in a
market, are not patentable. Indeed, some types of economic information are not produced by
a single agent in a visible fashion, but rather are revealed by the functioning of the
market itself. One such example is the job market where potential employees have different
productive capabilities not perfectly known to employers. The employer is concerned with
the productivities of those he is about to hire, but must rely upon characteristics such
as education which are imperfect predictors of ability and may be subject to the control
of the potential employee. Another example occurs in insurance markets. Individuals may
know their own probabilities of accident but it may be difficult for insurance companies
to determine them ex ante. As a consequence insurers are led to offer complex
contracts, involving deductible and coinsurance clauses, designed to take advantage of
the self-selection process and minimize the problem of adverse selection. Stiglitz and
Rothschild are considering in a forthcoming paper the existence and properties of
equilibrium in a market for insurance with these characteristics. An equilibrium is
specified by a set of insurance policies, such that no other insurance policy can be
offered, which will be chosen in preference to those presently selected by individuals and
which will at the same time be profitable for the insurer.
5. Macroeconomics and Monetary Theory
In recent years, developments in macroeconomic theory and policy have continued to
broaden and refine the insights of the Keynesian revolution. In both theoretical and
empirical dimensions, the thrust of research at the Cowles Foundation has been to extend
simple aggregative specifications of economic relationships by an analysis of underlying
microeconomic behavior. On a theoretical level, this involves examination of the
properties of optimizing behavior by individual economic agents and explicit recognition
of the problems of aggregation. On the empirical side, there has been an attempt to
disaggregate to greater sectoral detail, even to individual households or firms.
In his Presidential Address to the American Economic Association in 1971,
"Inflation and Unemployment," Tobin presented an overview of post-Keynesian
theoretical and empirical research bearing on macroeconomic employment policy: Is
unemployment an equilibrium or a disequilibrium phenomenon? Is there a single
"natural rate" of unemployment associated with a stable rate of inflation? And
is such a rate optimal? Tobin criticizes the view that satisfactory answers to these
questions may be obtained by treating labor as a single homogeneous commodity traded in a
single market. His approach views aggregate wage and unemployment statistics as
reflections of conditions in numerous loosely-connected labor and product markets subject
to shocks which are only partially correlated. According to this view the system never
arrives at a long-run equilibrium in all markets. The responses of money wages and prices
to changes in aggregate demand reflect differences among markets in the speeds of
adjustment of wages to disequilibria, institutional constraints (such as downward rigidity
in money wages) and relative wage patterns which lead workers to emulate the wage demands
of those in markets that are contiguous in geography, industry, or skill.
Such a general approach seems more consistent with the observed behavior of money wages
and unemployment than do the results of a search model which focuses only on the movement
of labor from sector to sector. While search models have provided a valuable contribution
to the understanding of frictional unemployment, their implications for the cyclical
behavior of quit rates and hence for the shape of the short-run Phillips curve are
ambiguous.
Tobin's research plans call for continued work in the field of unemployment and
inflation. He is continuing to develop a more completely articulated model of wage and
employment behavior which extends earlier work by Lipsey, Archibald, and Holt.
Price and wage dynamics have also been given intensive examination by Nordhaus in a
number of papers. In CFDP 296, he
reviewed the theoretical foundations of work on price dynamics and inflation theory and
surveyed U.S. econometric studies of price determination. Nordhaus concluded that the
theoretical basis of price dynamics had been largely ignored in empirical work, leading to
misspecifications that could bias the results. This paper was presented at a conference
sponsored by the Board of Governors of the Federal Reserve System. At the same conference
Tobin provided the summary comment (CFDP
315). He presented the basic specifications underlying the empirical work that had
been discussed at the conference and commented upon its relevance.
In "Pricing in the Trade Cycle" (CFP 371) Nordhaus and W.A.H. Godley of Cambridge University attempt to
determine whether cost-push or demand-pull theories of price inflation more accurately
describe the process of price formation in industrial economies. The hypothesis tested is
whether price is simply a mark-up over normal historical cost or whether there are
cyclical variables which determine price movements. The study employed a new method of
testing such a hypothesis, using a constructed or predicted price series and then
comparing the predicted with the actual. The test was applied to British data in non-food
manufacturing over the period 1954 to 1969. The results confirmed the mark-up hypothesis.
Nordhaus intends to continue his collaboration with Godley in an examination of the
problem of the shifting of direct company taxation. Nordhaus and Godley have been able to
formulate measures of tax shifting and are now in the process of testing these on data for
British manufacturing. A monograph reporting on this work, with estimates covering five or
six manufacturing industries, is scheduled for completion in the next eighteen months.
A third recent paper by Nordhaus (CFP
374) in the area of short-run price and wage dynamics considered whether any unifying
explanation of the surprising worldwide acceleration of wages after 1968 could be found.
It appeared that low unemployment rates in the U.S. late in the 1960s, the subsequent
growth in inflationary expectations, and the international spread of this inflation
offered the best, though not fully satisfactory, explanation.
Concern about inflation stems of course from its impact on the welfare of households
and institutions. Although empirical studies of the distributional effects of inflation
have been conducted in the past, few of these studies rest on a theoretical foundation
which clearly defines an appropriate concept of welfare. In CFDP 329, Nordhaus attempted to remedy the difficulties of using only
current income as a measure of welfare. A concept of "annuity income" is
developed which measures the ability to sustain a consumption path over the expected life
of the household. A simple macroeconomic model is then presented which determines both the
cyclical and long-run response of capital, wealth, and employment to macroeconomic
policies. Drawing on a data set provided by the Federal Reserve System, this model is able
to simulate the impact of different counter-cyclical policies on the distribution of
income. The results indicate that if there is a long-run trade-off between inflation and
unemployment, expansionary monetary and fiscal policies lead to significantly greater
equality in the distribution of economic welfare.
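The idea of annuity income can be conveyed by a standard annuity formula (an illustrative form, not necessarily the precise definition used in CFDP 329). A household with wealth W, expected remaining lifetime T, and real interest rate r can sustain the constant consumption level

```latex
y^{A} \;=\; \frac{rW}{1-(1+r)^{-T}},
```

and welfare is measured by this sustainable level y^{A} rather than by current income, which may mix transitory receipts with the permanent ability to consume.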
In a related paper (CFDP 321),
Nichols investigates the definition of income that should be used by households or
perpetual institutions in arranging their consumption decisions. It is shown that neither
dividends alone nor dividends plus capital gains is appropriate.
Work on the macroeconomics of consumption has been concerned with the aggregation of
life cycle models which incorporate liquidity constraints faced by households. These
models permit an estimate of the influence of monetary and fiscal policy on aggregate
consumption assuming a population with specified demographic characteristics. The life
cycle theory originally set forth in Tobin's CFP 272 is expanded with explicit attention to liquidity constraints
in "Wealth, Liquidity and the Propensity to Consume" (CFDP 314, published in Human
Behavior in Economic Affairs, Essays in Honor of George Katona). The theory was the
basis of a simulation designed to evaluate a series of monetary and fiscal policy measures
carried out by Tobin and Walter Dolde (CFP
360). In this model consumers are assumed to maximize the discounted sum of utilities
from consumption in each period over a lifetime of certain length. Consumers are also
assumed to borrow in order to invest in consumer durables early in their life and are then
subject to contractual saving requirements in order to repay this loan. The amount of
additional borrowing in which they engage may be limited by quantity restrictions and is
subject to a penalty rate of interest higher than the lending rate. In the policy
simulations this model exhibited small responses to temporary changes in tax rates and
relatively large responses to long-run changes in interest rates. The liquidity
constraints, which were binding on the young and the poor, had the effect of raising the
marginal propensity to consume from current wealth and from current disposable income. The
results of the simulations appeared plausible both in magnitude and time path.
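The structure of such a simulation can be sketched formally (the notation here is assumed, not taken from CFP 360). Each consumer solves

```latex
\max_{c_1,\dots,c_T}\ \sum_{t=1}^{T}(1+\rho)^{-t}\,u(c_t)
\quad\text{subject to}\quad
a_{t+1}=(1+r_t)\,a_t+y_t-c_t,\qquad a_t\ge -\bar b_t,\quad a_{T+1}\ge 0,
```

where \rho is the subjective discount rate, y_t labor income, and a_t net financial assets; the rate r_t is the lending rate when a_t \ge 0 and a higher penalty rate when a_t < 0, and the floors -\bar b_t are the quantity limits on borrowing. It is the binding of these floors on the young and the poor that raises the marginal propensity to consume out of current wealth and income.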
Other work in macroeconomic models builds on areas discussed in the report of research
for the 1967-70 period. In particular, Bischoff has continued his investigations of
the determinants of business investment decisions. His econometric work has focused on (1)
choosing a satisfactory theoretical formulation, (2) developing investment relations that
make accurate forecasts in the presence of changes in tax and monetary policies, and (3)
giving particular attention to lag distributions. In CFP 372, for example, Bischoff investigated the effect of alternative
tax incentives on capital spending. He started with a model allowing substitution ex
ante and fixed proportions ex post. He then derived the optimal path of
investment under the assumption that firms choose factor proportions in such a way as to
minimize cost. The major advantage of Bischoff's work over earlier works is the separation
of plant and equipment into two different categories, which allows for different lags on
price and output terms, and for non-unitary elasticities of substitution between capital
and labor. The principal conclusions of the work substantially confirm the value of the
neoclassical approach to investment. Relative prices appear to have a significant effect
on equipment expenditures; and the investment tax credit is shown to have an additional,
independent, and statistically significant effect.
Bischoff intends to extend his research on investment behavior to a complete model of
demand for factors and outputs. The major work, on which he is now proceeding, is a
five-equation structural model which would simultaneously determine factor demand
(equipment, structures, and labor), output, and prices. He hopes to estimate this model,
using nonlinear methods, for a number of major industries and sectors using post-World War
II data and for the total private sector using yearly data from 1929 to 1968. A further
study in which Bischoff is currently engaged is the study of relative wages, capital and
industrial growth in the new South for the period 1865 to 1900. It is hoped that this
foray into econometric history will add evidence on the question of determination of
investment and factor price behavior in competitive market economies.
Research in monetary economics has proceeded in two directions which were discussed in
the previous report. Tobin continued to be involved in current debates about the channels
of influence and the magnitudes of the effect of changes in the money supply on other
macroeconomic variables. In the 1970-72 period, this debate revolved around articles
by Tobin (CFP 370) and Friedman in the Journal
of Political Economy concerned with the consistent specifications of models
determining real income and prices and the ability of monetary authorities to control the
real rather than the nominal money supply.
Work at the Cowles Foundation has also continued toward estimation of a complete
disaggregated model of financial markets and flows of funds. The theoretical foundations
for this project were developed in earlier work by Brainard and Tobin (see Report of Research Activities 1967-70). A
starting point for the empirical estimation was the 1971 doctoral dissertation of Gary
Smith (who subsequently joined the staff of the Cowles Foundation). Smith estimated a
financial model, using postwar quarterly flow of funds data, which accounted for the
supply of and demand for 12 types of financial instruments and the government sector. In
each sector's demand equations, the proportion of the sector's wealth which it desires to
hold in an asset is a function of interest rates on all assets and a set of additional
explanatory variables. Actual holdings of assets respond with lags to deviations of last
period's holdings from desired holdings. Whether in equilibrium or disequilibrium, each
sector's holdings of assets and liabilities are constrained to sum to net worth.
Smith's demand equations forecast quantities of sectoral asset holdings moderately well
over the first two years (1966-I through 1967-IV) of a four-year period but then drifted
off the historical path and performed worse than a naive autoregressive model. The most
dramatic characteristic of this particular out-of-sample period was the extraordinary
level of nominal interest rates, and the primary forecasting error was a predicted but
unrealized massive shift out of money holdings into interest-bearing assets. This suggests
that the primary problem may have been that the variation in rates was too large to
maintain the assumption of linearity in the demand equations.
Subsequent work building on these foundations is now underway involving Brainard,
Nordhaus, Smith and Tobin in collaboration with investigators at Massachusetts Institute
of Technology, the University of Pennsylvania and the American University. The model being
developed is designed to be linked to the real sectors of the FMP econometric model of the
United States economy. The specifications of variables and equations will permit the
"Yale model" to receive inputs of relevant variables from the real FMP sectors
and to transmit outputs to them. The performance of the full FMP model can then be
examined with the Yale financial sector substituted for the regular FMP monetary and
financial equations. [The FMP (Federal Reserve–MIT–Penn) model is one of a
family of models developed initially by a group from the Board of Governors of the Federal
Reserve System and from Massachusetts Institute of Technology. At subsequent stages of the
project researchers from a number of other institutions, particularly the University of
Pennsylvania, became involved, and sponsorship shifted to the Social Science Research
Council.] It is hoped that the same approach can be extended to other econometric models, so that
there will be a detachable and interchangeable Yale "module" which can be used
as an alternative to the regular financial sectors of large scale models.
The Yale model will recognize explicitly the interrelatedness of financial markets.
Many financial assets are imperfect, though close, substitutes for each other in the
portfolios of financial institutions and other economic agents. Moreover, certain balance
sheet identities must be respected. For individual agents and sectors, a decision to hold
funds in one asset is, simultaneously, a decision not to hold them in other assets.
Likewise, assets acquired or debts issued by one sector must be balanced by assets sold or
debt absorbed elsewhere. In the Yale model these identities are satisfied by including
in every asset demand or supply equation of a sector the same list of explanatory
variables: interest rates, exogenous constraints on disposable funds, and initial
positions in all assets.
6. Econometrics
Applied econometric work by staff members and visitors at the Cowles Foundation has
been described under the appropriate substantive subheadings of this report. Research on
problems of econometric methodology has also been undertaken in a number of different
areas.
Grether and Maddala, in CFDP 301,
studied the properties of several commonly used two-stage procedures for estimating
distributed lag models. In the presence of serially correlated errors, these procedures,
though still consistent, are known to be asymptotically less efficient than the method of
maximum likelihood. Grether and Maddala explored the large-sample performance of these
estimators under various conditions to determine how large the loss of efficiency is, and
to develop guidelines for choosing among alternative estimators. They found that with high
positive serial correlation, which is characteristic of many economic time series, the
loss in efficiency from all of the two-stage procedures is considerable, and that maximum
likelihood estimators, despite their greater computational complexity, should generally be
used instead. They also found that in some cases the two-stage procedures are even less
efficient than the instrumental variable estimators used in the first step of the
procedures.
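A stylized version of the setting they study (our notation; a geometric distributed lag with first-order serially correlated errors) is:

```latex
y_t = \lambda y_{t-1} + \beta x_t + u_t, \qquad
u_t = \rho u_{t-1} + \varepsilon_t, \qquad
\varepsilon_t \ \text{i.i.d.}\,(0, \sigma^2).
```

Because $u_t$ is correlated with the lagged dependent variable $y_{t-1}$, ordinary least squares is inconsistent; the two-stage procedures first obtain consistent estimates of $(\lambda, \beta)$, for instance by instrumental variables, and then correct for $\rho$, whereas maximum likelihood estimates all the parameters jointly.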
In CFDP 302, Maddala and Rao used
Monte Carlo sampling experiments to investigate the performance of two tests proposed by
Durbin for testing serial correlation in distributed lag models. They found that both of
Durbin's tests generally do about equally well, and that in most cases they compare
favorably with the computationally more difficult likelihood ratio test. Their results
also provide some clues as to when the tests are likely to lead to wrong conclusions.
Peck's CFDP 325 is concerned with
the properties of several procedures for using a cross section of time series to estimate
a regression containing a lagged endogenous variable. The estimators he considers were
developed and their properties investigated via Monte Carlo methods by Nerlove in an
earlier paper (CFP 348). Peck's study
was based on a different analytical approach, developed and applied by Kadane to
simultaneous equation problems (CFDP 269,
CFDP 326), which used a new type of
approximation of bias and mean-square error as the variance of the disturbance tends to
zero. Peck applies this asymptotic theory to the time-series-of-cross-sections problem
and derives analytic approximations of the bias and mean-square error of several
estimators. His results indicate that, even among large-sample-equivalent estimators,
which of several estimators is best depends upon the true coefficient of the lagged
variable, the relative variances of the time-series and cross-sectional error components,
and the autocorrelation of the exogenous variables.
Another continuing interest of Peck's is the problem of data merging. This problem,
which is of substantial practical interest, is to develop methods of combining samples
which are drawn from the same population but do not contain the same observations, and
which cover different but not disjoint sets of variables. The objective is to be able to
estimate relationships among variables in the "joint" sample. An early empirical
attempt to deal with these difficulties is criticized in Peck's "Comment" on
Benjamin Okner, "Constructing a New Data Base from Existing Microdata Sets: The 1966
Merge File." Peck's current research in this area is directed at analytical
evaluation of the cases where these techniques may succeed.
Peck is also engaged in research on models involving limited dependent variables. He
and Klevorick are exploring the possibility of extending the standard multiple probit
model, as presented in Tobin's CFDP 1,
to the case where there are several critical indices and each index is a linear
combination of a set of independent variables. They intend to derive estimating equations
for this "multi-index probit model," examine the properties of the estimators,
and compare the results obtained using the multi-index model with those obtained in
empirical studies which have used single-index probit models. Peck is also currently using
the standard probit and limited-dependent variable models, and extensions due to Amemiya
and Boskin, to investigate the problem of bias due to attrition in the New Jersey
graduated work incentive experiment. His paper on this subject will be published as part
of the forthcoming final report on the experiment.
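In schematic terms (the notation is ours, since the report does not give the functional form), the standard single-index probit model and the multi-index extension are:

```latex
% Single-index probit: one latent index x'\beta crossing a threshold,
% with \Phi the standard normal distribution function:
P(y = 1 \mid x) = \Phi(x'\beta)
% Multi-index extension (schematic): several critical indices, each a
% linear combination of the independent variables, jointly determining
% the outcome, e.g.
P(y = 1 \mid x) = \Pr\left( x'\beta_1 > u_1, \ \ldots, \ x'\beta_m > u_m \right)
```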
7. The Public Sector
During the last several years research at Cowles on the economics of the public sector
has proceeded along the following three lines: social choice and voting, legal policy and
regulation, and taxation and public expenditure.
1. Social Choice and Voting. The study of collective choice by voting or
other non-market socio-political mechanisms has a long tradition at the Cowles Foundation,
dating back to Kenneth J. Arrow's Cowles Foundation Monograph 12, Social Choice and Individual Values. Current and
prospective research by staff members continues this tradition on a variety of fronts,
ranging from abstract mathematical theory to more applied theoretical and empirical work.
In Arrow's original formulation of the problem of social choice, i.e., the aggregation
of individual preference relations into a social preference relation, he assumed that the
individual and social preference relations were complete and transitive. The Arrow
Possibility Theorem is that any aggregation procedure which is Pareto efficient, which
exhibits positive responsiveness to individual and social values, and which satisfies the
condition of independence of irrelevant alternatives is dictatorial. Brown, in
"Aggregation of Preferences," forthcoming in the Quarterly Journal of
Economics, has shown that if the social preference relation is only required to be
acyclic, then there exist nondictatorial aggregation procedures satisfying the remainder
of Arrow's conditions. These aggregation procedures are characterized by a distinguished
family of coalitions of individuals in society, which may be termed a polity. The social
preference relation defined by a polity is that society prefers x to y if and only if the
set of individuals who prefer x to y belongs to the polity. A typical polity over a
society of k individuals is given by choosing m individuals and some number n such that m
+ n = k. Society will then prefer the state x to the state y if the chosen m individuals
and at least n other members of society prefer x to y.
The most common social choice mechanisms, based on majority rule or other voting
mechanisms, are often unstable because of the phenomenon of "cyclical"
majorities. Typical public sector problems which might arise for resolution by some voting
mechanism might include determining the levels of each element in a set of governmental
services or public goods, or choosing from a set of alternative social states which can be
partially characterized by quantitative indices or social indicators. In such problems the
feasible alternatives constitute a point set in some appropriately defined
multidimensional choice space, and citizen preferences can often be represented by utility
functions with the usual properties. A variety of preference restrictions, all requiring
in effect some degree of similarity in individual preferences, have been
shown in the literature to be sufficient to eliminate intransitivities and to restore
stability to majority rule. In an earlier discussion ("On a Class of Equilibrium
Conditions for Majority Rule") Kramer showed that these conditions, when applied to
multidimensional choice problems, turn out to be extraordinarily restrictive and are
tantamount to requiring unanimity of individual preferences. In a subsequent paper
("Sophisticated Voting over Multi-dimensional Choice Spaces") the problem was
re-examined under somewhat different institutional and strategic assumptions about
individual voters. The actual course of voting is sequential, with each of the underlying
decision variables being considered and voted on separately. Voters are
"sophisticated" in Farquharson's sense, thus permitting the possibility of
strategic voting (but not explicit collusion). Kramer showed that under these assumptions
a voting equilibrium will exist if the preferences of the individual voters satisfy a
well-known additive separability condition. This result holds for a wide variety of voting
rules, and a corollary of one of the results is that the well-known single peakedness
condition is sufficient for a voting equilibrium under any voting rule which defines a
simple game (a much broader class than voting games based on the simple majority voting
rule, which have received most of the attention in the literature).
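The instability referred to above is easy to exhibit. A minimal illustration of cyclical majorities (the classic three-voter example, our own rather than one drawn from the report):

```python
# Three voters, three alternatives, simple majority rule over pairs.
voters = [
    ["A", "B", "C"],  # voter 1 ranks A > B > C
    ["B", "C", "A"],  # voter 2 ranks B > C > A
    ["C", "A", "B"],  # voter 3 ranks C > A > B
]

def majority_prefers(x, y, profiles):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for p in profiles if p.index(x) < p.index(y))
    return wins > len(profiles) / 2

# Majority preference is cyclic: A beats B, B beats C, yet C beats A,
# so no alternative is stable under simple majority rule.
print(majority_prefers("A", "B", voters))  # True
print(majority_prefers("B", "C", voters))  # True
print(majority_prefers("C", "A", voters))  # True
```

Whatever alternative is chosen, a majority coalition prefers some other alternative, which is precisely why the preference restrictions discussed in the text are needed to restore stability.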
In a paper "A Voting Model for the Allocation of Public Goods: Existence of an
Equilibrium" (a part of a forthcoming Ph.D. dissertation supervised by Brainard,
Brown, and Kramer), Steven Slutsky has examined a related problem in the context of a
general equilibrium model which includes public goods provided by an endogenous public
sector. The public sector is financed by taxes on individual income, and the amounts to be
provided of each of the public goods are determined by majority votes of the consumers who
are assumed to vote non-strategically on each of the goods separately. Slutsky establishes
the existence of a public competitive equilibrium, defined as a vector of prices for both
public and private goods, a vector of tax rates on individuals, a vector of commonly
consumed public goods and a distribution of private goods to the members of the economy.
In addition to the conventional properties of a competitive equilibrium the voting
equilibrium requires that no majority coalition of consumers will vote to alter the level
of any of the public goods. Investigation of the efficiency and distributional properties
of this equilibrium is now under way.
Voting equilibria in the social-choice-theory sense are closely related to the
game-theoretic solution concept of the core. In CFDP 343, Kramer and Klevorick find a condition (which turns out to be
a rather natural generalization of the well-known single peakedness condition) for the
existence of a "local" core (that is, an outcome which will not be blocked
by any "nearby" alternative) in the class of voting processes that can be
represented as simple games. In CFDP 351,
Shubik surveys the application of game-theoretic concepts to a variety of political
science problems, and he is currently working on a game theoretic treatment of logrolling
in voting games.
At a more applied level, Klevorick and Kramer in CFP 387 have undertaken an examination of the Genossenschaften,
a collection of agencies responsible for managing water quality in the Ruhr area of
Germany. These agencies are widely cited in the environmental literature as successful
models of basin-wide management of water resources. The typical Genossenschaft has
the authority to set water quality standards, raise revenue by imposing effluent charges
on industries and towns within its jurisdiction, and use the revenues so raised to
construct treatment facilities. Ultimate authority within each agency rests with a voting
body composed of representatives of communities and industries in the area, and typically
voting strength in the assembly is approximately proportional to each member's financial
contribution to the agency. Since contributions depend primarily upon effluent charges,
the largest polluters have the most votes in determining water quality standards, an
arrangement which seems strange. From a theoretical point of view, the existence of an
equilibrium or self-sustaining water quality standard under such a representation scheme
is not obvious. Different agents will respond differently (depending on their concern for
water quality and their ability to treat their own waste) when the level of effluent
charges is changed. Thus, if the assembly were to vote a change in the charges, a
different distribution of votes would result under the new charges, and the newly
reapportioned assembly might then vote a further change, leading to the possibility of a
succession of changes in effluent charges. Using theoretical voting models,
Klevorick and Kramer establish conditions for the existence of a self-sustaining voting
equilibrium under this representation scheme, and they compare it with the water quality
standard that would prevail under alternative political mechanisms. They also find that
introduction of a more efficient technology for pollution treatment may, under certain
circumstances, actually result in a lowering of the prevailing water quality standard when
the Genossenschaften representation scheme is used to decide on water quality.
In general, a majority voting rule may result in either an over- or under-supply of
public goods, depending upon the nature of the good, the distribution of individual
preferences, the tax schedule, and so forth. The situation is even more complicated when a
number of communities are involved. F. Westoff, in a Yale Ph.D. dissertation in progress
(supervised by Brainard, Brown, and Stiglitz), has been exploring the application of
voting theory to the "Tiebout" local public goods problem. Consider a society
composed of a number (possibly variable) of small communities, with individuals free to
move from one community to another, and with the level of public goods provided in each
community determined by majority vote of the residents of that community. The existence of
an equilibrium allocation of individuals to communities and set of levels of public goods
is not obvious in such a system, even if individual preferences satisfy the "single
peakedness" assumption. Westoff has shown the existence of an equilibrium under
reasonably general conditions, and is currently exploring the question of efficiency in
this context.
In a joint research project, Klevorick and Professor A.B. Atkinson of the University of
Essex use voting theory to study the social choice of rules for achieving distributive
justice. They envision a representative constitutional assembly whose function is to
select the principle that the society's government shall follow in designing its income
redistribution policy. In one case the government's instruments are limited to a linear
income tax and in another case to lump-sum taxation. They also restrict the menu of
possible principles before the assembly to three: average utilitarianism, a Rawlsian
maximin policy, and laissez faire nonintervention. Individuals in the society are
all assumed to have the same utility as a function of consumption and leisure but to
differ in their ability levels, which in turn determine their respective wage rates.
Finally, Atkinson and Klevorick consider several alternatives with respect to the
information each individual has about his own ability level: perfect knowledge,
knowledge only of the distribution of ability levels in the society as a whole, and an
intermediate case of imperfect knowledge. Individuals are assumed to act to further their
own interests, as represented by maximizing utility or (in the case of limited knowledge)
expected utility. Given the knowledge people have about their position, conflicts of
interest arise about which principle the government should use in designing the tax
structure, and these conflicts are assumed to be resolved by a voting rule (for example,
simple majority rule) in the assembly. The research in progress examines what principle is
adopted by the assembly under alternative conditions concerning, for example, the
distribution of ability levels in society, the degree of information people have about
their own ability levels and the degree of risk aversion people display.
In CFDP 333, Nordhaus has examined
the behavior of a political voting mechanism in dealing with economic stabilization.
Starting with a model of political behavior developed earlier by Kramer in CFP 344, Nordhaus added an economic model
postulating a dynamic relationship between unemployment and inflation. Two basic
theoretical results were demonstrated in the paper. First, as a result of the dynamics of
the economic and political system, the long-run equilibrium in such a model will have a
lower unemployment rate and a higher inflation rate than would be chosen by a
conventional social welfare function. Second, economic policy within the period of
incumbency will display a marked cyclical pattern: it is optimal from a political point
of view to deflate the economy in the early stages of incumbency and to expand in the
latter stages.
In an earlier paper (CFP 344) Kramer
investigated econometrically the relationship between aggregative economic fluctuations
and election outcomes in the United States. Kramer and Lepper subsequently
("Congressional Elections") have pursued these questions at a more disaggregated
level. Most recently, in CFDP 341,
Lepper has extended this line of research both theoretically, clarifying the
micro-behavioral assumptions underlying the specification of the earlier econometric
models, and empirically, using a variety of econometric techniques.
2. Legal Policy and Regulation. Klevorick has continued his research on
the behavior of regulated public utility firms. In "The Behavior of a Firm Subject to
Stochastic Regulatory Review" (CFP 393)
he proceeded with his efforts to provide a more realistic model of the regulatory process
than the standard Averch–Johnson model. In particular, the model Klevorick proposes
in this paper considers the firm's operations in a dynamic context (with the firm looking
to the future in making today's decisions), and it incorporates some of the interplay
between the regulatory agency and the firm. The model captures the price-setting role of
the regulators, and it encompasses the phenomenon of regulatory lag. Rate reviews are
assumed to occur stochastically through time, and the model also incorporates technical
change generated by the firm's program of research and development. The regulated firm's
optimal policy is characterized, and the implications of this policy for two traditional
issues in regulatory economics (the input efficiency of firms and the effect of
regulatory lag on research and development) are examined.
Klevorick's research suggests the need for a re-examination of the economic and legal
rationales offered for regulation and a more precise statement of our societal goals in
controlling public utilities. Klevorick has also begun some interdisciplinary research
bridging the fields of economics and law. He is concerned with the possible contribution
of economics to our understanding of several issues in constitutional law, specifically,
the "new" equal protection doctrine and freedom of speech. In the case of the
former, he is focusing on the economic basis, if any, for the central concept of a
"fundamental interest" (analogous to the concept of a "merit want"
suggested by Musgrave). With respect to the freedom of speech issue, he is attempting to
use economic theory to explore what legal and policy implications flow from the
justification of freedom of speech which rests on the economic metaphor of "the
market place of ideas."
3. Taxation and Public Expenditure. The primary emphasis of work
discussed in Section 1 above is on the mechanism of social choice rather than the economic
consequences of the choices themselves. Various members of the staff have also been active
in examining, both theoretically and empirically, the distributional and efficiency
consequences of government taxation and expenditure policies.
The investigation of efficient taxation is a continuing problem of general research
interest. Recently, Samuelson, Diamond, and Mirrlees have returned to the classic work of
Ramsey (1927) and Boiteux (1943) and placed it in a modern setting. Ramsey argued that in
the case of independent demand curves, efficiency requires that commodity taxes be
proportional to the sum of the reciprocals of the demand and supply elasticities; Diamond
and Mirrlees, on the other hand, derive tax formulae depending only on demand
elasticities. In CFP 352, Stiglitz and
Dasgupta show both to be special cases of a more general formulation: Ramsey implicitly
assuming that rents cannot be taxed, and Diamond and Mirrlees implicitly assuming either
no rents or 100% rent taxes. More generally, the optimal tax formulae depend on the set of
restrictions which are imposed on the class of admissible taxes, and whether there is an
overall budget constraint on the government.
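In its familiar inverse-elasticity form (independent demands; our notation), the Ramsey result referred to above is:

```latex
\frac{t_i}{p_i} \;=\; k \left( \frac{1}{\varepsilon^{d}_i} + \frac{1}{\varepsilon^{s}_i} \right),
```

where $t_i$ is the tax on commodity $i$, $\varepsilon^{d}_i$ and $\varepsilon^{s}_i$ are the demand and supply elasticities, and $k$ is a constant determined by the revenue requirement; in the Diamond–Mirrlees case, with rents fully taxable, only the demand-elasticity term survives.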
Further questions of optimal tax structure in the case of additive utility functions
are discussed by Atkinson and Stiglitz in CFP
367. In this paper it is noted that the Ramsey rule alluded to above assumes
implicitly a constant marginal utility of leisure. On the other hand, if labor is
completely inelastically supplied, it is well known that a uniform tax on all commodities
(i.e., a tax on labor income alone) is optimal. This suggests that more generally, the
optimal tax formulae might be written as a weighted "average" of the Ramsey
rates and a set of uniform rates, with the weights related to the elasticity of the
marginal utility of leisure. In the special case of additive utility functions this in
fact turns out to be the case.
Although the "optimal tax structure" obviously involves less dead-weight loss
than a simple income tax, it is also likely to be administratively more complex, and the
desirability of adopting it depends at least in part on the magnitude of the resource
savings. Using estimated demand functions, Stiglitz and Atkinson are attempting to derive
optimal tax rates and to estimate the welfare gains involved in switching to the optimal
tax structure.
Stiglitz has also applied the analysis of efficient taxation to the specific problem of
the taxation of risky assets in a general equilibrium context in CFDP 305. The question posed is whether
there should be differential treatment of risky and safe securities. The answer is shown
to depend upon the source of uncertainty, the objectives of government policy, and the
effects of taxation on the financial structure of firms. In CFP 383, the effects of taxation on financial structure and on the
cost of capital are explored further. It is shown that under certain circumstances (for
example, when the corporate tax rate is less than the personal income tax rate) the
combined effects of the corporate profits tax, the special treatment of capital gains and
the provisions for interest deductibility need not create a divergence between the
marginal cost of capital in the incorporated and unincorporated sectors.
Work has also continued on the distributive as well as the efficiency aspects of
taxation. The negative income tax is perhaps the most widely discussed of redistribution
schemes. As indicated in the last two research reports, there is continuing interest in
the design and analysis of such plans. Most recently, Peck and Harold W. Watts (Irving
Fisher Professor of Economics, 1972–73) have evaluated a variety of linear negative
income tax plans in a paper, "On the Comparison of Income Redistribution Plans,"
written for the Institute for Research on Poverty. A variety of plans differing in their
marginal tax rates and intercepts were contrasted with the current (1970) tax structure.
It was found that substantial amounts of poverty could be alleviated without increased
taxation of middle income taxpayers.
Studies analyzing the distributional effects of government taxation or expenditure policy
frequently need to make use of summary measures of inequality. A difficulty arises in that
different measures often give contrasting estimates of the degree of inequality. Atkinson,
in a recent paper, has utilized the conceptual framework developed by Rothschild and
Stiglitz (CFP 341) in their analysis
of risk to provide measures of income inequality. In CFDP 344, Rothschild and Stiglitz present an axiomatic formulation of
the ordering under which one distribution can be said to be more unequal than another and
explore the implications of this ordering for summary measures of inequality.
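As a concrete instance of such a summary measure, Atkinson's own index can be computed directly (the formula is standard; the incomes below are illustrative, not data from the report):

```python
# Atkinson's inequality index: 1 minus the ratio of the "equally
# distributed equivalent" income to mean income.
def atkinson(incomes, eps):
    """Atkinson index in [0, 1): 0 means perfect equality.
    eps > 0, eps != 1, is the degree of inequality aversion."""
    n = len(incomes)
    mean = sum(incomes) / n
    # equally distributed equivalent income, as a fraction of the mean
    ede = (sum((y / mean) ** (1 - eps) for y in incomes) / n) ** (1 / (1 - eps))
    return 1 - ede

equal = [10.0, 10.0, 10.0, 10.0]
skewed = [1.0, 2.0, 3.0, 34.0]

print(atkinson(equal, 0.5))                           # 0.0
print(atkinson(skewed, 0.5) > 0)                      # True
print(atkinson(skewed, 2.0) > atkinson(skewed, 0.5))  # True
```

The last comparison illustrates the point in the text: the measured degree of inequality depends on the aversion parameter chosen, so different measures can rank the same distributions differently.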
Theoretical and empirical studies at Cowles have pursued a "descriptive"
analysis of several questions related to taxation. An example of a theoretical study is
Mieszkowski's work (CFDP 304), which
investigated the conditions under which the property tax is essentially an excise tax and
those under which it is essentially a profits tax. At the empirical level, Shoven and
Whalley have used Scarf's algorithm to consider the distributional and the efficiency
implications of the distortionary taxation of income from capital in the U.S. using a
two-sector general equilibrium model. Their estimate of the efficiency loss differs
substantially from Harberger's earlier results ("Efficiency Effects of Taxes on
Income from Capital," Effects of the Corporate Income Tax, M. Krzyzaniak, ed., Wayne
State University Press, 1966).
A second application of Scarf's algorithm is John Whalley's examination of proposals
for tax changes in the United Kingdom. In April 1973 the U.K. abolished the purchase tax
and selective employment tax and replaced these by a value-added tax; in addition, changes
in the systems and rates of corporation tax and personal income tax were introduced. A
competitive model of the U.K. economy has been developed with industrial divisions chosen
so as to capture the major discriminatory features of these taxation arrangements. This
model is used to examine equilibrium solutions for the economy under these alternative tax
changes. It is found that gains to the economy from these changes are small, if not
negative, in spite of official expectations to the contrary. This is in contrast to large
potential benefits from the replacement of all discriminatory aspects of the tax system
with a broadly based value-added tax of perhaps 2–3%. Impacts on the personal and
functional distributions of income, and on the balance of payments are also considered.
The foregoing studies are primarily concerned with the structure of the tax system. Two
ongoing studies focus on the interaction between the level of government expenditures and
the system of taxation used to raise revenue for it.
Lepper has recently been analyzing variations in local expenditures for education
across communities. Preliminary analysis of data for 130 of the 169 towns in Connecticut
was reported in "Fiscal Capacity and the Demand for Public Expenditures With
Special Reference to Education," presented at the Winter 1972 Meetings of the
Econometric Society. In the towns considered, local financing of public primary and
secondary education is almost exclusively dependent on the local property tax, and
per-pupil expenditures vary directly with the size of the local property tax base per
pupil. It is found, however, that a higher property tax base which is associated with
higher family income makes a distinctly larger positive contribution to per pupil
expenditures for education than a higher property tax base resulting from greater
commercial and industrial activity. Lepper intends to extend this analysis in a number of
directions: (a) further refinement of statistical techniques, (b) focusing more attention on
the role of preferences for private versus public education, and (c) considering the
relative importance of private affluence and business wealth on local expenditures other
than education. Lepper is also considering the usefulness of the notion of
"equity-constrained" Pareto-optimal allocations for private goods and public
services. One type of equity constraint is that each consumer receive the same amount of
benefits from the public service. Another is that consumers who differ in ability to
derive benefits from the public service receive equal allocations of public resources.
On the theoretical side Stiglitz and Dasgupta have analyzed the consequences of
distortionary taxation on the optimal level of expenditure on public goods. In CFDP 352 they established that the
Samuelson rule for the optimal supply of public goods (the sum of the marginal rates of
substitution equals the marginal rate of transformation) is not valid with distortionary
taxation and have developed an alternative rule. An analysis of the implications of this
alternative rule has begun, but at this juncture, it is not clear whether the Samuelson
rule usually leads to an under or oversupply of public goods.
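For reference, the Samuelson rule in question, stated for a public good $G$ relative to a private numeraire (our notation):

```latex
\sum_{h} MRS^{h}_{G} \;=\; MRT_{G},
```

that is, the sum over households $h$ of marginal rates of substitution between the public good and the private good equals the marginal rate of transformation; as the text notes, this equality fails once the public good must be financed by distortionary taxes.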
A related "second best" problem which arises when market prices do not
correctly reflect social costs is the specification of a criterion to be used for the
selection of government projects. In the absence of restrictions on the type of taxation
Diamond and Mirrlees argued that the government's prices should be the same as the
producer's prices in the private sector, even when there are strong redistributive
objectives. In contrast, Stiglitz and Dasgupta (CFP 352) have established that with plausible restrictions, e.g., that
pure rents are taxed at less than 100% or that the government obeys a budget constraint,
the two sets of prices should systematically differ.
A quite distinct set of problems in the public sector relate to the management of
common resources such as land, fishing grounds, or highways. For such resources a free
access equilibrium is inefficient: average products, rather than marginal products, of
the variable factor tend to be equated. Weitzman (CFDP 323) has proposed a formal model to
characterize and compare the allocations of resources which occur under conditions of free
access and of private property ownership.
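The inefficiency can be stated in one line for a variable factor $L$ applied to the common resource, with output $F(L)$, price $p$, and factor price $w$ (our notation):

```latex
\text{free access:}\quad p\,\frac{F(L)}{L} = w,
\qquad
\text{private ownership:}\quad p\,F'(L) = w.
```

Since $F$ is concave, $F'(L) < F(L)/L$, so under free access entry continues past the efficient point and the common resource is over-exploited.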
GUESTS
The Cowles Foundation is pleased to have as guests scholars and advanced students from
other research centers in this country and abroad. Their presence contributes stimulation
and criticism to the work of the staff and aids in spreading the results of its research.
The Foundation has accorded office, library, and other research facilities to the
following guests who were in residence for various periods of time during the past three
years.
RONALD G. BODKIN, The University of Western Ontario.
KARE HAGEN, Norwegian School of Economics and Business Administration.
HARRY JOHNSON, University of Chicago and London School of Economics.
HAYNE E. LELAND, Stanford University.
NIKITA MOISEEV, Computer Center of U.S.S.R. Academy of Sciences.
TAMAS NAGY, Hungarian Academy of Sciences.
J. KIRKER STEPHENS, University of Oklahoma.
KNUT SYDSAETER, University of Oslo.
EDUARDAS VILKAS, Institute of Physics and Mathematics of Lithuanian Academy of Sciences.
COWLES FOUNDATION SEMINARS AND CONFERENCES
Seminars
In addition to periodic Cowles Foundation staff meetings, at which members of the staff
discuss research in progress or nearing completion, the Foundation sponsors a series of
Cowles Foundation Seminars conducted by colleagues from other universities or from
elsewhere at Yale. These speakers usually discuss recent results of their research on
quantitative subjects and methods. All interested members of the Yale community, including
interested graduate students, are invited to these Cowles Foundation Seminars, which are
frequently addressed to the general economist. The following seminars occurred during the
past three years.
1970
October 23: DANIEL McFADDEN, MIT, "Revealed Stochastic Technology"
November 6: EYTAN SHESHINSKI, Harvard University, "Optimal Government Production and Inflation"
November 13: TERENCE GORMAN, University of North Carolina and London School of Economics, "Aggregates for Variable Goods: An Application of Duality"
December 4: HUGO SONNENSCHEIN, University of Massachusetts, "Three Problems in General Equilibrium and Welfare Economics"
1971
February 5: ROBERT SOLOW, MIT, "Land Use in a Long Narrow City"
February 26: ROBERT E. LUCAS, JR., Carnegie-Mellon University, "Cross Section Tests of the Natural Rate Hypothesis"
March 5: MICHAEL BRUNO, MIT, "Disequilibrium Growth in an Open Economy"
April 2: RICHARD D. PORTES, Princeton University, "A Quantity-Guided Decentralized Planning Procedure"
April 16: CHRISTOPHER SIMS, National Bureau of Economic Research, "Money, Income and Causality"
April 23: WITOLD TRZECIAKOWSKI, Warsaw, Poland, "The Application of Short-Run Optimization Models in Foreign Trade Planning and Management in Poland"
May 14: RICHARD ROSETT, University of Rochester, "The Effect of Health Insurance on the Demand for Medical Care"
September 30: JOHN WILLIAMSON, Warwick University, "Estimates of the Impact of the EEC on Trade"
October 27: STEPHEN GOLDFELD, Princeton University, "Econometric Model Selection in the Presence of Repeated Structural Change"
November 12: PETER DIAMOND, MIT, "Aggregate Production with Consumption Externalities"
December 10: KOICHI HAMADA, MIT, "Social Choice on Income Distribution"
1972
January 14: MARTIN GEISEL, Carnegie-Mellon University, "Bayesian Model Comparisons"
February 18: MARC ROBERTS, Harvard University, "Alternative Social Criteria: A Normative Approach"
March 3: HERBERT SIMON, Carnegie-Mellon University, "Process Models That Predict the Size of Business Firms"
March 10: SIDNEY WINTER, University of Michigan, "Simulation of Technical Change in an Evolutionary Model"
March 17: WLADYSLAW WELFE, University of Pennsylvania, "Medium-Term Econometric Models of Poland, Czechoslovakia and Hungary: Goals, Specification and Empirical Results"
April 14: ALBERT MADANSKY, City College of New York, "Improved Instrumental Variable Estimators"
April 21: ARTHUR GOLDBERGER, University of Wisconsin, "Unobservable Variables"
April 28: ANDRAS NAGY, United Nations Conference on Trade and Development, "Projections of Foreign Trade in the Socialist Countries and the Problem of International Consistency"
May 12: MORDECAI KURZ, Stanford University, "Equilibrium with Transaction Cost and Money"
May 19: DAVID CASS, Carnegie-Mellon University, "Interpreting the Lagrange Multipliers in Quasi-Concave Programming and Conditions for 'Concavifiability'"
October 27: JANOS KORNAI, Institute of Economics, Hungarian Academy of Sciences, "Intertemporal Aspects of Hungarian Long-Term Planning"
November 3: NICHOLAS STERN, St. Catherine's College, Oxford, and MIT, "Optimal Savings with Economies of Scale"
November 10: AVINASH DIXIT, MIT and Oxford, "The Optimal Factory Town"
November 17: ALAN MANNE, Stanford University, "Electricity Investments Under Uncertainty About the Date of Breeder Availability"
December 8: HUGO SONNENSCHEIN, University of Massachusetts, "An Axiomatic Characterization of the Competitive Mechanism"
December 15: TRUMAN BEWLEY, Harvard University, "Preliminary Work on a Dynamic Central Market Model"
1973
January 5: MICHAEL SPENCE, Harvard University, "Market Signaling"
February 2: GEORGE BROWN, Center of Naval Operations, "Some Problems Involving Disturbance-Variance Systems"
February 23: ALEXANDER SCHMIDT, Computing Center of the U.S.S.R. Academy of Sciences, "Applications of Control Theory to Economic Planning"
March 2: H. E. GOELLER, Oak Ridge National Laboratory, "A Long-Term View of Material Resources Availability and Utilization" (joint seminar with the Institute for Social and Policy Studies)
March 16: MARTIN WEITZMAN, MIT, "Prices vs. Quantities as Planning Instruments"
April 6: JERZY LOS, Computer Center of the Polish Academy of Sciences, "Recent Polish Work in the Field of von Neumann Models"
April 9: NIKITA MOISEEV, Computer Center of the U.S.S.R. Academy of Sciences, "Some Problems of Centralized Planning"
April 13: PETER DIAMOND, MIT, "Single Activity Accidents"
May 4: T.N. SRINIVASAN, Indian Statistical Institute, "A Re-Analysis of the Harris-Todaro Model" (joint seminar with the Economic Growth Center)
May 11: DAVID HOAGLIN, Harvard University, "Exploring Some Unemployment Data"
Conferences
The Cowles Foundation was also the host for a conference on Energy Sector Modeling,
November 17 and 18, 1972.
FINANCING AND OPERATION
The Cowles Foundation relies largely on gifts, grants and contracts to finance its
research activities. Yale University contributes to the Cowles Foundation the use of a
building at 30 Hillhouse Avenue which provides office space, a seminar room, and related
facilities. The University also supports the Foundation's research and administration
through paying or guaranteeing part or all of the non-teaching fractions of the salaries
of three permanent staff members.
The annual gifts of the Cowles family are the cornerstone of the financial support of
the Cowles Foundation. Two endowment funds were established at Yale during the past three
years which also provide funds for the Foundation. The Cowles family initiated an
endowment in 1970, the income from which provides additional general support. In 1972, the
Marcus Goodbody Foundation made a gift to Yale from which another endowment fund was
established. Income from this fund contributes to the research salary and expenses of a
member of the Cowles Foundation staff designated as the Marcus Goodbody Fellow. These
three sources provide dependable untied funds permitting a degree of intellectual and
administrative flexibility which is extremely useful for any organization engaged in basic
research.
The scale of activity at Cowles is dependent on the availability of a substantial
amount of financial support from grants and contracts. During the period covered by this
report, the Cowles Foundation has been fortunate in having sizeable grants from the
National Science and Ford Foundations which are not tied to specific research projects,
and has continued to receive support from the Office of Naval Research which has financed
work at Cowles on operations research and game theory since the late 1940's. The National
Science Foundation grant was a "continuing" grant providing annual funding for
the period July 1968 through June 1973. Additional funds for support of the general
program of the Cowles Foundation and for a program of visiting staff members were
generously provided by the Ford Foundation for the period 1968-1976. This Ford
visitors program is intended especially to facilitate visits by Eastern European economists
and also by scholars in disciplines other than economics but related to the interests of the
Cowles Foundation staff. These guests are regular members of the Cowles Foundation staff
for the period of their stay, generally four months or longer.
The major part of Cowles Foundation expenditures is accounted for by research salaries
(and associated fringe benefits). The rest of the budget consists of office and library
salaries, overhead expenses such as the costs of preparing and distributing manuscripts,
and the costs of computing services.
The pattern of Cowles Foundation income and expenditures in recent years is outlined in
the table.
During the period of this report, the research staff of the Cowles Foundation included
18 or 19 members in faculty ranks. This size was determined by an interplay of
considerations including financial constraints, limitations of space at 30 Hillhouse
Avenue, and opportunities to bring to the Foundation colleagues who would complement or
supplement current research activities. The balance among ranks of the staff in residence
varied from year to year depending largely upon leaves of absence and the opportunities to
compensate for such absences by visiting appointments. Excluding staff members on such
visiting appointments, the staff included seven tenured faculty of the Departments of
Economics, Administrative Science, and Political Science and eight to ten faculty on term
appointments. On average, both the permanent and the younger members of the staff devoted
about half of their professional effort in the academic year and up to two full months in
the summer to their research and to seminars and discussions with their colleagues.
These activities were supported by the services of five secretaries and manuscript
typists under the direction of Miss Althea Strauss. In addition to the office staff, a
varying number of student research assistants and part-time computer programmers assisted
directly in the research studies.
A small library, under the supervision of Patricia Graczyk, is maintained in the
building of the Cowles Foundation. It makes research materials readily available to the
staff and supplements the technical economics and statistics collections of other
libraries on the Yale campus. The library includes a permanent collection of some 5,400
books and 158 journals primarily in the fields of general economics, mathematical
economics, econometric studies and methods, statistical methods and data; numerous
pamphlets from Government sources and international organizations; series of reprints from
22 research organizations at other universities in the United States and abroad; and a
rotating collection of recent unpublished working papers. Although the library is oriented
primarily to the needs of the staff, it is also used by other members of the Yale faculty
and by students of the University.
PUBLICATIONS AND PAPERS
MONOGRAPHS
The monographs of the Cowles Commission (Nos. 1-15) and the Cowles Foundation (Nos. 16-24) are listed below:
See the complete LIST OF MONOGRAPHS (available for download)
COWLES FOUNDATION PAPERS (most papers are on-line and downloadable)
See the complete LISTING OF COWLES FOUNDATION PAPERS
COWLES FOUNDATION DISCUSSION PAPERS (all papers are on-line and downloadable)
See the complete LISTING OF COWLES FOUNDATION DISCUSSION PAPERS
OTHER PUBLICATIONS BY STAFF MEMBERS
This list contains papers which were published during the period and resulted from work at
the Cowles Foundation, papers published while the author was a staff member, and a few
other papers referred to in the text of the Report.
BISCHOFF, CHARLES W.
- "Revisions in Investment Anticipations," Brookings Papers on Economic
Activity, 1970 (2), 319-325.
- "The Outlook for Investment in Plant and Equipment," Brookings Papers on
Economic Activity, 1971 (3), 735-751.
- "Domestic Tax Stimulus in the New Economic Policy," 1972 Proceedings of the
National Tax Association, 31-36.
BRAINARD, WILLIAM
- "Private and Social Risk and Return to Education" in Efficiency in
Universities: The LaPaz Papers. Elsevier Scientific Publishing Company,
Netherlands, 1973.
BROWN, DONALD J.
- "A Limit Theorem on the Cores of Large Standard Exchange Economies" (with
Abraham Robinson), Proceedings of the National Academy of Sciences, U.S.A., Vol. 69,
No. 5, 1258-1260.
KLEVORICK, ALVIN K.
- Review of Shlomo Reutlinger, Techniques for Project Appraisal under Uncertainty, in
Economica, August 1971.
- "The Graduated Fair Return: A Further Comment," American Economic Review,
September 1971.
- "Money Illusion and the Aggregate Consumption Function: Reply" (with W.H.
Branson), American Economic Review, March 1972.
- "A Note on 'The Ordering of Portfolios in Terms of Mean and Variance'," Review
of Economic Studies, April 1973.
KOOPMANS, TJALLING C.
- "If f(x) + g(y) is Quasi-Convex, at Least One of f(x), g(y) is Convex",
presented at the Symposium on Mathematical Methods of Economics organized by the
Mathematical Institute of the Academy of Sciences, Warsaw, Poland, July 1972.
KRAMER, GERALD
- "Congressional Elections" (with Susan J. Lepper), Ch. V in Dimensions of
Quantitative Research in History, edited by W.O. Aydelotte, A.G. Bogue and R.W. Fogel,
Princeton University Press, 1972.
- "Sophisticated Voting in Multidimensional Choice Space," Journal of
Mathematical Sociology, March 1973.
LEPPER, SUSAN J.
- "Congressional Elections" (with Gerald Kramer), Ch. V in Dimensions of
Quantitative Research in History, edited by W.O. Aydelotte, A.G. Bogue and R.W. Fogel,
Princeton University Press, 1972.
- "Fiscal Capacity and the Demand for Public Expenditures With Special
Reference to Education," presented at the Annual Meetings of the Econometric Society,
Toronto. 1972.
SCARF, HERBERT E.
- Cowles Foundation Monograph 24, The Computation of Economic Equilibria (with the
collaboration of Terje Hansen).
SHUBIK, MARTIN
- "On Homo Politicus and the Instant Referendum," Public Choice, Fall
1970, 79-84.
- "Price Strategy Oligopoly: Limiting Behavior with Product Differentiation," Western
Economic Journal, VIII, 3, Sept. 1970, 226-232. (Reprinted in: R.E. Neel (ed.), Readings
in Microeconomics. Cincinnati: Southwestern Publishing Co., 1972.)
- "Gaming and Planning for Campus Crises," in Knight, et al., Cybernetics,
Simulation and Conflict Resolution, Spartan Press, 1971, 79-83.
- "How to be Data Rich and Information Poor, or Let's Bury Ourselves with the
Facts," in M. Greenberger (ed.), Computers, Communications and Public Interest,
Baltimore: Johns Hopkins Press, 1971, 56-59.
- "A Simulation Model of the Economy of Brazil" (with Naylor, Fioravante and
Monteiro), Revista Brasileira de Economia, Jan.-Mar., 25(1), 1971, 39-63.
- "Econometric Models of Brazil: A Critical Appraisal" (with Naylor and
Zerkowski), Revista Brasileira de Economia, Jan.-Mar., 25(1), 1971,
65-91.
- "Models, Simulations and Games" (with Kerstenetzky and Naylor), Revista
Brasileira de Economia, Jan.-Mar., 25(1), 1971, 9-37.
- "The Dollar Auction Game: A Paradox in Noncooperative Behavior and
Escalation," Conflict Resolution, XV, 1, 1971, 109-111.
- "An Artificial Player for a Business Market Game" (with Wolf and Lockhart), Simulation
and Games, Mar. 1971, 27-43.
- "Corporate Reality and Accounting for Investors" (with Whitman), Financial
Executive, May 1971, 3-14.
- "Systems Simulation and Gaming as an Approach to Understanding Organizations"
(with Brewer), in Siegel (ed.), Symposium on Computer Simulation as Related to Manpower
and Personnel Planning, Washington, DC, 1971, 17-33.
- "On Gaming and Game Theory," Management Science, 18, 5, 1972,
37-49.
- "The Assignment Game I: The Core" (with Shapley), International Journal of
Game Theory, Vol. 1, No. 2, 1972, 111-130.
- "Methodological Advances in Political Gaming: The One-Person, Computer
Interactive, Quasi-Rigid-Rule Game" (with Brewer), Simulation and Games, Sept.
1972, 329-348.
- "Some Experiences with an Experimental Oligopoly Business Game" (with Wolf and
Eisenberg), General Systems, XIII, 1972, 61-75.
- "An Experiment with Ten Duopoly Games and Beat-the-Average Behavior" (with
Riese), in H. Sauermann (ed.), Beitrage Zur Experimentellen Wirtschaftsforschung,
III, Tubingen: J.C.B. Mohr, 1972, 656-689.
- "Some Aspects of Socio-Economic Modeling," in M. Beckmann, et al. (eds.),
Lecture Notes in Economics and Mathematical Systems (International Seminar on Trends
in Mathematical Modeling, Venice, Dec. 1971), Berlin: Springer Verlag, 1973,
155-163.
- "A Note on Decision Making and Replacing Sure Prospects," Management
Science, 19(6), Feb. 1973, 711-712.
STARR, ROSS M.
- "Equilibrium in a Monetary Economy with Nonconvex Transaction Costs" (with
Walter P. Heller), Technical Report No. 110, Economics Series, Institute for Mathematical
Studies, Stanford.
STIGLITZ, JOSEPH
- "Perfect and Imperfect Capital Markets," paper presented to the 1971 meetings
of the Econometric Society, New Orleans, LA.
- "On Optimal Taxation and Public Production" (with P. Dasgupta), Review of
Economic Studies, January 1972.
- "Taxation, Risk Taking and the Allocation of Investment in a Competitive
Economy," in M. Jensen, editor, Studies in the Theory of Capital Markets,
Praeger, 1972.
- "The Badly-Behaved Economy with the Well-Behaved Production Function," in J.
Mirrlees (ed.), Models of Economic Growth, Macmillan, 1973.
- "Recurrence of Techniques in a Dynamic Economy." Ibid.
TOBIN, JAMES
- Essays in Economics: Volume I, Macroeconomics, North-Holland Publishing Co., 1971.