PURPOSE AND ORIGIN
The Cowles Foundation for Research in Economics at Yale University, established as an
activity of the Department of Economics in 1955, is intended to sponsor and encourage the
development and application of quantitative methods in economics and related social
sciences. The Cowles Foundation continues the work of the Cowles Commission for Research
in Economics, founded in 1932 by Alfred Cowles at Colorado Springs, Colorado. The
Commission moved to Chicago in 1939 and was affiliated with the University of Chicago
until 1955. At that time, the professional research staff of the Commission accepted
appointments at Yale and, along with other members of the Yale Department of Economics,
formed the research staff of the newly established Cowles Foundation. The members of the
professional staff typically have faculty appointments and teaching responsibilities in
the Department of Economics or other departments at Yale University.
RESEARCH ACTIVITIES
INTRODUCTION
The Cowles Commission for Research in Economics was founded approximately fifty years
ago by Alfred Cowles, in collaboration with a group of economists and mathematicians
concerned with the application of quantitative techniques to economics and the related
social sciences. This methodological interest was continued with remarkable persistence
during the early phase at Colorado Springs, then at the University of Chicago, and since
1955 at Yale.
One of the major interests at Colorado Springs was in the analysis of economic data by
statistical methods of greater power and refinement than those previously used in
economics. This was motivated largely by a desire to understand the chaotic behavior of
certain aspects of the American economy, the stock market in particular,
during the Depression years. The interest in statistical methodology was continued during
the Chicago period and into the present with a growing appreciation of the unique
character and difficulties of statistical problems arising in economics. An important use
of this work was made in the description of the dynamic characteristics of the U.S.
economy by a system of statistically estimated equations.
At the same time, the econometric work at Chicago was accompanied by the development of
a second group of interests, also explicitly mathematical but more closely connected
with economic theory. The activity analysis formulation of production and its relationship
to the expanding body of techniques in linear programming became a major focus of
research. The Walrasian model of competitive behavior was examined with a new generality
and precision, in the midst of an increased concern with the study of interdependent
economic units, and in the context of a modern reformulation of welfare theory.
The move to Yale in 1955 coincided with a renewed emphasis on empirical applications in
a variety of fields. The problems of economic growth, the behavior of financial
intermediaries, and the embedding of monetary theory in a general equilibrium formulation
of asset markets were studied both theoretically and with a concern for the implications
of the theory for economic policy. Earlier work on activity analysis and the general
equilibrium model was extended with a view to eventual applications to the comparative
study of economic systems and to economic planning at a national level. Algorithms for the
numerical solution of general equilibrium models were developed, the study of non-convex
production sets was pursued, and a variety of applications of game theory to economic
problems were investigated. Along with the profession at large, we have engaged in the
development of analytical methods oriented to contemporary social and economic problems,
in particular the specifics of income distribution, the economics of exhaustible resources
and other limitations on the growth of economic welfare.
For the purposes of this report it is convenient to categorize the research activities
undertaken at Cowles during the last three years in the following way:
A. Macroeconomics
B. Mathematical Economics
C. Game Theory
D. Microeconomics
E. Econometrics
A. Macroeconomics
The effects of monetary and fiscal policies on economic activity (on production,
employment, and prices) are a prominent practical concern of national and
international politics. They are also a central topic of theoretical and empirical
economics. They have been the major focus of James Tobin's research for forty years. Most
of it has taken place at the Cowles Foundation since its establishment at Yale in 1955, in
association and collaboration with colleagues and students. Previous Reports have
described the approach and the findings of much of this research.
Tobin has been particularly interested in the processes by which financial markets and
institutions transmit the monetary and fiscal measures of government to the economy at
large. These markets and institutions balance the changing demands for and supplies of
assets of various types, from currency to capital goods. They also determine interest
rates and asset prices, and through them affect expenditures on goods and services, for
capital investment or consumption. Asset demands and supplies reflect the saving,
portfolio, and balance sheet choices and adjustments of households, businesses,
foreigners, and governments. Central government policies work by altering the amounts and
terms of supply of important assets, especially currency and bank reserves, and
public debt obligations of various maturities. In its emphasis on a broad spectrum of
assets, generally imperfect substitutes for each other, and on the relative attractiveness
of real assets, or claims to real assets, to savers and portfolio managers, the approach
differs sharply from theories that concentrate on the volume of an arbitrarily defined
monetary aggregate.
In the recent past Tobin has restated, revised and extended his theoretical framework
in several publications. In the articles, "Deficit Spending and Crowding Out in
Shorter and Longer Runs" and "Fiscal and Monetary Policies, Capital Formation
and Economic Activity" (with Willem Buiter), Tobin presents models of the
accumulation of savings in the forms of currency, interest-bearing public debt, and real
capital. The papers concern the short- and long-run effects of government deficits, and of
their division between currency issue and debt issue, on real output and capital
formation. They attempt, in particular, to delineate the conditions under which government
demands for private saving "crowd out" private investment. A discussion and
summary of the basic framework also appears in Tobin's Yrjo Jahnsson Lectures, delivered
in Finland in 1979.
In December 1981 Tobin took the opportunity of his Nobel Lecture in Stockholm to
present a full discussion, exposition and defense of his theoretical model. In a sense,
this is a restatement and revision of the earlier exposition in "A General
Equilibrium Approach to Monetary Theory" (Journal of Money, Credit and Banking,
1969).
The "flow-of-funds" statistics compiled by the Federal Reserve System are
data on sectoral holdings of various assets and debts, which provide the possibility of
empirical implementation of the theoretical framework just described. Research at the
Cowles Foundation designed to estimate sectoral asset demands and supplies and to combine
them in a complete system of asset market equations is described in previous Reports.
Tobin contributed to "A Model of U.S. Financial and Nonfinancial Economic
Behavior," a recent report of the status of this empirical research.
The theory and practice of monetary and fiscal policy have recently been the scene of
intense controversy, not just in the public and political arena but in the economics
profession itself. A "new classical macroeconomics," involving the principle and
methodology of "rational expectations," has questioned the effectiveness of
systematic policies of countercyclical stabilization. The new school rejects Keynesian
theories of economic fluctuations, from which Tobin's framework is in important respects a
descendant. It is not surprising, therefore, that Tobin has participated actively in the
controversy. In the second of his Yrjo Jahnsson lectures, Tobin criticizes the new
classical theories for their inability to explain the common features of observed business
fluctuations and argues that this failure renders suspect their policy conclusions. In a
major paper, "Stabilization Policy Ten Years After," invited for the tenth
anniversary of the Brookings Panel on Economic Activity of which he was a charter member,
Tobin reviewed the events and policies of the 1970s in the light of their congruities or
incongruities with macroeconomic theories and models.
Tobin has had a long-standing interest in saving behavior. During the period of this
Report, his 1978 Paish Lecture in York, England, endeavored to refute the hypothesis of
Robert Barro that government deficits financed by interest-bearing debt do not absorb
saving because taxpayers save extra in anticipation of higher future taxes to service the
debt. If the hypothesis were true, Tobin's theoretical framework, described above, would
not apply; Tobin argues that most savers' horizons are much shorter than the infinite
horizons Barro assumes. An empirically oriented discussion of the same issue is contained
in the article "Debt Neutrality: A Brief Review of Doctrine and Evidence" with
Willem Buiter. In "Mandatory Retirement, Saving and Capital Formation," Tobin
and Walter Dolde presented simulations to describe the effects of social and private
retirement systems on national saving and investment. This is a sequel to their earlier
study "Wealth, Liquidity and Consumption" (in Consumer Spending and Monetary
Policy: The Linkages, Federal Reserve Bank of Boston, 1971).
In 1982, a third volume of Tobin's professional papers was published.
It has long been recognized that the effects of monetary policy on the economy are
influenced by the legal restrictions placed on the behavior of financial intermediaries,
such as Regulation Q ceilings on commercial bank savings and time deposit interest rates.
In CFDP 605, Christophe Chamley and
Heraklis Polemarchakis used a general arbitrage argument of the Modigliani-Miller
type to show that in a model with unrestricted trading, government trades in existing
assets can have no effect on the real allocation of resources in equilibrium. In such a
setting, monetary policies such as open-market operations would have no impact on the
pattern of resource use. Thus real effects of such policies can only occur when
restrictions are placed on private trading.
The recent theoretical debates on the effectiveness of macroeconomic policies are in
part a reflection of the continued tension between economists' vision of market-clearance
at the microeconomic level, and their macro-economic prescriptions. During the period of
this report, Katsuhito Iwai published Cowles Foundation Monograph No. 27, Disequilibrium
Dynamics: A Theoretical Analysis of Inflation and Unemployment, which was awarded the
"Grand Prix of Nikkei Keizaitosho Bunka Sho" (grand prize for books in Economics) in 1982.
His aim in the monograph is to provide a microeconomic model of a market economy with no
necessary tendency towards optimal employment of resources. To this end, Iwai dropped the
conventional assumption of perfect competition and proposed instead a model of a
monopolistically competitive economy in which the numerous interdependent firms set their
own prices and fix their own wage offers without knowing what demands and supplies will be
forthcoming. On this basis Iwai has tried to build a structure that explains the evolution
of prices, wages, employment, and output for the economy as a whole, not as a smooth
trajectory of equilibrium positions, but as a causal process that is moved by the complex
pattern of dynamic interactions among firms.
The monograph consists of three parts. Part I reformulates Knut Wicksell's theory of
cumulative processes. It shows that in a monetary economy, if prices and wages are
flexible, a deviation from equilibrium, however small, inevitably produces errors in
firms' expectations and starts a dynamic process that tends to drive prices and wages
cumulatively away from equilibrium. Such a process of inflation or deflation breeds, in
the course of its own development, both accelerating and decelerating forces, and whether
or not it will eventually return to equilibrium is decided only by the relative strength
of these conflicting forces. With flexible prices and wages there is no a priori ground
for a belief in the self-adjusting character of the economic system. On the contrary, it
is argued in Part II, inflexibility rather than flexibility of money wages is what
stabilizes a monetary economy. With sticky money wages, the system normally approaches a
Keynesian equilibrium where employment is determined by effective demand. It is only in
response to a macroeconomic disturbance large enough to break the inflexibility of money
wages that the system abandons the Keynesian equilibrium and sets off on a cumulative
process of inflation or deflation. A Keynesian principle of effective demand is thus
integrated with a Wicksellian theory of cumulative processes. Part III then undertakes a
long-run analysis of inflation and unemployment. It demonstrates that a monetary economy
never outlives its monetary history. In particular, if money wages rise more readily than
they fall, the Phillips curve is never vertical. Part III concludes with an analysis of
the problem of wage-push stagflation, showing how this can be approached by the method
developed in the monograph.
In other work studying macroeconomic variables out of equilibrium, Peter C.B. Phillips,
together with V. Hall and R. Bailey, reports the theoretical development of a small
aggregative model of output, employment, capital formation and inflation (CFDP 552). The model is designed to
explain medium term cyclical growth in a small open economy. It allows explicitly for
disequilibrium in the markets for goods and factors of production and has a wage-price
sector in which movements in these variables are specified to allow for intended
price-setting behavior by firms while also responding to realizations that may differ
from these intentions and to the effects of disequilibrium in the real
sector. The model is formulated in continuous time as a system of non-linear differential
equations and has a particular solution which corresponds to plausible steady state growth
behavior for the variables of the model. The properties of this particular solution are
analyzed directly, and solution trajectories for the variables corresponding to various
initial values which deviate from the steady state growth paths are computed numerically
and compared with the steady state growth paths. The model has been developed with a view
to subsequent empirical application to a small open economy and, as a foundation for this
later work, some econometric methodology for the treatment of non-linear differential
equations is developed in the paper.
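The numerical exercise described here can be illustrated in miniature. The following sketch (with an invented two-variable system rather than the Phillips-Hall-Bailey model itself) solves a small non-linear differential-equation system with a known steady state from initial values that deviate from it, which is the kind of computation the paper reports:

# A minimal sketch (not the Phillips-Hall-Bailey model) of the exercise described above:
# a small non-linear differential-equation system with a known steady state is solved
# numerically from initial values that deviate from that steady state.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-variable disequilibrium system: y = log output, p = log price level.
# Parameters and functional forms are illustrative assumptions only.
ALPHA, BETA, GAMMA = 0.5, 0.3, 0.4
Y_STAR, P_STAR = 1.0, 0.0            # steady state chosen by construction

def rhs(t, state):
    y, p = state
    dy = ALPHA * (Y_STAR - y) - BETA * (p - P_STAR)   # excess-demand adjustment
    dp = GAMMA * (y - Y_STAR)                         # price responds to real disequilibrium
    return [dy, dp]

for y0, p0 in [(0.8, 0.1), (1.2, -0.05)]:            # initial deviations from steady state
    sol = solve_ivp(rhs, (0.0, 50.0), [y0, p0])
    final = sol.y[:, -1]
    print(f"start=({y0}, {p0})  end=({final[0]:.3f}, {final[1]:.3f})  "
          f"steady state=({Y_STAR}, {P_STAR})")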
Fair has recently completed a book (Specification, Estimation, and Analysis of
Macroeconometric Models, Harvard University Press, forthcoming 1983) that is a summary
of much of his research in the last few years. The 'specification' part contains a
discussion of both his theoretical and econometric macro models. The theoretical model is
an attempt to integrate three main ideas. The first is that macroeconomics should be based
on better microeconomic foundations. In particular, macroeconomics should be consistent
with the view that decisions are made by maximizing objective functions. The second idea
is that macroeconomic theory should allow for the possibility of disequilibrium in some
markets. The third idea is that a model should account explicitly for balance sheet and
flow of funds constraints. Contrary to previous disequilibrium work, including the work on
fixed price equilibria, the model provides a choice-theoretic explanation of market
failures. Firms set prices in a profit maximizing context, but because of possible
expectations errors, these prices may not be market clearing. The original discussion of
the theoretical model is in A Model of Macroeconomic Activity, Volume I: The
Theoretical Model (Ballinger, 1974). The model is expanded to two countries in "A
Model of the Balance of Payments." The original discussion of the United States
econometric model is in A Model of Macroeconomic Activity, Volume II: The Empirical
Model (Ballinger, 1976) and "The Sensitivity of the Fiscal Policy Effects to
Assumptions about the Behavior of the Federal Reserve" (Econometrica,
September 1978), and the original discussion of the multicountry econometric model is in CFDP 541R and in "Estimated Output,
Price, Interest Rate, and Exchange Rate Linkages Among Countries" (Journal of
Political Economy, June 1982).
The 'estimation' part contains a discussion of the estimation of large scale nonlinear
models by various methods. The methods include two stage least squares, three stage least
squares, full information maximum likelihood, and two stage least absolute deviations.
Fair has worked on various aspects of these estimators during the past few years,
particularly the computational aspects. The original discussions are in "The
Estimation of Simultaneous Equation Models with Lagged Endogenous Variables and First
Order Serially Correlated Errors" (Econometrica, May 1970), and
"Full-Information Estimates of a Nonlinear Macroeconometric Model" with William
Parke. The results in the book show that it is now computationally feasible to estimate
large scale nonlinear models by full information methods and by robust methods like two
stage least absolute deviations.
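For readers unfamiliar with the estimators named above, the following minimal sketch shows two stage least squares for a single equation on simulated data; it is purely illustrative and is not Fair's estimator or code:

# A minimal sketch of two stage least squares for one equation, y = X*beta + u, where one
# column of X is endogenous and Z is a matrix of instruments.  Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=(n, 3))                  # instruments
v = rng.normal(size=n)
u = 0.5 * v + rng.normal(size=n)             # error correlated with the endogenous regressor
x_endog = z @ np.array([1.0, -0.5, 0.3]) + v # endogenous regressor
X = np.column_stack([np.ones(n), x_endog])   # constant plus endogenous variable
Z = np.column_stack([np.ones(n), z])         # constant plus instruments
y = X @ np.array([2.0, 1.5]) + u

# Stage 1: project X on Z.  Stage 2: regress y on the fitted values.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("2SLS:", beta_2sls, " OLS (biased):", beta_ols)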
The main theme of the 'analysis' part of the book is the argument that more testing of
models should be done once they are specified and estimated. It is only by testing models
against each other that there is any hope of narrowing the current range of disagreements
in macroeconomics regarding the structure of the economy. Fair has recently developed a
method for estimating the predictive accuracy of models that takes into account the four
main sources of uncertainty of a prediction: uncertainty due to 1) the error terms, 2) the
coefficient estimates, 3) the exogenous variables, and 4) the possible misspecification of
the model. Because the method accounts for all four sources, it can be used to make
comparisons across models. The method has been used to compare a number of models in the
book, including Fair's U.S. model and two vector autoregressive models. The original
discussions of this work are in "An Analysis of the Accuracy of Four Macroeconometric
Models," "Estimating the Expected Predictive Accuracy of Econometric
Models," and "The Effects of Misspecification of Predictive Accuracy." The
method relies heavily on the use of stochastic simulation, which, as seen in the book, can
now be routinely done. The analysis part also contains a discussion of the estimation of
the uncertainty of policy effects in models (originally in "Estimating the
Uncertainty of Policy Effects in Nonlinear Models") and a discussion of the solution
and analysis of optimal control problems (originally in "On the Solution of Optimal
Control Problems as Maximization Problems" (Annals of Economic and Social
Measurement, January 1974) and "The Use of Optimal Control Techniques to Measure
Economic Performance" (International Economic Review, June 1978).
The final chapter of the book contains a discussion of the solution and estimation of
nonlinear rational expectations models. The original discussion is in "Analysis of a
Macro-Econometric Model with Rational Expectations in the Bond and Stock Markets" and
"Solution and Maximum Likelihood Estimation of Dynamic Rational Expectations
Models" with John Taylor. The methods discussed in this chapter have considerably
expanded the range of rational expectations models that can be estimated and analyzed.
Fair and Parke have recently completed a computer program that allows all the
techniques discussed in the book to be easily applied once the model has been set up in
the program. The hope is that this program will allow more testing and analysis of models
than has been true in the past.
Papers presenting early versions of Tobin's theoretical framework, "Pitfalls in
Financial Model Building" (American Economic Review, May 1968), written with William
Brainard, and "A General Equilibrium Approach to Monetary Theory," did not take
account of international financial markets and capital movements. Nevertheless the
"Yale" portfolio approach proved to be the foundation for a vigorous and
fruitful literature on international payments balances and exchange rates in a world of
capital mobility across currencies. In "The Short-Run Macroeconomics of Floating
Exchange Rates: An Exposition," Tobin, in collaboration with Jorge de Macedo, essays
his own extension of the basic model to open economies, concluding that the qualitative
conclusions of the closed economy models survived, whether exchange rates were fixed or
floating. Asset holdings in different currencies are also allowed in "Fiscal and
Monetary Policies, Capital Formation, and Economic Activity," and in Tobin's Yrjo
Jahnsson Lectures. Other papers on international economics are "A Proposal for
International Monetary Reform," which argues for taxing inter-currency transfers of
funds in order to give national monetary and fiscal policies more autonomy and "The
State of Exchange Rate Theory: Some Skeptical Observations," which reviews critically
current fashions in the theory of foreign exchange rates.
In another article on open economy macroeconomics, "Macroeconomic Tradeoffs in an
International Economy with Rational Expectations," John Taylor, who visited the
Cowles Foundation during 1980, considered alternative exchange rate rules in conjunction
with alternative monetary policy rules. One of the main features of the monetary rules
considered was the dependence of the rate of growth of the money supply on the recent rate
of inflation, that is, the degree of monetary accommodation to inflation. There is a
close relationship between monetary accommodation and exchange rate accommodation,
the latter being defined as the degree of response of a managed exchange rate to a change
in price competitiveness. A purchasing power parity rule in which the exchange rate
adjusts to fully offset any change in the home country price level relative to the rest of
the world is analogous to a fully accommodative monetary policy rule. Corresponding to
zero monetary accommodation is a fixed exchange rate regime. The interaction between these
two alternative types of accommodation has important implications for macroeconomic
fluctuations of an open economy.
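A stylized simulation may help fix the two accommodation parameters. In the sketch below, phi governs how strongly money growth responds to lagged inflation and theta governs how strongly a managed exchange rate responds to the price gap (theta = 0 is a fixed rate, theta = 1 a purchasing power parity rule); all equations and numbers are invented for illustration and are not the model in the article:

# A minimal sketch of monetary and exchange rate accommodation in a toy open economy.
import numpy as np

def simulate(phi, theta, periods=40, seed=2):
    rng = np.random.default_rng(seed)
    pi = np.zeros(periods)       # inflation
    p_gap = np.zeros(periods)    # log home price level minus foreign price level
    e = np.zeros(periods)        # log exchange rate
    for t in range(1, periods):
        m_growth = 0.02 + phi * pi[t - 1]                     # monetary accommodation
        e[t] = e[t - 1] + theta * (p_gap[t - 1] - e[t - 1])   # exchange rate accommodation
        shock = rng.normal(scale=0.01)
        pi[t] = 0.5 * pi[t - 1] + 0.5 * m_growth + 0.2 * (e[t] - p_gap[t - 1]) + shock
        p_gap[t] = p_gap[t - 1] + pi[t] - 0.02                # foreign inflation fixed at 2%
    return np.std(pi)

for phi, theta in [(0.0, 0.0), (0.0, 1.0), (0.9, 0.0), (0.9, 1.0)]:
    print(f"phi={phi} theta={theta}  inflation variability={simulate(phi, theta):.4f}")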
In order to provide a framework for examining some of the accommodation issues
quantitatively, a general N-country international model based on wage contracts and
rational expectations was developed. Econometric work described in the article indicated
that the international connections are strong and that a small open-economy framework
could be misleading for econometric examinations of alternative policies. A general
procedure for evaluating policy in this international context was developed.
One outcome of this research is the finding that international price linkages call for
more macroeconomic coordination. If such price linkages (modelled formally by
including foreign prices evaluated in the domestic currency in the domestic price
determination equation) are quantitatively significant, then it is difficult for
individual countries to manage their internal economy without external effects. And
perhaps more importantly, such linkages suggest that a policy mix of less accommodative
exchange rate rules and more accommodative monetary policy would be preferred to a mixture
which calls for exchange rate rules which are close to purchasing power parity rules.
Other research on the effects of economic policy undertaken at the Cowles Foundation
used data from various countries as a source of information. Such international
comparisons must be carefully performed, since no policy analysis which effectively uses
international evidence can ignore structural differences between countries. Structural
differences, arising from behavioral, technological, or institutional factors,
influence economic performance and may prevent the repeated success of an economic policy
attempted at a different time or place.
In his article, "Policy Choice versus Economic Structure," Taylor adopted an
econometric approach to sorting out policy choice from economic structure in a comparison
of macroeconomic performance in several large OECD countries. In particular, the aim was
to determine whether international differences in cyclical fluctuations in inflation and
real GNP are due to policy differences in monetary and fiscal procedures, or to structural
differences in wage and price setting arrangements and the susceptibility of each country
to external shocks.
The criterion of economic performance used in this comparison was the magnitude of
fluctuations in inflation and output around longer term secular trends. Economic
performance is rated poor according to this criterion if the fluctuations are large and
long-lasting; a good rating results if the fluctuations are small and temporary.
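As an illustration of this criterion, the sketch below measures the size of fluctuations as the root mean square deviation from a fitted linear trend, applied to simulated output and inflation series; the article itself works with historical OECD data:

# A minimal sketch of the performance criterion: fluctuations around a secular trend.
import numpy as np

def rms_deviation_from_trend(series):
    t = np.arange(len(series))
    trend_coef = np.polyfit(t, series, 1)          # linear secular trend
    deviations = series - np.polyval(trend_coef, t)
    return np.sqrt(np.mean(deviations ** 2))

rng = np.random.default_rng(3)
output = 100 + 0.5 * np.arange(80) + np.cumsum(rng.normal(scale=0.6, size=80))
inflation = 4 + 0.02 * np.arange(80) + rng.normal(scale=1.2, size=80)
print("output fluctuation:", round(rms_deviation_from_trend(output), 2))
print("inflation fluctuation:", round(rms_deviation_from_trend(inflation), 2))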
The macroeconomic policies which were examined have the objective of holding down the
size of these cyclical fluctuations. Although every country would like to use these
stabilization policies to minimize both inflation and output fluctuations, there is a
macroeconomic tradeoff which forces a choice between the two. When a country is up against
this tradeoff, smaller fluctuations in output can only be achieved through larger
fluctuations in inflation. Since some countries have greater concern with inflation
stability while others have greater concern with output stability, they will naturally
choose different policies when faced with this tradeoff. Hence, policy choice will differ
across countries.
The approach is illustrated in Figure 1 (taken from the article), where
tradeoff curves for six OECD countries are presented. On each curve the darkened circle
represents the actual economic performance of each of the countries. The time period for
which the econometric parameters were derived is 1955-75.
The actual performances of the six countries in Figure 1 are substantially different
from each other. But since their tradeoff curves are also quite different, we are safe in
saying that these differences are not entirely due to policy choice. Only the U.S. and
Canada have tradeoff curves which are nearly the same. For these two countries a large
part of the difference between their economic performances can be attributed to policy
choice, with Canada choosing a relatively accommodative policy and the U.S. choosing a
relatively nonaccommodative policy. It is interesting that Canada is located on a very
flat part of the tradeoff curve. According to these results Canadian economic policy could
have been made considerably less accommodative with only a negligible deterioration of
output performance, but big gains in inflation performance.
In Figure 1 it is also shown how preferences or tastes could be taken into account, by
extending a ray from the origin of the diagram with a slope determined by the average
ratio of inflation fluctuations to output fluctuations in each country. Along this ray
each country would have the same ratio of output to inflation stability as the average of
all countries during the observation period: one definition of compromised preferences. If
each country had average tastes (by this definition), there would still be significant
differences in economic performance; but the differences in price stability would be
considerably smaller. These results indicate that much of the difference in economic
performance across countries is due to structural differences rather than to taste
differences. In particular, it is interesting to observe the extent to which high price
stability in Germany appears to be due to a favorable economic structure rather than
solely to more "dislike" for price instability.
In related work Stanley Black, who visited the Cowles Foundation during 1980-81,
studied the conduct of monetary policy in a paper, "The Use of Monetary Policy for
Internal and External Balance in Ten Industrial Countries" presented at the NBER
Conference on Exchange Rates and International Macroeconomics in November 1981. Monetary
policy reaction functions were estimated using advanced regression techniques. The results
yielded some interesting cross-country comparisons, leading to the following conclusions:
(a) There is an inverse correlation between the importance given to inflation objectives
in formulating monetary policy in different countries and observed average rates of
inflation in the 1970s. (b) The importance attached to inflation and unemployment
objectives varies inversely across countries. (c) There appears to be little relationship
across countries between the importance of unemployment objectives and observed average
rates of unemployment. (d) There is an inverse correlation across countries between the
importance of internal and external objectives for monetary policy. (e) There is an
inverse correlation between the flexibility of the exchange rate and the relative
importance of external compared to internal objectives, both over time and across
countries. (f) Finally, conservative election victories have often led to tighter monetary
policies.
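The following sketch indicates what a monetary policy reaction function of this general kind looks like when estimated by ordinary least squares on simulated data; the specification and numbers are illustrative only, and the paper's estimates use country data and more advanced regression techniques:

# A minimal sketch of a policy reaction function: a policy interest rate regressed on
# inflation, unemployment, and an external (exchange-rate) objective.  Simulated data.
import numpy as np

rng = np.random.default_rng(4)
n = 120
inflation = rng.normal(6, 2, n)
unemployment = rng.normal(5, 1, n)
exchange_gap = rng.normal(0, 1, n)     # deviation of the exchange rate from its target
rate = 2 + 0.8 * inflation - 0.5 * unemployment + 0.3 * exchange_gap + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), inflation, unemployment, exchange_gap])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
for name, c in zip(["constant", "inflation", "unemployment", "external objective"], coef):
    print(f"{name:>20}: {c: .3f}")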
In recent years, one of the major influences on the macroeconomic performance of the
industrial economies has been the price of oil and other energy sources. Since 1979,
William Nordhaus has been investigating the incorporation of models of energy systems into
more general macroeconomic models. The model and results were presented in Brookings
Papers on Economic Activity (1980:2).
The model begins with a model of the supply and demand for oil. For a given price,
energy demand appears to be approximately proportional to output in the short run, due to
the lack of substitutability with a given capital stock. Substitution away from energy
takes place only as the capital stock embodying old technology is replaced with capital of
newer vintages.
A crucial element in any model concerned with the interaction between world markets and
economic activity is the specification of OPEC oil pricing behavior. The article argues
that the behavior of key producers is best viewed as "noncooperative" rather
than (either) as competitive or monopolistic. Formally, whenever world oil supply
approaches short-run capacity, either because of strong demand or capacity
disruptions, spot prices rise above list or contract prices. If such a situation is
maintained for long, list prices are raised. Subsequently, if demand slackens, output of
individual producing countries, particularly those that are financially unconstrained,
will be restricted so as to maintain the higher list prices. Thus the mechanism generates
an upward ratchet in the price of oil through time.
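The ratchet mechanism can be mimicked with a few lines of simulation. In the sketch below the spot price rises above the list price whenever demand approaches capacity, a persistent premium pulls the list price up, and slack demand is met at the higher list price; the rules and numbers are invented for illustration and are not Nordhaus' model:

# A minimal sketch of the upward oil-price ratchet described above.
import numpy as np

rng = np.random.default_rng(5)
capacity, list_price = 30.0, 12.0
spot_above_list = 0                              # consecutive periods of spot premium
for period in range(20):
    demand = 27 + rng.normal(scale=2.0)          # exogenous demand shocks
    if demand > 0.95 * capacity:                 # market tight: spot rises above list
        spot_price = list_price * (1 + 0.1 * (demand / capacity - 0.95) / 0.05)
        spot_above_list += 1
    else:
        spot_price = list_price
        spot_above_list = 0
    if spot_above_list >= 2:                     # persistent tightness: list price ratchets up
        list_price = spot_price
        spot_above_list = 0
    output = min(demand, capacity)               # slack demand is met at the (higher) list price
    print(f"t={period:2d} demand={demand:5.1f} spot={spot_price:5.1f} list={list_price:5.1f}")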
In the long run, oil supplies and prices depend on new discoveries. Although drilling
activity has increased sharply in response to higher oil prices since 1973, it appears
that additions to reserves have not.
Higher world oil prices affect industrial economies in a number of ways. They directly
add to inflation by raising the average price level and contributing to the wage-price
spiral; they transfer real income to oil producers; and they alter production techniques,
reducing labor productivity and potential output. In addition, depending on the response
of policy, they may substantially reduce actual output relative to potential and increase
unemployment. To examine these effects, a simplified econometric model of the major OECD
economies is presented, incorporating a world oil market and paying particular attention
to the mechanisms by which oil affects the industrial countries. The model is used to
investigate the effects of past OPEC price increases and to explore the policy
alternatives available to the industrial nations. It is concluded that the first OPEC
price increase of 1973-74 added only between 0.6 and 1.2 percentage points to the
annual inflation rate for 1973-79 and subtracted less than 0.2 percentage point from
the annual growth in labor productivity. Real incomes were reduced by 2.9 percent over
this period, primarily through the transfer of real income abroad.
The inclusion of a world oil market with explicit behavioral equations for OPEC oil
supply and pricing enables projection of future conditions in industrial economies under
alternative assumptions for both oil supply and policy responses. In his projections the
greater economic costs come from loss of real income, either through higher world oil
prices or through slow economic growth that reduces domestic output and employment. To
avoid narrowing the gap between output and capacity in world oil markets, and thus driving
up the spot and then the list price of oil, policy must restrain the demand for oil. But
if this is done by slowing growth, the real income lost through lower production is even
greater than the real income saved by avoiding higher oil import bills. According to
Nordhaus' simulations, the optimal policy is to raise domestic energy prices for users in
the industrial countries, through either taxes or equivalent conservation measures. By
restraining demand for oil, high domestic prices keep world crude oil prices low while
permitting a normal expansion of output and real incomes.
B. Mathematical Economics
Researchers at the Cowles Foundation have continued to investigate the Walrasian model
of general equilibrium. The classical study of existence of Walrasian equilibria is Gerard
Debreu's Theory of Value, published as Cowles Foundation Monograph No. 17 in 1959. Since this
demonstration of the consistency of the Walrasian framework, general equilibrium theorists
have pursued a variety of directions. Major issues associated with the Walrasian model,
such as uniqueness and stability of equilibria, and the generality of excess demand
functions, have been explored. Other work has made the equilibrium framework available for
policy analysis by developing and applying algorithms for the computation of equilibrium
prices. A third major research goal has been to relax the assumptions of earlier work, and
so extend the range of applicability of the theory. All three of these directions have
been represented in recent research of Cowles staff members and visitors.
An early hope of general equilibrium theorists was that the assumption of rational
behavior of consumers would place significant restrictions on the nature of aggregate
excess demand functions. In 1974, Debreu completed the work initiated by Hugo Sonnenschein
by demonstrating that any continuous function from the unit simplex in R^l into
R^l satisfying Walras' Law could be decomposed into l functions, each
representing the excess demand of a rational consumer (Debreu, "Excess Demand
Functions," Journal of Mathematical Economics, 1, 1974; Sonnenschein,
"Market Excess Demand Functions," Econometrica, 40, 1972). In CFDP 643 John Geanakoplos gave a geometric
proof of Debreu's theorem. In CFDP 642
with Polemarchakis, Geanakoplos showed that when there are fewer consumers than goods,
then the standard neoclassical assumptions do put restrictions on the aggregate excess
demand function: with one consumer there are the Slutsky conditions (negative definiteness
and symmetry of its Jacobian on a space of dimension l - 1) and conversely any
function satisfying the Slutsky conditions is the excess demand of a rational consumer.
Furthermore, for any m < l, a function x(p) satisfies Slutsky conditions on a
space of dimension l - m if and only if it is the aggregate excess demand of
m rational consumers.
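The one-consumer restrictions referred to above can be checked mechanically. The sketch below computes the Slutsky matrix for Cobb-Douglas demands x_i = a_i*w/p_i and verifies its symmetry and, numerically at one sample point, its negative semidefiniteness; it illustrates the standard conditions rather than the proofs in the discussion papers:

# A minimal sketch verifying the Slutsky conditions for Cobb-Douglas demands.
import sympy as sp
import numpy as np

a = sp.symbols('a1 a2 a3', positive=True)
p = sp.symbols('p1 p2 p3', positive=True)
w = sp.Symbol('w', positive=True)                 # w = income
x = [a[i] * w / p[i] for i in range(3)]           # Cobb-Douglas demand functions

# Slutsky substitution matrix: S_ij = dx_i/dp_j + x_j * dx_i/dw
S = sp.Matrix(3, 3, lambda i, j: sp.diff(x[i], p[j]) + x[j] * sp.diff(x[i], w))
print("symmetric:", sp.simplify(S - S.T) == sp.zeros(3, 3))

# Negative semidefiniteness, checked numerically at one sample point (the a_i sum to 1)
subs = {a[0]: 0.5, a[1]: 0.25, a[2]: 0.25, p[0]: 1.0, p[1]: 2.0, p[2]: 4.0, w: 10.0}
M = np.array(S.subs(subs).evalf(), dtype=float)
print("eigenvalues (should be <= 0):", np.linalg.eigvalsh(M))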
In a similar spirit Geanakoplos and Geoffrey Heal in CFDP 651 gave a geometric proof of the transfer paradox analyzed by
Leontief, Samuelson, and Chichilnisky: in a two agent, Walrasian stable economy the
transfer of endowment from agent 1 to agent 2 cannot hurt the utility of 2, but if there
is a third agent then in fact the transfer can hurt agent 2 and help agent 1.
All of the above results depend crucially on the income effects in a rational
consumer's excess demand: indeed, it is the income effect matrix of rank 1 that gradually
erodes the Slutsky restrictions on excess demands as consumers are added one by one to the
model.
Tjalling Koopmans and Hirofumi Uzawa also completed a theoretical study of demand
functions in the period of this report. This study originated in a discussion that took
place in the summer of 1974 at the International Institute for Applied Systems Analysis in
Laxenburg, Austria. The discussion, which involved Uzawa, T.N. Srinivasan, Nordhaus and
Koopmans, was brought to its present state in CFDP 654, and was submitted for publication
by Koopmans and Uzawa under the title "Constancy and Constant Differences of Price
Elasticities of Demand." The starting point was a concern that econometric practice
in estimating such elasticities often proceeds from an assumption that own-price and
cross-price elasticities of demand for several goods are constant over a wide area of the
space.
The first half of the study shows that, under these assumptions of constancy, the
constant values can only be -1 for all own-price elasticities, and 0 for all
cross-price elasticities. Thus, from mere logic, the assumption of constant elasticities
is found to contradict the idea that the values of the price elasticities should be
estimated. In particular, the demand functions for the n goods, say, must have the simple
form

x_i(p, y) = a_i y / p_i,   i = 1, ..., n,   with a_i > 0 and a_1 + ... + a_n = 1,

which, in turn, can be derived by maximization of a utility function of the form

u(x_1, ..., x_n) = a_1 log x_1 + ... + a_n log x_n

under the budget constraint

p_1 x_1 + ... + p_n x_n = y,
where y is a given positive number. Conversely, postulating this form of the
utility function implies the constant values of own- and cross-price elasticities of
demand already indicated.
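A short symbolic check (not taken from the paper) confirms the elasticities of demand functions of the form x_i = a_i y / p_i:

# Verify that demands of the form x1 = a1*y/p1 have own-price elasticity -1 and
# cross-price elasticity 0, using symbolic differentiation.
import sympy as sp

a1, p1, p2, y = sp.symbols('a1 p1 p2 y', positive=True)
x1 = a1 * y / p1                                   # demand for good 1
own = sp.simplify(sp.diff(x1, p1) * p1 / x1)       # own-price elasticity
cross = sp.simplify(sp.diff(x1, p2) * p2 / x1)     # cross-price elasticity
print("own-price elasticity:", own, " cross-price elasticity:", cross)   # -1 and 0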
The second half of the study starts by postulating maximization of the so-called CES
("constant elasticity of substitution") utility function,

where 0 < rho < 1 and the a, are positive numbers, derives the form of the
corresponding demand function, and looks for any constancy properties that the price
elasticities of demand may exhibit. It is found that in this case it is the pairwise
differences between the elasticities of demand for any one good i with regard to the
prices of any two goods, j and k, which are constant. For instance, in the case of n = 3 goods
altogether, the nine price elasticities ν_ij of this kind depend on only four
numbers, the three own-price elasticities ν_11, ν_22, ν_33,
and the parameter ρ, in the manner indicated in the following table:

The elasticities ν_ij themselves need not be constant. The paper, to be
circulated first as CFDP 654, gives explicit formulae for the price elasticities ν_ij
and the underlying demand functions.
Koopmans also completed a second study in which he has had a longstanding interest. In
its simplest form, the problem concerns the properties of a function of the form F(x,y) =
f(x) + g(y), (x and y scalars) which, besides being additively separable, is assumed to be
quasiconvex. That is, if (x_0, y_0) and (x_1, y_1)
are two distinct points in a two-dimensional space, then the maximum of F(λx_0
+ (1 - λ)x_1, λy_0 + (1 - λ)y_1)
on the interval 0 ≤ λ ≤ 1 is attained at one, the other, or both endpoints, λ = 0 or
λ = 1. The most frequent application of this concept in economics has been to
quasi-concavity of utility functions, the subject of a much cited paper by Arrow and
Enthoven. Since a function is quasiconcave if and only if its negative is quasiconvex, the
above problem also bears on that of the form of utility functions that are both
quasiconcave and additively separable.
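The definition just stated is easy to test numerically. The sketch below takes an invented additively separable function and checks, on many random segments, that its maximum along each segment is attained at an endpoint; the particular f and g are illustrative choices, not taken from the Koopmans-Debreu work:

# A minimal numerical check of quasiconvexity along segments for F(x, y) = f(x) + g(y).
import numpy as np

f = lambda x: x ** 2          # convex component
g = lambda y: np.abs(y)       # convex component, so F is quasiconvex (indeed convex)
F = lambda x, y: f(x) + g(y)

rng = np.random.default_rng(7)
lam = np.linspace(0.0, 1.0, 1001)
violations = 0
for _ in range(1000):
    x0, y0, x1, y1 = rng.uniform(-5, 5, size=4)
    values = F(lam * x0 + (1 - lam) * x1, lam * y0 + (1 - lam) * y1)
    interior_max = values[1:-1].max()
    endpoint_max = max(values[0], values[-1])
    if interior_max > endpoint_max + 1e-9:        # maximum strictly inside the segment
        violations += 1
print("segments where the maximum is interior:", violations)   # expected: 0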
One of the early findings with respect to the above quasiconvex function F(x,y) was
that both f(x) and g(y) are continuous and right and left differentiable at all points, while
at most one of them can fail to be convex.
A research visit by Gerard Debreu to the Cowles Foundation in 1976 led to a
collaboration in which earlier findings and conjectures were reformulated and additional
new ones developed and proved.
A finding of considerable interest is illustrated by the diagram at right.
It describes a property linking the graphs of the component functions f(x) and g(y) of
F(x,y) in a different way for each pair of arbitrarily selected points, x0 in
the domain of f, y0 in that of g. The drawn curve represents a monotonic
segment of the graph of f, containing the point (x0, f(x0)). The
dotted curve represents a linear transform of a similar segment of the graph of g. The
linear transform has the effect that the dotted curve has the point (x0, f(x0))
in common with the drawn curve. In addition, the fraction α/β in the definition of
the transform can be so chosen that the dotted curve in no point exceeds the drawn curve.
Moreover, the construction permits a separating function, represented by the dashed curve,
where the number a has been so chosen that each point on the dashed curve is located, on
its vertical, between the points of the other two curves on that same vertical. Finally,
the separating function has the mathematical form defined by

It is puzzling to see a nonlinear separating curve, defined in terms of a logarithmic
function, arise as an implication of quasiconvexity combined with additive separability.
Since a quasiconvex function is only the negative of a quasiconcave function, the question
of the possible application of this finding to utility and its maximization may well be
raised.
In CFDP 524, Roger Howe demonstrated
that differentiability is a generic property of convex functions (defined on some fixed
convex set), in the sense that the set of differentiable convex functions is a countable
intersection of open dense subsets of the space of all convex functions. This is in
contrast to the well-known fact that in the space of all continuous functions, the
differentiable functions are non-generic.
In another paper, Howe studied the tendency to convexity of the vector sum of sets (CFDP 538). The paper focuses on the extent
to which the vector sum Σ V_i of subsets V_i fills up its convex
hull Co(Σ V_i). It starts with a proof of the Shapley-Folkman theorem,
which bounds the distance between a general point of Co(Σ V_i) and the
closest point of Σ V_i. Then it shows that under various regularity
assumptions on the V_i (non-empty interior, smoothness of boundary, non-zero
measure, arcwise connectivity), the set Σ V_i tends to fill up the inside of
Co(Σ V_i).
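A one-dimensional illustration of this tendency to convexity, not taken from Howe's paper: each V_i is the non-convex two-point set {0, 3}, and the distance from any point of the convex hull of the sum to the sum itself stays bounded as the number of summands grows, so relative to the size of the hull the sum fills it up.

# A minimal illustration of the Shapley-Folkman phenomenon in one dimension.
import numpy as np

for N in [2, 10, 100, 1000]:
    vector_sum = np.arange(N + 1) * 3.0          # all attainable sums of N choices from {0, 3}
    hull = np.linspace(0.0, 3.0 * N, 2001)       # dense sample of the convex hull [0, 3N]
    # distance from each hull point to the nearest point of the vector sum
    dist = np.abs(hull[:, None] - vector_sum[None, :]).min(axis=1)
    print(f"N={N:5d}  max distance={dist.max():.2f}  hull length={3.0 * N:.0f}")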
One of the assumptions needed to guarantee continuity of demand functions, and thus
existence of competitive equilibria, is that consumers have convex preferences. There have
been many papers proving existence of approximate equilibria in exchange economies with
the bounds on the excess demands depending on some measure of the non-convexity of
preferences.
The recent joint work of Robert Anderson (who visited the Cowles Foundation during
1980-81), M. Ali Khan and Salim Rashid takes a different approach. Under minimal
assumptions, it is shown that any exchange economy possesses a price so that market excess
demand is bounded above by a constant which depends on the number of agents and the size
of the endowments but is independent of the preferences. The constant is of the order of
the square root of the number of agents times the largest endowment. The paper, entitled
"Approximate Equilibria with Bounds Independent of Preferences," will appear
shortly in the Review of Economic Studies.
There is an intimate connection between the existence of Walrasian equilibria for an
exchange economy and Brouwer's fixed point theorem. All proofs of existence of equilibria
rely critically on Brouwer's theorem. In addition, in 1962, Uzawa demonstrated that
existence of equilibrium prices for all economies whose excess demand functions are
continuous and obey Walras' Law implies Brouwer's theorem (Uzawa, "Walras' Existence
Theorem and Brouwer's Fixed Point Theorem," Economic Studies Quarterly, 13,
1962). Debreu's proof that Walras' Law and continuity characterize market excess demand
functions then shows the full equivalence of the Walrasian problem for an exchange economy
and Brouwer's fixed point theorem. When production is introduced into the general
equilibrium model, Kakutani's fixed point theorem is the tool used to demonstrate
existence.
In Cowles Foundation Monograph No. 24
published in 1973, Herbert Scarf in collaboration with T. Hansen developed techniques for
computing equilibrium prices for general equilibrium models, both with and without
production. These techniques involve the identification of approximate fixed points of
continuous mappings of the simplex into itself. Since the publication of this monograph
there have been several major refinements of Scarf's original algorithm.
A. Talman, who visited the Cowles Foundation in 1980-81, co-authored two
monographs on simplicial fixed point algorithms with G. Van der Laan (Variable
Dimension Fixed Point Algorithms and Triangulations, and Simplicial Fixed Point Algorithms,
Mathematisch Centrum, Amsterdam, 1980). In these monographs they developed procedures
which have the novel property that they can be initialized at an arbitrary point in the
simplex. This allows efficient use of information as the grid size is refined and closer
approximations to fixed points are sought. Talman and van der Laan's procedure involves
working with subsets of the simplex whose dimension varies in the course of the algorithm.
Computational experience suggests that this variable dimension algorithm significantly
decreases the time necessary to achieve the desired degree of agreement between a point in
the simplex and its image under the mapping.
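The "degree of agreement" criterion can be made concrete with a deliberately naive computation: the sketch below searches a grid on the two-dimensional simplex for the point whose image under an invented continuous map is closest to itself. The simplicial algorithms of Scarf and of van der Laan and Talman are of course far more efficient and are not reproduced here.

# A minimal brute-force search for an approximate fixed point of a map of the 2-simplex.
import itertools
import numpy as np

def f(p):
    # An illustrative continuous map of the simplex into itself (a normalized
    # excess-demand-style adjustment); any such map could be substituted.
    z = np.array([p[1] * p[2], p[0] * p[2], p[0] * p[1]])   # toy "excess demands"
    q = np.maximum(p + 0.5 * z, 0.0)
    return q / q.sum()

grid_size = 60
best_point, best_gap = None, np.inf
for i, j in itertools.product(range(grid_size + 1), repeat=2):
    if i + j <= grid_size:
        p = np.array([i, j, grid_size - i - j]) / grid_size
        gap = np.max(np.abs(f(p) - p))           # disagreement between p and its image
        if gap < best_gap:
            best_point, best_gap = p, gap
print("approximate fixed point:", best_point, " max|f(p)-p| =", round(best_gap, 4))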
In related work, Talman and Van der Heyden have used the ideas of the variable
dimension algorithm to generalize Lemke's algorithm for the linear complementarity problem
(CFDP 600). While Lemke's procedure
always begins at a pre-assigned point, the new algorithm can start anywhere, and is thus
better suited to performing sensitivity analyses and other forms of parametric analysis of
programming problems.
In CFDP 542, Howe studied the linear
complementarity problem from a different perspective. This paper observed that the
formulation of the linear complementarity problem as the problem of inverting a piecewise
linear, positive homogeneous map from Rn to itself allows one, under
non-degeneracy assumptions, to apply degree theory toward understanding linear
complementarity. A number of well-known results on linear complementarity are explained in
terms of degree theory. It is shown that all known algorithms, in particular Lemke's
algorithm, apply only to problems whose associated map has degree 1, whereas there exist
problems of arbitrarily large degree; in fact, the maximum degree for problems of a given
dimension grows exponentially with the dimension.
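To make the linear complementarity problem concrete, the sketch below solves a small instance by enumerating complementary index sets; this brute-force check is neither Lemke's algorithm nor the degree-theoretic analysis of the paper, and the matrix and vector are invented for illustration:

# A minimal sketch of the linear complementarity problem: find z >= 0 and w = M z + q >= 0
# with z'w = 0, by enumerating the complementary index sets of a small problem.
import itertools
import numpy as np

def solve_lcp_by_enumeration(M, q):
    n = len(q)
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            z = np.zeros(n)
            if S:
                try:
                    z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])   # w_i = 0 for i in S
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if np.all(z >= -1e-9) and np.all(w >= -1e-9):
                return z, w
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z, w = solve_lcp_by_enumeration(M, q)
print("z =", z, " w =", w, " complementarity z'w =", float(z @ w))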
Another application of fixed point algorithms is to the constructive proofs of a
theorem stating that n-person games which obey a certain technical condition, known
as balancedness, have non-empty cores. Interest in this result arises partly from the fact
that market games involving consumers with convex preferences satisfy the balancedness
conditions. In CFDP 575, Van der Heyden
generalized Scarf's procedure for computing a point in the core of a balanced n-person
game. Scarf's procedure allows each coalition only a finite number of choices. Confronted
with a game that is not finite, Scarf's procedure must first approximate the
characteristic set of each coalition by a finite union of translated non-negative orthants
and then compute a point in the core of the approximating discrete game. Van der Heyden
develops a procedure which works directly with the characteristic sets, bypassing the
discrete approximation required by Scarf.
For the calculation to be easily implemented, the characteristic sets must be given by
a union of polyhedral sets. This case is developed in "A Note on Scarf's
Theorem," within a framework which is an abstraction of the core problem, and which
also generalizes the main theorem [Theorem 4.2.3] in Scarf's monograph, "The
Computation of Economic Equilibria."
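As a point of comparison, computing a core point is straightforward in the much simpler transferable-utility case, where it reduces to a linear programming feasibility problem; the sketch below, with an invented three-player game, does exactly that, whereas Scarf's and Van der Heyden's procedures handle the general characteristic-set formulation:

# A minimal sketch of "computing a point in the core" for a transferable-utility game:
# find x with sum(x) = v(N) and sum over S of x_i >= v(S) for every coalition S.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

players = [0, 1, 2]
v = {(0,): 1, (1,): 1, (2,): 1, (0, 1): 4, (0, 2): 4, (1, 2): 4, (0, 1, 2): 9}

# Inequality constraints: -sum_{i in S} x_i <= -v(S) for every proper coalition S
A_ub, b_ub = [], []
for r in range(1, 3):
    for S in combinations(players, r):
        A_ub.append([-1.0 if i in S else 0.0 for i in players])
        b_ub.append(-v[S])
A_eq, b_eq = [[1.0, 1.0, 1.0]], [v[(0, 1, 2)]]   # efficiency: payoffs exhaust v(N)

res = linprog(c=[0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 3)
print("a core imputation:", res.x if res.success else "core is empty")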
The applicability of fixed point algorithms to models of urban land use was
investigated by Donald Richter, who visited the Cowles Foundation during 1979-80. In
his paper, Richter synthesized and generalized recent literature on the use of fixed point
methods to compute approximate numerical solutions to general equilibrium models of urban
land use ("A Computational Approach to Resource Allocation in Spatial Urban
Models," Regional Science and Urban Economics, 10, 1980). He showed that a
broad array of spatial urban models, including ones involving endogenously generated
externalities, can be studied within the context of a single unifying computational
framework.
In the standard formulation of the Walrasian model the number of agents and the number
of commodities are both taken to be finite. There are several situations of economic
interest in which it is natural to allow infinite numbers of agents or commodities.
First, the assumption of an infinite number of agents is necessary to formalize the
notion of a large but finite economy or an economy having "many" (as opposed to
a "few") agents. The many-agent assumption is at the heart of the perfect
competition notion which posits that agents are price takers or, at most, have a
negligible or infinitesimal influence on the equilibrium price.
Second, infinite dimensional commodity spaces arise naturally when the notion of
commodity is extended to include the time of production and consumption or the state of
the world in which production or consumption occurs.
In economies with infinite dimensional commodity spaces, the assumption that utility
functions are continuous in the given topology can have significant behavioral
implications. These behavioral implications were investigated by Donald Brown and Lucinda
Lewis in CFP 525 as a first step in
analyzing economies with infinite dimensional commodity spaces. Brown and Lewis continued
their analysis of infinite economies in CFDP
581, where they established the existence of a competitive equilibrium in an economy
having a continuum of agents and a countable number of commodities, using nonstandard
analysis.
The commodity spaces arising in the study of financial markets are quite rich and
require new techniques for their analysis. In fact, it appears that a natural mathematical
structure for the space of securities is a Riesz space or vector lattice; to express calls
or puts as functions of the underlying security, the nonlinear operations of max and min
are introduced and are required to be compatible with vector addition and scalar
multiplication: hence a vector lattice. Stephen Ross and Brown are currently looking at
the issues of spanning and arbitrage in the framework of Riesz spaces (Cowles Foundation
Preliminary Paper 82682).
One of the difficulties arising in the equilibrium analysis of financial markets is
demonstrating the existence of demand functions, given the standard assumptions on tastes;
technically, budget sets need not be compact. C.D. Aliprantis and Brown have shown
the existence of a competitive equilibrium in an economy with a Riesz space of
commodities, assuming that demand functions exist and are continuous (CalTech Social
Science Working Paper #427).
Geanakoplos studied the equilibria of the "consumption loan" model introduced
by Samuelson in his seminal paper in 1958 ("An Exact Consumption Loan Model with or
without the Social Contrivance of Money," Journal of Political Economy, 66,
1958). This model, which has played an important role in many recent works in
macroeconomics, involves an infinite number of finite-lived agents and an infinite number
of commodities. Geanakoplos demonstrated that such models, despite incorporating the twin
hypotheses of agent maximization and market-clearing, are isomorphic to finite models in
which not all markets clear and in which the social endowment exceeds the sum of the
individual endowments. This explains the remarkable properties of these models, which
typically possess a continuum of equilibria which are not Pareto optimal. Moreover, using
nonstandard analysis Geanakoplos analyzed the dimension of the equilibrium manifold of
such economies.
By adding production Geanakoplos managed to retain the above qualitative results and
also to address some of the issues claiming the attention of the non-Walrasian schools,
namely the Keynesian and Sraffian schools. Only in such a model with a continuum of
equilibria can the twin Walrasian axioms referred to above be maintained while an analysis
of government purchases of paper assets is undertaken. If in addition one supposes that
the labor market need not clear, then one can derive the familiar Keynesian-Hicksian
apparatus of IS-LM curves, but now moving through time in a dynamic model.
One of the central assumptions employed by general equilibrium theorists is that
production sets are convex, ruling out economies of scale in production. This assumption
guarantees the existence of a price vector supporting any efficient production plan, from
which the decentralization theorems of general equilibrium analysis follow. However, it is
widely recognized that economies of scale are significant in many industries dominated by
large corporations, so that the assumption of convexity may be insufficient to account for
the behavior of producers in these industries.
With non-convex production sets, alternatives to price-taking behavior are needed both
in the selection of efficient production plans and in the specification of behavioral
rules consistent with economy-wide equilibrium. Over the last three years, Brown has
collaborated with Geoffrey Heal in undertaking a general equilibrium analysis of an
economy in which certain producers have increasing returns to scale. Their notion of
equilibrium in an economy with increasing returns is that suggested by Hotelling. A
marginal cost pricing equilibrium is a set of consumption plans, production plans, prices
and lump sum taxes such that households are maximizing utility subject to after-tax
income; firms with decreasing returns to scale are maximizing profits; firms with
increasing returns to scale are controlled and price at marginal cost; all markets clear
and finally the lump-sum taxes cover the losses of the increasing returns to scale sector.
The major results are the existence of a marginal cost pricing equilibrium (CalTech
Social Science Working Paper #415); a second welfare theorem for economies with increasing
returns to scale (University of Essex Discussion Paper #179), and the demonstration that
the first welfare theorem does not hold for economies with increasing returns to scale (CFP 519).
Richard McLean, who visited the Cowles Foundation during 1979-80, studied the
effects of indivisibilities (which are an economically important type of non-convexity) on
the optimal assignment of activities among locations (CFDP 540).
Another study of the impact of indivisibilities is that of Mamoru Kaneko, who was at
the Cowles Foundation from 1980 to 1982. In CFDP
571, Kaneko presents a model of a rental housing market in which houses are treated as
indivisible commodities. He presents several comparative static propositions demonstrating
how competitive rents change with certain parameters of the model.
The most important examples of non-convex production sets are those arising from an
activity analysis model in which the activity levels are required to assume integral
values. During the period of this report, Scarf continued his study of these production
sets. Let the columns of the (m + 1) × n matrix A
be the list of available production plans, with inputs represented by negative numbers
and outputs by positive entries. The production set Y consists of all vectors x in R^(m+1)
with

x ≤ Ah,

as h ranges over the integral vectors in n-dimensional Euclidean space. An
example of such a production set (with m and n both equal to 2) is given in the following
figure, which is taken from the paper "Production Sets with Indivisibilities, Part I:
Generalities," published in Econometrica in January, 1981.
The economic content of the duality theorem for linear programming asserts that a
vector of activity levels satisfying the resource constraints will be optimal if there is
a vector of prices which yields zero profit for the activities in use, and non-positive
profits for the remaining activities. The simplex method, which is the most frequently
used algorithm for solving linear programming problems, can be viewed as a price
adjustment mechanism in which prices, and activity levels, are systematically revised
until the appropriate profitability conditions are satisfied. In the context of the
activity analysis model with continuous activity levels, the role of prices provides an
important interpretation of the functioning of markets in the optimal allocation of
resources.
When activity levels are restricted to being integers, and more generally when the
production set is not convex, prices are no longer available to test whether a feasible
set of activities is optimal. A systematic search for a vector of prices which satisfy
appropriate profitability conditions can no longer be the basic goal of an algorithm for
solving integer programming problems.
The approach adopted by Scarf is to replace the use of prices in solving integer
programs by the concept of a neighborhood system which limits the search required to
verify that a given vector of activity levels is the optimal solution.
Let us assume that every vector h of integral activity levels in Rn has
associated with it a neighborhood N(h). The association is arbitrary, aside from the
following two conditions:
1. N(h) = N(0) + h, so that the neighborhoods associated with
any two lattice points are translates of each other, and
2. If k is in N(h) then h is in N(k), which states that the property of
being neighbors is a symmetric relation.
A vector of activity levels satisfying the constraints of the programming problem is
then defined to be a local maximum (with respect to the neighborhood system) if every
neighbor either violates some of the inequalities or yields a smaller value of the
objective function. A major result of the paper "Production Sets with
Indivisibilities, Part I: Generalities," is to demonstrate that under mild conditions
an activity analysis matrix A will have associated with it a unique, minimal
neighborhood system for which a local maximum is global for every specification of
factor endowments.
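The role played by a neighborhood system can be made concrete with a small sketch. The neighborhood below is simply the 3-by-3 block of lattice points around the origin; it satisfies the two conditions above, but it is not the minimal system constructed from a technology matrix in Scarf's papers, and the sketch shows that with such an ad hoc choice a local maximum need not be global. The little integer program used for the test is made up for the illustration.

    from itertools import product

    # N(0): the eight lattice points adjacent to the origin (an arbitrary,
    # symmetric choice -- NOT the minimal neighborhood system of the text).
    N0 = [d for d in product((-1, 0, 1), repeat=2) if d != (0, 0)]

    def neighbors(h):
        """Condition 1: N(h) = N(0) + h.  Condition 2 (symmetry) holds
        because N0 is symmetric about the origin."""
        return [(h[0] + d[0], h[1] + d[1]) for d in N0]

    def is_local_max(h, feasible, objective):
        """h is a local maximum if it is feasible and every neighbor either
        violates some inequality or yields a smaller objective value."""
        return feasible(h) and all(
            not feasible(k) or objective(k) < objective(h)
            for k in neighbors(h)
        )

    # A made-up integer program: maximize h1 + 2*h2 subject to
    # 2*h1 + 3*h2 <= 12 and h1, h2 >= 0.
    feasible  = lambda h: 2 * h[0] + 3 * h[1] <= 12 and h[0] >= 0 and h[1] >= 0
    objective = lambda h: h[0] + 2 * h[1]

    candidates = [h for h in product(range(7), repeat=2) if feasible(h)]
    print([h for h in candidates if is_local_max(h, feasible, objective)])
    # Several local maxima appear, only one of which is the global optimum:
    # exactly the failure that the minimal neighborhood system associated
    # with the technology matrix is designed to rule out.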
This particular neighborhood system, which depends on the technology matrix
but not on the factor endowment, is given by the following construction. Select a
vector b = (b_0, b_1, ..., b_m)' so that the inequalities Ah ≥ b define a lattice-free
region in n-dimensional space. Then relax the inequalities, systematically, so that no further relaxation is
possible without introducing a lattice point. In this process some of the inequalities
will be relaxed forever; the remaining constraint planes will each contain a lattice point
which satisfies all of the other inequalities. The following figure illustrates this
process when m = 5 and n = 2.
Each such application of this process will yield a finite set of lattice points whose
convex hull is a polyhedron with integral vertices and which contains no lattice points
other than its vertices; such a polyhedron is termed an integral polyhedron. A specific
collection of integral polyhedra associated with the matrix A is obtained as the
relaxations are carried out in different ways starting from arbitrary lattice free
regions. The unique, minimal neighborhood system for which a local maximum is global is
then obtained by defining two lattice points to be neighbors if they are vertices of a
common integral polyhedron associated with A.
If the collection of integral polyhedra associated with a given technology is known,
then the related integer programming problems can be solved by path following techniques
which are virtually identical to simplicial algorithms for approximating fixed points of a
continuous mapping. Depending on the factor endowment, a specific sequence of integral
polyhedra is calculated which terminates with a polyhedron one of whose vertices is the
optimal solution to the integer programming problem.
In CFDP 649, Philip White, Andrew
Caplin, and Van der Heyden demonstrate an intimate relationship between this path of
simplices and the path followed by a version of the dual simplex method for the linear
programming relaxation of the integer programming problem. They show that this particular
dual simplex path is always contained in the polyhedral path generated by Scarf's
procedure, and that any polyhedron encountered in the integer programming algorithm
intersects this simplex path. Thus as the grid of lattice points is successively refined,
Scarf's procedure converges to an efficient algorithm for the linear program. These
results also show that the integer programming algorithm can be initialized at the
solution of the corresponding linear programming relaxation.
In the paper "Production Sets with Indivisibilities, Part II: The Case of Two
Variables," Scarf has given a complete description of the integral polyhedra
associated with a matrix A in which the number of columns is equal to two. The specific
information is then used to develop an algorithm terminating in a number of steps which is
polynomial in the data of the problem. In this paper Scarf conjectured that the general
integer program with a fixed number of variables has a polynomial algorithm; a result
which was demonstrated in a remarkable paper by H.W. Lenstra, Jr., "Integer
Programming with a Fixed Number of Variables," making use of sophisticated techniques
from that branch of mathematics known as the Geometry of Numbers.
Aside from special cases, a complete description of the collection of integral
polyhedra is difficult to obtain when the number of variables is greater than or equal to
three. As a consequence Scarf's research has concentrated on an analysis of properties
possessed by such a collection with a view to improving numerical techniques and finding
economically significant implications for the theory of the firm. The following two
examples are illustrative.
First of all, we note that it is easy to demonstrate that an integral polyhedron in n
dimensions can have no more than 2^n vertices. This observation leads immediately to the
following conclusion, which was independently obtained by Bell and Doignon: Let the
integer program have a finite maximum. If m > 2^n - 1, then at least one of the
inequalities can be discarded and the resulting integer program will have the same
solution. Moreover the bound is sharp in the sense that there are integer programs with
m = 2^n - 1 whose optimal solution is changed by discarding any of the inequalities.

This conclusion should be contrasted with the corresponding situation in linear
programming, in which an inequality can be discarded as long as m > n. The simplex
method can, in fact, be viewed as a systematic search for that subset of n inequalities
whose optimal solution satisfies the remaining constraints. Part of the difficulty
associated with solving integer programs can be attributed to the fact that a large number
of inequalities are required to characterize the solution.
As a second example, Scarf has given a detailed study of integer programs in which the
technology matrix A has four rows and three columns, in the paper "Integral Polyhedra
in Three Space." The major conclusion, drawing on previously unpublished work by
Roger Howe, states that each matrix has associated with it a family of parallel planes

    l_1 h_1 + l_2 h_2 + l_3 h_3 = c,

where (l_1, l_2, l_3) are relatively prime integers, and c ranges over all integral
values. For any b = (b_0, b_1, b_2, b_3)', if the system Ah ≥ b has integral solutions
on the two planes l_1 h_1 + l_2 h_2 + l_3 h_3 = c and l_1 h_1 + l_2 h_2 + l_3 h_3 = c',
it will also have integral solutions on each intermediate plane
l_1 h_1 + l_2 h_2 + l_3 h_3 = c'', with c'' an arbitrary integer between c and
c'. This conclusion permits us to solve the integer programs associated with A by solving
the two variable problems on each such plane. Specifically the solution on such a plane
will permit us to say on which side of the plane the optimal solution to the original
three variable problem lies. An economic interpretation might permit us to say that the
three variable problem can be decentralized: one agent selects the plane
l_1 h_1 + l_2 h_2 + l_3 h_3 = c, and a second agent solves
the simpler problem on this plane. The argument which yields an optimal solution is
transferred back to the first agent, who can then determine the direction in which c
should be changed in order to move to the optimal solution. A suitable analogue of this
type of decentralization for higher dimensions would be of great significance for integer
programming.
C. Game Theory
Another major topic of research at the Cowles Foundation has been the development of
n-person game theory in relation to economics. Game theory involves a richer specification
of the decision-making environment than does general equilibrium theory. In the Walrasian
framework the price system serves as the sole means by which individuals are guided to
mutually consistent decisions. In contrast, game theory explicitly recognizes the
interdependence between agents' choices, so that both the strategies open to
decision-makers and the extent of their information about each other's choices must be
specified. In addition the degree of cooperation between agents becomes a major
consideration.
The originators of game theory felt that analysis of these more intricate
decision-making environments would serve both to clarify the range of issues for which the
simple Walrasian assumptions suffice, and also to provide alternative analytic tools where
the Walrasian framework would not be appropriate. The subsequent development of the field
has seen the introduction of many solution concepts for n-person games, such as the
bargaining set and the core for cooperative games, and the Nash equilibrium for
non-cooperative games. Much work has gone into clarifying the connections between
game-theoretic concepts and the Walrasian equilibrium for market games. More recently, the
economics profession as a whole, and game theory in particular, has witnessed a resurgence
of interest in the role of agents' information in shaping their decisions. Several Cowles
staff members and visitors are recent contributors to these areas of research.
One of the most important equilibrium concepts for cooperative games is the core of the
game. This was first introduced by Edgeworth in 1881 in his book Mathematical Psychics
(Kegan Paul, London), and was then reintroduced and placed in an explicitly game-theoretic
setting by Martin Shubik in 1959 ("Edgeworth Market Games," in A.W. Tucker and
R.D. Luce, eds., Contributions to the Theory of Games IV, Princeton University
Press). Edgeworth's original analysis demonstrated that in an economy with two types of
agents, both with convex preferences, the core shrinks to the set of competitive
equilibria as the economy is replicated. This core convergence theorem was generalized to
economies with an arbitrary number of types of agent by Debreu and Scarf in 1963 ("A
Limit Theorem on the Core of an Economy," International Economic Review, Vol.
4). Anderson's results provide the most general convergence statements currently available
when agents have convex preferences ("An Elementary Core Equivalence Theorem," Econometrica,
Vol. 46, 1978).
Anderson's recent work on core theory extended convergence results for cores of
exchange economies with convex preferences to "most" exchange economies with
nonconvex preferences. Several different formulations for "most" are given.
Perhaps the most appealing is the following: suppose that a sequence of economies is
produced by a sequence of independent samples from a fixed distribution of agents'
characteristics. Under appropriate assumptions, convergence will hold with probability
one. The resulting paper, "Strong Core Theorems with Nonconvex Preferences," has
been tentatively accepted, subject to revision, by Econometrica.
One of the original motivations for studying the core of market economies was the
belief that the core would be non-empty in many economies for which no Walrasian
equilibrium exists. In particular it was hoped that the core might provide an analytic
apparatus suited to the study of indivisibilities and other forms of nonconvexity.
Martine Quinzii, who visited the Cowles Foundation during 1982, studied the existence
of the core in an economy with indivisibilities. More precisely, the model studied was the
following: there are n agents in the economy and two goods. The first one is a perfectly
divisible good called money, the second is a good which exists in the form of indivisible
items (houses, for example). It is assumed that no agent has initially more than one
indivisible item, and that he has no use for more than one item. The core of this economy
is a distribution of money and houses among the agents such that no coalition of agents
can find a redistribution of its own resources which is preferred by all its members.
This model is a generalization of the two models of exchange of indivisible goods
previously studied in the literature from the point of view of the core. These are the
model of Shapley-Scarf of exchange of houses among n agents with compensation in
money and the model of Shapley-Shubik of a market between buyers and sellers of
houses. It was proved for these two models that the core was non-empty. Quinzii
generalized this result to the general exchange model by proving that the economy is
balanced (a property introduced by Shapley and sufficient to imply the existence of the
core). Another interesting property of this model is that, under assumptions ensuring the
presence of money in the economy, the core and the competitive equilibria coincide. This
means that all core allocations can be obtained by a competitive market for the
indivisible items.
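For the Shapley-Scarf house-exchange model cited in this connection, a core allocation can be exhibited constructively by the "top trading cycles" procedure attributed to David Gale; the sketch below, with made-up strict preferences, illustrates only that procedure and not Quinzii's more general model with money.

    def top_trading_cycles(prefs):
        """prefs[i]: agent i's strict ranking of houses; house i is owned by agent i.
        Returns an assignment of houses to agents (a core allocation when
        preferences are strict)."""
        remaining = set(prefs)
        assignment = {}
        while remaining:
            # each remaining agent points at the owner of their favourite
            # house among those still available
            points_to = {i: next(h for h in prefs[i] if h in remaining)
                         for i in remaining}
            # follow the pointers from any agent until a cycle repeats
            start = next(iter(remaining))
            seen, current = [], start
            while current not in seen:
                seen.append(current)
                current = points_to[current]
            cycle = seen[seen.index(current):]
            # everyone in the cycle receives the house they point at and leaves
            for i in cycle:
                assignment[i] = points_to[i]
                remaining.discard(i)
        return assignment

    # hypothetical strict preferences of four agents over houses 0..3
    prefs = {
        0: [2, 1, 0, 3],
        1: [0, 1, 2, 3],
        2: [1, 0, 3, 2],
        3: [0, 2, 3, 1],
    }
    print(top_trading_cycles(prefs))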
In CFDP 563, Kaneko considers an
assignment game without side payments and proves the nonemptiness of the core. He then
analyzes a market model with indivisible goods but without the transferable utility
assumption; the nonemptiness of the core and the existence of a competitive equilibrium of
the market model are shown, using the first result.
In CFDP 620, co-authored with Myrna
Wooders, Kaneko considers a generalization of assignment games, called partitioning games.
Given a finite set N of players, there is an a priori given subset pi of coalitions of N
which are available for blocking. Necessary and sufficient conditions for the
non-emptiness of the cores of all games with essential coalitions pi are developed.
Convergence results for finite economies generally have even stronger analogues for
games in which there is a non-atomic continuum of players. For instance, Aumann
demonstrated that in the setting of a continuum the core and the competitive equilibria
actually coincide ("Markets with a Continuum of Traders," Econometrica,
Vol. 32, 1964). More recently Geanakoplos showed that in a continuum of traders economy
with transferable utility any allocation in the bargaining set must give to each agent his
marginal contribution, and hence be in the core.
Another solution concept for cooperative games is the Shapley value, which is a measure
of the contribution made by each participant to a cooperative game. The Shapley value is
always efficient, in the sense that it specifies allocations such that no alternative
allocation would improve everyone's welfare. Recently, attention has been focused on
generalizations and analogues of the Shapley value that do not enjoy the efficiency
property.
In their recent paper, Pradeep Dubey together with Abraham Neyman and Robert Weber
consider this topic from an axiomatic viewpoint ("Value theory without
Efficiency," Mathematics of Operations Research, Vol. 6, 1981). They
characterize the class of operators that is obtained by omitting the efficiency axiom from
the axioms defining the Shapley value.
In CFDP 610, Dubey and Neyman
provide an axiomatic analysis of the equivalence of various equilibrium concepts for
non-atomic games with transferable utility. When thus restricted to games with
transferable utility and smooth preferences, the equivalence phenomenon is even more
striking in that many solutions coincide at a unique outcome. Dubey and Neyman derive a
"meta-equivalence" theorem: any solution coincides with this payoff if and only
if it satisfies their axioms.
The best-known solution concept for games without cooperation among players is the Nash
equilibrium. This is a set of choices such that each player's chosen strategy is optimal
in the face of others' choices. Such equilibria are frequently inefficient, as illustrated
by well-known games such as the prisoners' dilemma, in which both prisoners could be made
better off if they acted in cooperation.
The issue of whether Nash equilibria of market games are efficient was studied by Dubey
and J. Rogawski in CFDP 622 and CFDP 631. A general theorem is presented
and applications are made to non-cooperative market games.
Dubey's paper, "Price-Quantity Strategic Market Games" published in Econometrica,
Vol. 50, No. 1, January 1982, is in the rapidly growing series of articles on strategic
approaches to economic equilibrium. He begins with a standard Walrasian exchange economy
with a finite number of traders and commodities. This is recast as a game in strategic
form in essentially two different ways. There is a trading-post for each commodity to
which traders send contingent statements about how much they wish to buy and sell, and at
what prices. In Model 1, the price is determined by the intersection of the
aggregate supply and demand curves. In Model 2, trade takes place so as to meet as many
contingent statements as possible. Each buyer whose orders are filled pays the price he
quoted, using a fiat money which can be borrowed costlessly and limitlessly. But after
trade is over there is a settlement of accounts, and a penalty is levied on those who are
bankrupt. The Nash equilibria of each of these games are then examined.
In CFDP 601, Kaneko introduces a new
solution concept which he calls the conventionally stable set for an n-person
noncooperative game. This new solution concept is based on the von
Neumann-Morgenstern stable set, particularly on their interpretation of it as
"standards of behavior." This first paper provides the definition of
conventionally stable sets and applies the new solution to zero-sum two-person games, the
prisoner's dilemma, the battle of the sexes, and games with a continuum of players.
In CFDP 614, Kaneko applies his
theory to monopolistic and oligopolistic markets. A market model with a finite number of
producers and a continuum of buyers is presented and is then formulated as a strategic
game in which the producers' strategies are prices and the buyers' strategies are demands
for commodities. It is shown that a conventionally stable set in this game corresponds to
a conventionally stable one in a game where the producers are the only players but the
buyers are replaced by demand functions. Furthermore, it is shown that the theory of the
conventionally stable set is compatible with the classical monopoly solution, the
kinked-demand-curve solution and the leader-follower solution.
When repeated plays of a game are being considered, it becomes important to specify
each player's information about others' moves at each stage of the game. In two recent
papers (CFDP 625 and CFDP 629) Dubey and Kaneko explore the
relation between information patterns and Nash equilibria in such extensive games.
Within the economic literature, much attention has been focused on the revelation of
information through the market prices of commodities in rational expectations equilibria.
In recent work with Hugo Sonnenschein of Princeton University, Anderson proposed a new
idealization of rational expectations equilibrium, and proved several general existence
theorems. Their central point is that agents must form models of the economy based on a
finite number of observations, and the resulting approximations made by agents rule out
the examples of non-existence of equilibria previously exhibited.
An alternative method of modelling the connections between prices and information is to
imbed the issue in an explicitly game-theoretic setting. In CFDP 634, Geanakoplos together with Dubey and Shubik formulated a
critique of the theory of rational expectations. That theory is designed to show how the
diverse pieces of information held in various hands can be utilized by the economy as if
it were all known by one agent (or all agents). What it in fact shows is how prices can
reveal information. It does not begin to explain how information is put into the prices to
be revealed. The theory also suffers from a grave paradox: if information is costly to
acquire, no agent will bother getting any, since he can costlessly wait for the prices to
reveal it anyway. In this paper, it was demonstrated that if one realistically models
the economic process, paying close attention to how information is disseminated in a
well-defined game, then indeed prices eventually reveal information, but not
until agents with superior information have been able to profit from that information.
In an earlier paper on the revelation of information in strategic market games,
Geanakoplos and Polemarchakis showed that given two agents 1 and 2 with the same priors
but different information and given any event A, the simple communication of the agents'
opinions back and forth must lead them to the same opinion (CFDP 639). This opinion, however, need not be the same common opinion
they would hold if they communicated all their information rather than just their
opinions. Moreover, Geanakoplos showed that this communication process could contain an
arbitrarily long repetition of the same message: to the outside observer it could appear
that 1 and 2 were simply repeating themselves, although in fact such repetition is still
informative. Later Geanakoplos extended this framework to allow the agents to accept or
refuse a generalized bet: eventually one refused, even though for an arbitrarily long time
period they might both be willing to take opposite sides of the bet. This process is
reminiscent of labor strikes, where both parties repeat their demands, until one suddenly
compromises.
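The mechanics of such an exchange of opinions can be simulated directly. In the sketch below two agents share a uniform prior over nine states but hold different information partitions; each in turn announces a posterior probability of the event, and each refines its information by what the other's announcement reveals. The particular state space, partitions, and event are made up for the illustration.

    from fractions import Fraction
    from itertools import count

    def refine(partition, announcement):
        """Split each cell of a partition by the value the announcement
        function takes on its states (what the announcement reveals)."""
        new = []
        for cell in partition:
            groups = {}
            for s in cell:
                groups.setdefault(announcement[s], set()).add(s)
            new.extend(groups.values())
        return new

    def posteriors(partition, prior, event):
        """Posterior probability of `event` given each state's cell."""
        post = {}
        for cell in partition:
            p_cell  = sum(prior[s] for s in cell)
            p_joint = sum(prior[s] for s in cell if s in event)
            for s in cell:
                post[s] = p_joint / p_cell
        return post

    # a hypothetical example: nine equally likely states
    states = range(1, 10)
    prior  = {s: Fraction(1, 9) for s in states}
    event  = {3, 4, 5}                                  # the event A
    P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]              # agent 1's partition
    P2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]              # agent 2's partition
    true_state = 5

    for round_no in count(1):
        q1 = posteriors(P1, prior, event)               # agent 1 announces
        P2 = refine(P2, q1)                             # agent 2 updates
        q2 = posteriors(P2, prior, event)               # agent 2 announces
        P1 = refine(P1, q2)                             # agent 1 updates
        print(f"round {round_no}: opinions {q1[true_state]} and {q2[true_state]}")
        if q1[true_state] == q2[true_state]:
            break                                       # the opinions now agree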
Carolyn Pitchik, who visited the Cowles Foundation during 1980-81, studied the
theory and economic applications of games of timing. In CFDP 579, Pitchik extends the classical existence and uniqueness
results for games of timing, relaxing the zero-sum and differentiability conditions on the
payoff kernels. The classical games of timing were such that equilibria always existed; in
contrast, necessary and sufficient conditions for existence of equilibria in a wider class
of games are exhibited along with a characterization of a subset of any existing set of
equilibria.
The applicability of games of timing to economics is exemplified in the two papers by
Pitchik (joint with Martin J. Osborne of Columbia University): "Equilibria for a
Three-Person Location Problem" (CFDP
628) and "Are Large Firms More Powerful than Small Ones?" (Columbia
discussion paper no. 124, revised version to appear as a CFDP).
In the former paper, they find all equilibria, within a wide class, of the following
simplified three-firm location model of Hotelling (in which the price variable is
ignored), thus filling a gap in the location literature. Three firms produce the same good
with constant unit costs. Each firm selects a point on some line segment at which to
locate. The potential consumers of the good are uniformly distributed on the line segment.
Each consumer buys one unit of the good from the least costly source (where the constant
unit cost of travelling is included). In the latter paper, they study a model of duopoly
in which the firms, which have different capacities, share the profits from collusion
according to the relative potency of their threats of price-cutting. Up to its capacity,
each firm can produce with constant unit costs. The solution concept used (the Nash
variable-threat bargaining solution) predicts that the large firm always earns lower
profits per unit of capacity than the small one. Thus, the model provides an explanation
for the higher profit rate earned by smaller firms in industries where there is implicit
or explicit collusion.
During the last three years, Shubik's research has been on five closely intertwined
topics with major intellectual concerns being the theory of money and financial
institutions and the development of game theory.
The five topics have been 1) the theory of money and financial institutions, 2) the
development of game theory and its applications in general, 3) the applications of
economic analysis and game theory to defense problems, 4) the applications of economic
analysis and game theory to problems in bidding, auctions and in oligopolistic
competition, and 5) the methods and uses of experimental, teaching and operational gaming.
The forging of a basic link between micro and macro economics involves recasting the
Arrow-Debreu general equilibrium model as a game in strategic form. Since 1970, when
Shubik first succeeded in constructing a closed exchange economy as an intrinsically
symmetric game in strategic form, he, together with several colleagues, has elaborated
models of strategic market games demonstrating the need for the invention of various
financial institutions and instruments. In particular, models where commodity money, fiat
money, other types of credit, bankruptcy laws, stocks, bonds, notes, and futures markets
appear have been studied. The models themselves are neither equilibrium nor disequilibrium
models. They are merely well defined and playable games. This means that the rules of the
game have to be sufficiently well described that some method for price formation, for the
functioning of markets, for the trade in bonds, etc., will be operationally specified.
Shubik has been pursuing this approach to money and financial institutions for the last
12 years and expects to continue this work in collaboration with Dubey and other scholars.
Some 25 years ago, Shapley and Shubik commenced work on what originally was to be a
collection of volumes on the theory of games and its applications to political economy.
Shapley has disassociated himself from these books although research collaboration
continues.
In 1981, Shubik completed Volume I of the originally projected work entitled Game
Theory in the Social Sciences: Concepts and Solutions (MIT Press, 1982). Volume II of Game
Theory in the Social Sciences, Political Economy: A Game Theoretic Approach, is now
completed and is in press.
Volume III in the series is planned to cover money and financial institutions. Shubik
expects to complete this work partially in collaboration with Dubey and others in the
course of the next few years.
Shubik has always maintained an interest in defense problems and in the uses and
limitations of mathematics in the study of defense. In particular, he has collaborated
with Robert Weber on the extension of game theoretic analysis to network defense, and they
have also succeeded in linking these results up with some theoretical problems in game
theory concerning the relationship between the value of a cooperative game and the
noncooperative equilibria of a related game in strategic form. Shubik has also been
working with Paul Bracken on problems of nuclear warfare and command and control.
Shubik has continued his interest in the investigation of oligopoly theory and together
with Matthew Sobel, is taking some steps toward the construction of multi-stage models of
the firm with an explicit capital and financial asset structure.
Allied to much of the work noted above, is an ongoing interest in the techniques of
gaming. Some years ago Garry Brewer and Shubik did a major study on the uses of war
gaming. Shubik has recently been concerned with the value of war gaming as a strategic
planning device and possibly even more importantly as an educational device to draw
attention to the extreme dangers of nuclear war.
D. Microeconomics
Nordhaus' recent work has centered on the economics of energy and natural resources. In
Cowles Foundation Monograph No. 26, The
Efficient Use of Energy Resources, Nordhaus addressed the issue of how fast low-cost
energy resources such as oil and gas should be exploited.
Economic theory suggests that efficient use of energy resources entails using cheap
before expensive resources. In addition, each resource will, in a competitive market, have
a "royalty" attached to it. The royalty will be zero for resources that are not
scarce, positive for those that are. For all resources, the royalty will be rising at the
market interest rate.
By working backward from exhaustion, we can determine what an efficient price for oil
or other resources would be. The basic result can easily be seen where there are no
extraction costs, roughly accurate for Mideast oil today. In this case, at the point
when substitution of the next resource (higher-cost oil, coal, etc.) occurs, the price of
Mideast oil and its substitute must be equal. For concreteness, call that year 2020. In
2020, then, the royalty on Mideast oil must equal the cost of the substitute. Since, in an
efficient market, the royalty must rise at the interest rate, the royalty today must be
the discounted value of the royalty in 2020. If the discount rate is 6 percent, and the
substitute costs $40.00, then in 1975 the royalty on Mideast oil, and its efficiency
price, must equal $40.00/(1.06)^45 = $2.90.
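The arithmetic of this illustration is simply a present-value calculation, which the following lines reproduce; the numbers are those of the text, not additional estimates.

    def efficiency_price(substitute_cost, interest_rate, years_to_switch):
        """Today's royalty (and efficiency price) of a zero-extraction-cost
        resource: the substitute's cost discounted back from the switch date."""
        return substitute_cost / (1 + interest_rate) ** years_to_switch

    # a $40 substitute, a 6 percent discount rate, and a switch in 2020
    # as seen from 1975 (45 years earlier)
    print(round(efficiency_price(40.0, 0.06, 45), 2))   # approximately 2.9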
The monograph describes the construction of a model designed to determine the efficient
path for using energy resources under far more realistic assumptions than in the simple
example above. The model has two components. The first component is the "demand"
side of the energy market. It reports the results of a detailed econometric model of
energy demand and then shows how these results can be used in the energy model. The second
component is the technology: estimates of the extent of energy resources, as well as the
costs of extraction and conversion. Alternative models of cost of extraction are briefly
described.
One major spillover from the model construction is the set of estimated energy demand
functions. These rely on a combination of techniques for estimating the
price-responsiveness of energy demanded in the United States and Europe. The basic result
is that energy demand is shown to be moderately elastic with respect to price, with
elasticities in the range of 0.5 to 1.0 depending on the sector, country, and
specification.
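To see what elasticities of this magnitude imply, a constant-elasticity demand curve can serve as a back-of-the-envelope device; the functional form is only an illustrative assumption, not the econometric specification of the monograph.

    def demand_remaining(price_ratio, elasticity):
        """Fraction of initial demand left after a proportional price change,
        under a constant-elasticity demand curve Q = A * P**(-elasticity)."""
        return price_ratio ** (-elasticity)

    for elasticity in (0.5, 1.0):
        share = demand_remaining(2.0, elasticity)
        print(f"elasticity {elasticity}: doubling the price leaves {share:.0%} of demand")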
The most important investigation relates to the estimate of the efficiency price of
oil, given in Chapter 5. Relying on the model, and the (clearly unrealistic) assumption
that the energy market is competitive, Nordhaus estimates that the efficient price of oil
(for the mid-1980s in 1982 prices) is $3.70 per barrel. This compares with a price of
approximately $32 per barrel in 1981 (again in 1982 prices). The reason the calculated
efficiency price is so surprisingly low is that the cost of the next substitute resource
is relatively modest, and the time at which substitution occurs is distant. Extensive
sensitivity analysis in Chapter 5 gives a range of $3.35 to $6.15 per barrel, still
well below the present market price.
To see whether the Organization of Petroleum Exporting Countries (OPEC) is responsible
for the enormous discrepancy between actual and calculated efficiency price, Chapter 1
investigates the theory of monopoly in resource markets. Under limited but plausible
assumptions it is shown that the monopoly price will be set at approximately the
substitute price. In the example above, then, if a monopolist had control of the oil
market, he would set the price at slightly below the substitute (say $39.00), rather than
at the competitive price of $2.90.
The temptation to attribute the rise of the world oil price from 1972 to 1982 to the
effective monopolization of the world oil market by OPEC is reinforced by the result in
Chapter 5 that the market price in the late 1960s and early 1970s was virtually equal to
the calculated efficiency price. Chapter 6 looks more carefully at the empirical support
for this hypothesis, both in the current study and in other studies. Most studies make a
motivational hypothesis that OPEC is interested in maximizing its discounted profits (the
"wealth maximizing monopolist"). The basic result of this and other economic
studies indicates that the wealth-maximizing price for OPEC oil today lies at the top end
of the $17 to $33 per barrel range (in 1982 prices). These studies confirm that the price
rise after 1972 can be traced basically to the virtual monopolization of the international
oil market.
In other work with applications to the pattern of energy exploitation, Geanakoplos
considered the optimal behavior of an oligopolistic corporation with significant positions
in several markets. Geanakoplos showed that an oligopolistic oil producer with access to
two sources of oil, cheap and expensive, should under some circumstances not extract all
the cheap oil before beginning to produce the expensive oil. On the contrary, the
production of cheap oil has a potentially large strategic cost stemming from the increased
aggression of the rival firms in subsequent time periods once the first producer's
"trump cards" have been played. Similarly a subsidy of a firm's output in a
given market will necessarily hurt the firm in another market, if the firm has increasing
marginal costs. Thus a firm that suddenly finds itself able to produce alone in another
market will always lose total profits if that new opportunity is not
extremely favorable.
In CFDP 640, Geanakoplos and
Takatoshi Ito applied general equilibrium methods to examine the consequences of optimal
contracting between a risk averse worker and a risk neutral employer whose labor
requirements are random. If severance pay is allowed and employers know workers' outside
prospects, then the optimal contract will set severance pay so that workers will be
indifferent to whether they are laid off or not, and there will thus be no involuntary
unemployment. However, if the firm is unable to observe the actual outside pay of laid off
workers, then this indifference need not hold. In particular, it is shown that if workers
have decreasing absolute risk aversion, the optimal contract will make the average worker
prefer the uncertainty associated with a lay-off to the certain wage with the firm.
In two recent studies of the evolution of industrial structure, Iwai has developed
models in which firms constantly strive for survival and growth through their innovative
and imitative activities. The principal objective of this research is to illustrate the
Schumpeterian process of "creative destruction."
In CFDP 602, Iwai proposed a simple
stochastic model of industrial structure, which explains how the dynamic processes of
firms' technological innovations and imitations interact with each other and shape the
evolutionary pattern of the industry's state of technology both in the middle- and
long-run. In the middle-run, it was shown that the process of technological imitations
works essentially as an equilibrating force that continually moves the industry towards a
static equilibrium of perfect technological knowledge, whereas the function of
technological innovation lies precisely in upsetting this tendency by forcing the state of
technology to be more diverse. It was also demonstrated that in the long-run a certain
statistical regularity emerges out of this seemingly random pattern of the dynamic
interactions between imitations and innovations, in the sense that the cross-sectional
distribution of technological efficiencies among firms asymptotically approaches a
non-degenerate long-run average distribution. While new technological knowledge constantly
flows into the industry, actual production methods of a majority of firms always lag
behind it, and a multitude of diverse production methods with a wide range of efficiencies
will forever coexist in the industry. Indeed, it is the macroscopic equilibrium of
microscopic (technological) disequilibria that characterizes the "long-run" of
the industry.
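A toy simulation conveys the flavor of these dynamics, though its functional forms and parameters are invented for the illustration and are not those of CFDP 602.

    import random

    random.seed(0)
    N_FIRMS    = 200
    PERIODS    = 500
    P_INNOVATE = 0.01    # chance a firm pushes past current best practice
    P_IMITATE  = 0.10    # chance a firm copies a randomly met firm
    STEP       = 0.05    # size of an innovative improvement (log efficiency)

    efficiency = [0.0] * N_FIRMS        # log efficiency of each firm's method

    for t in range(PERIODS):
        frontier = max(efficiency)
        for i in range(N_FIRMS):
            if random.random() < P_INNOVATE:
                efficiency[i] = frontier + STEP                     # innovation
            elif random.random() < P_IMITATE:
                j = random.randrange(N_FIRMS)
                efficiency[i] = max(efficiency[i], efficiency[j])   # imitation

    # the "equilibrium of disequilibria": methods of many vintages coexist
    spread   = max(efficiency) - min(efficiency)
    laggards = sum(e < max(efficiency) for e in efficiency)
    print(f"spread of efficiencies: {spread:.2f}; firms behind the frontier: {laggards}")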
In CFDP 603, Iwai extended the model
of the first paper to take account of the differential impact of diverse technological
developments among firms on their growth capabilities and the consequent repercussions on
the evolutionary pattern of the state of technology. It was first demonstrated that if
neither innovation nor imitation were possible, only the most efficient firms would
survive in the long-run competitive struggle for limited resources for capacity expansion.
This is exactly what the doctrine of "economic selection" has been telling us.
Once, however, the possibility of technological imitations was allowed in the model, this
doctrine was shown to lose much of its relevance. In this case the most efficient firms
will again monopolize the whole productive capacity of the industry, but the technological
imitation of the efficient firms by the less efficient ones will eventually allow most of
the existing firms to join the ranks of the most efficient. This is more akin to a
Lamarckian mechanism than to a rigid natural selection process. Finally, if the possibility
of occasional technological innovations was also allowed in the model, it was possible to
prove that the processes of capacity growth, imitation and innovation will interact with
each other and work to maintain the relative shape of the industry's state of technology
in a statistically balanced form in the long-run. The blind force of economic selection
working through the differential growth rates among firms with diverse efficiencies is
constantly outwitted by the firms' imitation activities and intermittently disrupted by
their innovative activities.
In any theory of corporate growth, research and development expenditures play a central
role. An examination of the empirical evidence led John Beggs to propose tentatively that
an important component of research and development, and subsequent patenting, occurs in
response to adverse external circumstances in a firm's product market. This so-called
"defensive R&D hypothesis" is supported by two of Beggs' studies, CFDP 588 and NBER Working Paper 952. The
first is an analysis of industry data at the two-digit SIC level considering the joint
time series interaction between R&D and industry profits measured as rate of return on
stockholder equity. The data are for 14 industries for the period 1959 to 1979. The
empirical results indicated that a "shock" to industry profits has an inverse
effect on R&D effort. Increases in the rate of growth of profits slow the rate of
growth of R&D and, conversely. The second study involved data collected for twenty
more narrowly defined industries during the period 1850 through to 1954. Patent data were
used as the indicator of technological change. The explanatory variable of concern was
industry value added. For each industry the rate of change of patenting and the rate of
change of value added were measured relative to the national aggregate rates of change of
patenting and gross domestic product respectively. This was done to remove trade cycle
effects. Again the results showed a negative correlation between the two variables,
supporting the defensive R&D hypothesis.
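The detrending step described above amounts to measuring each industry's growth rates relative to the corresponding aggregate rates before correlating them; the short sketch below shows the computation on made-up numbers that merely stand in for the actual series.

    from statistics import correlation     # available in Python 3.10+

    def growth_rates(series):
        return [(b - a) / a for a, b in zip(series, series[1:])]

    # placeholder series, invented only to show the computation
    industry_patents = [100, 104, 103, 110, 118]
    industry_value   = [50, 53, 52, 55, 60]
    national_patents = [1000, 1030, 1045, 1080, 1120]
    gdp              = [500, 520, 530, 555, 580]

    rel_patents = [i - n for i, n in
                   zip(growth_rates(industry_patents), growth_rates(national_patents))]
    rel_value   = [i - n for i, n in
                   zip(growth_rates(industry_value), growth_rates(gdp))]
    print(round(correlation(rel_patents, rel_value), 2))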
In CFDP 580, Christophe Chamley
investigated the role of capital markets and institutions such as limited liability in
determining the allocation of funds among would-be entrepreneurs. He analyzed these issues
in a simple theoretical model of occupational choice (with entrepreneurs, employees, and
financial institutions). Entrepreneurs may use their liability form as a signal of their
ability. Equilibria (which may be multiple) are inefficient. Moreover, the social value of
the institution of limited liability is ambiguous. In particular a tax on limited
liability firms (corporations are an example) may be desirable when the possibilities of
substitution between occupations are relatively large, and the diseconomies of scale in
each firm are relatively small.
Chamley has also investigated the effects of taxation on the intertemporal allocation
of resources, using a series of aggregate stylized models incorporating rational
expectations of the future price changes induced by tax reform. In CFP 535, Chamley analyzed the efficiency
cost of capital income taxation in this framework. This cost is induced by the price
distortion in the intertemporal allocation of resources. It increases with the elasticity
of substitution between capital and labor in the production function and with the
elasticity of substitution between consumption levels at different dates. The first effect
seems significantly more important than the second. The welfare gain obtained by the
abolition of the capital income tax is smaller when expectations are not rational (for
example, it can be cut in half when expectations are myopic). Also the allocation
efficiency cost of the corporate tax is larger than the intertemporal welfare cost.
Recently, Chamley extended this framework to analyze the optimal combination of taxes
on incomes of capital and labor (or on consumption). One of the results is that the
differential efficiency cost of the capital income tax depends only on the production
technology when the growth rate and the discount rate are identical.
In CFDP 554, the tax rates are
optimized over time together with the government deficit. The optimal formulae for the
second best taxation of different goods consumed in the same period are similar but not
identical to those obtained in standard atemporal models. The difference arises because of
the incidence of tax reform on accumulated assets which are in fixed supply in the
short-run. This effect is the reason for the time inconsistency of optimal policies.
Alvin Klevorick's research on economic theory and antitrust policy has focused on the
pricing behavior of dominant firms and has involved collaborative work with Paul Joskow of
the Massachusetts Institute of Technology and Richard Levin of Yale. In their article,
"A Framework for Analyzing Predatory Pricing Policy," Yale Law Journal,
213 (1979), Joskow and Klevorick develop a decision-theoretic analytical framework that
explains and clarifies how various factors and judgments enter into the formulation of a
policy toward "predatory pricing" a dominant firm's use of price to
restrict competition by driving out existing rivals or excluding potential ones.
Consideration of the links between certain firm and market characteristics, on the one
hand, and the probabilities of error, error costs, and rule-implementation costs, on the
other, leads Joskow and Klevorick to propose that a two-tiered "structuralist"
rule-of-reason approach be applied in cases of alleged predatory pricing. They argue that
such an approach most appropriately accounts for the uncertainty and the costs of making
incorrect decisions that inhere in the formulation of a policy toward pricing behavior. In
the first stage, both the structural characteristics of the market in question and the
market power of the alleged predator firm would be examined to determine if they generate
a reasonable expectation that predatory pricing could occur and would impose significant
economic losses on society. A claim that predatory pricing had taken place could be
pursued only if a reasonable case could be made that there was a serious monopoly problem
in the industry. Only in such instances would one go on to the second stage: a
rule-of-reason inquiry into the behavior of the dominant firm. The substantive content of
the second-tier examination of pricing behavior draws, in an eclectic way, on the insights
suggested by predatory pricing "rules" that had been previously proposed by
other scholars, including Areeda and Turner, Williamson, and Baumol, but it
finds no one rule adequate to the task.
These relatively simple rules that have been proposed to set the boundaries for legal
pricing behavior of dominant firms are the principal focus of the theoretical work by
Klevorick and Levin in "A Welfare Analysis of Pricing Constraints on Dominant
Firms." The paper analyzes the welfare consequences of each of these rules using a
particular parameterization of the now-standard Gaskins model of a dominant firm that
faces a competitive fringe of existing firms and potential entrants. The aim is not to
find the "best" rule for, as Joskow and Klevorick argued, no one rule will
produce optimal results for all market situations. Instead, the objective is to
characterize the different sets of market conditions (demand elasticity, speed of entry
and exit, cost advantage of the dominant firm, social discount rate, etc.) under which
each of the different rules is likely to lead to welfare improvements over the
unconstrained situation.
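A toy, discrete-time version of such a dominant-firm/fringe dynamic is sketched below simply to make the moving parts concrete; the linear demand curve, the entry-response rule, and the two pricing rules compared are all assumptions for the sketch and not the parameterization used by Klevorick and Levin.

    A, B, C = 10.0, 1.0, 2.0      # linear demand D(p) = A - B*p; dominant firm's unit cost C
    P_LIMIT = 4.0                 # price at which fringe entry just stops
    K       = 0.05                # speed of fringe entry or exit

    def discounted_profit(price_rule, periods=200, discount=0.95, fringe=0.0):
        """Dominant firm's discounted profit when fringe capacity responds to price."""
        total = 0.0
        for t in range(periods):
            p = price_rule(fringe)
            residual = max(A - B * p - fringe, 0.0)          # residual demand
            total += (discount ** t) * (p - C) * residual
            fringe = max(fringe + K * (p - P_LIMIT), 0.0)    # entry responds to price
        return total

    def myopic(fringe):           # short-run profit maximum against residual demand
        return (A + B * C - fringe) / (2 * B)

    def limit_price(fringe):      # hold price at the entry-forestalling level
        return P_LIMIT

    print("myopic ('umbrella') pricing:", round(discounted_profit(myopic), 1))
    print("dynamic limit pricing      :", round(discounted_profit(limit_price), 1))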
Klevorick and Levin find that if an industry's initial conditions are such as to induce
an unconstrained dominant firm to permit entry (in the sense of Gaskins), none of the
rules offered by Areeda and Turner, Williamson, and Baumol will affect behavior.
Furthermore, under these conditions, a laissez-faire approach dominates a rule that
proscribes the sacrifice of short-term profits. A specific implication of this result is
that the discounted total surplus generated by Gaskins-style dynamic limit pricing is
larger than that generated by short-run profit-maximizing behavior ("umbrella
pricing"). It is shown that when the initial conditions in an industry are such as to
induce an intertemporally profit-maximizing dominant firm to drive competitors from the
market, the Baumol rule will always impose a binding constraint, and the
Areeda-Turner rule will be binding under some conditions. It is interesting, though,
that neither the Baumol nor the Areeda-Turner rule unambiguously produces increased
welfare. Moreover, neither of the two rules dominates the other. The paper characterizes
the specific conditions that favor application of the Areeda-Turner rule, the Baumol
rule, or a policy of non-intervention.
Klevorick has also continued his research on mathematical models of jury decision
making. His most recent work has focused on one of the virtues claimed for a jury
decision, namely, that such a decision is based on more complete and better processing of the
information available than the verdict of any one juror deciding alone would be. During
the deliberation process, the argument goes, jurors exchange points of view and assemble
the evidence into a coherent picture that is more likely to be correct than is the view of
any one juror. Given the central and valuable role attributed to the information
processing that jury deliberation is supposed to achieve, it is striking that previous
models of the jury decision process, including both abstract mathematical
formulations and simulation models, are inattentive to this aspect of the jury's
work. Although these models depict how a jury might move to a verdict from the initial
views of its members, they do not provide any description or specification of how the
jurors' views are combined or how their various observations and insights are assimilated.
In a paper entitled "Information Processing and Jury Decisionmaking,"
Klevorick, together with Michael Rothschild of the University of Wisconsin-Madison and
Christopher Winship of Northwestern University, use a formal model to explore the
information processing function that jury deliberation performs. In particular, they
investigate when a jury that deliberates to a unanimous verdict can reach better
decisions, ones with lower probabilities of error, than a jury that bases its decision
on the view of the majority of its members immediately after the trial is concluded. What
is the gap between the quality of decisions reached using a first-ballot, majority-rule
procedure and the quality of those that would be generated by a jury making optimal use of
the information provided at the trial? The authors' strategy is to consider a simple model
of juror observations, though one that is richer than characterizations in the literature,
for which they can define precisely what an optimal jury decision rule would be. It is
assumed that jurors' observations are correlated normal random variables or, equivalently,
it is supposed that each juror receives the same information from the trial but different
jurors make independent errors of observation. With this specification, the jury faces a
problem in discriminant analysis, and Klevorick, Rothschild, and Winship use that
statistical theory to discuss the optimal jury decision rule. The paper shows that
deliberation has the potential for generating substantial improvement in the quality of
decisions, and demonstrates how that potential arises, especially the central role that
heterogeneity among jurors (in terms of what they see and hear, what they believe
about the costs of erroneous decisions, and what differences there are in their
information processing capacities) plays in determining how much improvement is
possible. Of particular interest is the fact that if jurors differ in what they see and
hear at the trial but share the same view of the relative cost of erroneous convictions
and erroneous acquittals and have the same individual abilities to process information,
then for large juries both the first-ballot majority rule procedure and optimal
deliberation yield results on the efficiency frontier but the former fails to produce the
socially optimal mix of false acquittals and false convictions.
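A small Monte Carlo sketch of this observational setup, with illustrative parameters that are not those of the paper, shows how the two procedures can be compared: each juror sees the trial evidence plus an independent personal error, and the error rates of a first-ballot majority vote and of a rule that pools all the observations are estimated by simulation.

    import random
    from statistics import mean

    random.seed(1)
    JURY_SIZE  = 12
    TRIALS     = 20_000
    SIGNAL     = 1.0    # mean strength of the evidence: +SIGNAL if guilty, -SIGNAL if not
    COMMON_SD  = 0.8    # noise in the trial evidence itself, shared by all jurors
    PRIVATE_SD = 1.5    # each juror's independent error of observation

    def one_trial(guilty):
        evidence = (SIGNAL if guilty else -SIGNAL) + random.gauss(0, COMMON_SD)
        views = [evidence + random.gauss(0, PRIVATE_SD) for _ in range(JURY_SIZE)]
        majority = sum(v > 0 for v in views) * 2 > JURY_SIZE    # first-ballot majority
        pooled   = mean(views) > 0   # pool the views (optimal under these symmetric assumptions)
        return majority, pooled

    errors_majority = errors_pooled = 0
    for _ in range(TRIALS):
        guilty = random.random() < 0.5
        majority, pooled = one_trial(guilty)
        errors_majority += (majority != guilty)
        errors_pooled   += (pooled != guilty)

    print(f"error rate, first-ballot majority rule: {errors_majority / TRIALS:.3f}")
    print(f"error rate, pooled observations       : {errors_pooled / TRIALS:.3f}")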
Klevorick has been engaged in research on the economics of mental health, and he has
focused, in particular, on the effects of regulation on the delivery of mental health
services. He and Thomas McGuire of Boston University are formulating and analyzing
theoretical models of the demand for and supply of mental health services that take
explicit account of the extent of insurance coverage for such services, the multiplicity
of types of providers, and the impact of direct government regulation on the delivery
system. This theoretical work will form the basis for an empirical analysis using a short
term series of cross-section observations.
In his paper, "Regulation and Cost Containment in the Delivery of Mental Health
Services," Klevorick considers the various forms in which regulations appear in the
mental health services area. After discussing a cost-benefit specification of the
regulatory goal, he focuses on the importance and complexity of substitution relationships
in the mental health area and the impact of these relationships on regulation.
Gerald Kramer's research on the theory of electoral systems continued in the period of
this report. His recent paper on existence of electoral equilibrium extended the theory of
electoral competition by establishing the existence of candidate equilibria under quite
general assumptions, in particular concerning the dimensionality of the underlying policy
or issue space over which the candidates compete. In the classical Hotelling-Downs
case this space is one-dimensional, and under the usual assumptions on voter and candidate
preferences, there exists a pure strategy equilibrium in which both candidates adopt
policies at the median of the voter policy-preference distribution. It is well known,
however, that when the policy space is of greater dimensionality, pure strategy equilibria
generally will not exist. Thus analysis of electoral competition in the general and
substantively important multidimensional case must be based either on explicit hypotheses
about disequilibrium behavior, or else on a more general equilibrium concept, such as that
involving the use of mixed strategies. In the article, "Existence of Electoral
Equilibrium," Kramer focuses on the second of these possibilities, characterizing
candidate behavior in terms of an equilibrium in the domain of mixed strategies.
In a paper on electoral stability and the dynamics of electoral competition, Kramer is
concerned with stability properties of a class of competitive electoral processes. The
basic structure is one in which two political parties compete for votes over a
multidimensional space Rk of issues or policies, by adopting specific electoral
platforms, or positions in the issue space; each is interested in maximizing its electoral
prospects by finding a platform which is preferred by as many voters as possible. If this
electoral contest is modeled as a symmetric two-player game, a pure strategy equilibrium
of the classical Hotelling-Downs variety almost never exists (if k > 1). Moreover,
this formulation ignores an obvious asymmetry in the roles of the incumbent party and its
opponent, since an incumbent's choice of platform is heavily constrained by the need to
defend its record in office.
This asymmetry suggests a natural dynamic reformulation of the electoral process: the
two parties are assumed to compete repeatedly, over an infinite sequence of elections. In
each election the party whose platform is preferred by a majority is elected; it is
assumed to then enact the policy package it advocated, and to defend this same policy in
the next election. The "out" party may adopt any policy it wishes, to maximize
its prospects. In general, the incumbent's policy will always be defeated, and the two
parties will alternate in office. As the process is repeated over time, a sequence of
successively-enacted "winning" policies is thus generated. The analysis focuses
on the long-run behavior of these sequences, or trajectories, of policies.
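The alternating-incumbency dynamic can be simulated in a few lines; in the toy version below voters have quadratic (Euclidean) preferences over a two-dimensional issue space, and at each election the vote-maximizing reply to the enacted policy is itself enacted. The preference specification, the grid search, and all parameters are assumptions made only for this sketch.

    import random

    random.seed(2)
    VOTERS = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(101)]

    def votes_for(challenger, incumbent):
        """Number of voters strictly closer to the challenger's platform."""
        def dist2(platform, v):
            return (platform[0] - v[0]) ** 2 + (platform[1] - v[1]) ** 2
        return sum(dist2(challenger, v) < dist2(incumbent, v) for v in VOTERS)

    def best_reply(incumbent, step=0.2, radius=2.0):
        """Grid search for a vote-maximizing platform against the enacted policy."""
        ticks = [-radius + i * step for i in range(int(2 * radius / step) + 1)]
        grid = [(x, y) for x in ticks for y in ticks]
        return max(grid, key=lambda platform: votes_for(platform, incumbent))

    policy = (1.5, -1.5)                   # an arbitrary initial enacted policy
    for election in range(10):
        policy = best_reply(policy)        # the winner's platform is enacted next
        print(f"election {election + 1}: enacted policy ({policy[0]:.2f}, {policy[1]:.2f})")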
Voter political preferences are assumed to be representable by satiated, additive
utility functions with parameters a_1, ..., a_l, of the form

    u_a(x) = f_0(x) + a_1 f_1(x) + ... + a_l f_l(x)

(satisfying standard concavity and smoothness conditions). Alternatively, if we think of x
as a vector of arbitrary characteristics or attributes of parties or candidates, voter
political attitudes toward these objects are assumed to have a common underlying
structure, which reflects a shared set f_0, ..., f_l of underlying
criteria or factors (specific factors may be weighted very differently, or not at all, by
different sectors of the electorate). With this structure, a society or electorate is then
characterized as a probability distribution on R^l, the parameter space.
These assumptions imply the existence of a set S of policies in R^k,
typically a small, proper subset of the Pareto optimal set, which is a dynamic equilibrium
for the system in any of several senses. S contains all the equilibrium points, or
possible steady states, of the system, and is also a region of recurrence for any
trajectory. S is also asymptotically stable, in the following sense: there exists a
neighborhood N_δ(S) of S such that any trajectory must eventually enter and
thereafter remain inside N_δ(S); moreover, δ tends to zero as the
electorate increases in size.
Thus, in large electorates, any vote-maximizing trajectory will eventually be found in
or near the set S.
The size of S in some sense reflects the degree or extent of possible instabilities in
the system. In general this will depend on the distribution of voter preferences; if
l < k, however, sharper and essentially distribution-free results can be obtained. In
this case S coincides with a particular set which has been studied in the voting and
social choice literature, the minmax set. The minmax set is typically quite small in large
electorates. Moreover, as the size of the electorate increases in the manner described
above the minmax set collapses to a point. Thus in large electorates satisfying these
conditions, competitive vote maximization will eventually draw both parties into the
immediate vicinity of the unique minmax point, and two-party electoral competition will be
very stable.
The minmax set is also a natural extension of the Condorcet criterion for social choice
under majoritarian democratic rule. Hence from a normative point of view, political
competition under these conditions leads not simply to stability alone, but also to the
attainment of a democratic social optimum.
The work on election theory outlined above shares with most recent work the assumption
that political parties compete over a multidimensional space of issues or policy
variables. In his paper, "Electoral Politics in the Zero-Sum Society," Kramer
develops a theory of competition under an alternative structure, in which candidates
compete by directly offering particular benefits and services to voters. The analysis
presumes an asymmetry in the roles of incumbent and challenger, in that the former
necessarily commits himself to an allocation first, by his actions in office, thereby
presenting the challenger with a fixed target to optimize against. Voters tend to discount
the challenger's promises to some degree in comparing them to the benefits currently being
received under the incumbent, and cast their votes so as to maximize the level of benefits
received. Under these circumstances, Kramer establishes that challengers optimally pursue
a "divide and conquer" strategy of bidding for a minimum winning coalition of
voters. Incumbents, by contrast, pursue a more even-handed strategy, attempting to appeal
to all their constituents. The model thus predicts distinctive differences in the behavior
of challengers and incumbents, with no tendency for the candidates to converge on a common
strategy or position, as in the classical Downsian case.
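As an illustrative aside, the logic of the challenger's "divide and conquer" strategy can
be sketched in a few lines under stylized assumptions of my own (a fixed unit budget and
promises discounted by a factor delta), which are not Kramer's exact formulation.

    # Stylized sketch: an incumbent has committed an allocation of a unit budget;
    # voters value a challenger's promise c_i at only delta * c_i.  The challenger
    # outbids the cheapest bare majority and offers nothing to everyone else.
    import numpy as np

    rng = np.random.default_rng(1)
    n, delta = 11, 0.8
    incumbent = rng.dirichlet(np.ones(n))         # incumbent's roughly even-handed allocation

    coalition = np.argsort(incumbent)[: n // 2 + 1]               # cheapest minimum winning coalition
    challenger = np.zeros(n)
    challenger[coalition] = incumbent[coalition] / delta + 1e-9   # just enough, after discounting

    print("coalition bought:", coalition)
    print("budget needed:", round(challenger.sum(), 3), "out of 1.0")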
In the paper, an issue is defined as a measure which, if taken, would generate a fixed
distribution of benefits and costs, and on which each candidate must take a position.
Kramer obtains a simple classification of issues according to their electoral
consequences, and shows that one category of issues, which he labels the
"controversial" issues, is strategically important. The existence of a
controversial issue invariably works to the disadvantage of the incumbent; hence he always
has an incentive to suppress or remove it from the electoral arena altogether, if he can.
If he cannot, it will then be optimal for the incumbent to favor the issue if and only if
it is one which produces a (positive) net social benefit. Even with this optimal position,
however, under general conditions the incumbent will nevertheless be defeated, by a
challenger who opposed the issue and who will therefore not enact it, even though it would
be socially optimal to do so. These results thus support the doubts expressed by Thurow
and others about the ability of a competitive democratic system to deal
effectively with major issues when distributional considerations become politically
important. They also imply, however, that Thurow's proposed reforms, to strengthen party
responsibility, would not help, since the problem lies in the nature of the competitive
process itself.
In his paper, "The Ecological Fallacy Revisited: Aggregate- versus
Individual-Level Findings on Economics and Elections," Kramer seeks to explain a
puzzling feature of voting behavior. Several aggregate-level studies have found a
relationship between macroeconomic conditions and election outcomes, operating in
intuitively plausible directions. More recent survey-based studies, however, have been
unable to detect any comparable relationship operating at the individual-voter level. One
recently proposed explanation for this persistent discrepancy is that voters actually
behave in an altruistic or "sociotropic" fashion, responding to economic events
only as they affect the general welfare, rather than in terms of self-interested
"pocket-book" considerations.
Kramer argues that the discrepancies between the macro- and micro-level studies are a
statistical artifact, arising from the fact that observable changes in individual welfare
actually consist of two unobservable components, a government-induced (and politically
relevant) component, and an exogenous component caused by life-cycle and other politically
irrelevant factors. He shows that because of this, individual-level cross-sectional
estimates of the effects of welfare changes on voting are badly biased and are essentially
unrelated to the true values of the behavioral parameters of interest: they will generally
be considerable underestimates, and may even be of the wrong sign. An aggregate-level
time-series analysis, on the other hand, will often yield reasonably good (if somewhat
attenuated) estimates of the underlying individual-level effects of interest. Thus, in
this case, individual behavior is best investigated with aggregate- rather than
individual-level data.
It is also shown that the evidence for sociotropic voting is artifactual, in the sense
that the various findings and evidence which ostensibly show sociotropic behavior are all
perfectly compatible with the null hypothesis of self-interested "pocketbook"
voting.
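The statistical argument lends itself to a small simulation; the sketch below is my own
stylization, not Kramer's data or code. Each voter's observed income change is a common,
government-induced component plus a larger idiosyncratic, politically irrelevant component,
and voting responds only to the former; the within-period cross-sectional slope is then
close to zero, while the aggregate time-series slope is close to the true effect.

    # Hedged simulation sketch of the cross-section vs. time-series contrast.
    import numpy as np

    rng = np.random.default_rng(2)
    T, N, beta = 30, 2000, 1.0                    # periods, voters per period, true effect

    g = rng.normal(size=T)                        # government-induced component (common in t)
    e = rng.normal(scale=3.0, size=(T, N))        # exogenous life-cycle component
    observed = g[:, None] + e                     # welfare change a survey can measure
    vote = beta * g[:, None] + rng.normal(size=(T, N))   # individual vote propensity

    def slope(x, y):
        x, y = x - x.mean(), y - y.mean()
        return (x @ y) / (x @ x)

    print("cross-section slope:", round(slope(observed[0], vote[0]), 2))             # near 0
    print("time-series slope  :", round(slope(observed.mean(1), vote.mean(1)), 2))   # near 1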
In other works combining economic and political considerations, Donald Richter studied
the existence of equilibrium in the context of a Walrasian economy which includes a finite
number of local governmental jurisdictions and a continuum of perfectly mobile consumers
whose jurisdictions of residence are endogenously determined. In equilibrium, each
jurisdiction finances its own provision of public goods using a proportional endowment
income tax, no other affordable tax-expenditure package for the jurisdiction exists which
all its residents prefer, and no consumer wants to move.
John Roemer, who was a visitor at the Cowles Foundation for the 1979–80 academic
year, continued his construction of a general theory of exploitation. This general theory
is intended to clarify the differences between Marxian and non-Marxian thinkers in the
kinds of relationships they view as exploitative. In constructing this theory, it was
necessary to remove the obsolete labor theory of value from its traditionally central role
in models of exploitation.
The general theory of exploitation which Roemer has developed uses concepts of property
relations, not labor value, to characterize exploitation. Some simple cooperative game
theory was used to specify exploitation. Exploitation is defined at a given allocation
with respect to some conception of an alternative, specified by the characteristic
function of some game. How one chooses the characteristic function determines one's
ethical preferences. Roemer then showed how characteristic functions could be constructed
which characterized notions of feudal, capitalist, socialist, status and neoclassical
exploitation. Capitalist and Marxian exploitation are equivalent in simple models where
both can be defined. In each case, the characteristic function was generated by imagining
a change in underlying property relations. As an immediate consequence of the definition,
and if certain simple assumptions hold (super-additivity of the characteristic function
and Pareto-optimality of the initial allocation), the non-exploitative allocations with
respect to a given theory of exploitation consist precisely of the core of the game in
question. As a corollary to this approach, Roger Howe and Roemer developed as well an
application of the method to Rawlsian justice, constructing the game whose core consists
precisely of the Rawlsian-just allocations.
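A schematic sketch may help fix ideas; the withdrawal rule and the production function
below are hypothetical simplifications of my own, not Roemer's formal model. A coalition is
counted as exploited at an allocation if it receives less than the value it could secure by
withdrawing with its per-capita share of society's alienable assets; the non-exploitative
allocations are then the core of the game so defined.

    # Schematic sketch of exploitation via a characteristic function and the core.
    from itertools import combinations

    agents = [0, 1, 2]
    total_assets = 3.0                            # alienable assets, split per capita on withdrawal
    labor = {0: 1.0, 1: 1.0, 2: 1.0}

    def produce(assets, total_labor):
        # hypothetical production function needing both assets and labor
        return 2.0 * min(assets, total_labor)

    def v(coalition):
        # value a withdrawing coalition could produce with its per-capita asset share
        share = total_assets * len(coalition) / len(agents)
        return produce(share, sum(labor[i] for i in coalition))

    allocation = {0: 1.0, 1: 1.0, 2: 4.0}         # agent 2 receives most of the product
    exploited = [c for r in (1, 2) for c in combinations(agents, r)
                 if sum(allocation[i] for i in c) < v(c)]
    print("exploited coalitions:", exploited)     # here {0}, {1} and {0, 1} are exploited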
In addition to the theory of exploitation, an endogenous theory of classes was
developed. Both class and exploitation status of agents in a general equilibrium model
emerge endogenously, as a consequence of agents optimizing against constraints which
specify their wealth. Agents thus sort themselves into classes through their own optimizing behavior.
They also end up being exploited or exploiting (or neither), and the important theorem
relates these two characteristics of agents, and is called the Class Exploitation
Correspondence Theorem.
The theorem states that an agent who optimizes by selling labor power is necessarily
capitalistically exploited, and an agent who optimizes by hiring labor power is
necessarily an exploiter. The theorem provides microfoundations for ideas of class which have
heretofore been only macro concepts: one need not define an agent as belonging to a
certain class; rather, class membership emerges from economic behavior derived from the
initial asset positions of agents.
Roemer's research at Cowles was presented in CFDP 543, CFDP 544
and CFDP 545, while a more general summary of his research is contained in his book, A
General Theory of Exploitation and Class (Harvard University Press, 1982).
E. Econometrics
A major and growing concern of econometric research in recent years has been the
effective pooling of cross section and time series economic data. This topic has been the
subject of theoretical and applied research by John J. Beggs. In CFDP 633 (never
completed) Beggs extends modern methods of time series analysis to the efficient
aggregation of contemporaneous time series processes which have the same
autoregressive-moving average (ARMA) form. Important applications include the analysis of
stock prices of corporations trading in the same or similar markets, overlapping
commodities futures contracts, time series on major economic aggregates, such as
unemployment, drawn from each of the 50 States individually, etc. Using an error
components structure, an efficient method of pooling such contemporaneous time series can
be developed which makes use of a spectral decomposition of each series. As the time
series sample becomes large the periodogram ordinates are asymptotically uncorrelated
across frequencies, so that the only remaining correlation is among the cross-sectional
replications at each frequency. Optimally weighted, the periodogram ordinates at each
frequency are chi-squared distributed with degrees of freedom equal to twice the number of
available cross-sectional replications. This effectively removes the need for smoothing
the periodogram through averaging adjacent periodogram ordinates, and hence removes the
bias which this practice introduces in the formulation of the underlying ARMA model.
Simulation, Monte Carlo, and actual empirical application all indicate that the proposed
procedure performs exceedingly well, even in situations with time series samples as short
as twenty-five periods. CFDP 646 (never distributed) successfully applies
these techniques to the study of commodities futures prices.
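As an illustrative aside, the basic frequency-domain idea can be sketched as follows; the
optimal weighting and error-components structure of the actual estimator are not reproduced
here, and the AR(1) design is a hypothetical stand-in for series sharing a common ARMA form.

    # Sketch: pool periodogram ordinates across cross-sectional replications at each
    # frequency, instead of smoothing across adjacent frequencies within one series.
    import numpy as np

    rng = np.random.default_rng(3)
    N, T, phi = 50, 25, 0.7                       # replications, short series, AR(1) parameter

    x = np.zeros((N, T))
    for t in range(1, T):
        x[:, t] = phi * x[:, t - 1] + rng.normal(size=N)

    fft = np.fft.rfft(x - x.mean(axis=1, keepdims=True), axis=1)
    periodogram = np.abs(fft) ** 2 / (2 * np.pi * T)
    pooled = periodogram[:, 1:].mean(axis=0)      # average over the cross-section

    freqs = 2 * np.pi * np.arange(1, periodogram.shape[1]) / T
    spectrum = 1.0 / (2 * np.pi * (1 + phi ** 2 - 2 * phi * np.cos(freqs)))
    print(np.round(pooled[:3], 2), np.round(spectrum[:3], 2))  # pooled estimate vs. true spectrum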
The theory of nonlinear regression has been another topic of substantial interest in
recent years in econometrics. In CFDP 573
and CFDP 549 P.C.B. Phillips takes
issue with recent theoretical work on the nonlinear simultaneous equations model. Much of
the latter work has emphasized the importance of the normality assumption and, more
generally, correct distributional assumptions about the equation errors in establishing
the consistency of the non-linear full information maximum likelihood (FIML) estimator. In
this respect, the general non-linear model appears to be very different from the linear
simultaneous equations model, where it is known that the consistency of FIML based on the
hypothesis of normally distributed errors is maintained for a wide class of alternative
error distributions. One aim of CFDP 573
is to provide examples which show that normality is not necessary for the consistency of
non-linear FIML even when there are major non-linearities in the structural functions; and
the analysis suggests a general procedure for constructing non-normal error distributions
for which the consistency of non-linear FIML is maintained. The analysis also demonstrates
the intimate relationship that exists between the form of the non-linear functions
admitted into the structural specification of the model and the tail behavior of the error
distribution which is permissible if an asymptotic theory is to be developed. An
additional aim of the paper is to prove a possibility theorem which demonstrates that when
non-linear FIML is consistent under normality, it is always possible to find non-normal
error distributions for which the consistency of non-linear FIML is maintained. The
procedure that is developed for finding a class of error distributions which preserve the
consistency of non-linear FIML can be applied more generally and will be useful in other
contexts. Many additional problems associated with the asymptotic properties of non-linear
FIML under alternative, plausible error distributions arise in this discussion and some of
these are currently the subject of a continuing investigation.
Another topic of theoretical research by Phillips concerns the exact finite sample
distributions of econometric statistics. In "The Exact Finite Sample Density of
Instrumental Variables Estimators in an Equation with n + 1 Endogenous Variables" the
exact finite sample density function of the instrumental variable estimator was found for
the most general single equation case. CFDP 609 extends
some of this work to extract the marginal densities of individual coefficients and
presents graphical analyses which illustrate the effect of various parameter changes on
the sampling distributions. These computations indicate, amongst other things, how the
sampling distributions concentrate more slowly as the sample size increases when the number
of endogenous variables in the equation grows.
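For a feel of the finite-sample issues involved, a crude Monte Carlo of my own can be run
in a few lines; it illustrates how the sampling distribution of a single-equation
instrumental variable estimator spreads out as the instrument weakens, rather than
reproducing Phillips's exact analytic densities or the effect of additional endogenous
variables.

    # Monte Carlo sketch of the finite-sample spread of a single-equation IV estimator.
    import numpy as np

    rng = np.random.default_rng(4)
    T, reps, beta = 20, 5000, 1.0                 # sample size, replications, true coefficient

    def iv_estimates(pi):
        z = rng.normal(size=(reps, T))            # one instrument
        u = rng.normal(size=(reps, T))            # structural error
        v = 0.5 * u + rng.normal(scale=0.75 ** 0.5, size=(reps, T))   # correlated reduced-form error
        x = pi * z + v                            # endogenous regressor
        y = beta * x + u
        return (z * y).sum(axis=1) / (z * x).sum(axis=1)

    for pi in (1.0, 0.2):                         # strong vs. weak instrument
        q = np.percentile(iv_estimates(pi), [5, 50, 95])
        print("pi =", pi, " 5/50/95 percentiles:", np.round(q, 2))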
CFDP 621 surveys the literature in
the field of exact finite sample theory for simultaneous systems. This review covers
methods of derivation, derives useful generic formulae for estimator forms and considers
results that relate to both structural and reduced forms. A new line of approach to the
distribution of the limited information maximum likelihood (LIML) estimator is suggested
in CFDP 621 and is systematically
explored in the general single equation case in CFDP 626. The latter paper shows that for a leading overidentified
case, LIML has a multivariate Cauchy distribution.
Methods of approximating the complex analytic forms of many small sample distributions
in econometrics have formed the basis of several additional papers by Phillips ("A
Saddlepoint Approximation to the Distribution of the k-Class Estimator of a Coefficient in
a Simultaneous System," with A. Holly, "Finite Sample Theory and the
Distributions of Alternative Estimators of the Marginal Propensity to Consume," CFDP 562, CFDP 608, CFDP 609).
Certain of these (the first two papers in particular) have concentrated on traditional
methods based on asymptotic expansions; evaluations of the adequacy of those methods, in
these and other papers, have shown the need for improvements. The purpose of CFDP 562 and CFDP 608 is to suggest a new approach to
small sample theory that allows for a convenient integration of analytical, experimental
and purely numerical directions of research. The approach centers on a flexible technique
of approximating distributions which can accommodate information from sources as diverse
as the following: (i) exact analytical knowledge concerning the distribution, its moments
or its tail behavior; (ii) alternative approximations based on crude asymptotic theory or
more refined asymptotic series; (iii) purely numerical data arising perhaps from numerical
integrations of moments or at certain isolated points in the distribution; or even (iv)
soft quantitative information of the Monte Carlo variety. The first part of CFDP 608 by Phillips explores the
properties of rational function approximants which are best according to the uniform norm
for a general class of probability density functions. Characterization, uniqueness and
convergence theorems for these approximants are given. In the second part of the article,
an operational procedure for extracting rational approximants with good global behavior is
devised. It involves modifications to multiple-point Padé approximants which will
typically utilize purely local information about the behavior of the body and the tails of
the distribution. The new procedure is applied to a simple simultaneous equation estimator
and gives exceptionally accurate results even for tiny values of the concentration
parameter. Extensions to this work are currently under way.
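As a rough numerical illustration only (it is not the modified multiple-point Padé
procedure of the paper), one can fit a low-order rational function to grid values of a
known density by linearized least squares and inspect the global error.

    # Crude rational approximation of a density: f(x) ~ (a0 + a1 x^2) / (1 + b1 x^2 + b2 x^4).
    import numpy as np

    x = np.linspace(-4.0, 4.0, 161)
    f = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # hypothetical target: standard normal density

    # Linearize: a0 + a1 x^2 - f * (b1 x^2 + b2 x^4) = f, then solve by least squares.
    A = np.column_stack([np.ones_like(x), x ** 2, -f * x ** 2, -f * x ** 4])
    a0, a1, b1, b2 = np.linalg.lstsq(A, f, rcond=None)[0]

    approx = (a0 + a1 * x ** 2) / (1 + b1 * x ** 2 + b2 * x ** 4)
    print("maximum absolute error on the grid:", float(np.abs(approx - f).max()))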
In CFP 560 Phillips shows that the formula given in the literature for the characteristic
function of the F distribution was incorrect and implied the existence of all moments,
which the F distribution does not possess. Correct formulae for the central and
non-central cases were derived. CFP 546
illustrates a simple method of finding the latent root sensitivity formulae for a matrix. CFDP 567 proves a general theorem on tail
expansions for densities given the asymptotic behavior of the characteristic function in
the locality of the origin. The resulting formula is applied to the stable densities. In
CFDP 568 (written jointly with E. Maasoumi), some errors in the literature on the
asymptotic theory of instrumental variable estimators in dynamic models are corrected. The
paper also considers the adequacy of presently used design methods in experimental work of
the Monte Carlo variety in econometrics and makes certain recommendations about the way in
which such work is conducted and reported.
During 1980, Taylor and Fair initiated research on the estimation and solution of
nonlinear rational expectations models. Most econometric applications of rational
expectations techniques have concentrated on linear models. The reason for this
concentration is largely pragmatic: econometric analysis of nonlinear models has been
cumbersome and frequently intractable. However, nonlinearities arise naturally in many
macroeconometric applications. Models incorporating the government budget constraint in a
rational expectations framework, exchange rate models which emphasize the constraint that
the current account surplus is equal to the rate of increase in claims on the rest of the
world, and adding-up constraints or portfolio risk factors in financial models are some
examples. Risk modelling brings in conditional second order moments of the underlying
distribution of the endogenous variables, thereby giving rise to nonlinearities. Research
on this topic is reported in CFDP 564
by Fair and Taylor. The paper develops tractable procedures for extending the rational
expectations approach to nonlinear systems, thereby broadening the range of problems which
can be handled using this approach. A general class of nonlinear models, which included
nonlinearities in variables, parameters, and expectations but with additive disturbances,
was considered in this research. The basic approach is an extension from the linear case.
In the linear case a reduced form version of a rational expectations model can be
calculated explicitly, and this form can be used to calculate the likelihood function
directly. In the nonlinear case it is not feasible to calculate the reduced form
explicitly, but one can use numerical simulation rather than direct computation.
Experimentation with the simulation technique in nonlinear models indicates that the
approach is feasible even though it is likely to be dominated by more direct approaches in
linear models. The experiments focused either on the simulation of large nonlinear systems
or on the full estimation of small linear systems.
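A toy example (my own, with a made-up model, not the Fair-Taylor system itself) conveys the
flavor of solving by simulation rather than by an explicit reduced form: guess the path of
expected future values, re-solve the nonlinear model given that guess, and repeat until the
path converges.

    # Iterative solution of a toy nonlinear perfect-foresight model:
    #     y_t = a * sqrt(y_{t+1}) + b * x_t,  with y_T fixed by a terminal condition.
    import numpy as np

    a, b, T = 0.9, 1.0, 40
    x = 1.0 + 0.1 * np.sin(0.3 * np.arange(T))    # hypothetical exogenous path
    y = np.ones(T + 1)                            # initial guess; y[T] is the terminal value

    for _ in range(500):
        y_new = y.copy()
        y_new[:T] = a * np.sqrt(y[1:]) + b * x    # solve each period given guessed future values
        if np.max(np.abs(y_new - y)) < 1e-10:
            break
        y = y_new

    print("converged path, first five periods:", np.round(y[:5], 4))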
SEMINARS
In addition to periodic Cowles Foundation staff meetings, at which members of the staff
discuss research in progress or nearing completion, the Foundation also sponsors a series
of Cowles Foundation Seminars conducted occasionally by staff but most frequently by
colleagues from other universities or from elsewhere at Yale. These speakers usually
discuss recent results of their research on quantitative subjects and methods. All
interested members of the Yale community are invited to these Cowles Foundation
Seminars, which are frequently addressed to the general economist, including
interested graduate students. The following seminars were held during the past three years.
1979

- October 12: DONALD RICHTER, Yale, "A Computational Approach to Resource Allocation in Spatial Urban Models"
- October 19: JOHN ROEMER, Yale and University of California, Davis, "Origins of Exploitation and Class: Value Theory of Pre-Capitalist Economies"
- November 9: GEOFFREY HEAL, Columbia, "Necessary and Sufficient Conditions for a Resolution of the Social Choice Paradox"
- November 14: OLIVER HART, Cambridge, "A Model of Imperfect Competition with Keynesian Features"
- November 16: WILLIAM SAMUELSON, Boston University, "The Simple Economics of Bargaining"
- November 28: GYORGY SZAKOLCZAI, University of Texas, "Hungarian Price Reform of 1980"
- November 30: JOHN BEGGS, Yale, "Pooling Cross Sections in Time Series Analysis"
- December 7: ANDREW POSTLEWAITE, Princeton, "Strategic Behavior and a Notion of Ex Ante Efficiency in a Voting Model"
1980

- March 7: GREGORY CHOW, Princeton, "Estimation of Econometric Models with Rational Expectations"
- March 14: VOLKER BOHM, University of Mannheim, "A Simple Macroeconomic Disequilibrium Model"
- April 11: JOHN B. TAYLOR, Yale, "Measuring the Real Effects of Disinflation with Rational Expectations and Labor Contracts"
- April 18: SCOTT BOORMAN, Yale, "Estimating Unreported Income"
- April 22: PAUL MILGROM, Northwestern, "The Equilibrium Limit Pricing Doesn't Limit Entry"
- April 23: GLENN LOURY, University of Michigan, "A Theory of 'Oligopoly'"
- April 25: DONALD BROWN, Cowles, "The Rate of Interest in a Perfect Loan Market"
- May 14: ALBERTO HOLLY, Ecole Polytechnique and University of Lausanne, "The LR Test, the Wald Test and Kuhn-Tucker Test in Non-linear Models with Inequality Constraints"
- May 16: ANGUS DEATON, Princeton, "Labor Supply, Commodity Demand and Rationing"
- May 19: HIROFUMI UZAWA, University of Tokyo, "Disequilibrium Analysis and Keynes' General Theory"
- June 11: LAURENCE WEISS, Cowles, "Missing Information and the Cycle"
- June 26: JEFFREY SACHS, NBER, "Some New Angles on Macroeconomic Simulations"
- September 12: PRAKASH CHANDER, Indian Statistical Institute, "Dimensional Requirements for Efficient Decentralized Mechanisms"
- October 10: STANLEY BLACK, Yale and Vanderbilt University, "On the Political Economy of Inflation in Open Economies"
- October 17: ROBERTO MARIANO, University of Pennsylvania, "The Asymptotic Behavior of Predictors in a Non-Linear System"
- October 24: WILLIAM TAYLOR, Bell Labs, "Panel Data and Unobservable Individual Effects: Estimating the Returns to Schooling"
- October 31: EUGENE FAMA, University of Chicago, "Inflation, Output, Real Return and Capital Investment"
- November 7: ALAN AUERBACH, Harvard University, "Taxation, Portfolio Choice and Debt-Equity Ratios: A General Equilibrium Model"
- November 12: GERALD JAYNES, Yale, "Unemployment and Inflation in Macro Equilibrium"
- November 14: DAVID WISE, Harvard University, "Test Scores, Educational Opportunities and Individual Choice"
- November 21: OLIVIER BLANCHARD, Harvard University, "Production and Inventory Behavior of the U.S. Automobile Industry"
- December 3: KATSUHITO IWAI, Cowles, "Schumpeterian Dynamics"
- December 5: GEORGE RHODES, Jr., Colorado State University, "Interpretations and Extensions of Identifiability Testing in Linear Models"
- December 12: ROBERT ANDERSON, Cowles and Princeton, "An Elementary Approach to Core Convergence Theorems"
1981

- February 6: STANLEY BLACK, Yale and Vanderbilt University, "Consistent Estimation of the Limited Dependent Variable Threshold Regression Model by Ordinary Least Squares"
- February 13: AVINASH DIXIT, Princeton, "Trade in Natural Resources and Capital Goods"
- February 20: JERRY HAUSMAN, MIT, "The Effect of Taxes on Labor Supply"
- February 27: PETER DIAMOND, MIT, "Aggregate Demand Management in Search Equilibrium"
- March 6: ROBERT SHILLER, University of Pennsylvania and NBER, "The Determinants of the Variability of Stock Prices"
- April 3: JEFFREY SACHS, Harvard University and NBER, "Current Account Movements and the Real Exchange Rate in the 1970s: A Comparative Study"
- April 16: DAVID KREPS, Stanford University, "Sequential Equilibria"
- April 23: LARRY WEISS, Cowles, informal workshop on current research
- April 24: ALVIN ROTH, University of Illinois, "The Economics of Matching: Stability and Incentives"
- May 1: DONALD BROWN, Cowles, "Existence of a Market Equilibrium Subject to a Budget Constraint"
- May 8: LARRY SUMMERS, MIT, "Inflation and the Valuation of Corporate Equities"
- May 15: MAMORU KANEKO, Yale and University of Tsukuba, "The Nash Social Welfare Function and the Nash Bargaining Solution"
- June 5: ROGER GORDON, Bell Labs, "Taxation of Corporate Capital Income: Tax Revenues vs. Tax Distortions"
- September 18: JERRY HAUSMAN, MIT, "Specification Tests for LOGIT Models"
- September 24: ROBERT HALL, Hoover Institution, Stanford University, "The Excessive Sensitivity of Employment to Demand"
- October 2: JOHN GEANAKOPLOS, Yale, "Understanding Infinite Horizon Models: the 1959 Samuelson Consumption-Loan Model and the 1965 Diamond Capital Model"
- October 9: MARTIN WEITZMAN, MIT, "Increasing Returns to Scale and the Foundations of Unemployment Equilibrium Theory"
- October 16: THOMAS ROTHENBERG, U. of California, Berkeley, "Approximate Normality of Generalized Least Squares Estimates"
- October 23: GUILLERMO CALVO, Columbia University, "Staggered Contracts and Exchange Rate Policy"
- November 6: PAUL R. MILGROM, Northwestern University, "A Theory of Auctions and Competitive Bidding"
- November 13: THOMAS SARGENT, Univ. of Minnesota and Visiting Professor at Harvard, "The Real Bills Doctrine vs. The Quantity Theory: A Reconsideration"
- November 20: JOHN HARTWICK, Queen's University, "Learning About and Exploiting Exhaustible Resource Deposits of Uncertain Size"
- December 11: JOHN WHALLEY, Univ. of Western Ontario, "General Equilibrium Calculations of Distributional and Efficiency Effects of Taxes"
1982

- February 5: CHRISTOPHE CHAMLEY, Yale, "Efficient Taxation in a Stylized Model of Intertemporal General Equilibrium"
- February 19: BENGT HOLMSTROM, Northwestern University, "A Theory of Western Wage Dynamics"
- February 26: LARS HANSEN, Carnegie-Mellon U. and U. of Chicago, "Estimation Procedures for Nonlinear Rational Expectations Models"
- March 26: JOHN TAYLOR, Princeton University, "The Swedish Investment Funds System as a Stabilization Policy Rule"
- April 2: DAVID KREPS, Stanford and Yale, "Rational Learning and Rational Expectation"
- April 16: WILLIAM NORDHAUS, Yale, "Are Real Interest Rates Really High?"
- April 23: RUDIGER DORNBUSCH, "Intertemporal Trade Theory"
- April 30: DENIS SARGAN, L.S.E. and visiting professor at University of Florida, "Some Problems with Muellbauer's Method of Estimating Rational Expectations Models"
- May 3: TAKATOSHI ITO, University of Minnesota, "A Comparison of Japanese and United States Macroeconomic Behavior by a Vector Autoregressive Model"
- May 10: V.K. CHETTY, Indian Statistical Institute and Visitor at Cowles Foundation, "Economics of Price and Distributional Controls: Indian Sugar Industry"
CONFERENCES
The Cowles Foundation hosted and financed a Summer Workshop in Econometrics over the
period June 20–21, 1982. P.C.B. Phillips initiated the project, organized the program
and invited participants to Yale from Europe, Australia, Canada and the USA. Five papers
were presented during the workshop. Topics ranged from extensions of the theory of the
linear model to encompass fully dependent variable regressors and errors, to a new method
of improving periodogram estimates by smoothing across cross sections of economic units,
to developmental work on nonlinear stochastic prediction and Kalman filtering. Two of the
papers involved empirical applications as well as methodological developments.
The Center for Competitive and Conflict Systems Research at the Cowles Foundation
sponsored three conferences which were initiated and organized by Professor Martin Shubik.
Two of these conferences were held at the Seven Springs Conference Center in Mt. Kisco,
NY. In November, 1979 the conference topic was "Mathematical Models of Conflict and
Combat: Uses and Theory"; in December, 1979, the conference topic was "Auctions
and Competitive Bidding: Uses, Theory and Development." The most recent conference
was held in New Haven in May, 1982 and the topic was "Game Theory."
PUBLICATIONS AND PAPERS
MONOGRAPHS
See complete LISTING OF MONOGRAPHS
COWLES FOUNDATION PAPERS
See complete listing of PAPERS
COWLES FOUNDATION DISCUSSION PAPERS
See complete listing of DISCUSSION PAPERS
OTHER PUBLICATIONS AND PAPERS
This list contains papers which were published during the period and resulted from work at
the Cowles Foundation, papers published while the author was a staff member, and a few
other papers referred to in the text of the Report.
BEGGS, JOHN J.
- "Energy Conservation in American Industry" (with Merton J. Peck), in Institutional
Barriers to Energy Conservation, ed. J.C. Sawhill, The Brookings Institution, 1982.
BRAINARD, WILLIAM C.
- "Tax Reform and Income Redistribution: Issues and Alternatives" (with J.
Tobin, J. Shoven, J. Bulow), in Essays in Economics, Vol. 3, ed. J. Tobin, MIT
Press, 1981.
FAIR, RAY C.
- "Estimated Output, Price, Interest Rate and Exchange Rate Linkages among
Countries," Journal of Political Economy, June 1982, pp. 507535.
- "The Effect of Economic Events on Votes for President: 1980 Results," Review
of Economics and Statistics, May 1982, pp. 322325.
GEANAKOPLOS, JOHN
- "On the Disaggregation of Excess Demand Functions," Econometrica, 1980
(with Heraklis Polemarchakis).
KLEVORICK, ALVIN K.
- "Discussion of Richard B. Stewart, The Resource Allocation Role of Reviewing
Courts: Common Law Functions in a Regulatory Era," in Clifford S. Russell, ed.,
Collective Decisionmaking: Applications from Public Choice Theory, Johns Hopkins,
1979.
- "A Framework for Analyzing Predatory Pricing Policy" (with P.L. Joskow), Yale
Law Journal, December 1979.
- "Discussion of Roger R. Betancourt, The Analysis of Patterns of Consumption
in Underdeveloped Countries," in Robert Ferber, ed., Consumption and Income
Distribution in Latin America, ECIEL, Organization of American States, 1980.
- "Regulation and Cost Containment in the Delivery of Mental Health Services,"
in Thomas G. McGuire and Burton Weisbrod, Economics and Mental Health, National
Institute of Mental Health, Series EN No. 1, U.S. Government Printing Office, Washington,
DC, 1981.
- Roundtable Discussion on Predatory Practices in Steven C. Salop, ed., Strategy,
Predation, and Antitrust Analysis, Federal Trade Commission, Washington, DC, 1981.
- "Commentary on Warren Greenberg, 'Provider-Influenced Insurance Plans and
Their Impact on Competition: Lessons from Dentistry,' and Clark C. Havighurst and Glenn
M. Hackbarth, 'Enforcing the Rules of Free Enterprise in an Imperfect Market: The Case of
Individual Practice Associations'," in Mancur Olson, editor, A New Approach to the
Economics of Health Care, American Enterprise Institute for Public Policy Research,
Washington, DC, 1981.
- "Discussion of Robert D. Willig and Elizabeth E. Bailey, 'Income-Distribution
Concerns in Regulatory Policymaking'," in Gary Fromm, editor, Studies in Public
Regulation, MIT Press, Cambridge, Mass., 1981.
NORDHAUS, WILLIAM D.
- The Efficient Use of Energy Resources, Yale University Press, 1979.
- "The Interaction between Oil and the Economy in Industrial Countries," Brookings
Papers on Economic Activity, 2, 1980.
- "Tax-Based Incomes Policies: A Better Mousetrap?" in An Incomes Policy for
the United States: New Approaches, ed. M.P. Claudon and R.R. Cornwall, Martinus
Nijhoff, Boston, 1981.
PHILLIPS, PETER C.B.
- "A Saddlepoint Approximation to the Distribution of the k-Class Estimator of a
Coefficient in a Simultaneous System" (with A. Holly), Econometrica, Vol. 47,
No. 6, November 1979, pp. 1527–1548.
- "The Concentration Ellipsoid of a Random Vector," Journal of Econometrics, Vol. 11, No. 2–3, October/December 1979, pp. 363–365.
- "Finite Sample Theory and the Distributions of Alternative Estimators of the Marginal Propensity to Consume," Review of Economic Studies, Vol. 47, No. 1, January 1980, pp. 183–224.
- "The Exact Finite Sample Density of Instrumental Variable Estimators in an Equation with n+1 Endogenous Variables," Econometrica, Vol. 48, No. 4, May 1980, pp. 861–878.
- "Marginal Densities of Instrumental Variable Estimators in the General Single Equation Case," Advances in Econometrics, Vol. 2, 1981.
- "A Model of Output, Employment, Capital Formation and Inflation" (with R.W. Bailey and V.B. Hall), Advances in Econometrics, Vol. 3, No. 1, 1982.
- "Best Uniform and Modified Padé Approximation of Probability Densities in Econometrics," Chapter 5 in Advances in Econometrics, ed. W. Hildenbrand, Cambridge University Press, 1982, pp. 123–167.
- "The True Characteristic Function of the F Distribution," Biometrika, Vol. 69, No. 1, April 1982, pp. 261–264.
- "A Simple Proof of the Latent Root Sensitivity Formula," Economics Letters, Vol. 9, 1982, pp. 57–59.
- "Yale Examinations and Problem Series in Econometrics," pp. 327 in E.
Tower (ed.), Economics Exams, Puzzles and Problems, 1981, Durham: Eno River Press.
- "Comments on the Unification of Asymptotic Theory of Non-Linear Econometric
Models," Econometric Reviews, Vol. 1, No. 2, 1982, pp. 193200.
SCARF, HERBERT E.
- Comment on "On the Stability of Competitive Equilibrium and the Patterns of Initial
Holdings: An Example," International Economic Review, Vol. 22, No. 2, June
1981.
SHUBIK, MARTIN
- "Computers and Modelling," in Future Impact of Computers: A 20Year
View, ed. M.L. Dertouzos and J. Moses, MIT Press, 1979.
- "On the Number of Types of Markets with Trade in Money: Theory and Possible
Experimentation," in Research in Experimental Economics, Vol. 1, ed. V.L.
Smith, JAI Press, 1979.
- "Oskar Morgenstern, a Biography," International Encyclopedia of the Social
Sciences, Biographical Supplement, Vol. 18, The Free Press, 1979, pp. 541–544.
- "Unconventional Methods of Economic Warfare," Conflict, Vol. I, No. 3,
1979, pp. 211–229.
- The War Game, Harvard University Press, 1979 (with G. Brewer).
- "Entry in Oligopoly Theory: A Survey," Eastern Economic Journal, Vol.
5, No. 1–2, 1979, pp. 281–289 (with K. Nti).
- "The Capital Stock Modified Competitive Equilibrium," in Models of Monetary
Economies, ed. J.H. Karaken and N. Wallace, Federal Reserve Bank of Minneapolis, 1980.
- "A Strategic Market Game with Price and Quantity Strategies," Zeitschrift
für Nationalökonomie, Vol. 40, No. 1–2, 1980, pp. 25–34.
- Market Structure and Behavior, Harvard University Press, 1980 (with R.E. Levitan).
- "Stochastic Games, Oligopoly Theory and Competitive Resource Allocation," in Dynamic
Optimization and Mathematical Economics, ed. Pan-Tai Liu, Plenum Publishing Co., 1980,
pp. 89–104 (with M.J. Sobel).
- "Game Theory Models and Methods in Political Economy," in Handbook of
Mathematical Economics, ed. K.J. Arrow and M.C. Intriligator, North-Holland, 1981.
- "A Price-Quantity Buy-Sell Market with and without Contingent Bids," in
Studies in Economic Theory and Practice, ed. J. Los et al., North-Holland, 1981.
- "Society, Land, Love or Money (A Strategic Model of How to Glue the Generations
Together)," Journal of Economic Behavior and Organization, Vol. 2, No. 4,
December 1981.
- "The Profit Maximizing Firm: Managers and Stockholders," Economic Applique,
1981, pp. 13691388 (with P. Dubey).
- "Noncooperative Oligopoly with Entry," Journal of Economic Theory, Vol.
24, No. 2, April 1981, pp. 187–204 (with K. Nti).
- Game Theory in the Social Sciences, Vol. I, MIT Press, 1982.
- "War Gaming: For Whom and What?" Policy Sciences, Vol. 15, 1982.
- "Strategic War: What Are the Questions and Who Should Ask Them?" Technology
in Society, Vol. 4, 1982, pp. 155–179 (with P. Bracken).
- "The Shuttle Utilization: A Strategic Analysis," Technology in Society,
Vol. 4, 1982, pp. 75–100 (with P. Hambling).
TOBIN, JAMES
- "Diagnosing Inflation: A Taxonomy," forthcoming in volume of Conference Papers
(Israel, June 1979), Academic Press.
- "The Volcker Shock," New York Times (Sunday Financial Section),
November 11, 1979.
- "The Federal Budget and the Constitution," Taxing and Spending, Fall
1979, pp. 27–36; "Rejoinder," Winter 1979, pp. 68–71.
- Discussion of Karl Brunner, "The Control of Monetary Aggregates," in Controlling
Monetary Aggregates, III, Conference Series No. 23, Federal Reserve Bank of Boston,
October 1980, pp. 69–75.
- Asset Accumulation and Economic Activity (Reflections on Contemporary Macroeconomic
Theory), Yrjö Jahnsson Lectures, Oxford: Basil Blackwell, 1980.
- "Harry Gordon Johnson, 19231977," in Proceedings of The British
Academy, Oxford University Press, Vol. LXIV, 1980.
- "Spending Limits and Measures of Government Size," Puerto Rico Economic
Quarterly (First Federal Savings Bank of Puerto Rico), Vol. 2, 1980.
- "Sleight of Mind" (Laetrile, alchemy, and Reaganomics), The New Republic,
March 21, 1981.
- "The Reagan Economic Program: Budget, Money, and the Supply Side," in The
Reagan Economic Program, Supplement to Federal Reserve Bank's Economic Review (San
Francisco), May 1981.
- "Monetary Policy: The Collision Course," Journal of the Federation of
American Scientists, Vol. 34, No. 5, June 1981, pp. 5–7.
- "Supply-Side Economics: What Is It? Will It Work?" Economic Outlook U.S.A.,
Vol. 8, No. 3, Summer 1981, pp. 51–53.
- "Energy Strategy and Macroeconomic Policies" (held in New York City), Donald
S. MacNaughton Symposium, Proceedings, 1981, Syracuse Univ.
- "Reflections Inspired by Proposed Constitutional Restrictions on Fiscal
Policy," in Economic Regulation, Papers in Honor of James R. Nelson, eds.
Kenneth D. Boyer and William G. Shepherd, Michigan State University Press, 1981, pp.
341–367.
- Comments on: "Supply versus Demand Approaches to the Problem of Stagflation,"
by Michael Bruno and Jeffrey Sachs, in Macroeconomic Policies for Growth and Stability
(A European Perspective), Symposium 1979, Herbert Giersch, ed., J.C.B. Mohr, 1981.
- Review of: Keynes' Monetary Thought: A Study of Its Development, by Don Patinkin,
in Journal of Political Economy, Vol. 89, 1, 1981.
- Comments on "Taxation and Corporate Investment: A q-Theory Approach," by
Lawrence H. Summers, in Brookings Papers on Economic Activity, 1, eds. W.C.
Brainard and George L. Perry, Brookings Institution, 1981, pp. 133–139 (with Philip
White).
- "Does Fiscal Policy Matter?" Center for Research on Economic Policy, Stanford
University, May 14, 1982.
- Essays in Economics: Theory and Policy, Vol. III, MIT Press, 1982.
- "Inflation," in Encyclopedia of Economics, eds. Douglas Greenwald,
McGraw-Hill, 1982, pp. 510–523.
- "Steering the Economy Then and Now," in Papers in Honor of Walter W. Heller,
eds. J.A. Pechman and N.J. Simler, W.W. Norton and Co., 1982, pp. 11–45.
- "The Wrong Mix for Recovery," Challenge, MayJune 1982, pp.
2127 (based on paper delivered at 1982 Financial Outlook Conference).
VAN DER HEYDEN, LUDO
- "Uncertainty in Energy and Planning Models," Optimal Analysis Corporation,
April 1980 (with Kenneth J. Arrow, William W. Hogan and Ephraim Rubin).
- "Scheduling Jobs with Exponential Processing and Arrival Times on Identical
Processors so as to Minimize the Expected Make-span," Mathematics of Operations
Research, Vol. 6, 1981, pp. 305–312.
- "The Convergence of Scarf's Integer Programming Algorithm to a Dual Simplex
Algorithm," November 1981.
- "A Refinement Procedure for Computing Fixed Points Using Scarf's Primitive Sets,
"Mathematics of Operations Research, Vol. 7, 1982. pp. 295313.
- "Restricted Primitive Sets in a Regularly Distributed List of Vectors and
Simplicial Subdivisions with Arbitrary Refinement Factors," Mathematics of
Operations Research, Vol. 7, 1982, pp. 383–400.