PURPOSE AND ORIGIN
The Cowles Foundation for Research in Economics at Yale University, established as an
activity of the Department of Economics in 1955, has as its purpose the conduct and
encouragement of research in economics, finance, commerce, industry, and technology,
including problems of the organization of these activities. The Cowles Foundation seeks to
foster the development of logical, mathematical, and statistical methods of analysis for
application in economics and related social sciences. The professional research staff are,
as a rule, faculty members with appointments and teaching responsibilities in the
Department of Economics and other departments.
The Cowles Foundation continues the work of the Cowles Commission for Research in
Economics, founded in 1932 by Alfred Cowles at Colorado Springs, Colorado. The Commission
moved to Chicago in 1939 and was affiliated with the University of Chicago until 1955. In
1955 the professional research staff of the Commission accepted appointments at Yale and,
along with other members of the Yale Department of Economics, formed the research staff of
the newly established Cowles Foundation.
RESEARCH ACTIVITIES
1. Introduction
The Cowles Commission for Research in Economics was founded approximately forty years
ago by Alfred Cowles, in collaboration with a group of economists and mathematicians
concerned with the application of quantitative techniques to economics and the related
social sciences. This methodological interest was continued with remarkable persistence
during the early phase at Colorado Springs, then at the University of Chicago, and since
1955 at Yale.
One of the major interests at Colorado Springs was in the analysis of economic data by
statistical methods of greater power and refinement than those previously used in
economics; this was motivated largely by a desire to understand the chaotic behavior of
certain aspects of the American economy, the stock market in particular,
during the Depression years. The interest in statistical methodology was continued during
the Chicago period with a growing appreciation of the unique character and difficulties of
statistical problems arising in economics. An important use of this work was made in the
description of the dynamic characteristics of the U.S. economy by a system of
statistically estimated equations.
At the same time, the econometric work at Chicago was accompanied by the development of
a second group of interests, also explicitly mathematical but more closely connected
with economic theory. The activity analysis formulation of production, and its
relationship to the expanding body of techniques in linear programming, became a major
focus of research. The Walrasian model of competitive behavior was examined with a new
generality and precision, in the midst of an increased concern with the study of
interdependent economic units, and in the context of a modern reformulation of welfare
theory.
The move to Yale in 1955 coincided with a renewed emphasis on empirical applications in
a variety of fields. The problems of economic growth, the behavior of financial
intermediaries, and the embedding of monetary theory in a general equilibrium formulation
of asset markets, were studied both theoretically and with a concern for the implications
of the theory for economic policy. Earlier work on activity analysis and the general
equilibrium model was extended, with a view to eventual applications to the comparative
study of economic systems and to economic planning at a national level. Along with the
profession at large, we have also seen in recent years a greater interest in the specifics
of income distribution, in the analysis of the effects of discrimination, and in the
development of analytical methods oriented to contemporary social and economic problems.
During the three year period covered by this report, as in the past, the Cowles
Foundation staff has exhibited a variety of individual research interests and
methodological orientations. The composition of the staff has changed gradually over the
years, and its intellectual activities have been stimulated by a substantial number of
distinguished visitors from the United States and abroad. Yet, despite this diversity, the
broad themes outlined above have continued to characterize current research interests. In
the report that follows, an effort has been made to relate particular papers and studies
of the last three years to these general topics, though no pattern of organization can do
complete justice to all the work that is summarized.
2. General Equilibrium Models and Game Theory
The Walrasian model of economic equilibrium under perfect competition exemplifies one
of the central themes of economic theory: the equilibrium arising from the interaction of
a variety of agents (persons or organizations) with differing economic motivations. As
distinct from the more general formulations of n person game theory, the Walrasian model
is quite specific about the nature of the agents and the strategies which are available to
them. The producing units of the economy are assumed to be fully aware of the
technological possibilities of production, and of the sequence of factor and output prices
which are expected to prevail at all moments of time in the future. Production and
investment decisions are motivated exclusively by the desire to maximize the present value
of the stream of profits flowing from these decisions.
The remaining agents in the economy are consuming units or households, each of which
derives its income from the sale of privately owned productive factors, including
labor, and allocates that income or wealth among the many categories of consumer
goods and saving by maximizing some indicator of utility. The decentralized production and
consumption decisions are linked together only by a knowledge of prevailing and expected
prices, and are in equilibrium with each other if these prices produce an equality of
supply and demand in all markets.
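In the simplest pure exchange version of such a model the equilibrium condition just described can be stated compactly. In standard textbook notation (not that of any particular Cowles paper), with m households, n commodities, prices p, individual demand functions x^h(p), and endowments w^h, equilibrium requires that excess demand vanish in every market:

```latex
z_i(p) \;=\; \sum_{h=1}^{m} x_i^h(p) \;-\; \sum_{h=1}^{m} w_i^h \;=\; 0,
\qquad i = 1, \dots, n .
```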
The Cowles Foundation has been associated for many years with the modern development of
the general Walrasian model. One of the major research achievements, summarized in
Debreu's fundamental monograph, Theory of Value (Cowles Foundation Monograph No. 17), has been to establish
the mathematical theorem that equilibrium prices do indeed exist for a model of this
complexity and generality.
While there are special examples of Walrasian models in which the existence of
equilibrium prices can be demonstrated by elementary arguments of a geometric or algebraic
character, the difficulties of the general problem have required the introduction of
complex and abstract mathematical techniques. An older generation of economists may have
been satisfied by the observation that there are as many equations as there are unknown
equilibrium prices, but this is by no means a convincing argument for the existence of
appropriate solutions. Nor can an argument be given by formalizing the conventional
intuition that the price of a commodity will rise if its demand exceeds supply and fall
otherwise. There is no difficulty in constructing acceptable demand functions for which
this sequence of prices, adjusted in response to the discrepancy between supply and
demand, fails to converge to an equilibrium system of prices. The use of abstract
mathematical tools, such as "fixed point" theorems, is fully justified by the
intrinsic difficulty of the subject.
But however satisfying it may be from an intellectual point of view to demonstrate the
internal consistency of the Walrasian model by means of an existence theorem, such results
are of little use in applying the competitive formulation to specific policy questions. In
order to discuss the influence of tariff changes in a multi-sector model of international
trade, to take only one example, it is necessary to be able to solve
explicitly for equilibrium price and production decisions, rather than merely asserting
the existence of prices which equilibrate supply and demand.
In a series of papers (CFP 262, CFP 271, CFP 277, CFP 308,
and CFDP 272), Scarf and Terje Hansen
have developed a new class of numerical algorithms which have, as one of their
applications, the determination of approximate equilibrium prices and production decisions
for a general Walrasian model. The algorithm has been tried on a large number of specific
though artificial examples in order to estimate the number of elementary iterations and
total computation time required to solve problems of a given size. The time seems to
increase roughly as the square of the number of commodity sectors; problems involving
twenty or fewer sectors can be done with considerable accuracy in less than 10 minutes of
computing time on an IBM 7094. This is considerably less efficient than the simplex method
for a linear programming problem of comparable size, but it must be stressed that
the determination of equilibrium prices is a problem to which the conventional methods of
linear and non-linear programming cannot be applied.
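By way of illustration only, the following sketch shows, for a deliberately tiny pure exchange economy with Cobb-Douglas consumers and made-up endowments, what "solving explicitly for equilibrium prices" amounts to. It applies a generic root finder to the excess demand functions and is not a description of the Scarf-Hansen simplicial algorithm discussed above.

```python
# Toy exchange economy: two Cobb-Douglas consumers, three goods, good 1 as
# numeraire. All numbers are invented for the example.
import numpy as np
from scipy.optimize import fsolve

alpha = np.array([[0.6, 0.3, 0.1],    # consumer 1 budget shares
                  [0.2, 0.5, 0.3]])   # consumer 2 budget shares
endow = np.array([[1.0, 0.0, 2.0],    # consumer 1 endowment
                  [0.0, 2.0, 1.0]])   # consumer 2 endowment

def excess_demand(log_p_rest):
    """Excess demand for goods 2..n, with p1 = 1 and log prices for positivity."""
    p = np.concatenate(([1.0], np.exp(log_p_rest)))
    income = endow @ p                          # value of each endowment
    demand = (alpha * income[:, None]) / p      # Cobb-Douglas demand functions
    return (demand.sum(axis=0) - endow.sum(axis=0))[1:]

log_p = fsolve(excess_demand, np.zeros(2))
prices = np.concatenate(([1.0], np.exp(log_p)))
print("equilibrium prices (good 1 as numeraire):", prices)
# The market for good 1 then clears automatically by Walras' law.
```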
At present several experimental applications of these computational techniques have
either been initiated or are being contemplated. These include a study of the implications
of various tariff proposals in an international trade model; the determination of
equilibrium prices for a centrally planned economy; and the redistributive aspects of
certain welfare proposals such as a negative income tax plan.
The details of the algorithm may be found in the series of papers previously referred
to. It should be remarked that this class of algorithms can be applied to a number of
problems other than that of calculating equilibrium price and production levels. An
appropriate variant of the algorithm can be used to approximate a "fixed point"
of an arbitrary continuous mapping of a closed, bounded, convex set into itself, and
another variant can be used under certain circumstances to determine a
vector in the core of an n person game. As Hansen has shown in CFDP 277, the methods can also be used to
solve non-linear programming problems, under customary convexity assumptions, and seem to
compare quite favorably in terms of speed of computation with alternative
numerical procedures such as the gradient method. Hansen and Scarf are engaged in writing
a monograph in which this methodology is summarized, along with parallel developments by
others including Harold Kuhn, Lloyd Shapley and Curtis Eaves.
For a number of years various authors have been concerned with the specific
introduction of monetary phenomena into the Walrasian model. In CFDP 295 Starr has investigated the role
of money in avoiding the necessity of coincidence in the wants of pairs of consumers who
are engaged in trade, thus making explicit and rigorous Jevons's analysis in Money and
the Mechanism of Exchange. In CFDP 300
Starr extends the Walrasian analysis to markets where exchange is a resource-using
activity and takes place between pairs of traders rather than in the single market for all
traders of general equilibrium analysis. This results in a well defined demand for media
of exchange. A subsequent paper studies the existence and optimality of equilibrium in a
monetary economy, and the role of government policy, especially taxation, in
maintaining the value of currency. The argument is made that the government's willingness
to accept tax receipts in monetary rather than real form offers some constraints on the
value of money which might otherwise be absent from the general equilibrium model.
Demand functions, and the underlying individual preferences from which they are
derived, form one of the basic ingredients of the Walrasian model. Traditionally
economists, and more recently mathematical political scientists, have assumed that
individuals are able to order the alternatives confronting them in a complete and
transitive fashion. The completeness assumption, which requires that an individual be
capable of explicit preference or indifference between any pair of
alternatives, regardless of their dissimilarity, has been criticized as being
unnecessarily strong. In a recent paper, "Acyclicity and Choice," Brown is
concerned with a relaxation of customary assumptions on individual preferences by
requiring merely that every finite set of alternatives contains at least one
"maximal" element, i.e., one which is not inferior to any other element of the
set. The extent to which traditional demand theory can be modified in the light of this
more general approach is still an open question.
The basic motivation of n person game theory is to provide an analytical setting for
economic and political problems, which is free of the specific behavioristic assumptions
of the Walrasian model, and therefore capable of addressing some important aspects of
economic reality, such as increasing returns to scale and the public sector, to which
the neo-classical model is not applicable. Game theory works with a series of
concepts which are logically prior to those of the Walrasian model: rather than assuming
that producers and consumers respond passively to competitively determined prices, this
analytic framework permits the typical agent to have at his disposal an essentially
arbitrary collection of strategies. These may range from the selection of a sequence of
votes in a political model to the choice of price and output plans in a study of
oligopolistic behavior. All that is required, for a game to be specified in
"normal" form, is that each player have a fully delineated set of strategies
not necessarily finite in number and a systematic procedure for ranking all
the outcomes that they may arise from independent choices of strategies by all of the
participants.
Of the several procedures which have been suggested for the solution of an n person
game, two may be emphasized because of their similarity to traditional methods of analysis
in economic theory. The concept of a non-cooperative equilibrium point, proposed by Nash,
is a direct generalization of Cournot's solution of the oligopoly problem, and is also a
basic notion in theories of imperfect competition. According to this method of solution, a
selection of strategies, one by each player in the game, is in equilibrium if no player
can improve his utility by a unilateral action, assuming no change on the part of the
remaining players.
In Cournot's original work each of several firms is engaged in producing the same
commodity, under possibly different cost conditions, and each has as its only strategic
choice the selection of a level of output. Shubik, in collaboration with Richard Levitan
of IBM, has explored (CFDP 270, CFDP 287, CFDP 289) a number of variations of this basic model, including the
possibility of the simultaneous selection of price and output levels, capacity constraints
on production, and the introduction of uncertainty in demand.
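The following minimal sketch illustrates the basic Cournot model in its simplest form, with linear demand, constant unit costs, and illustrative numbers that are not taken from the Shubik-Levitan papers; each firm's only strategic choice is its output level, and the Nash equilibrium is located by iterating best responses.

```python
# Cournot duopoly with inverse demand P = a - b*(q1 + q2) and constant unit
# costs; all parameter values are assumed for the illustration.
a, b = 100.0, 1.0          # demand intercept and slope
costs = [10.0, 20.0]       # unit costs of the two firms

def best_response(rival_q, own_cost):
    """Profit-maximizing output given the rival's output."""
    return max((a - own_cost - b * rival_q) / (2 * b), 0.0)

q = [1.0, 1.0]
for _ in range(200):                       # best-response iteration converges here
    q = [best_response(q[1], costs[0]),
         best_response(q[0], costs[1])]

price = a - b * sum(q)
print("Cournot-Nash outputs:", q, "market price:", price)
# Closed-form check for this linear model: q_i = (a - 2*c_i + c_j) / (3*b).
```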
In addition to these theoretical studies, Shubik and Stern (CFDP 236, CFDP 240, CFDP 274)
have reported on a series of experimental games whose object is to provide statistical
evidence about the type of solution procedure actually adopted in specific situations. In
these papers the Nash equilibrium point is compared with alternative solutions, for
example that selection of strategies which maximizes the sum of the payoffs to the two
players. A related contribution by Friedman (CFDP 246) surveys the area of experimental research in oligopoly.
A second major proposal for the solution of n person games is that of the core, a
notion which has its roots in Edgeworth's treatise, "Mathematical Psychics,"
published in 1881. In order for a cooperative concept, such as the core, to be applicable,
the game must be described by specifying for each coalition of players the collection of
utility vectors which it can achieve. A utility vector designates a level of utility for
each player. It is then in the core if, first, it is achievable by all of the players
acting collectively and secondly, no coalition can by itself achieve a higher utility for
each of its members.
The successful application of this concept to economics has been largely to those
models of production and exchange in which the collection of achievable utility vectors
can readily be defined for each coalition. In a model in which each consumer begins the
trading period with a stock of commodities and has a utility function for final
consumption, the utility vectors achievable by a coalition are most naturally taken to be
those arising from an arbitrary redistribution of that coalition's assets among its
members.
But this simplicity disappears if the game is expressed in normal form, in terms
of the strategies open to each player, since no coalition of less than all the
players will then be able to dictate the outcome of the game, independently of the
strategic choices made by the complementary coalition. (This point is illustrated by
Shubik [CFDP 288] in a discussion of
externalities in production.) An appropriate generalization of the concept of the core to
games in normal form was first suggested by Aumann: for him a joint strategy choice is in
the core if no coalition can select alternative strategies which guarantee higher utility
levels for all of its members, regardless of what the complementary coalition
chooses to do.
The distinctions between the Nash equilibrium point and the core defined in this way
are illustrated by Shubik (CFDP 274)
using the well-known example of the Prisoner's Dilemma, a two-person non-zero sum game,
with two strategies for each player and with a pair of payoff matrices representing the
utilities of each player. The Nash equilibrium for this game requires
each player to select his second strategy, and thereby results in a pair of utilities
distinctly inferior to that which can be achieved by cooperation. On the other hand the
utility vector (10,10) is in the core: according to our definition no coalition can
guarantee a higher utility vector for its members, independently of the actions of the
complementary coalition. For example, if player one were to attempt a higher utility he
would have to move to his second strategy, and would be vulnerable to a corresponding
change in player two's strategy.
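Since the payoff matrices themselves are not reproduced here, the following sketch uses illustrative entries that preserve the features described in the text (mutual cooperation yields (10,10), and each player's second strategy is individually tempting); it simply enumerates the strategy pairs and checks the Nash equilibrium condition.

```python
# Illustrative Prisoner's Dilemma payoffs (assumed, not the original matrices):
# payoffs[s1][s2] = (utility of player 1, utility of player 2).
import itertools

payoffs = {(0, 0): (10, 10), (0, 1): (-5, 15),
           (1, 0): (15, -5), (1, 1): (0, 0)}

def is_nash(s):
    """True if no player can gain by a unilateral change of strategy."""
    for player in (0, 1):
        for dev in (0, 1):
            alt = list(s); alt[player] = dev
            if payoffs[tuple(alt)][player] > payoffs[s][player]:
                return False
    return True

for s in itertools.product((0, 1), repeat=2):
    print(s, payoffs[s], "Nash equilibrium" if is_nash(s) else "")
# Only (1, 1), the pair of second strategies, is a Nash equilibrium, although
# both players prefer (0, 0); yet neither player acting alone can guarantee
# himself more than he receives at (10, 10), whatever the other does, which is
# why (10, 10) is in the core in the sense described above.
```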
In CFDP 293, Scarf has given a set
of conditions on a general n person game which imply that the core is not empty. Aside
from some technical conditions, it is sufficient to require that each player have a convex
set of strategies, and that each player's utility function be a quasi-concave
function of all of the strategies jointly.

One interesting application of this theorem is to an exchange model with external
effects in consumption, so that each individual's utility depends not only on his own
vector of consumption, but is also a function of the bundles consumed by some or all of
the remaining consumers. If this generalized utility function has indifference surfaces
which are convex from above (see figure at right), then the theorem of this paper is
applicable. We conclude that there is an allocation of society's initial holdings, which
cannot be improved upon by any specific redistribution of the assets of any coalition, if
the complementary coalition is subsequently permitted to re-distribute its own initial
assets in an arbitrary fashion.
There is an intimate connection between the core of a market game and the competitive
equilibria for the same model as the number of consumers tends to infinity. If the
appropriate neo-classical assumptions are made, these two modes of analysis yield in the
limit precisely the same set of production and distribution plans. This observation has
stimulated a considerable body of work in which the number of agents is assumed to be
infinite, an idealization designed to avoid the technicalities of passing
to the limit. In CFDP 258, Kannai
provides a set of sufficient conditions for the existence of a non-empty core in such a
game.
In a game theoretic formulation, alternative social outcomes are ordered independently
by each of the participants, and partially controlled by the selection of appropriate
strategies. This may be contrasted with those theories of voting in which strategic
choices are downgraded, and replaced by rules such as majority voting which amalgamate
individual preferences into a social choice.
Voting mechanisms are often unstable because of the phenomenon of "cyclical"
majorities pointed out by Kenneth J. Arrow in Cowles Foundation Monograph 12, Social Choice and
Individual Values. As Arrow's theorem indicates, any specific mechanism, such as
majority voting, which satisfies a few intuitively plausible conditions will
inevitably produce an intransitive social ordering (i.e., one containing cycles) for some
assortment of individual preferences. One possible resolution of this difficulty is to
relax the requirement that alternative social states be completely ordered; perhaps it is
sufficient to assume merely that the social ordering be "acyclic," in the sense
used by Brown for individual orderings.
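The phenomenon of cyclical majorities can be reproduced with the standard three-voter, three-alternative textbook example (not one drawn from the report):

```python
# Classic illustration of a majority cycle: every voter's ranking is
# transitive, yet pairwise majority voting produces a cycle.
from itertools import combinations

rankings = {"voter 1": ["A", "B", "C"],
            "voter 2": ["B", "C", "A"],
            "voter 3": ["C", "A", "B"]}

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings.values())
    return votes > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"{winner} beats {loser} by majority vote")
# The pairwise majority relation runs A over B, B over C, and C over A:
# an intransitive social ordering despite transitive individual preferences.
```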
There are a number of conditions, requiring for example some degree of
similarity in preferences, which have been shown sufficient to eliminate
intransitivities and to restore stability to majority rule. As Kramer has demonstrated,
however (CFDP 284), these conditions
turn out to be extraordinarily restrictive when applied to problems involving the
distribution of economic goods, or in models of the public sector; for many public sector
decisions pure majority rule appears to be inherently unstable.
Kramer's subsequent research has moved in the direction of employing game theoretic
concepts and taking account of the institutional structure in which voting occurs. Some
preliminary results were reported in a paper, "Theory of Electoral Systems,"
presented at the Eighth World Congress of the International Political Science Association
in September 1970.
In collaboration with Lepper, Kramer has also extended some earlier empirical work on
Congressional elections. As in the case of the previous work, their paper is based on the
assumption that constituents vote for the incumbent if they are satisfied with the results
of the policies attributed to the incumbent's party; the policy consequences specifically
incorporated in the regression analysis are measures of economic prosperity. The paper,
"Congressional Elections," which was completed and will appear in a forthcoming
volume concerned with applications of quantitative methods in political and economic
history (edited by R.W. Fogel), extends the earlier analysis to tentative experiments with
data at the county level.
3. Planning and Comparative Economic Systems
One of the major questions in the theory of economic planning is the extent to which
the multiplicity of economic decisions confronting a central planning board can be
effectively decentralized. Must the planners themselves be involved in the choice of
elementary techniques of production, and the determination of highly disaggregated levels
of output, or can decisions of this sort be delegated to smaller economic units,
constrained only by broad guidelines issued by the central planning board?
This question has been examined by many authors since the initial proposal by Oscar
Lange that certain aspects of the Walrasian model could be directly translated into a
program for the efficient decentralization of a planned economy. According to this
analysis, the choices of consumers, sensitive to relative prices, would
constitute the ultimate source of demand for all goods and services, and producers would
be constrained only to maximize profit at the equilibrium price system. All information
about preferences and scarcity would be transmitted to producers exclusively in terms of
relative prices.
Not only is the actual practice of planning considerably more subtle than this bare
scheme, taking into consideration information about quantities both as indicators of
scarcity and as goals, but there may also be sound theoretical reasons for departing
from the Walrasian model. The inadequacies of the competitive model for planning purposes
form the central theme of Janos Kornai's profound study, "Anti-Equilibrium,"
which will be published shortly by Springer. The book, which was completed during Kornai's
visit to the Cowles Foundation in the spring of 1970, is a comprehensive discussion of
virtues and deficiencies of various analytical approaches to planning, and should be a
focal point of discussions for years to come.
The theory of planning has been illuminated by the introduction of numerical techniques
for the solution of programming problems, which are simultaneously capable of
interpretation as decentralized decision making processes. For example, the
"decomposition" method for linear programming, first introduced by Dantzig
and Wolfe, can be interpreted as an iterative procedure in which the prices of some
factors of production are quoted by the planners; the sectors independently maximize
profits based on these prices, which are then systematically revised in terms of the
discrepancy between supply and demand.
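A deliberately simplified sketch of such price-guided coordination is given below; it is not the Dantzig-Wolfe algorithm itself, but it shows the iterative pattern described: the center quotes a price for a single scarce factor, the sectors independently choose profit-maximizing factor demands, and the price is revised in proportion to the discrepancy between demand and supply. All numbers are assumed.

```python
# Two sectors share one scarce factor; sector i produces a_i * sqrt(x_i)
# units of output when it uses x_i units of the factor.
productivity = [4.0, 6.0]   # assumed sector productivities a_i
supply = 10.0               # total amount of the factor available

def sector_demand(a, p):
    """Factor demand of a sector maximizing a*sqrt(x) - p*x (first-order condition)."""
    return (a / (2.0 * p)) ** 2

p, step = 1.0, 0.05
for _ in range(2000):
    demand = sum(sector_demand(a, p) for a in productivity)
    p = max(p + step * (demand - supply), 1e-6)   # raise the price if over-demanded

print("factor price quoted by the center:", round(p, 3))
print("sector allocations:",
      [round(sector_demand(a, p), 3) for a in productivity])
# At the final price the decentralized demands just exhaust the available
# supply, which is the planners' coordination condition.
```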
In CFP 332, Weitzman investigates
the interaction between the center and various sectors of an economy in which price
guidelines are replaced by quantity targets, which are revised if the sectors are able to
convince their superiors that technological considerations preclude the fulfillment of
their assigned quota. In the process the sectors attempt to impart a notion of the
constraints that are binding and the direction in which a new quota must move if it is to
be feasible. The paper then demonstrates how efficient planning can take place even though
each sector and the center has incomplete information about the economy as a
whole.
The function of the method of material balances in Soviet type economies is to insure
coordination between anticipated supplies and projected demands of factors of production.
In "Material Balances under Uncertainty" (CFDP 286), Weitzman studies a simplified aggregate model of this
procedure, which is used to determine the optimal balance between the production of
intermediate and final goods.
Koopmans, in collaboration with J.M. Montias, has been working on the development of
formal methods for the description and comparison of economic systems. The research was
originally undertaken as an assignment for a Conference on the Comparison of Economic
Systems, held at the University of Michigan in November, 1968, at the initiative of
Professor Alexander Eckstein of that University. They have approached the assignment in
the spirit of a search for ways in which formal theorizing and model construction can
contribute ideas and footholds for analysis in the description and comparison of economic
systems.
Ideas that were emphasized in the description include the notion of custody as a
relation between persons (or organizations) and goods, which is found to be important in
all modern economic systems; the notion of transfers of custody in specific stages of
processing of the good in question; the use of linear graphs in representing
organizational relations between persons; the relation between information available to
the members of an organization and the actions they take; and the use of the concept of a
hierarchy, for instance, in the discussion of the international corporation.
The evaluative comparison discusses alternative criteria that may be adopted or
rejected, or given different weights, by different comparers. The effects of the attrition
of information under vertical transmission in a hierarchical organization are then examined
as an application of the criterion of efficiency.
Finally, on the empirical side of comparative economic studies undertaken at the Cowles
Foundation, Weitzman has attempted to measure production relationships for the Soviet
economy. In "Soviet Postwar Economic Growth and Capital Labor Substitution," CFP 333, a constant elasticity of
substitution production function is directly estimated on the basis of Soviet data, the
results are interpreted and comparisons are made with results obtained from U.S. data.
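As a hedged illustration of what direct estimation of a constant elasticity of substitution (CES) production function involves, the following sketch fits the CES form to synthetic capital, labor, and output data by nonlinear least squares; the data and parameter values are invented and are not the Soviet series analyzed in CFP 333.

```python
# CES form: Y = A * [delta*K^(-rho) + (1-delta)*L^(-rho)]^(-1/rho),
# with elasticity of substitution sigma = 1 / (1 + rho).
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho):
    K, L = X
    return A * (delta * K ** (-rho) + (1 - delta) * L ** (-rho)) ** (-1.0 / rho)

rng = np.random.default_rng(0)
K = rng.uniform(50, 150, 80)
L = rng.uniform(50, 150, 80)
true_params = (2.0, 0.4, 0.5)                               # assumed A, delta, rho
Y = ces((K, L), *true_params) * rng.lognormal(0, 0.05, 80)  # noisy synthetic output

params, _ = curve_fit(ces, (K, L), Y, p0=(1.0, 0.5, 0.3),
                      bounds=([0.1, 0.01, 0.01], [10.0, 0.99, 5.0]))
A_hat, d_hat, rho_hat = params
print("estimated A, delta, rho:", A_hat, d_hat, rho_hat)
print("implied elasticity of substitution:", 1.0 / (1.0 + rho_hat))
```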
4. Micro-Economic Theory and Policy Applications
One of the major virtues that economic theorists have long attributed to a market
economy is that the resulting production and distribution decisions are optimal in the
sense that no alternative suggestions for production and redistribution will improve the
utility of every member of society. The range of application of this theorem may be
somewhat diminished by an awareness of the restrictive assumptions required for its
validity: for example, non-increasing returns to scale, perfect futures and
insurance markets, and the costless flow of information, to name only a few. There is also
substantial difficulty in incorporating a variety of considerations which lie between
economics and political theory, such as external effects in production and consumption,
and the taxing, spending and regulatory activities of the government sector. But the major
objection undoubtedly resides in the weakness of the optimality criterion itself, and its
complete indifference to considerations of equity.
An older tradition, typified by Bentham, saw in the social welfare function a major
technique for describing society's ethical judgments on economic matters. But in the late
19th and early 20th centuries economists from Pareto to Samuelson tended to view utility
as an ordinal and subjective rather than cardinal and tangible concept. On the other hand,
everyday discussions which attempt to judge the fairness of a specific distribution of
society's resources are rarely couched in terms of utility. Fairness and equity can be
seen in terms of principles, such as the approximate equality of income and the
provision of essential goods and services at some agreed upon level, which can be
expressed in real terms and debated with no direct reference to utility.
Proposals for a negative income tax, in which households receive a graduated
supplement if their income falls below a minimum level, are specific examples of
redistribution plans designed to alleviate the inequalities of privately generated income.
In the previous three year Report reference was made to the joint work of Mieszkowski,
Tobin and Joseph Pechman of the Brookings Institution on the mechanics of alternative
negative income tax proposals. A summary of their findings appears in the paper "Is a
Negative Income Tax Practical?", published in the Yale Law Journal (November
1967). During the period of this report both of these staff members continued their
interest in the economics of poverty and inequality. Tobin's debate with W. Allen Wallis
on Welfare Programs: An Economic Appraisal was published in 1968. He also
contributed to the 1968 Brookings volume Agenda for the Nation, an essay
"Raising the Incomes of the Poor," which gave a general assessment of the
sources and possible remedies for poverty in the United States. In April 1970, Tobin gave
the Henry Simons Memorial Lecture at the University of Chicago, "On Limiting the
Domain of Inequality" (subsequently published in the Journal of Law and Economics,
October 1970). In this lecture he considered, with illustrative examples, the appropriate
role of "specific egalitarianism," i.e., efforts to distribute particular goods
less unequally than the market would distribute them, given the extent of the
prevailing income inequality.
Tobin was also associated, as a consultant on the experimental design, with the
experimental test of "graduated income incentives," i.e., negative income tax,
conducted in New Jersey by the Institute for Research on Poverty of the University of
Wisconsin, and by Mathematica, Inc. This series of experiments is aimed, in part, at
exploring the labor supply response to various forms of a negative income tax, an
important ingredient in discussing the implications for efficiency, and the costs, of
income maintenance plans.
Taxation is one form of governmental economic activity that has raised classical
problems in regard both to efficiency and to the distribution of real income. Mieszkowski
has continued to work on the theory of tax incidence; his survey article on this topic,
"Tax Incidence Analysis: The Effects of Taxes on the Distribution of Income,"
appeared in the Journal of Economic Literature, in December 1969. He has also
revised and extended the work on the distributional implications of the property tax,
referred to in the last three year Report. Basically the paper (CFDP 304) argues (1) that the property tax
system decreases the after-tax return on capital by an amount equal to the mean rate of
tax for the nation, and (2) that it raises the cost of capital to cities with high rates of tax and
lowers it for those jurisdictions whose tax rates are low relative to the mean rate.
In the paper, "Tax Incentives for Low Income Housing," Nordhaus examined,
using a simple model of a two-sector housing market, the effects on rents and on housing
supply of various subsidy plans and tax incentives for low income housing. This review
identified one problem, external diseconomies such as fire hazards associated with
slum housing, which may require subsidies to correct the associated market
inefficiency. On the other hand, poverty would call either for general income supplements
or, if a "specific egalitarian" approach is taken, for particular
types of rent supplements or rebates.
The economics of housing markets and real estate values was also examined in two
studies by Mieszkowski. One project with Thomas King, a Yale graduate student, is based on
a sample of rental units in New Haven containing information on the quality of the units
and the characteristics of the household that occupies them. The basic motivation for this
survey was to determine whether Negroes pay more for housing of comparable quality and
hence have their real incomes reduced by the rent differential. Previous studies on this
topic relied on aggregative census data and suffered from imperfect controls on the
quality of the units. Mieszkowski's study, which is virtually complete, reaches the
following conclusions:
- Controlling for the size, quality and other characteristics of the unit, the sample of
black households pay about 11% more for rental housing than do whites.
- Households headed by a male in all-black areas pay about 11% more than in integrated
areas.
- Whites who live in mixed areas pay slightly less for housing than whites who live in
exclusively white areas.
- The discrimination against households headed by black women is substantially greater
than discrimination against households headed by black men. Households headed by white
women do not appear to be discriminated against relative to households headed by white
men, and
- An education variable, which is taken as a proxy for socio-economic status, decreases
rents paid by blacks but has no effect on the rents paid by white households.
Mieszkowski and Grether are also involved in a study of the determinants of real estate
values in New Haven, sponsored by the Institute of Social Science at Yale. Several years
ago the New Haven Board of Realtors made available to members of the Yale Department of
Economics their master cards containing detailed information on original asking and final
selling prices, and physical characteristics of properties sold in approximately 10
thousand transactions. Based on these data, the project will estimate the effect on
housing value of variables such as property tax burdens, the distance between the unit and
central work locations, and "neighborhood effects," e.g., the volume of traffic,
crime rates and the quality of public services. The results may provide indirect evidence
on the demand for public services (and the willingness to be taxed for them), information
related to alternative land use patterns and consequences of zoning, and some further
evidence on the existence and degree of market discrimination.
Higher rental costs based on discrimination may tend to increase disparities in nominal
earnings; in a similar fashion, the alleged phenomenon that the poor pay more for goods
and services than their affluent counterparts would, if true, worsen the
problems of poverty otherwise identified on the basis of money income. A study conducted
by Klevorick, in conjunction with Roger Alcaly of Columbia University, investigated the
relationship between food prices and income levels in different neighborhoods of New York
City. The basic data bank used for the analysis was a comprehensive survey of food prices,
conducted in New York City during the summer of 1967 by the New York City Council of
Consumer Affairs. Using the price data from the New York City survey and data on the
median family income for forty-six neighborhoods in New York, the simple relationship
between price and neighborhood income was investigated employing a single-equation linear
regression model. The regression results suggest that, after separating chain stores from
small independent stores, the commodity-by-commodity prices of food items on retail
merchants' shelves do not, with few exceptions, rise with decreases in neighborhood
income.
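The form of the single-equation specification described above can be sketched as follows, using synthetic data rather than the New York City survey itself: the price of a food item is regressed on neighborhood median income together with a dummy variable separating chain stores from small independent stores.

```python
# Synthetic illustration of the single-equation linear regression described in
# the text; coefficients used to generate the data are assumed, not estimated
# from the actual survey.
import numpy as np

rng = np.random.default_rng(1)
n = 200
income = rng.uniform(4, 20, n)           # neighborhood median income ($000)
chain = rng.integers(0, 2, n)            # 1 = chain store, 0 = independent store
price = 0.50 - 0.06 * chain + 0.001 * income + rng.normal(0, 0.02, n)

X = np.column_stack([np.ones(n), income, chain])     # intercept, income, dummy
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print("intercept, income coefficient, chain-store coefficient:", beta)
# The study reports that for most items prices do not rise as neighborhood
# income falls; the chain-store dummy picks up the generally lower prices in
# chains (a feature built into the synthetic data here).
```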
Care must be taken in interpreting these results. For example, while the direction of
commodity price variations with income and the significance of such variations are similar
for chain stores and other stores, for most commodities the mean price of the item is
higher in other stores than in chain stores. And, the evidence suggests that low-income
neighborhoods have a higher proportion of small independent stores than do higher-income
areas. This point and other qualifications of the regression results are presented in the
paper "Do the Poor Pay More for Food," CFDP 290. These qualifications, taken in conjunction with the results
themselves, make it clear that a complete investigation of the important question of
whether the cost of food is greater to the poor will require data of even higher quality
than those in the New York City survey and discussion that goes beyond the level of
comparing relative prices in stores serving areas with different incomes.
Klevorick also contributed to a study, Higher Education in the Boston Metropolitan
Area: A Study of the Potential and Realized Demand for Higher Education in the Boston SMSA
undertaken for the Boston Metropolitan Area Planning Council and the Board of Higher
Education of the Commonwealth of Massachusetts. The objective of the study was to provide
the basic analysis necessary for planning the future of higher education in the Boston
SMSA (Standard Metropolitan Statistical Area). Klevorick's participation in the study
centered on the preparation of policy proposals for financial aid to students, taking into
account the egalitarian objective of equal educational opportunity, the role of this
individual state in the nation's effort to attain this goal, and the incentive effects of
various forms of aid on the student's educational decisions.
During the academic year 1969-70, Guy Orcutt, then on leave from the Urban
Institute in Washington, D.C., held the Irving Fisher visiting professorship in the
Department of Economics, and was also a member of the staff of the Cowles Foundation. One
of Orcutt's major research interests is in the construction of detailed microeconomic
models, applicable to the evaluation of alternative policies concerned with such
issues as poverty, congestion, and waste disposal, and in the use of simulation
techniques for their solution. He continued, during his visit, to collaborate with the
staff of the Urban Institute on the development of a micro analytical model of the United
States population, oriented toward assessing the effects of a range of government policies
that directly affect the distribution of income and assets.
Regulation of the prices charged by particular industries, for example public
utilities, is another way in which governments intervene in market economies.
Beginning with the contribution by H. Averch and L.L. Johnson in 1962, a number of
economists have discussed the behavior of a profit-maximizing monopolist whose output
price is constrained so as to yield a preassigned rate of return on capital. The paper
"Input Choices and Rate-of-Return Regulation: An Overview of the Discussion," by
Klevorick and William J. Baumol of Princeton University, focuses particularly on the
consequences of rate regulation for efficiency and the implications for public policy.
Klevorick has continued his interest in this topic by asking whether the
Averch-Johnson formulation can offer any guidance about what the "fair rate of
return" ought to be. He uses the Averch-Johnson model to predict how the
firm will behave when the fair rate of return is set at various levels. With this firm
reaction function in hand, the regulators then choose the value of the allowed rate of
return that will lead the firm to maximize social welfare. The results of this research
raise questions about the conventional view, as expressed through the history of judicial
and regulatory commission proceedings, that the fair rate of return allowed to a regulated
firm always should be equal to the market cost of capital. Because of the large number of
specific assumptions that are needed to obtain concrete results, the specific conclusions
are primarily suggestive. Nevertheless, the research does make the important point that
the choice of values for regulatory control instruments should be viewed in a broader
context than that suggested by current practices.
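A numerical sketch of this two-level logic is given below, with assumed functional forms and numbers that are not those of the Averch-Johnson literature: for each candidate allowed rate of return the constrained profit-maximizing firm's input choice is found by grid search, and the regulator then selects the allowed rate that maximizes a simple welfare measure (consumer surplus plus profit).

```python
# Illustrative rate-of-return regulation: all functional forms and parameter
# values are assumed for the sketch.
import numpy as np

a, b = 10.0, 0.1            # inverse demand P = a - b*Q
w, r = 1.0, 1.0             # wage and market cost of capital
grid = np.linspace(0.5, 300.0, 300)
K, L = np.meshgrid(grid, grid)
Q = np.sqrt(K * L)                        # Cobb-Douglas technology (assumed)
revenue = (a - b * Q) * Q

def firm_choice(s):
    """Input mix maximizing profit subject to (revenue - w*L)/K <= s."""
    profit = revenue - w * L - r * K
    feasible = (revenue - w * L) <= s * K
    constrained = np.where(feasible, profit, -np.inf)
    i = np.unravel_index(np.argmax(constrained), constrained.shape)
    return K[i], L[i]

results = []
for s in np.linspace(1.5, 5.0, 15):                   # candidate fair rates of return
    k, l = firm_choice(s)
    q = np.sqrt(k * l)
    welfare = a * q - 0.5 * b * q**2 - w * l - r * k  # consumer surplus plus profit
    results.append((welfare, s))

best_welfare, best_s = max(results)
print("welfare-maximizing allowed rate of return:", round(best_s, 2))
# In this illustration the best allowed rate need not equal the market cost of
# capital r, which is the point at issue in the discussion above.
```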
Klevorick has also attempted to provide more realistic models of the regulatory
process, with particular emphasis on its dynamic character and on the interplay between
the regulatory agency and the firm it is regulating. One of the models, for example,
assumes that the occurrence of regulatory reviews can be described by a stochastic
process. The firm believes there is a certain probability that a review will occur in a
particular year and it acts on this belief. Klevorick has investigated what the optimal
policy would be for a regulated firm maximizing the discounted present value of profit in
such an environment. Models of this type enable one to predict how the
"tightness" of regulatory policy for example, the mean time between
reviews affects the operating efficiency, in both static and dynamic terms, of the
regulated firm.
Technological change has been viewed by many economists as a highly significant
component of economic growth, yet it is incorporated only in rather simple and stylized
form in aggregated growth models. Nordhaus has been concerned with a study of
technological change and invention, primarily at a microeconomic level, which
might give more insight into the ways in which a variety of government policies could
influence these components of economic growth. He has, first, proposed certain theoretical
models of the inventive process which can explain the rate and direction of inventive
activity. These include models of microeconomic determination of the level of research,
models of the economics of patents, and aggregate movements in the rate and direction of
technological change. Secondly, he has considered, within the context of these models, the
influence of such policies on technological change as the duration of patents, the rate
and direction of government-produced research, and subsidy systems for private research
and development. Finally, he has made some empirical estimates of these models to
determine their predictive ability.
During the past two years Brainard, together with several graduate students, has been
conducting a study of the economics of pollution of the Connecticut River. The study is an
attempt to determine the optimal pattern of sewage treatment for obtaining any desired
level of water quality and to compare the costs of such treatment with uniform code
enforcement of the type embodied in most legislation. In order to investigate this
question a programming model of the river has been constructed which builds on
technological information provided by the Army Corps of Engineers, and which incorporates
estimates of the cost of various levels of sewage treatment. The model breaks the river
into approximately 100 sections and includes a similar number of "pollution
activities" which can be varied by changing the level of treatment. In addition to
providing a means of computing the "least cost" method of pollution abatement, subject to
a water quality constraint, the model provides shadow prices on the water quality at
various places on the river. These "prices" in turn are valuable in determining
how quality standards should be changed and are useful in the selection of river sites to
be developed for recreational and other purposes.
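A drastically scaled-down sketch of such a programming model, with three river sections instead of roughly one hundred and entirely made-up coefficients, is shown below; the treatment levels are chosen to meet the water quality standard at least cost, and the shadow prices on water quality are read off the dual values of the constraints.

```python
# Toy least-cost treatment model: all loads, costs, decay rates, and standards
# are invented for the illustration.
import numpy as np
from scipy.optimize import linprog

load = np.array([100.0, 60.0, 80.0])     # raw pollution load at each section
cost = np.array([2.0, 3.0, 1.5])         # cost per unit of load removed
decay = 0.7                              # fraction of load surviving one section
limit = np.array([40.0, 50.0, 45.0])     # allowable load at each section

# Decision variables r_i = units of load removed at section i (0 <= r_i <= load_i).
# Load reaching section j:  sum_{i<=j} decay^(j-i) * (load_i - r_i)  <=  limit_j.
transfer = np.array([[decay ** (j - i) if i <= j else 0.0
                      for i in range(3)] for j in range(3)])
A_ub = -transfer                          # rewrite as  -T r <= limit - T load
b_ub = limit - transfer @ load

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=list(zip([0.0] * 3, load)),
              method="highs")
print("removal at each section:", res.x)
print("shadow prices on water quality:", res.ineqlin.marginals)
# Each shadow price indicates how much total treatment cost would fall if the
# quality standard at that point on the river were relaxed by one unit.
```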
5. Rational Behavior under Risk
One of the continuing interests of the Cowles Foundation over the last two decades has
been the development of a theory of economic behavior in the presence of uncertainty.
During the period of this report, research has focused on three areas: the development of
basic concepts, extensions of portfolio models, and the development of general equilibrium
models incorporating considerations of risk.
The modern theory of consumer behavior works with two distinct approaches: the
"ordinal" utility theory of consumers' choice among many commodities, and the
von Neumann-Morgenstern "cardinal" utility theory of consumer behavior
under risk, generally with one commodity or with many commodities whose price ratios
are fixed. Stiglitz (CFDP 262) showed
that these two theories are in fact intimately related; he investigated the mutual
relationships between alternative restrictions on the indifference map and assumptions
about consumer behavior under risk. For instance, it is shown that if an individual is
risk neutral at all price ratios in a given region, then the income-consumption curves
must all be straight lines (in that region).
In another paper dealing with basic concepts, Stiglitz and Michael Rothschild
(Massachusetts Institute of Technology) explore the problem of characterizing degrees of
uncertainty. In CFDP 275, they consider
four possible criteria by which riskiness might be judged: (1) If uncorrelated noise is
added to a random variable, the new random variable is riskier than the original. (2) If X
and Y have the same mean but every risk averter (a person with a concave utility function
is a risk averter) prefers X to Y, then X is less risky than Y. (3) If the probability
distribution of Y has more weight in the tails than the distribution of X, then Y is
riskier than X. (4) If Y has a greater variance than X, then Y is riskier than X.
Rothschild and Stiglitz show that the first three criteria lead to a single definition of
greater riskiness different from that of the fourth. This definition may be phrased in
terms of the indefinite integrals of differences of cumulative distribution functions.
They also show how their definition can be applied to a variety of economic and
statistical problems; for example, the effects of increased risk on portfolio allocation,
savings behavior, and firm investment policy are investigated.
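The integral criterion can be checked numerically for a pair of simple discrete distributions (illustrative only): with equal means, Y is riskier than X in this sense if the running integral of the difference between their cumulative distribution functions never becomes negative.

```python
# Small numerical check of the integrated-CDF criterion on two assumed
# discrete distributions with the same mean.
import numpy as np

grid = np.linspace(0, 10, 1001)

def cdf(values, probs):
    """Cumulative distribution function of a discrete random variable."""
    v, p = np.asarray(values), np.asarray(probs)
    return lambda t: np.sum(p[None, :] * (v[None, :] <= t[:, None]), axis=1)

F_x = cdf([4, 6], [0.5, 0.5])(grid)          # X: mean 5, small spread
F_y = cdf([2, 8], [0.5, 0.5])(grid)          # Y: mean 5, larger spread

running_integral = np.cumsum(F_y - F_x) * (grid[1] - grid[0])
print("minimum of the running integral:", running_integral.min())
# The running integral never becomes negative, so every risk averter prefers X
# to Y: Y is riskier than X in the Rothschild-Stiglitz sense, even though both
# distributions have the same mean.
```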
One of the earliest and most fruitful areas of application of the theory of behavior
under uncertainty has been to problems of portfolio allocation among alternative risky
assets. Until recently, the models examined in detail have usually been based either on
the simplifying assumptions of only one period and two assets, one of which is safe
(money) and the other risky, or on the assumption that a mean-variance analysis is
applicable. Recent research in this area has been primarily concerned with extending the
portfolio model to eliminate these limitations.
In an earlier Cowles Foundation paper, Tobin had shown that, with mean-variance
analysis, the portfolio decision could be divided into two parts: what proportion of the
portfolio to invest in the risky assets (as a whole) and, independently, the proportion to
be invested in each particular risky asset. Cass and Stiglitz (CFP 329) investigated the necessary and
sufficient conditions for such a portfolio separation theorem to obtain. They also
examined the more general question of when a set of mutual funds could provide all the
market opportunities desired by a class of individuals. The general class of utility
functions for which mutual funds are acceptable includes as special cases the quadratic
utility function, those with constant relative risk aversion, and those with constant
absolute risk aversion.
Cass and Stiglitz have continued their research into the structure of portfolios with
many assets, by investigating the effects of a change in wealth on portfolio allocation.
When there are only two assets, it can be shown that (a) the wealth elasticity of demand
for the safe asset is greater or less than unity as relative risk aversion is increasing
or decreasing; (b) the variance of the rate of return increases or decreases with wealth
as relative risk aversion is decreasing or increasing; (c) the certainty equivalent rate
of return to the optimal portfolio increases or decreases with wealth as relative risk
aversion is increasing or decreasing. In a multi-asset world, if there are as many states
of nature as securities, only the second and third properties hold, and if there are more
states of nature than securities, only the third remains valid.
The extension of the portfolio model to a simultaneous consideration of portfolio
allocation and saving behavior has been the concern of four papers (CFP 288 and CFP 330 and CFDP 268
and CFDP 260) in the last three years.
Tobin, in CFP 288 discussed below,
considers a two period model which incorporates uncertainty about rates of return and
future income. Stiglitz was primarily concerned with analyzing the allocation between
short and long term bonds in a two period model. Following earlier work of Tobin,
Stiglitz points out that the conventional view whereby long term bonds are treated as
risky and short term bonds as safe is incorrect; which asset is risky depends on the
investor's consumption patterns. Accordingly, it is shown that even when the long term
interest factor is equal to the product of the expected short term factors, all
individuals who consume in both periods purchase some long term bonds, and they may in
fact specialize in long term bonds (if they are not very risk averse). Properties of the
demand functions for short and long term bonds are analyzed; the pattern of allocation is
shown, for instance, to be sensitive to relative rates of return, but not necessarily to
the level of the interest rate.
The capital-budgeting problem faced by a firm is analogous to the multi-period
portfolio-selection problem faced by an individual investor. A prototype of the firm's
capital-budgeting problem would picture an existing firm planning an investment program
for the next T periods. Taking account of the anticipated cash throw-offs from the
resources owned at the beginning of the investment program and the capital-market options
available to it, the firm then would be depicted as choosing the optimal subset of
investments to make from among the opportunities available over the planning horizon.
Since the returns from (and, perhaps, the cash outlays on) the various projects are not
known with certainty and can be characterized as random variables, the firm's
capital-budgeting problem is an example of decision-making under risk in which a decision
taken now has a stream of future effects.
Two aspects of the capital-budgeting problem just described have provided the focal
points for Klevorick's research in this area during the last three years. First, while the
criterion function of the capital-budgeting problem for a firm must necessarily evaluate
risks that occur at different points in time, almost all previous characterizations of
attitudes towards risk have been in terms of a single-period utility-of-wealth function
(as, for example, in the traditional discussion of risk aversion and the more recent
developments due to K.J. Arrow and J.W. Pratt). The question naturally arises, then,
whether the discussion of attitudes toward risk and statements about the behavioral
implications of these attitudes can be extended to a multi-period context.
One part of Klevorick's work on the capital-budgeting decision has addressed itself to
just this question. The relationship between concavity of the decision-maker's utility
function and risk aversion that is familiar in the case of a single-stage utility function
can readily be shown to obtain in the case of a multi-period utility function. The
emphasis of further research has been on extending the Arrow-Pratt concept of
decreasing absolute risk aversion to the case of multi-period decisions. This extension
has been completed successfully in terms of insurance-policy purchases in perfect capital
markets. A set of sufficient conditions has been derived for the existence of multi-period
decreasing absolute risk aversion and it has been shown that this set of conditions is a
generalization of the Pratt condition for the single-period case. It has also been proved
that, in the case of an additive multi-period utility function, the necessary and
sufficient condition for decreasing absolute risk aversion is that the Pratt condition
obtain for each individual period's utility function.
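For reference, the single-period Arrow-Pratt measure underlying this discussion is, in standard notation rather than that of the report,

```latex
A(w) \;=\; -\,\frac{u''(w)}{u'(w)},
\qquad \text{with decreasing absolute risk aversion meaning } A'(w) < 0 ,
```

and the additive multi-period case referred to above takes U(w_1, ..., w_T) = u_1(w_1) + ... + u_T(w_T), with the Pratt condition required of each period's function u_t.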
The second aspect of the capital-budgeting problem upon which Klevorick's research has
concentrated concerns the appropriate form that the objective function in that problem
should assume. There is general agreement on the appropriate criterion function for an
investment-planning decision made in a world of perfect capital markets and perfect
certainty. On the other hand, the question of what the appropriate function is when
capital markets are imperfect and projects' returns and costs are uncertain remains open
for debate. An examination of the alternative candidates for the maximand has led to the
development of a case in favor of the maximization of expected utility by a corporation
budgeting capital in the presence of risk and imperfect capital markets.
The research on both issues just discussed will form part of the material to be
included in a monograph on capital budgeting under risk which Klevorick has been preparing
for publication.
Hakansson has also investigated multi-period models of capital budgeting. His use of a
sequential decision-making model provides an extensive generalization to the model
originally formulated by Phelps (CFP 192).
The effect of taxation on risk taking was one of the first problems investigated by
means of portfolio analysis. It was shown (papers by Tobin and Lepper in Cowles Monograph 19) that proportional
taxes with full loss-offsets increase risk taking if the investors' utility of income
functions are quadratic (i.e., in one of the two cases where mean-variance analysis
applies). Stiglitz has shown that this result does not hold generally; the effects of
alternative taxes (with and without loss offset provisions) are shown to depend on the
level of relative risk aversion as well as on how relative and absolute risk aversion
change with wealth. In addition, a diagrammatic interpretation of increasing and
decreasing relative (and absolute) risk aversion is provided in CFP 293.
Development of general equilibrium models for economies with uncertainty has been
focused primarily on the formulation of models in which insurance markets are not complete
and perfect. In "A Re-Examination of the ModiglianiMiller Theorem" (CFP 314), Stiglitz explicitly considered
the problem of bankruptcy. He showed that the financial policy (debt-equity ratio) of
firms had no effect on any of the real variables of the economy (the
Modigliani-Miller result), provided firms did not go bankrupt.
Furthermore, bankruptcy made no difference if individuals had identical expectations and
their preferences could be described in terms of means and variances. However, when these
conditions are not satisfied, there does exist an optimal financial policy for the firm.
In more recent work, Stiglitz has been concerned with the analysis of investment policy
and choice of technique in an economy with a stock market.
6. Descriptive and Optimal Growth Theory
Just as the static general equilibrium model under certainty can be extended to include
uncertainty, so there is also an immediate way in which the model can be extended to
accommodate economic problems of a dynamic character. If commodities are distinguished not
only by their physical aspects but also by the calendar date of their availability, the
entire apparatus of the Walrasian model can be used to analyze the problems of allocation
over time.
This procedure, attractive as it may be, does have at least one substantial drawback:
it requires the independent decision making units to be in full possession of the relevant
prices which will indeed prevail in the future. Production units must be aware of future
labor and material costs, interest rates, and the competitive prices which will be
obtained for their outputs. Consuming units are required to be informed, not only of the
various components of their own future income streams, but also of future prices and
interest rates, since these may influence current consumption and savings decisions.
Much of the research in that area known as "descriptive" growth theory has
been concerned with the replacement of the complex maximizing behavior required in the
Walrasian model by a variety of simpler and possibly more realistic assumptions about
individual behavior. For example, savings behavior, which might otherwise be obtained by
the maximization of utility subject to a budget constraint, may be replaced by an
assumption that saving is, for each individual or in the aggregate, directly
proportional to income. And production decisions may be described as motivated largely by
short run profit maximization, independently of the course of future prices and interest
rates.
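A minimal numerical sketch of such a descriptive growth path, using an assumed aggregate production function and hypothetical parameter values rather than anything estimated in the work described, is the following.

```python
# Hedged sketch (not from the report): a descriptive growth path in which aggregate
# saving is simply a fixed fraction of income, so no agent needs to know future
# prices or interest rates.  All parameter values are hypothetical.
def simulate(s=0.2, n=0.02, delta=0.05, alpha=0.3, k0=1.0, periods=100):
    """Capital per worker under k' = k + s*f(k) - (n + delta)*k, with f(k) = k**alpha."""
    k = k0
    path = []
    for _ in range(periods):
        y = k ** alpha                     # output per worker from the assumed production function
        k = k + s * y - (n + delta) * k    # saving is the fixed fraction s of income
        path.append(k)
    return path

if __name__ == "__main__":
    path = simulate()
    print(f"capital per worker after 100 periods: {path[-1]:.3f}")
```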
These simplifications of descriptive growth theory are one attempt to cut through the
complexities of a more elaborate model in order to focus on the dynamic aspects of a
problem. A similar concentration on intertemporal choices has been obtained by a number of
other simplifications, within the context of maximizing behavior. For example, a single
utility function, or social welfare function, may replace the variety of
individual utility functions typically assumed in a more disaggregated model; stocks of
machinery may be replaced, as inputs into production, by a fictitious
homogeneous capital good; and production possibilities may be described by a highly
simplified relation between current output and aggregate inputs of capital and labor.
In a series of papers extending over a number of years, Koopmans has been concerned
with the study of growth paths in an economy in which the social welfare function for
consumption over time is assumed to have the property that the relative ranking of future
consumption streams that coincide in the first period is independent of the levels of
consumption achieved in that first period. In work described in an earlier three-year
report, Koopmans has characterized the utility functions which possess this and related
properties. The class of utility functions thus obtained is a substantial generalization
of the discounted sum of single period utility functions so frequently encountered in
optimal growth theory. In a joint publication with Richard Beals (CFP 309) this class of utility functions
is applied to an aggregate growth model involving a single good which serves
interchangeably as a capital or consumption good. The work differs from most of the
published work on optimal growth in that the optimality criterion allows the rate of
discounting of future utility flows to depend on the future consumption path, rather than
being assumed constant.
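In outline, and with notation assumed for this illustration rather than taken from the papers themselves, the class of criteria involved can be written recursively.

```latex
% Hedged sketch of the recursive form involved (notation assumed).  Stationarity plus
% the independence property described above yield an "aggregator" V such that
\[ U(c_1, c_2, c_3, \dots) \;=\; V\bigl(u(c_1),\, U(c_2, c_3, \dots)\bigr). \]
% The familiar discounted sum is the special case V(u,U) = u + \beta U, i.e.
\[ U \;=\; \sum_{t=1}^{\infty} \beta^{\,t-1} u(c_t), \]
% while a general V lets the implied rate of discount of future utility depend on
% the consumption path itself, as in the Beals--Koopmans application.
```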
In subsequent work Koopmans has reverted to the simple optimality criterion in which
the discount rate is a constant, in order to study a different complication: that of
introducing into the model many goods, classified as consumption goods, resources, and
capital goods. The technology is of the von Neumann type with resources and consumption
goods added, and is constant over time. The first object of study is a stationary state
resulting from maximization of a discounted sum of utilities derived from future
consumption flows, provided the initial stock is such as to be self-preserving under
the maximization. The study examines how the stationary capital stock depends on the constant
discount rate.
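For orientation, the one-good analogue of this dependence is the familiar condition sketched below; it is an illustration only, with assumed notation, and not the many-good model under study.

```latex
% One-good analogue only (an illustrative sketch, not the many-good model studied).
% Maximizing \int_0^\infty e^{-\rho t} u(c_t)\,dt subject to \dot{k} = f(k) - \delta k - c
% yields a stationary ("modified golden rule") stock k^* characterized by
\[ f'(k^*) \;=\; \rho + \delta , \]
% so the stationary capital stock shrinks as the constant discount rate \rho rises;
% the many-good question is how this dependence generalizes when capital goods,
% resources, and consumption goods are distinguished.
```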
The mathematical essence of this problem appears to be a concave non-linear programming
problem in which additional constraints are placed on the dual variables (specifically, on
the shadow prices of capital goods). It is hoped that analysis of this problem will help
to obtain a better understanding of several difficulties in capital theory that have been
discussed in the recent literature. It may also help in discussing problems of optimal
taxation where capital goods may be among those taxed.
The problem of aggregation in a growth model involving many sectors was also examined
by Weitzman in CFDP 292. The paper
attempts to define constant "stationary equivalents" for the principal variables
in a nonstationary optimal growth path. For instance, the path of consumption is
represented by the utility value of that constant consumption vector whose discounted sum
of utilities over the infinite future equals that of the optimal path itself. Similar
definitions are introduced for the capital stock and the total product. It is then found
that the derivative of this aggregated total product with respect to the aggregated
capital stock equals the discount rate.
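The construction can be sketched as follows, with continuous-time notation assumed for the illustration.

```latex
% Hedged sketch of the construction (continuous-time notation assumed).  The
% stationary equivalent \bar{U} of a consumption path \{c_t\} is the constant
% utility level whose discounted sum matches that of the optimal path:
\[ \frac{\bar{U}}{\rho} \;=\; \int_0^{\infty} e^{-\rho t}\, u(c_t)\, dt
   \qquad\Longleftrightarrow\qquad
   \bar{U} \;=\; \rho \int_0^{\infty} e^{-\rho t}\, u(c_t)\, dt . \]
% With analogous stationary equivalents \bar{Y} and \bar{K} for total product and
% the capital stock, the result cited above reads d\bar{Y}/d\bar{K} = \rho.
```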
Weitzman has also studied an extension of the classical optimal growth problem
originally discussed by Frank Ramsey in 1928. In this paper (CFDP 273) output is limited by the smaller
of two capacities: (1) that of a directly productive capital stock whose contribution is
subject to decreasing returns, and (2) that of overhead capital, itself produced under
increasing returns to scale. In a model in which the overhead capacity limitation is
ignored, the optimal path is that derived by Ramsey. If the direct capital limitation is
ignored, the problem is one of optimal stepwise capacity expansions similar to those
studied by Alan S. Manne and other authors. When both limitations are recognized, the
optimal path travels along Ramsey's curve in those phases when overhead capacity is ample,
but holds consumption constant in alternate periods in order to accumulate goods to be
embodied in the next increase in overhead capacity.
One of the major limitations of the simple neoclassical models of growth,
developed for example by Ramsey and by Solow, was the requirement of a single,
homogeneous, malleable capital good, i.e., a good whose complementary labor input was not
fixed at the time of construction, and whose output could be increased by increasing the
supply of such labor. Not only was the assumption unrealistic, but it also made it
impossible to ask several of the central questions concerning economic growth: What
determines the choice of technique; how are different capital goods allocated to different
sectors, and what are the effects of capital gains on growth? Moreover, in the descriptive
models, because of these assumptions about the nature of capital (as well as the
requirement that savings be a simple function of income) future wage rates and interest
rates play no role in the pattern of growth. Attempts to alleviate the limitations of the
simple neo-classical model have been the subject of a number of papers by Cowles staff
members and others during the past three years.
These limitations on the malleable capital model were, at least partially, realized by
the early 1960's; both Solow and Johansen formulated models in which machinery of varying
capital-labor ratios could be constructed, even though no modification of the labor
requirement was permitted subsequent to production. Investigation of the dynamics of these
models had, however, been limited because, unlike the simpler growth models, the growth
processes were now described by a seemingly complicated mixed differential-difference
equation system. In CFP 299 and CFP 287, Cass and Stiglitz developed
methods for analyzing both descriptive and optimal versions of these models.
For the descriptive model, both of the polar cases of static expectations and perfect
foresight about future factor prices were investigated. In the case of static
expectations, convergence of the growth path was assured under weak conditions. Paths
which are consistent with fully perfect foresight converge to a balanced growth path, but
there exist many paths consistent with limited short-run perfect foresight which do not.
These results confirmed those of earlier studies by Hahn, Shell and Stiglitz showing that
with heterogeneous capital goods stability depended crucially on the expectations
hypothesis used. The optimal growth path with non-malleable capital is shown to be quite
different from that of the malleable case; consumption need not be monotonic; indeed,
wages, and the capital intensity of the machines constructed, may fluctuate. There may also
be discontinuities in the choice of technique, in which two types of machines are
manufactured without constructing machines of intervening capital intensities.
The assumption of malleable capital has also been made in econometric estimates of
production functions. If capital is not in fact malleable, a specification error is
therefore being made. The nature of this error is investigated by Cass and Stiglitz in CFP 299 for three estimation problems: the
estimation of the rate of technical change, the estimation of investment functions, and
the estimation of production functions with constant elasticity of substitution.
The implications of a similar misspecification in the context of an optimal growth
model were examined by Newbery (CFDP 281).
Assuming that the economy is accurately described by a vintage model with fixed
coefficients, a series of numerical simulations is obtained which describes the loss in
utility occasioned by following an "optimal" path, the path being
calculated under the assumption of malleable capital.
The property of malleability may be contrasted with another type of substitution
possibility: capital is said to be "shiftable" if, after its production, it can
be used in any one of several sectors, should economic conditions warrant such a choice.
In CFDP 266, Weitzman introduced a
generalization of a two sector model of optimal economic growth first considered by the
Soviet economist G.A. Feldman in 1928. In addition to consumption and investment sectors,
Weitzman's model includes a raw material sector which can be used to provide inputs into
either of the other two sectors, thereby providing a degree of shiftability which is
not captured by the previous formulation. Under fairly general conditions, however, the
optimal growth paths for the two models coincide and the increased flexibility will not be
exploited.
In CFP 313, Karl Shell (University
of Pennsylvania), M. Sidrauski and Stiglitz investigated the role played by changing
prices of capital goods, relative to one another and relative to those of consumer goods
(capital gains and losses), on individual savings and on portfolio choices in a number of
simple neoclassical growth models. Dynamic behavior in models of this sort is shown to
differ substantially from those in which capital gains play no role. In a particular
example in which the nominal supply of money is assumed to change at a constant rate, the
economy, starting with a fixed endowment of capital and labor and a given money
supply, will approach long-run balanced growth only for certain specifications of
initial prices.
In CFP 319, Stiglitz considered a
two-sector economy in which there was embodied technical change, so that newer machines
are "more efficient" than previously produced machines. The central problem
posed was how an economy would allocate the different kinds of machines between the two
sectors; it was shown that all the newer machines would be allocated to the labor
intensive sector, assuming one sector to be unambiguously more labor intensive than the
other.
In the analysis of many of the conventional problems in economics, static assumptions
are made with the hope that the dynamic effects may be safely ignored. The development of
neoclassical growth theory in recent years, however, has made available tools by which
some of these problems may be reexamined in an explicitly dynamic setting. That is the
object of the two papers about to be described.
In "Factor Price Equalization in a Dynamic Economy," Stiglitz examines
several classical trade propositions, using a model with two factors of production,
capital and labor, in each of two countries. Alternative descriptions of savings
behavior are examined: the Marxian savings hypothesis that a given fraction of profits is
saved, and the "rational" savings hypothesis that each country acts as if it were
maximizing an intertemporal utility function with a constant discount rate. It is shown
that under certain conditions at least one of the two countries specializes so that factor
prices and interest rates are never equalized; indeed, under the normal capital intensity
hypothesis, factor price differentials actually increase over time.
In the second of these papers (CFP 320),
the major determinants of economic inequality were examined in the context of a growth
model. Attention was focused on four factors: (a) savings behavior, (b) inheritance
policies, (c) differences in factor productivities, and (d) human reproduction rates; an
attempt was made to identify those factors which tended to increase or decrease equality
in the distribution of wealth. The methodology of growth theory was also applied by
Newbery (CFDP 278) to a model of an
underdeveloped country containing a modern industrialized sector alongside of a
traditional sector engaged, for example, in agricultural production at a subsistence
level. The implicit prices for investment, labor, and consumption are determined by a
programming problem in which a discounted sum of utilities for workers in both sectors is
maximized.
A production plan consistent with a specific technology is "efficient" if its
inputs cannot be decreased without entailing a corresponding decrease in at least one of
the outputs. In a model involving a finite number of production periods, an efficient plan
will maximize the present value of the stream of profits when compared to all alternative
feasible plans for some appropriate selection of prices and interest rates implicit
in the efficient plan. This simple relation between efficiency and the maximization of
present value may however be lost in those models typically discussed in growth theory in
which an infinite sequence of decision periods are permitted. In the paper, "Present
Values Playing the Role of Efficiency Prices in the One-Good Growth Model," Cass and
Menahem Yaari analyze this relationship in the context of a one sector growth model of the
type introduced by Ramsey. It turns out that present values do have a maximality property,
thereby providing a price interpretation of efficiency; but the interpretation in the
infinite horizon model is weaker and somewhat more subtle than the customary one.
7. Macroeconomics and Monetary Theory
As in the case of descriptive growth theory, short-run macroeconomics, including
its monetary aspects, is distinguished from general equilibrium theory by less
formal consideration of the rational behavior of economic agents in a large number of
markets. Aggregation across large numbers of decision-making units remains, by definition,
an essential characteristic of macroeconomics. However, macroeconomic research has
increasingly broken away from the simplifications of early Keynesianism in two respects.
First, more care has been taken in formulating aggregate analogues to reasonable
assumptions about the behavior of individual consumers or producers. Thus, aggregate
capital investment functions are more frequently related to profit maximizing conditions
for competitive firms, and money demand functions are similarly related to the portfolio
choices of individual households. Second, more explicit consideration is given to market
equilibria (or disequilibria) for a larger number of goods and financial assets, and to
such macroeconomic identities as the summation of assets and liabilities to total wealth.
These trends in macroeconomic theory have entailed costs: by making models more
complicated to solve, resulting in greater interest in computer simulation
techniques, and by making aggregate relations considerably more difficult to
estimate empirically, requiring the application of more elaborate econometric
procedures and raising new questions about econometric techniques. The resulting benefits
are richer implications, and in some cases, modifications of earlier policy conclusions
from macro-economic theory.
Most macroeconomic and monetary models have, for convenience, treated portfolio choice
and saving as separate decisions. The typical individual is assumed to decide first how
much wealth he desires and how fast to accumulate it, and second, how to apportion it
among various assets. This is neither realistic nor analytically satisfactory. In CFP 288, Tobin considers the optimal rates
of growth of "outside" money and of financial intermediation needed to place an
economy on an efficient growth path. The solution to this problem is shown to involve both
the amount of saving and the division of savers' portfolios between capital and monetary
assets. Thus, in the second part of the paper, Tobin sets forth a preliminary analysis in
which the two decisions are integrated; amounts saved depend on the menu of assets
available, and on the probability distributions of their returns. Ebel worked further on
the same subject while at the Cowles Foundation.
Work in both theoretical and empirical monetary economics at the Cowles Foundation has
been relevant to a number of monetary controversies that have intensified in the past
three years. The issues concern: the relative importance of monetary and fiscal policies
in affecting short-run economic fluctuations and determining long-run trends; the proper
indicators and targets of monetary policy; the impact, stabilizing or destabilizing,
of active discretionary government policy intended to stabilize the economy. The
general spirit of the work at Cowles is opposed to the "monetarist" school.
Papers expressing the point of view developed at the Cowles Foundation have been noted in
earlier reports (CFP 224, 1964; CFP 229, 1965; CFP 257, 1967). In the period covered by
the current Report, Tobin completed and published work showing how misleading, as
indications of causation, can be temporal leads and lags between time series of money
stock and money income. Indeed an "ultra-Keynesian" model in which the money
stock has no causal importance is shown to generate lead-lag patterns exactly like those
exhibited by Friedman and Schwartz in support of their "monetarist" position.
Friedman's own "permanent income" model of money demand, a theory which
attributes business fluctuations to cycles in the rate of growth of the money stock, is
shown to imply lead-lag observations quite different from the observed pattern (CFP 323). In CFP 296, Tobin and Craig Swan examined the
statistical fit of the same "permanent-income" model of U.S. data and its
ability to forecast money income from money stock series. The model was not successful.
Research on monetary theory and policy at Cowles has been inclusive and eclectic. Many
assets, not just those dubbed "money," have monetary importance.
Many institutions, not just commercial banks, play significant roles. Monetary
policy is important, but there is no iron mechanical link from monetary variables to
business activity. Tobin published an expository article describing this "general
equilibrium" approach (which was described briefly in the previous Report) in the first issue of the new Journal of Money,
Credit, and Banking ("A General Equilibrium Approach to Monetary Theory,"
Vol. 1, February 1969, pp. 15–29).
As mentioned in the 1967 Report, Brainard and Tobin conducted
simulations of "general equilibrium" financial models and their adjustment
paths. Their results were published in CFP
279. In addition to the substantive theory embodied in the models used in the
simulations, this paper makes two main methodological points. One point, similar to that of
CFP 323 (cited above) but more complex
and general, is that temporal sequences of peaks and troughs are without value as
indicators of causation. The other is that both equilibrium and disequilibrium models must
explicitly respect Walras' law. In this context the law is that, since the demands for
various assets are always constrained to sum to total wealth, anything which affects the
demand for one asset without affecting total wealth must have equal and opposite effects
on the demand for at least one other asset. Econometric models typically ignore this
important principle, and the paper presents simulations that illustrate the unhappy
consequences of doing so.
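A small numerical illustration of this adding-up requirement, using hypothetical portfolio shares and yield responses rather than estimates from the paper, is the following sketch.

```python
import numpy as np

# Hedged illustration (hypothetical numbers): a linear asset-demand system
#   A_i = (a_i + sum_j b_ij * r_j) * W
# where W is total wealth and r_j are asset yields.  The balance-sheet version of
# Walras' law requires that demands add up to W for every (r, W), which holds only
# if the intercept shares sum to one and each column of yield responses sums to zero.
a = np.array([0.50, 0.30, 0.20])                      # intercept portfolio shares
B = np.array([[ 0.60, -0.40, -0.20],                  # response of each share to each yield
              [-0.35,  0.55, -0.20],
              [-0.25, -0.15,  0.40]])

print("intercepts sum to one:", np.isclose(a.sum(), 1.0))
print("yield responses sum to zero by column:", np.allclose(B.sum(axis=0), 0.0))

# Consequence stressed in the text: a shift that raises the demand for one asset,
# holding wealth fixed, must lower the demand for at least one other asset.
```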
The model of CFP 279 is theoretical and the
simulations are illustrative. But the same framework is the basis for an empirical model
of the financial sector of the U.S. economy which is being estimated by students of the
macroeconomics workshop under the direction of Brainard and Tobin.
Although research has shifted away from sole emphasis on the role of commercial banks
in monetary phenomena, these financial institutions continue to play an important role in
the adjustments of portfolios, interest rates and credit flows. During the summer of 1968,
Hester continued work on a forthcoming monograph, Bank Management and Portfolio
Behavior, which he is preparing in collaboration with James L. Pierce of the Board of
Governors of the Federal Reserve System. This monograph both extends the application of
the portfolio theoretic approach to commercial banks and includes fairly extensive
empirical investigation of their behavior.
An application of the general equilibrium-portfolio approach to a question of policy is
provided by Tobin's paper CFP 322,
criticizing the reliance of policy-makers on deposit interest ceilings for commercial
banks and thrift institutions. The basis of this criticism is the allocative and
distributive effects of such ceilings in periods of restrictive monetary policy. An
investigation by Hester of credit flows from financial to real sectors during the unusual
period of the 1966 "credit crunch" leads to conclusions closely related to these
arguments of Tobin. In CFP 311, Hester
suggests that output of the housing industry was cut back in 1966 by forcing suppliers of
mortgage loans into a position from which they could not successfully compete for new
funds. This was brought about by Federal Reserve open market operations which forced
market interest rates up while, simultaneously, the Federal Home Loan Bank Board imposed
effective ceilings on the interest rates which savings and loan associations could pay on
savings deposits and shares.
Another policy application was given by Tobin in a paper ("Monetary
Semantics") questioning the value of the monetarists' search for a single indicator
and target of monetary policy and in particular challenging the suitability of the
quantity of money (currency plus bank deposits) for this role.
Turning to the real sectors of the economy, macroeconomic models characteristically
include two basic building blocks, a set of one or more investment relations and a set of
one or more consumption functions. Bischoff has been concerned with examining the behavior
of business fixed investment, and particularly with examination of the effects of monetary
and fiscal policy on this form of investment. The "putty-clay" model of
production relations, in which factor proportions are variable only up to the point at
which new machines are installed, implies, under several simplifying assumptions, that the
short-run elasticity of investment demand with respect to changes in the quasi-rental cost
of investment goods will not exceed the long-run elasticity. In contrast, the initial
demand response to a change in output will exceed the long-run response. In an empirical
application to the demand for equipment (CFDP
250), Bischoff found that the estimated dynamic response patterns conform to the
suggestions of the theoretical model: factors affecting the quasi-rents, including the
investment tax credit, the long-term interest rate, and the yield of equities, are found
to have substantial long-run effects but smaller short-run effects.
In "A Model of Nonresidential Construction in the United States" (CFP 325), Bischoff examines the demand for
investment in structures. The estimated response patterns to changes in quasi-rents do not
conform to the predictions of the "putty-clay" model; as with the response to
output changes, the short-run elasticity exceeds that for the long run. This is consistent
with the intuitive observation that building is likely to adapt more flexibly than
machines to various factor proportions. The results also suggest that equity yields exert
substantial effects on construction demands in the short run, and to a lesser extent in
the long run.
One central aspect of the "general-equilibrium" monetary theory discussed
above is the importance it assigns to discrepancies between market valuations of old
capital goods and the cost of new capital goods. In particular, securities markets
continuously re-value the plant and equipment of corporations; the relation of these
valuations to the costs of new plant and equipment determines how large a claim on their
future earnings corporations must give up in order to finance new investments. The
hypothesis that this relation is a significant factor in corporate investment decisions is
being tested by the macroeconomics workshop and has performed well in preliminary tests by
Bischoff.
The value of the long-run elasticity of investment demand with respect to the
quasi-rental cost of the services of investment goods has been the subject of a
controversy between Dale Jorgenson and Robert Eisner. In CFP 301, Bischoff shows that the conclusions drawn are critically
dependent on the precise assumption made about the stochastic process which generates the
disturbances in the equation being studied.
"The Lag Between Orders and Production of Machinery and Equipment: A
Re-examination of the Kareken–Solow Results," reconsiders a portion of a study
of lags in monetary policy and shows that the lag in the machinery industry between orders
and production is considerably shorter than the earlier study indicated. Bischoff's
reexamination relies on more sophisticated methods for estimating distributed lags and
serially correlated errors than the original authors used. Bischoff has recently surveyed
contemporary innovations in specification and estimation of distributed lag models.
Aggregate consumption functions received extensive treatment in early econometric
literature because of the concurrent Keynesian emphasis on the importance of the
consumption multiplier. A number of alternative theories of aggregate consumption behavior
have been developed from underlying microeconomic theories of rational consumer behavior.
A standard result of these theories, in a static monetary economy, is that a consumer's
demand functions for commodities are homogeneous of degree zero in prices, money income,
and money wealth. This condition has been defined as the absence of money illusion.
Aggregating over all commodities and all consumers, this absence-of-money illusion result
implies that the economy's aggregate real consumption should be a function of aggregate
real income and aggregate real wealth, but not the price level. The world in which
consumers make their decisions and take their actions is, however, quite different from
the world of traditional consumer theory where rationality and perfect information always
prevail. This observation suggests the following basic question to which research by
Klevorick and William Branson of Princeton University was directed: If one estimates a
short-run consumption function taking account of distributed-lag adjustments,
simultaneous-equation relationships and the like, will the resulting short-run
relationship show that money illusion is present?
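In schematic terms, and with notation assumed for this illustration, the condition at issue and its aggregate implication are as follows.

```latex
% Hedged restatement of the homogeneity condition described above (notation assumed).
% Absence of money illusion: scaling all prices, money income, and money wealth by
% \lambda > 0 leaves commodity demands unchanged,
\[ x_i(\lambda p, \lambda Y, \lambda W) \;=\; x_i(p, Y, W), \qquad \lambda > 0 , \]
% so that, on aggregation, real consumption depends only on real income and real wealth:
\[ \frac{C}{P} \;=\; f\!\left(\frac{Y}{P}, \frac{W}{P}\right). \]
% The test described below asks whether the price level P nonetheless retains a
% separate role in an estimated short-run version of this relation.
```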
Branson and Klevorick estimated such a "money-illusion consumption function"
for the United States based on quarterly data for the sample period 1955-I through 1965-IV.
Using an Ando–Modigliani–Brumberg life-cycle hypothesis model, they estimated a
consumption function that allowed the general price level to play an independent role in
determining the level of per capita real consumption. The estimates and the tests to which
they are subjected lead to the conclusion that the price level does indeed have an
independent effect on real consumption.
To the extent that consumers are subject to money illusion, it is reasonable to expect
them to have unfavorable attitudes toward general price increases even if matched or
exceeded by increases in nominal income. In a paper presented in 1968 to the American
Political Science Association, Lepper extended previous work of Kramer's (see preceding Report) on voters' responses in
Congressional elections to economic fluctuations. Experimentation, at the national level
of aggregation, with a number of different specifications and sample periods indicated
that changes in the consumer price index might have some independent influence on voters'
satisfaction with economic conditions, in addition to the influence of unemployment
(or change in real income), but the statistical evidence was very weak and was not
replicated in experiments with county data. In this same paper, she elaborated the model
of voter choice underlying her own, and Kramer's earlier work and speculated on the
implications of the regression analysis for policy makers' choice of a national
unemployment target.
Wage, price and productivity relations, which were frequently omitted from early
aggregate econometric models, have proven to be among the most difficult to estimate
econometrically. In regard to prices, this generalization is one of the major conclusions
of Nordhaus' review of "Recent Developments in Price Dynamics" (CFDP 296) in
which he compares and evaluates nine econometric studies of price behavior. In the same
paper, Nordhaus develops a model of long-run profit maximizing behavior for firms, which
provides rules for price behavior. He is then able to compare the specification of price
relations implied by this theoretical model, as modified by qualitative arguments
concerning short-run price adjustments, with the existing econometric studies. In
most cases he finds that omission of theoretically relevant variables is a reasonable
explanation for the deviation of estimated coefficients from values that would be expected
on theoretical grounds.
In the macroeconomic workshop directed by Brainard and Tobin, Nordhaus has engaged in
theoretical and empirical work on the problems of explaining cyclical productivity
movements. His approach is based on dynamic models of producers' choice which incorporate
uncertainty and fixed costs in changing the labor force.
Assumptions about wage and price adjustments and the nature of expectations about
future prices, wages, and sales play crucial roles in explaining unemployment. On a
theoretical level, Nordhaus has also explored the determinants of unemployment in a
decentralized economy subject to uncertainty in demand. In a preliminary paper, he shows
that rational producers will choose to have excess capacity (capital), and that some
secular unemployment of labor will be normal, in an economy subject to random fluctuations
in demand. Furthermore, the average capital-labor ratio will be biased downward. The
amount of unemployment of capital and labor, and the size of the bias in the capital-labor
ratio depend on the variability of demand. These conclusions follow from the assumption of
a conventional "putty-clay" production function and the absence of short-run
cyclical adjustments in prices and wages, even when long-run behavior conforms to all the
conventional neo-classical assumptions.
In CFP 290, "Output, Wages and
Employment in the Short Run," Stiglitz and R.M. Solow of the Massachusetts Institute
of Technology consider the determination of unemployment in the short run, i.e.,
when the capital stock is fixed. The authors present a model of short-run
"equilibrium" in which money wages are not rigid, yet real wages do not rise
in a depression, nor do firms sell all they would like at the given prices and wages. Two
types of short-run equilibrium are identified, "demand constrained" and
"supply constrained." Income shares, in the first of these, are determined
according to the so-called Cambridge theory of distribution but, in the latter, marginal
productivity theory applies.
8. Econometric Methodology
Econometric investigations have long been concerned with the analysis of economic
relationships using as statistical evidence a sequence of observations over time. But it
is only within the last five or ten years that the use of spectral techniques for the
analysis of economic time series has been widely advocated. This methodology, which has
had a long history of application in engineering and the physical sciences, can be applied
to any stochastic process which is stationary in the sense that the correlation between
any two observations made at two distinct points in time depends only on the
time difference. Given this assumption, and some mild additional conditions, these
correlations and other statistical aspects of the process can be estimated from a single
sequence of observations, if it is sufficiently long.
Any appropriately regular function of time can be represented, over a finite interval,
by its Fourier Series: a representation in terms of elementary periodic functions of
differing frequencies and with amplitudes which measure the contribution of that
frequency. When the function of time is generated by a stochastic process the amplitude
associated with a given frequency will itself be a random variable. If the process is
stationary, in the sense previously defined, the decomposition may be taken to have the
property that the random amplitudes at two differing frequencies are mutually orthogonal,
or, in the special case of amplitudes with a normal distribution, independent of
each other. The sum (or in some cases, the integral) of the variances of these amplitudes,
for all frequencies less than a given frequency, is defined to be the spectrum of the
process.
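As an illustration of how such a decomposition is computed in practice, the following sketch estimates the spectrum of an artificial stationary series by the periodogram; the series, the parameter values, and the procedure are assumptions of this example rather than anything drawn from the work described.

```python
import numpy as np

# Hedged sketch (hypothetical data): estimating the spectrum of a stationary series
# with the periodogram, i.e. the squared amplitudes of the Fourier decomposition
# described above.
rng = np.random.default_rng(0)
T = 512
eps = rng.standard_normal(T)
x = np.empty(T)
x[0] = eps[0]
for t in range(1, T):                       # a stationary AR(1) process as an example
    x[t] = 0.7 * x[t - 1] + eps[t]

freqs = np.fft.rfftfreq(T)                   # frequencies in cycles per observation
amplitudes = np.fft.rfft(x - x.mean())
periodogram = (np.abs(amplitudes) ** 2) / T  # contribution of each frequency to variance

# For an AR(1) with a positive coefficient, most of the variance sits at low frequencies.
print("share of variance below frequency 0.1:",
      periodogram[freqs < 0.1].sum() / periodogram.sum())
```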
The spectrum fully determines the covariances of the process and as such can be used as
an alternative description, at least in so far as the second order moments are concerned.
One of the major virtues of the spectral description of a stationary process is that a
number of elementary transformations of the process, such as smoothing or filtering
of the observations, can be represented by a simple modification of the spectrum. On
the other hand the spectrum itself may have less of an intuitive appeal for economic time
series than in other applications where it may be capable of a concrete physical
interpretation. Other possible drawbacks are that spectral techniques may require either a
longer series of observations than is typically available in economics, or the assumption
of stationarity over a longer horizon than is justified.
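The filtering property can be stated compactly, with standard notation assumed for the illustration.

```latex
% Hedged statement of the filtering property mentioned above (standard notation assumed).
% If a new series is formed by a fixed linear filter y_t = \sum_j a_j x_{t-j}, its
% spectral density is the original density rescaled by the squared transfer function:
\[ f_y(\omega) \;=\; \bigl| A(e^{-i\omega}) \bigr|^{2}\, f_x(\omega),
   \qquad A(e^{-i\omega}) \;=\; \sum_j a_j e^{-i\omega j} , \]
% which is why smoothing or filtering the observations appears as a simple
% modification of the spectrum.
```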
One of the major applications of spectral techniques has been to the appraisal of the
seasonal adjustment procedures developed by the Bureau of the Census. This work,
originally initiated by Nerlove, led to the development of several informal criteria for
good seasonal adjustment. These criteria were expressed in terms of the spectral
distributions of the series both before and after seasonal adjustment, and of the
properties of the cross spectrum (based on the covariance between lagged observations in
two series) between the adjusted and unadjusted series. Though apparently plausible, these
criteria were essentially ad hoc and were not developed with reference to any
specific purpose of seasonal adjustment. Grether and Nerlove (CFDP 261) considered several possible
definitions of "optimal" seasonal adjustment in terms of the series
themselves and showed that, even under ideal circumstances, each of the definitions
led to seasonal adjustment procedures which violated some or all of the spectral criteria.
This work therefore casts some doubt on the spectral criteria and shows the need for
specifying the objectives of seasonal adjustment directly.
The same basic statistical theory was applied by Grether (CFDP 279) to the problem of deriving
distributed lag models when the economic agent's behavior depends upon his forecast of the
value of an economic time series or upon an estimate of some component of a time series
(e.g. the seasonal component). Distributed lags of the so-called rational type were
derived and it was shown how the order of the lag depends upon the forecast horizon and on
the properties of the series being forecast. As an example these results were applied to a
simple model of inventory adjustment and production smoothing.
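The simplest case illustrates the dependence; the first-order process below is assumed only for this sketch and is not the specification used in the paper.

```latex
% Hedged illustration of how forecasting generates a distributed lag (simplest case,
% notation assumed).  If the series to be forecast is a first-order autoregression,
\[ x_t \;=\; \rho\, x_{t-1} + \varepsilon_t , \]
% the minimum mean-square-error forecast h periods ahead is \rho^{h} x_t, so a
% behavioral relation depending on that forecast,
\[ y_t \;=\; \beta\, E_t\, x_{t+h} \;=\; \beta\, \rho^{h} x_t , \]
% collapses to a very short lag; richer mixed autoregressive--moving average processes
% and other horizons generate the "rational" lag forms whose order depends on the
% horizon and on the properties of the series being forecast.
```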
Research at the Cowles Foundation on spectral techniques was considerably strengthened
by the visit of Professor E.J. Hannan, of the Australian National University, for the
second semester of 1969–1970. During Hannan's visit, final revisions were made on a
volume entitled Multiple Time Series which was published in the fall of 1970. The
volume is a comprehensive survey of spectral techniques applied to stationary time series
in which the observation at each moment of time is a vector rather than a single number.
Many econometric models make the simplifying assumption that the residuals, or the
incremental stochastic input in the current observation, are independent of each other at
successive moments of time. Not only is this assumption implausible for a variety of
models, but it cannot be maintained under a number of transformations which are
customarily used to simplify the analysis. For example, if one series of observations is
obtained from a second by means of a distributed lag plus an error term, then the first
series of observations may in many cases also be represented in an auto-regressive form.
If the errors are assumed to be independent of each other in one of these formulations
then they will be serially correlated in the other.
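A Koyck-type example, with notation assumed for this illustration, makes the point concrete.

```latex
% Hedged example of the point above (a Koyck-type lag, notation assumed).  Suppose
\[ y_t \;=\; \beta \sum_{j=0}^{\infty} \lambda^{j} x_{t-j} + u_t , \qquad 0 < \lambda < 1, \]
% with serially independent errors u_t.  Subtracting \lambda times the lagged equation
% gives the autoregressive form
\[ y_t \;=\; \lambda\, y_{t-1} + \beta\, x_t + \bigl(u_t - \lambda u_{t-1}\bigr) , \]
% whose error is a first-order moving average: independence of the residuals in one
% representation implies serial correlation in the other.
```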
During his visit to Cowles, Hannan investigated a number of basic econometric problems,
under the general assumption that the residuals form a stationary stochastic process. In CFDP 294 "Time Series Regression with
Linear Constraints," written jointly with R.D. Terrell (Australian National
University), optimal procedures are given for estimating a system of regressions with
linear constraints on the coefficients; asymptotically efficient estimates are obtained
for the regression coefficients and the parameters of the residual process, and an
application to the estimation of systems of demand equations is discussed.
A second paper, CFDP 298,
"Non-Linear Time Series Regression," is concerned with the estimation of
regression parameters which occur in a non-linear fashion, a problem which was
previously considered by R.I. Jennrich ("Non-Linear Least Squares Estimators," Ann.
Math. Stat., Vol. 40 (1969)) under the restrictive assumption that the residuals are
identically distributed, independent random variables.
In CFDP 291, Hannan and D.F.
Nicholls (Australian National University) have treated the problem of estimating the
parameters of an equation with lagged dependent variables, exogenous variables and moving
average disturbances. The presence of the moving average structure in the errors greatly
complicates estimation due to the essential non-linear nature of the problem. Hannan and
Nicholls suggest a computational procedure for jointly estimating all the parameters of
the equation and prove that their method is asymptotically efficient. In addition to
treating the general model they also treat the case in which the equation arises from a
transformation of a rational distributed lag model.
During the period of this report, Nerlove continued to explore the important area of
estimating dynamic economic relationships from a time series of cross-section data. The
methodology had previously been introduced by Pietro Balestra and Nerlove in their study
of the demand for natural gas, and has also been used in work originally undertaken with
Donald Hester on estimating the rates of return on investments in individual common
stocks. A condensed summary of the latter project was reported by Nerlove in the paper
"Factors Affecting Differences Among Rates of Return on Investments in Individual
Common Stocks."
Since the general problem of determining the properties of alternative estimation
techniques for time series of cross-section data has proved to be rather intractable
analytically, Monte Carlo methods have been used extensively. Two series of experiments
have been performed so far and are reported in the two papers by Nerlove, CFDP 266 and CFDP 257. The second paper shows quite clearly the
relevance of large-sample properties, especially consistency and efficiency, in
small-sample situations.
In CFDP 271, Nerlove explored the
properties of certain estimates which arise in connection with a specific model in the
study of cross section data over time. At each moment of time, each observation in the
cross-section is assumed to be a function of a number of independent variables, with an
error term depending both on the time and the individual in the cross-section. An analysis
of the least squares estimates is carried out under the assumption that the error process
can be decomposed into three independent parts: one specific to the individual, one
related to the time of observation, and a remainder independent of the other two
processes.
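In schematic terms, and with notation assumed for the illustration, the specification is the following.

```latex
% Hedged sketch of the error structure described above (notation assumed).  For
% individual i observed at time t,
\[ y_{it} \;=\; x_{it}'\beta + u_{it}, \qquad u_{it} \;=\; \mu_i + \lambda_t + \nu_{it}, \]
% where \mu_i is specific to the individual, \lambda_t to the period of observation,
% and \nu_{it} is a remainder independent of both; the analysis concerns the behavior
% of least-squares estimates of \beta under this variance-components specification.
```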
The previous three-year report described in detail the important work by J. Kadane on
the comparison of estimators for simultaneous equation models, under the assumption that
the parameters of the error process are small. Asymptotic moments for these estimates can
be derived for finite samples as the variance of the errors tends to zero,
instead of the customary asymptotic procedure which requires the number of observations to
tend to infinity. Two applications of this work are reported in the papers, "Testing
Overidentifying Restrictions when the Residuals are Small" (CFP 326), and "Comparison of k-Class
Estimators when the Disturbances are Small" (CFDP 269), both by Kadane. In the second of these papers, involving
models with no lagged endogenous variables, it is shown that for equations in which the
degree of overidentification is no larger than six, the two stage least squares estimator
uniformly dominates the limited information maximum likelihood estimator.
GUESTS
The Cowles Foundation is pleased to have as guests scholars and advanced students from
other research centers in this country and abroad. Their presence contributes stimulation
and criticism to the work of the staff and aids in spreading the results of its research.
To the extent that its resources permit, the Foundation has accorded office, library, and
other research facilities to guests who are in residence for an extended period. The
following visited or were associated with the organization in this manner during the past
three years.
HELMUT FRISCH (University of Vienna), January–May, 1969.
Sponsored by the Ford Foundation.
WAHIDUL HAQUE (University of Toronto), June–August, 1967.
MACIEJ KROL (Planning Commission, Council of Ministers, Warsaw,
Poland), February–July, 1969. Sponsored by the Ford Foundation.
HARL EDGAR RYDER, JR. (Brown University), September,
1969–June, 1970. Sponsored by the Ford Foundation.
LUIGI TOMASINI (Rome University), September 1968–July 1969.
Sponsored by the Italian government.
COWLES FOUNDATION SEMINARS AND CONFERENCES
Seminars
In addition to periodic Cowles Foundation staff meetings, at which members of the staff
discuss research in progress or nearing completion, the Foundation also sponsors a series
of Cowles Foundation Seminars conducted by colleagues from other universities or elsewhere
at Yale. These speakers usually discuss recent results of their research on quantitative
subjects and methods. All interested members of the Yale community are invited to these
Cowles Foundation Seminars, which are frequently addressed to the general economist
including interested graduate students. The following seminars occurred during the past
three years.
1967
November 10: PETER DIAMOND, M.I.T., "Optimal Taxation and Public Investment."
November 30: ROY RADNER, University of California, Berkeley, "The Role of Prices as Information Signals in the Allocation of Resources under Uncertainty."

1968
January 2: H. NIKAIDO, Osaka University, "Income Distribution and Growth in a Monopolist Economy"
March 27: T.N. SRINIVASAN, Indian Statistical Institute and Visiting Professor of Economics at Stanford University, "Optimal Savings under Uncertainty"
April 5: VERNON L. SMITH, Brown University, "Economics of Production from Natural Resources"
April 30: JANOS KORNAI, Computing Center of the Hungarian Academy of Sciences, "Mathematical Planning and Economic Reform in Hungary"
May 3: AMARTYA SEN, Yale and University of Delhi, "Interpersonal Aggregation and Social Choice"
May 15: JACQUES DREZE, University of Chicago and Université Catholique de Louvain, "Two Certainty Equivalents Theorems for Savings under Uncertainty"
May 24: MERTON MILLER, University of Chicago, "Portfolio Theory and the Structure of Interest Rates"
May 31: BENEDIKT KORDA, Higher School of Economics, Prague, "Recent Economic Events in Czechoslovakia"
June 14: JAMES MIRRLEES, Cambridge University and Visiting Professor at Massachusetts Institute of Technology, "Some Theory of Optimum Income Taxation"
October 24: HAROLD WATTS, University of Wisconsin, "The Negative Income Tax Experiment in New Jersey"
December 6: CHRISTOPHER A. SIMS, Harvard University, "Some Pitfalls of Approximate Specification in Distributed Lag Estimation"
December 13: EDMUND PHELPS, University of Pennsylvania, "Non-Walrasian Aspects of Employment and Inflation Theory"

1969
January 24: EDWARD J. HANNAN, Australian National University, "Mixed Moving Average Autoregressive Processes"
February 14: KENNETH J. ARROW, Harvard University, "Existence of Temporary Equilibrium"
March 19: MRS. JOAN ROBINSON, University of Cambridge, England, "Unresolved Questions in Capital Theory" (jointly with Economic Growth Center)
April 18: MICHAEL FARRELL, Gonville and Caius College, Cambridge, England, "Natural Selection in Economics"
April 22: LAWRENCE KLEIN, University of Pennsylvania, "The Theory of Economic Prediction"
April 25: DONALD TUCKER, The Urban Institute, "Money Demand and Market Disequilibrium"
May 16: ASSAR LINDBECK, University of Stockholm and Columbia University, "Stabilization Policy in 'Narrow Band' Economies"

1970
January 16: ROBERT E. HALL, University of California, Berkeley, "Inflationary Bias in Labor Markets"
March 6: DUNCAN FOLEY, Massachusetts Institute of Technology, "Economic Equilibrium with Marketing Costs"
March 11: ALAN S. MANNE, Stanford University, "A Dynamic Multi-Sector Model for Mexico, 1968–80"
April 10: JAMES BUCHANAN, Virginia Polytechnic Institute, "Notes on the Theory of Supply"
April 17: ZVI GRILICHES, Harvard University, "Estimating Production Functions from Micro-Data"
April 24: JOHN W. KENDRICK, The George Washington University, "Postwar Productivity Trends and Relationships"
May 8: ARNOLD ZELLNER, University of Chicago, "Bayesian Inference in the Analysis of Log-Normal Distributions and Regressions"
Conferences
The Cowles Foundation was also the host for two conferences in the Fall of 1968. The
first of these, held on September 20–21, was concerned with the Federal Reserve
Board–M.I.T. forecasting model and more general considerations of model building and
policy simulation. The second was a Symposium on Economic Growth Theory, which took place
on November 22–24.
FINANCING AND OPERATION
The Cowles Foundation relies largely on gifts, grants and contracts to finance its
research activities. Yale University contributes to the Cowles Foundation the use of a
building at 30 Hillhouse Avenue which provides office space, a seminar room, and related
facilities. The University also supports the Foundation's research and administration
through paying or guaranteeing part or all of the non-teaching fractions of the salaries
of three permanent staff members.
The gifts of the Cowles family are the cornerstone of the financial support of the
Cowles Foundation. These gifts provide a permanent source of untied funds that assure the
staff continuing research support, that permit the staff freedom to shift the balance of
their time among various subjects of research, and that provide for general operating
expenses not appropriately chargeable to grants and contracts for work on specific topics.
In addition, a growing amount of financial support has come from grants and contracts from
the National Science Foundation, the Ford Foundation, the Office of Naval Research and
other, usually private, sources. The amount of this support varies from time to time and
much of it has, in the past, been tied to specific research projects. For two of the past
three years, however, the Cowles Foundation has been fortunate in having sizeable
institutional grants from the National Science and Ford Foundations. The National Science
Foundation grant is a "continuing" grant providing annual funding for the period
July 1968 through June 1973. Additional funds for support of the general program of the
Cowles Foundation and for a program of visiting staff members were generously provided by
the Ford Foundation for the same period. This Ford visitors program is intended specially
to facilitate visits by Eastern European economists, and also by scholars in disciplines
other than economics but related to interests of Cowles Foundation staff. These guests are
regular members of the Cowles Foundation staff for the period of their stay,
generally four months or longer.
The major part of Cowles Foundation expenditures is accounted for by research salaries
(and associated fringe benefits). The rest of the budget consists of office and library
salaries, overhead expenses such as the costs of preparing and distributing manuscripts,
and the costs of computing services (the Yale Computer Center currently makes available
the services of a direct coupled IBM 7040–7094 system, an IBM time-sharing computer
and necessary auxiliary equipment).
The pattern of Cowles Foundation income and expenditures in recent years is outlined in
the table below.
ANNUAL INCOME AND EXPENDITURES OF THE COWLES FOUNDATION
(averages per year, in thousands of dollars and as percent of total)

| Average for | Income: Total | Income, Permanent: Cowles Family Gifts | Income, Permanent: Yale | Income, Permanent: Total | Income, Temporary (incl. project support) | Expenditures: Total | Expenditures: Research salaries | Expenditures: Other |
|---|---|---|---|---|---|---|---|---|
| 1961–64, $(000) | 179 | 41 | 12 | 53 | 126 | 180 | 112 | 68 |
| 1961–64, % | 100 | 22.9 | 6.7 | 29.6 | 70.4 | 100 | 62.2 | 37.8 |
| 1964–67, $(000) | 250 | 44 | 14 | 58 | 192 | 244 | 148 | 96 |
| 1964–67, % | 100 | 17.6 | 5.6 | 23.2 | 76.8 | 100 | 60.7 | 39.3 |
| 1967–70, $(000) | 357 | 49 | 17 | 66 | 291 | 346 | 221 | 125 |
| 1967–70, % | 100 | 13.7 | 4.8 | 18.5 | 81.5 | 100 | 63.9 | 36.1 |
During the period of this report, the research staff of the Cowles Foundation included
18 or 19 members in faculty ranks. This size was determined by an interplay of
considerations including financial constraints, limitations of space at 30 Hillhouse
Avenue, and opportunities to bring to the Foundation colleagues who would complement or
supplement current research activities. The balance among ranks of the staff in residence
varied from year to year depending largely upon leaves of absence and the opportunities to
compensate for such absences by visiting appointments. Excluding staff members on such
visiting appointments, the staff included six tenured faculty, of the Departments of
Economics, Administrative Science and Political Science, and 9 to 11 faculty on term
appointments. On average, both the permanent and the younger members of the staff devoted
about half of their professional effort in the academic year, and up to two full months in
the summer, to their research and to seminars and discussions with their colleagues.
These activities were supported by the services of five secretaries and manuscript
typists who, under the direction of Miss Althea Strauss, prepared and circulated Cowles
Foundation Papers and Discussion Papers. A varying number of student research assistants
and two part-time computer programmers, Mrs. Elizabeth Bockelman and Mrs. Marilyn Hurst,
assisted directly in the research studies.
A small library, most recently under the supervision of Mrs. Patricia Graczyk, is
maintained in the building of the Cowles Foundation. It makes research materials readily
available to the staff and supplements the technical economics and statistics collections
of other libraries on the Yale campus. The library includes a permanent collection of
some 5,200 books and 180 journals, primarily in the fields of general economics,
mathematical economics, econometric studies and methods, statistical methods and data;
numerous pamphlets from Government sources and international organizations; series of
reprints from 22 research organizations at other universities in the United States and
abroad; and a rotating collection of recent unpublished working papers. Although the
library is oriented primarily to the needs of the staff, it is also used by other members
of the Yale faculty and by students of the University.
PUBLICATIONS AND PAPERS
MONOGRAPHS 1934–1970
See complete LISTING OF MONOGRAPHS
(available for download)
COWLES FOUNDATION PAPERS
See complete LISTING OF COWLES FOUNDATION
PAPERS
COWLES FOUNDATION DISCUSSION PAPERS
See complete LISTING OF COWLES FOUNDATION
DISCUSSION PAPERS
OTHER PUBLICATIONS
BISCHOFF, CHARLES
- "The Lag between Orders and Production of Machinery and Equipment: A Reexamination
of the KarekenSolow Results," presented at the summer meetings of the
Econometric Society, August, 1968.
- "Plant and Equipment Spending in 1969 and 1970," Brookings Papers on
Economic Activity, 1970 (1), pp. 127132.
- "The Effect of Alternative Lag Distributions," in Tax Incentives and
Capital Spending, Gary Fromm, ed., The Brookings Institution, 1970, pp. 61130,
forthcoming CFP.
CASS, DAVID
- "The Implications of Alternative Saving and Expectations Hypotheses for Choices of
Technique and Patterns of Growth" (with M.E. Yaari), Memorandum of The Hebrew University of
Jerusalem, January 1970.
FRIEDMAN, JAMES
- "A Noncooperative View of Oligopoly," forthcoming in International Economic
Review.
HESTER, DONALD
- "Deposit Forecasting and Portfolio Behavior" (with James L. Pierce), abstract,
Econometrica, Vol. XXXVI, No. 5 (October 1968).
KLEVORICK, ALVIN
- Higher Education in the Boston Metropolitan Area; A Study of the Potential and Realized
Demand for Higher Education in the Boston SMSA (with A.J. Corazzini, E. Bartell, D.J.
Dugan, H.G. Grabowski, and J.H. Keith, Jr.), Massachusetts Board of Higher Education,
1969.
- "Review of Howard Raiffa, Decision Analysis," Journal of Finance,
December 1969.
KOOPMANS, TJALLING
- "On the Descriptions and Comparison of Economic Systems" (with J. Michael
Montias) forthcoming in Proceedings of the Conference on Comparative Economic Systems,
edited by Alexander Eckstein (University of California Press).
- "A Model of a Continuing State with Scarce Capital," presented at the
Symposium on National Economy Modelling held in Novosibirsk, U.S.S.R., June 22–27,
1970.
KRAMER, GERALD
- "Short-Term Fluctuations in U.S. Voting Behavior, 18961964," paper
presented at the 1968 Annual Meetings of the American Political Science Association;
forthcoming in American Political Science Review, March 1971.
- "Theory of Electoral Systems," paper presented at the Eighth World Congress of
the International Political Science Association, March 1970.
- "The Effects of Precinct-level Canvassing on Voter Behavior," The Public
Opinion Quarterly, Winter 1970–71.
- "Congressional Elections" (with S.J. Lepper), forthcoming in Dimensions of
Quantitative Research in History, edited by R.W. Fogel.
LEPPER, SUSAN
- "Voting Behavior and Aggregate Policy Targets," presented at the 1968 Annual
Meetings of the American Political Science Association.
- "Congressional Elections" (with G.H. Kramer), forthcoming in Dimensions of
Quantitative Research in History, edited by R.W. Fogel.
MIESZKOWSKI, PETER
- "Is a Negative Income Tax Practical?" (with James Tobin and Joseph Pechman), Yale
Law Journal, Vol. 77, November 1967.
- "The Effects of the Corporate Tax," (with John Cragg and Arnold Harberger), Journal
of Political Economy, Vol. 75, December 1967.
- "Effects of the Carter Proposals on the Corporate Tax," Proceedings of the
Twentieth Tax Conference of the Canadian Tax Foundation, 1968.
- "Carter on the Taxation of International Income Flows," National Tax
Journal, Vol. 22, March 1969.
NERLOVE, MARC
- "Experimental Evidence on the Estimation of Dynamic Economic Relations from a
Time-Series of Cross Sections," Economic Studies Quarterly, Vol. 18 (December
1967), pp. 42–74.
- "Factors Affecting Differences among Rates of Return on Investments in Individual
Common Stocks," Review of Economics and Statistics, 50: 312–31 (August
1968).
- "Love and Life between the Censuses: A Model of Family Decision Making in Puerto
Rico, 1950–60" (with Paul Schultz), forthcoming RAND research report.
NORDHAUS, WILLIAM
- Invention, Growth and Welfare: A Theoretical Treatment of Technological Change,
Massachusetts Institute of Technology Press, 1969.
- "Is Growth Obsolete?" (with James Tobin), presented at a National Bureau of
Economic Research Colloquium, December 1970.
ORCUTT, GUY
- "Should Aggregation prior to Estimation be the Rule" (with John B. Edwards), The
Review of Economics and Statistics, November 1969.
- "Data, Research and Government," Papers and Proceedings, annual meetings of
the American Economic Association, 1969.
- "Simulation, Modeling and Data," forthcoming in the volume of the Panel on
Economics of the Behavioral and Social Sciences Survey Committee of the National Research
Council and the Social Science Research Council.
SHUBIK, MARTIN
- "Transfer of Technology and Simulation Studies," in D.L. Spencer and A.
Woroniak (eds.), The Transfer of Technology to Developing Countries, Frederick A.
Praeger, New York, 1967, pp. 119–140.
- "On the Study of Disarmament and Escalation," The Journal of Conflict
Resolution, Vol. XII, No. 1, March 1968, pp. 83–101. Reprinted in C.J. Smith
(ed.), Readings in the Social Science of Conflict Resolution, Notre Dame, Indiana:
University of Notre Dame Press.
- "Information, Rationality and Free Choice in a Future Democratic Society," DAEDALUS
(Journal of the American Academy of Arts and Sciences), Summer 1967, pp. 771–778.
- "Welfare, Economic Structure and Game Theoretic Solutions," in F. Zwicky and
A.G. Wilson (eds.), New Methods of Thought and Procedure, Springer-Verlag, New
York, 1967, pp. 228–245.
- "Gaming Costs and Facilities," Management Science Theory,
14, 11, July 1968, pp. 629660.
- "A Two Party System, General Equilibrium and the Voters' Paradox," Zeitschrift
fur Nationalokonomie, 28, 1968, pp. 341354.
- "Preface" to J. Cross, The Economics of Bargaining, New York, Basic
Books, Inc., 1969.
- "Foreword," to R. Farquharson, Theory of Voting, New Haven, Yale
University Press, 1969.
- "World Perspectives and Prospects" in Proceedings of 11th Annual Symposium,
TIMS College on Planning, Los Angeles, November 11–13, 1968.
- "On the Core of an Economic System with Externalities," The American
Economic Review, LIX, 4, Part 1, September 1969, pp. 678–684 (joint with
Shapley).
- "A Bibliography with Some Comments," in Buehler, I. and H. Nutini, Game
Theory, University of Pittsburgh Press, Pittsburgh, 1969, pp. 253–261.
- "On Different Methods for Allocating Resources," Kyklos, Vol. XXIII,
Fasc. 2, 1970, pp. 332–337. (Also P-4161, RAND Corporation, Santa Monica, California,
July 1969.)
- Concluding Remarks for the Aix-en-Provence Colloque International du C.N.R.S., July 1967, in
La Décision, Centre National de la Recherche Scientifique, Paris, pp. 335–336.
- "Homo Politicus and the Instant Referendum," Public Choice, Fall Issue,
1970, pp. 79–84.
- "Games of Status," Behavioral Science, March 1971.
- "A Curmudgeon's Guide to Microeconomics," Journal of Economic Literature,
VIII, 2, June 1970, pp. 405–434.
STIGLITZ, JOSEPH
- "A New View of Technical Change" (with A. Atkinson), Economic Journal,
September 1969.
- "Capital, Wages and Structural Unemployment" (with George A. Akerloff), The
Economic Journal, June 1969, pp. 269281.
- "Reply to Mrs. Robinson on the Choice of Technique," Economic Journal,
June 1970, pp. 420–422.
- "Increasing Risk: II. Its Economic Consequences," Journal of Economic
Theory, forthcoming.
- "On the Optimality of the Stock Market Allocation of Investment Among Risky
Assets," paper presented to the Far Eastern meetings of the Econometric Society, June
1970, Tokyo.
- "Some Aspects of the Pure Theory of Corporate Finance, Bankruptcy, and
Take-overs," paper presented to a conference at Hakone, Japan, June 1970.
TOBIN, JAMES
- "Is a Negative Income Tax Practical?" (with Joseph A. Pechman and Peter
Mieszkowski), Yale Law Journal, Vol. 77, November 1967.
- "Unemployment and Inflation: The Cruel Dilemma," in Price Issues in Theory,
Practice and Policy, A. Phillips, ed., University of Pennsylvania Press, 1967.
- "Appraising the Nation's Economic Policy, 196667," Comments prepared for
American Statistical Association panel discussion at December 1967 Joint Meetings;
published in Proceedings.
- "Third Rational Debate Seminar," 196768, James Tobin and W. Allen
Wallis, in Welfare Programs: An Economic Appraisal, Washington, DC: 1968, American
Enterprise Institute for Public Policy Research.
- Comment on "Mean-Variance Analysis in the Theory of Liquidity Preference and
Portfolio Selection," Borch, Karl and Martin Feldstein, Review of Economic Studies,
January 1969, pp. 1314.
- "A General Equilibrium Approach to Monetary Theory," Journal of Money,
Credit, and Banking, Vol. 1, February 1969, pp. 15–29.
- "Monetary Semantics," in Targets and Indicators of Monetary Policy,
Institute of Government and Public Affairs, UCLA, K. Brunner, ed., Chandler Publishers,
California, 1969, pp. 165–174.
- "The Case for a Negative Income Tax," Money and the Poor: Public Welfare,
The Negative Income Tax, or What? Proceedings of a Conference co-sponsored by the
University of Connecticut Schools of Law and Social Work, February 3, 1969, pp.
60–67.
- "On Limiting the Domain of Inequality," Journal of Law and Economics,
Vol. XIII (2), October 1970, pp. 263–277.
- "Macroeconomics," in Economics (The Behavioral and Social Sciences
Survey), Prentice-Hall, Inc., 1970, pp. 44–54.
- "Is Growth Obsolete?" (with William Nordhaus), forthcoming. For National
Bureau of Economic Research Colloquium, San Francisco, December 10, 1970.
WEITZMAN, MARTIN
- "A Model of the Demand for Money by Firms: Comment," Quarterly Journal of
Economics, February 1968.
- "A Branch and Bound Algorithm for Zero-One Mixed Integer Programming Problems"
(with D. A. Kendrick and R.A. Davis), forthcoming in Operations Research.
YAARI, MENAHEM
- "The Implications of Alternative Saving and Expectations Hypotheses for Choices of
Technique and Patterns of Growth" (with D. Cass), Memorandum of The Hebrew University of
Jerusalem, January 1970.