PURPOSE AND ORIGIN
The Cowles Foundation for Research in Economics at Yale University, established as an
activity of the Department of Economics in 1955, has as its purpose the conduct and
encouragement of research in economics, finance, commerce, industry, and technology,
including problems of the organization of these activities. The Cowles Foundation seeks to
foster the development of logical, mathematical, and statistical methods of analysis for
application in economics and related social sciences. The professional research staff are,
as a rule, faculty members with appointments and teaching responsibilities in the
Department of Economics and other departments.
The Cowles Foundation continues the work of the Cowles Commission for Research in
Economics, founded in 1932 by Alfred Cowles at Colorado Springs, Colorado. The Commission
moved to Chicago in 1939 and was affiliated with the University of Chicago until 1955. In
1955 the professional research staff of the Commission accepted appointments at Yale and,
along with other members of the Yale Department of Economics, formed the research staff of
the newly established Cowles Foundation.
The Econometric Society, an international society for the advancement of economic
theory in its relation to statistics and mathematics, is an independent organization which
has been closely associated with the Cowles Commission since its inception. The
headquarters of the Society were moved from Chicago to Yale in 1955.
NOTE ON REFERENCES TO PUBLICATIONS
The following abbreviations are used throughout this report in referring to publications
or working papers of the Cowles Foundation and Cowles Commission:
CCNS: Cowles Commission New Series Papers
CFP: Cowles Foundation Papers
CFDP: Cowles Foundation Discussion Papers
Monographs are referred to by number, and Special Publications by title.
The other publications of each staff member are designated
by letter in the list, and are referred to by author and letter in the text.
RESEARCH ACTIVITIES
1. Introduction
Taking the long view, perhaps the principal element of continuity in the research of
the Cowles Foundation (and of its predecessor the Cowles Commission) is its identification
with econometrics. What is econometrics? It consists of the design and use of rigorous
formal methods of logic, mathematics, and statistical inference for application to
economic theory, to the testing of economic hypotheses, and to the estimation of economic
relationships. The common commitment to such methods is what gives unity and identity to
the work of the Cowles Foundation.
A focus of this kind allows great diversity in the substantive topics of individual
pieces of research. Over the years, along with increasing availability and acceptance of a
variety of econometric methods and a richer flow of data, the substantive diversity of
econometric research has increased, in the profession at large as well as in the
activities of the Cowles Foundation. Yet it occasionally happens that, without specific
planning to that effect, a broader common substantive interest becomes apparent in several
independently conceived studies. In our research of the last few years, one particular
aspect of economic choice has turned out to be a common thread of a number of studies,
both theoretical and empirical.
The individual studies in question were addressed to such widely different problem
areas as timing preference, economic growth, capital formation, saving and consumption,
intergeneration transfers, choice of assets, and monetary theory and policy. The common
thread that is found to run through all these pieces of research is a concern with the
bridging of time in economic choice. How does the individual make his choices between
present and future rewards for his efforts, or between present and future returns on his
assets? Where such a choice is made by or for an entire society, how can it be made
consistently in all the detailed decisions bearing on that choice?
Studies in which the bridging of time is an important element will first be reported on
together, proceeding from the abstract and theoretical to the empirical and concrete.
Thereafter, the report proceeds to a variety of other studies in which the element of
intertemporal choice is less central or conspicuous, though rarely absent: studies of
economic equilibrium in competition between many or few, of production capability, of
inventory fluctuations and economic forecasting, of bank lending, of international trade,
of labor mobility, of managerial economics, of organization theory, and of decision-making
under uncertainty. The report concludes with a description of research concerned with the
development of statistical and mathematical tools and methods of econometrics and of
economic theory.
For support of the research reported below, the Cowles Foundation is indebted to a
variety of donors. The firm financial basis for the Cowles Foundation's continuing
activity is provided by Alfred Cowles and other members of the Cowles Family, and by the
University. Additional general support for the research program is being given through a
five-year grant by the Rockefeller Foundation. The work in organization theory, in
decision-making, in managerial economics, on economic equilibrium, and on the choice of
productive techniques in economic development, is supported by the Office of Naval
Research. Specific grants from the National Science Foundation have supported a Monte
Carlo study by Summers of certain methods of estimating economic behavior constants, work
by Koopmans on stationary utility and on development paths arising from its maximization,
and the initial phase of the work by Srinivasan on choice of production techniques in
economic development.
2. Timing Preference
The idea that most people, against a background of an assured even flow of consumption
over time, would prefer a given additional benefit to come in the near future rather than
in a more distant future, has a long history in economic thought. It was first put forward
by von Böhm-Bawerk, and developed further by Irving Fisher. The same idea was examined
from a more abstract point of view in a study by Koopmans (CFP 151), which was continued, first with the assistance of Diamond,
and thereafter during Koopmans' year of leave at Harvard University in cooperation with
Richard Williamson, mathematician at Dartmouth College. In order to avoid imposing an
arbitrary termination date on what is actually an indefinite, open-ended future, this
study interprets preference as social rather than individual preference among
consumption programs for an infinite future. Each such program is
itself an infinite sequence of consumption bundles imagined available with certainty in
successive future periods. The two key postulates of the study are (a) that this
preference is expressible by a utility function with certain continuity properties, and
(b) that while preference may depend in some way on the lengths of time to elapse between
the time of choice and the times of availability of the consumption bundles in prospect,
preference is not otherwise dependent on the time of choice itself. The term stationary
utility was used to express these properties. With the help of three further postulates
that seem acceptable as a basis of inquiry it was found that preference for early timing
of future benefits, or impatience as Fisher has called it, is a logical consequence of the
postulates. In the context of this study, therefore, impatience is no longer an observed
psychological trait of most people. Neither, for that matter, is it an ethical principle
in balancing the opposed interests of present and future generations nor a
violation of such a principle. It is simply an unexpected but inescapable implication of
reasonable postulates which, when considered individually, appear to say nothing about
timing preference. It is the open-endedness of time that introduces the
"paradox," if paradox it is.
3. Choice of Development Paths
To simplify matters further, imagine now that only a single good enters into
consumption in all future periods. Imagine further an extremely simple technology in
which, starting with a given initial stock of that good, that part of the stock not
consumed in the first period grows at a constant "technological" growth rate to
form the initial stock of the second period, and so on. In an economy that simple,
consumption and investment are one single decision. What will be the outcome if a program
is to be chosen under these circumstances in such a way that its utility is maximized?
Preliminary results of Koopmans' work on this problem indicate that the answer depends
critically on the "subjective" interest rate implied in the utility function
being maximized. To define this interest rate compare a program with a constant
consumption flow for all future periods with two slightly better programs, one with only
the first-period consumption somewhat increased, the other with only the second-period
consumption increased. If these increments are small and so chosen that the two better
programs are equally desirable, then the percentage excess of the increment in consumption
specified for the second period over that specified for the first period is the subjective
interest rate in question. It is a numerical measure for the degree of impatience.
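In symbols (introduced here only for illustration; the notation is ours, not Koopmans'), the simple technology and the subjective interest rate just described may be written as follows. If $x_t$ denotes the stock at the beginning of period $t$, $c_t$ consumption in period $t$, and $g$ the constant technological growth rate, the stock evolves according to
\[ x_{t+1} = (1 + g)\,(x_t - c_t). \]
If $\epsilon_1$ and $\epsilon_2$ are small increments to first- and second-period consumption, starting from a constant flow, chosen so that the two improved programs are equally desirable, the subjective interest rate $\rho$ implied by the utility function is
\[ \rho = \frac{\epsilon_2 - \epsilon_1}{\epsilon_1}, \]
the percentage excess of the second-period increment over the first.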
Now if there exists a constant consumption flow such that for that flow the subjective
interest rate equals the given technological growth rate, then an initial stock that just
makes that constant consumption flow possible will actually cause that flow to be chosen
through utility maximization. A larger initial stock leads to a larger consumption flow in
each future period, but not necessarily larger by the same amount in each period. As time
goes on, the increment in consumption due to the increased initial stock will (a) tend to
zero or (b) become larger and larger, depending on whether, for a slightly larger constant
consumption flow, the subjective rate of interest is (a) larger or (b) smaller than for
the original constant flow.
It is hoped that the study of highly simplified and artificial models of this kind will
throw some light on the much more complicated realities of economic development, and
perhaps also suggest concepts and tools with which to approach these realities. Such is
also the motivation of Srinivasan's doctoral dissertation (Yale 1961), a study of criteria
for choice from a constant collection of techniques of production in economic development.
While total saving and total investment still remain in some sense a single decision,
this study concentrates on the choice of the particular investment good or goods to be
produced at each time. In Srinivasan's model a single consumer good can be produced by
each one of an infinite sequence of techniques, which are labeled in order of increasing
capital requirements, and simultaneously in order of decreasing labor requirements.
Capital, on the other hand, is produced by a single method using capital and labor, and is
subject to depreciation at a constant rate regardless of use. The labor force is assumed
to grow at a constant proportionate rate. It is shown that the economy considered is
capable of generating a maximum sustainable rate of consumption per worker. This rate is
attained at each point of time along a "terminal path" in which only one most
productive technique of producing the consumer good is used. This path is reached at the
time when every worker in the consumption goods industry is supplied with the full
amount of capital required by that technique. Since that amount is not initially available
in the actual circumstances of developing countries, approaches to the "terminal
path" from an initial position not on it are also considered. In particular, the path
which minimizes the time needed to reach the "terminal path" from below is
derived. It is shown further that the growth paths that result from applying some of the
investment criteria proposed in the literature are likewise of an extreme character. It is
suggested by this analysis that the problem of formulating simple operational criteria for
the choice of technique in economic development is still unsolved. A somewhat more general
problem of characterizing the class of all efficient growth paths is also discussed by
Srinivasan (CFDP 117).
Chenery made a critical survey (CFP 161)
of the literature on development planning for underdeveloped countries, in order to
evaluate the various criteria for allocating investment funds and other scarce resources.
In particular, he contrasted and compared the criteria suggested by the traditional
doctrine of comparative advantage in the international division of labor, and by more
recent theories of growth, especially balanced growth. He observed that the comparative
advantage point-of-view has not recognized differences between prices of the factors of
production and the real opportunity costs of their use, changes over time in quantity and
quality of factors, increases in productivity obtainable by drastic increases in the scale
of production, and interdependence of production processes. On the other hand, modern
growth theories have tended to overlook the very real advantages from international trade
stressed by classical authors.
Chenery suggests "activity analysis" (or "linear programming")
models as a suitable framework through which to meet and overcome most of the objections
to both approaches and to combine their points of view. However, the advantages of
large-scale production are not expressible in such models without substantial
modification.
Finally, his study discusses planning procedures in use in a number of countries in the
light of this analysis.
The effect of scale of production on productivity, emphasized by Chenery, is central to
a study by Manne addressed primarily to the development planning of the firm. This study (CFDP 54R) concerns the optimal degree of
excess capacity to be built into a new facility such as a pipeline, a steel plant, or a
super highway, if the facility will be long-lived while demand is expected to grow
strongly over time. This study considers both the case where the growth of demand is
foreseen with certainty, and the case where a certain randomness in the growth of demand
is assumed. In the latter case the question examined is how the degree of uncertainty
concerning future demand affects the optimal capital expansion policy, and the
expected cost of production incurred under that policy. It is found, as one might
anticipate, that greater uncertainty raises cost, but also, less readily foreseen, that
greater uncertainty calls for larger facilities to be added when expansion is due, and
hence for a higher average over time of excess capacity. These findings apply to the case
where unsatisfied demand is lost forever. Modifications are indicated for the case in
which a backlog of unsatisfied demand can be carried over into subsequent periods.
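The nature of the trade-off can be indicated by a sketch under simplifying assumptions (the symbols and the particular cost function are ours, not a restatement of Manne's formulation). Suppose demand grows linearly at a known rate $g$ per year, a facility of size $x$ costs $k x^{a}$ with $a < 1$ reflecting economies of scale, and future outlays are discounted at the rate $r$. If an increment of capacity $x$ is installed whenever existing capacity is exhausted, that is, every $x/g$ years, the present value of all construction outlays is
\[ C(x) = \frac{k\,x^{a}}{1 - e^{-r x / g}}, \]
and the degree of excess capacity to build in follows from minimizing $C(x)$: economies of scale pull toward larger and less frequent additions, discounting toward smaller and more frequent ones.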
Beckmann studied (CFDP 120) the
growth path resulting from a monetary policy that holds the interest rate fixed. Other
assumptions specify a production function with no effects of scale on productivity, and a
constant rate of population growth. These assumptions are embedded in a "static"
Keynesian model made "dynamic" by including the percentage rate of change in the
price level among the determinants of investment, consumption, money supply, and real
money demand functions. It is found that initial unemployment leads to an unstable process
of balanced growth. From initial full employment a Wicksellian cumulative process
develops: There exists a natural rate of interest such that at interest rates below the
natural rate, balanced growth is attained and the price level rises; at interest rates
above the natural rate both employment and the price level are falling. The
"optimal" interest rate which maximizes per capita consumption at any time
during balanced growth is shown to equal the rate of growth of population.
4. The Impact of Technical Progress on Growth
Most of the foregoing studies presuppose a given technology. New dimensions of economic
growth are opened up if one recognizes technological progress. In fact, recent
quantitative studies of growth in the United States have led economists to downgrade the
effectiveness of investment in raising output per worker, and accordingly to revise upward
the effect of technological change. Still, if there were no fresh investments to embody
technological advances, productivity would grow hardly at all. The modernizing effect of
investment was formalized by Robert Solow of the Massachusetts Institute of Technology in
an aggregative model of production. In this model the substitutability of capital (of any
given vintage) for labor and conversely is described by a so-called Cobb-Douglas
production function.
In order to trace in greater detail the implications of this "new" view of
investment, Phelps (in CFDP 110) compared two models of growth, having the same fraction
of income invested, the same rate of depreciation on capital, and the same
Cobb-Douglas model of substitution between labor and capital. The two models also
have the same annual percentage rate of increase in output, from given capital and labor
inputs, resulting from technical progress. However, in one model, representing the
"old" view, output is determined on the basis of technical progress up to the
time of the production in question, whereas in the other model, reflecting the
"new" view, only technical progress up to the time of construction of the
capital used in production is taken into account.
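The contrast between the two views can be sketched in symbols (an illustrative formalization of our own; Phelps' paper gives the precise specification). With $\lambda$ the annual rate of technical progress and $\alpha$ the capital exponent of the Cobb-Douglas function, the "old" view makes output at time $t$
\[ Q(t) = e^{\lambda t}\, K(t)^{\alpha} L(t)^{1-\alpha}, \]
so that all capital benefits from progress up to the time of production, whereas under the "new" view capital built at time $v$ embodies only the technology of its construction date, output from that vintage being
\[ q_v(t) = e^{\lambda v}\, k_v(t)^{\alpha}\, l_v(t)^{1-\alpha}, \]
with total output the sum over surviving vintages.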
It was found that in the long run output from a given labor input is no more sensitive
to investment in the "new" model than in the "old" model. This is
because the long-run average age of the capital stock depends only upon the average growth
rate and the depreciation rate, and neither, in these Cobb-Douglas models, depends upon
investment policy in the long run. However, the new model grants a greater role to
investment in the short run. Given the fraction of output to be saved, output is quicker
to approach its long-run equilibrium growth path in the new model than in the old model.
In that sense the new model offers a higher return to increased thrift. Of course, the two
models will not interpret present conditions in the same way, hence need not forecast the
same output path corresponding to a given investment policy.
In work now in progress, Phelps is attempting to measure the importance of some of the
factors in the growth of selected economies in Europe and North America since the war.
This work involves aggregate and sector econometric models in which investment serves both
to deepen and modernize the capital stock.
5. Saving and Consumption
We return to the total savings decision, and look on it now as a resultant of the
savings decisions of individual households and other decision units. Phelps has considered
the effect, on the household's desire to save, of uncertainty as to the future
market value of the accumulated assets. One model developed to study this problem (CFDP 101) resembles Ramsey's in "A
Mathematical Theory of Saving": consumption is continuous through time. But in the
present paper, unlike Ramsey's, capital is subjected to randomly timed gains and losses.
Some of the consequences of this capital risk are the following: The optimum consumption
rate does not depend simply upon the expected income flow. To the consumer with risk
aversion, the riskiness of income (capital) makes a difference. In the untruncated
(horizonless) and undiscounted case (no pure time preference) the presence of capital risk
produces a smaller consumption rate than is found in the comparable limiting
"riskless" case. Ramsey's conclusion that the optimum saving rate is independent
of the interest rate when future utility is not discounted fails to carry over to the
present case.
The same problem is attacked in another model (CFDP 109) in which consumption decisions and capital growth (or loss)
occur at periodic intervals rather than continuously. The rate of growth of unconsumed
capital from period to period is a random variable. Each period the household consumes a
portion of its capital (including its wages) and the capital remaining is left to grow or,
worse luck, decay. Some of the results of this analysis are: Consumption is an increasing
function of the household's capital and the household's age. The "hump saving"
phenomenon, where the individual's net worth reaches a maximum at some point in middle age
if everything goes according to plan, may not arise if capital is rather risky, despite
its net expected productivity. The proposition that risky income has a smaller impact upon
consumption than certain income is supported by certain examples. Just as the effect of
variations in the rate of return on capital upon the propensity to consume is
indeterminate without some restrictive assumptions on the utility function, so too the
effect of variations in the degree of risk depends upon the shape of the utility function.
A similar study by Beckmann (CFDP 68)
deals with the opposite case, where the individual expects to carry his assets into the
future without risk, but where the income from his labor is subject to uncertainty. Let
income be a random variable independently and identically distributed in all periods of an
indefinitely long future, and let the utility function for current consumption be
unchanging through time, whereas future utilities are discounted at constant compound
interest rates. If wealth, including accumulated savings, yields a riskless return at a
constant market rate of interest, the optimal allocation of the consumer's income to
saving and consumption is a function of wealth only. To determine it, one expresses the
utility of wealth as the maximum sum of the utility of consumption in the current period
and of the discounted expected value of the utility of wealth at the end of the current
period. The shape of the utility-of-wealth function thus defined is concave, whenever the
utility-of-consumption function is concave. It follows that consumption is a
non-decreasing function of wealth. The consumption level changes with shifts in the
expected value of income and also responds to temporary increments (windfalls), but the
former effect is substantially larger.
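The recursion described in the preceding paragraph can be written compactly (in our own notation, as a sketch of the argument rather than Beckmann's exact formulation). With $w$ the household's wealth, $u$ the utility-of-consumption function, $\beta$ the discount factor, $r$ the riskless market rate of interest, and $\tilde y$ the random income of the coming period, the utility-of-wealth function $V$ satisfies
\[ V(w) = \max_{0 \le c \le w}\Big\{ u(c) + \beta\, \mathrm{E}\,V\big((w - c)(1 + r) + \tilde y\big)\Big\}, \]
and the concavity of $V$, from which the monotonicity of the optimal consumption rule follows, is inherited from the concavity of $u$.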
A considerable amount of empirical work bearing on the same group of questions was done
during the period of this report. The research on the structure and dynamics of household
balance sheets which was described in the 1956-58 report was completed in early 1959
and resulted in a joint paper by Watts and Tobin. This paper was presented at The
Conference on Consumption and Saving held at The Wharton School on March 30-31, 1959, and
was subsequently published (CFP 165) in Vol. II of Consumption and Saving, edited by
Irwin Friend, along with other papers presented at the Conference. The evidence provided
by this study supports the notion that there is an "average" or "preferred" portfolio of
assets and debts for a household which varies among households according to their
economic and social circumstances. An analysis of the financial flows
of households, such as durable goods purchases, debt repayment or acquisition, saving in
various forms, was also made. This analysis disclosed that the flows in each period tend
to eliminate discrepancies between the "average" or "preferred"
portfolios and the household's actual portfolio at the beginning of the period.
The 1950 Survey of Consumer Expenditures made by the Bureau of Labor Statistics, which
provided the data for the Watts-Tobin paper, has also provided data for three studies
by Watts to test hypotheses put forward by Milton Friedman of the University of Chicago.
In the first of these studies an attempt was made, by using the method of instrumental
variables, to identify and obtain estimates of the elasticity of consumption with respect
to transitory or temporary income. One of Friedman's hypotheses asserts that this
elasticity is equal to zero. Unfortunately the assumptions required to test this
hypothesis are quite stringent, and are apparently not met by variables employed in the
analysis. To the extent that any conclusion is warranted by the evidence produced, it is
that the assumption of zero elasticity is extreme. On the other hand, it is clear that
consumption is less sensitive to "transitory" income changes than to
"permanent" income changes. Some of the other parameters of Friedman's model
were estimated also, and, if internal consistency and reasonableness can be used as
criteria, these estimates are less sensitive to departure from the assumptions made in
choosing the estimating procedure.
The second of these studies culminated in a paper by Watts (CFDP 99) in which an operational permanent
income variable is defined and given a tentative trial to evaluate its explanatory
ability. This variable makes a household's permanent income a function of the average
income-age relation (income profile) of "similar" households together with
the deviation of the household's current or recent income from that profile. The
definition of this variable depends on two parameters. One is a discount factor which
specifies the declining schedule of weights applied to uncertain and distant future
receipts. The other is a constant which describes the "extrapolative tendency,"
i.e., the tendency to believe that current deviations from the average profile will be
maintained in the future. The empirical evaluations, although very rough, showed that the
newly-defined variable was definitely superior to measured annual income for explaining
household saving behavior. It seems likely that additional work along these lines will
prove fruitful.
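One functional form consistent with this verbal description (offered purely as an illustration; the symbols are ours and need not coincide with Watts' operational definition) makes the permanent income of a household of age $a$ with current income $y$
\[ y^{p} = \frac{\sum_{s \ge 0} \gamma^{s}\big[\bar y(a+s) + k\,(y - \bar y(a))\big]}{\sum_{s \ge 0} \gamma^{s}}, \]
where $\bar y(\cdot)$ is the average income-age profile of "similar" households, $\gamma$ is the discount factor applied to distant and uncertain future receipts, and $k$ is the constant describing the extrapolative tendency.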
A third investigation focuses on Friedman's specification that permanent consumption is
a constant proportion of permanent income regardless of the level of permanent income. The
instrumental variable approach is used in this study, with individual cities serving as the
classifying criteria. If average permanent income varies among cities while average
transitory income is zero or constant, then it is possible to obtain unbiased estimates
of the elasticity which, by Friedman's hypothesis, is unity. While the above is an
oversimplification of the argument, after allowance is made for price differences and
"compensating differentials" in wages, the evidence fails to provide support for
Friedman's hypothesis of a unitary income elasticity of consumption.
Another area of interest was stimulated by Watts' participation in the Working
Conference on Family Research Models of the Social Science Research Council. A brief paper
was presented at the January 1960 meeting which outlined a model of consumer choice in
which consumption of many goods and services is made to depend on a smaller number of
"activity levels." It is hoped that some useful insights and hypotheses can be
found by viewing consumer choice in this way.
There is little evidence in the literature of any attempt to discover the most
appropriate form of curve to represent the growth of demand for a new commodity. Neither
the "logistic" nor the "exponential" adjustment processes which are
commonly used seem entirely satisfactory. Bain developed a model of the growth of
ownership of television in the United Kingdom, in which the growth path is represented by
a "cumulative log-normal" curve. The relation between the parameters of this
curve and cross-section economic variables was investigated, and a time-series analysis
was carried out in which the effects of changes in television service characteristics and
credit restrictions on the rate of growth of ownership were estimated. The results of this
analysis were free of some of the contradictions obtained in an earlier study using the
logistic curve, and suggest that the "lognormal" curve might be applied more
frequently in studies of growing demand.
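In symbols (ours, for illustration), the fitted growth path makes the proportion of households owning television sets $t$ years after introduction
\[ P(t) = S\,\Phi\!\left(\frac{\ln t - \mu}{\sigma}\right), \]
where $\Phi$ is the standard normal distribution function and $S$, $\mu$, and $\sigma$ are the parameters related in the study to cross-section economic variables.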
6. Intergeneration Transfers of Wealth
Alfred Marshall stressed the desire to improve the welfare of the succeeding generation
as the principal determinant of consumer saving.
Tobin and Guthrie undertook a pilot study of this issue. The results (CFDP 98) show that parental support of
education is the most prevalent form of transfers among consumers of moderate income
levels. Although public and private programs for financing the support of the older
generation have grown substantially in recent years there is still a large volume of
support by the younger generation.
7. Choice of Assets
Markowitz's study of investment portfolio selection, described in earlier reports, was
published in 1959 as Cowles Foundation Monograph No. 17
under the title Portfolio Selection with the subtitle Efficient Diversification
of Investments. Part III, which forms the heart of this book, discusses the
simultaneous maximization of expected return, and of security of return, by suitable
diversification of investment. It also analyzes how more of either of these objectives can
be obtained by accepting less of the other. These analyses are preceded, in Part I, by
illustrative examples of problems and answers, and in Part II, by an exposition of
pertinent mathematical and statistical tools. Part IV, finally, discusses the principles
of choice under uncertainty that underlie the entire study.
8. Monetary Theory and Policy
Tobin is approaching monetary theory as part of the theory of general equilibrium in
asset markets. This equilibrium results from the individual choices and preferences that
guide households, business firms, and financial institutions in managing their individual
balance sheets. The theory of asset choice which Tobin employs is built both upon
Markowitz's work on portfolio selection just described, and on Tobin's own initially
independent but similar research over an extended period. His principal work during the
period of this Report has been the preparation of a book on monetary theory, which was
nearly completed at the time he took leave in January 1961 to accept appointment to the
President's Council of Economic Advisers. The nature of the book is indicated by the
following titles of chapters completed in preliminary draft form:
National Wealth and Individual Wealth
Properties of Assets
The Theory of Portfolio Selection
The Demand for Money
Growth and Fluctuation in a Two-Asset Economy
The Monetization of Capital
The Theory of Commercial Banking
The Monetary Mechanism
Financial Intermediaries and the Effectiveness of Monetary
Controls
(The last of these chapters has been reproduced as CFDP 63.)
These chapters have been used, at Yale and elsewhere, in advanced and graduate
instruction in monetary theory. A related paper, which will be adapted for inclusion in
the book, is the research memorandum on "The Pure Theory of Debt Management"
which Tobin prepared in 1960 for the Commission on Money and Credit.
The purpose of the book is to develop a theory describing the manner in which money
markets, capital markets, and financial institutions accommodate the supplies of assets of
various kinds to the demands and preferences of the individuals and business firms who own
and manage wealth. The entire range of assets is considered: currency, bank deposits,
government obligations, private debts, equities, durable consumers' goods, durable
producers' goods. "Money" is not treated as a case apart. Rather equality
between the demand for and the supply of monetary assets is considered as a part of
general equilibrium in asset markets. Other assets are more or less close substitutes for
"money," even if they are not generally accepted media of payments. The book
tries to derive systematically the implications of this obvious fact. Interest rates,
capital gains, equity yields, rents and profits from owning real property serve as the
"prices" which adjust to equalize supplies and demands in asset markets. But
these "prices" also affect and in turn are affected by current
production, saving, and investment. Asset markets are closely linked to markets for goods
and services. One of the principal objectives of the book is to describe these links, in
order to illuminate the consequences which monetary events have for business activity and
economic growth. In this respect, the book elaborates and builds upon Tobin's paper
"A Dynamic Aggregative Model" (Journal of Political Economy, April 1955,
pp. 103-115).
The key to monetary control is the government's ability to determine the supplies and
sometimes the yields of certain basic assets (currency, bank reserves, and government
debt). The mechanics and objectives of monetary policy and debt management are discussed
from this point of view.
An econometric study of the effects of monetary policy on interest rates was made by
Okun at the instigation of the Commission on Money and Credit. In a paper prepared during,
but circulated after, the period of this
Report ("Monetary Policy, Debt Management and Interest Rates: A Quantitative
Appraisal," CFDP 125), he presents
estimates of the effects of various monetary policy and debt management operations on the
yields of long-term U.S. government bonds and Treasury bills. The results are obtained from
aggregative time-series data for quarterly periods ending in 1959. For all standard
types of policy actions the estimated effects on interest rates are in the expected
direction; as anticipated, the short rate is shown to be much more volatile than the long
rate.
The most striking and most controversial aspect of Okun's results is that open market
operations appear to have very similar effects whether they are conducted by means of
bills or of long term bonds. Therefore, swapping bills for bonds has a very slight
estimated effect on yields. Theoretically, this suggests that government securities of
different maturities are close substitutes to a substantial group of investors. Yet, the
rate differential is variable: it is determined principally by the overall degree of
tightness in financial markets. The greater volatility of the bill rate means that the
differential of long yields over short yields narrows as the entire interest rate
structure rises. Inelastic expectations about future interest rates can readily account
for the greater variability of short rates.
Lovell studied some aspects of the theory of inflation (CFDP 90). Money and the rate of interest
were reintroduced into the post-Keynesian dynamic analysis of the inflationary gap in
order that the consequences of financing government expenditure by creating money might be
investigated. These investigations revealed that under a variety of assumptions concerning
the nature of expectations about future price levels a continued injection of new money
into the economy may lead to a forced saving equilibrium characterized by a stable real
money supply and constant rate of price increase. Conditions likely to cause the economy
to degenerate into a situation of runaway inflation were also specified in this study.
9. Competitive Equilibrium and Games of Strategy
Debreu's fundamental study in this field, Theory of Value: An Axiomatic Analysis of
Economic Equilibrium, was published in 1959 as Cowles
Foundation Monograph No. 16. The first few chapters introduce mathematical
representations of producers' and consumers' opportunities and motivations, and of the
notions of commodities and prices through which these opportunities and motivations are
expressed. Chapter 5 defines equilibrium and states conditions under which a private
ownership economy admits of an equilibrium. Chapter 6 indicates in what sense such an
equilibrium can be called optimal, and conversely proves that any such optimum can be
realized by an equilibrium. The last chapter, Chapter 7, extends these results to the case where
producers' opportunities and consumers' opportunities and preferences relating to future
periods are subject to uncertainty. References to earlier work of other authors are
appended to each chapter.
In subsequent work, Debreu has further unified and strengthened the various results
obtained on the existence of an equilibrium. This is achieved by introducing the concept
of a quasi-equilibrium and proving a general existence theorem for quasi-equilibria.
Simplifying somewhat, one may say that this new concept differs from an equilibrium in
that consumers are treated as expenditure minimizers for a given utility level rather than
as utility maximizers for a given expenditure level. By this device one can bypass one of
the main difficulties met with in the study of this problem and then pass easily from
the existence of a quasi-equilibrium to the existence of an equilibrium.
Besides the existence and the optimality properties of competitive equilibrium, recent
literature has been occupied with the problem of stability of equilibrium. Scarf, in a
study (CFDP 79, (A)) begun at Stanford
University and completed at the Cowles Foundation, contributed to this topic by
constructing a few counter-examples to conjectures previously hoped to be true. These
examples relate to markets in three commodities, in which the price adjustment process is
described by relating the rate of change in each price to the excess demand for the
corresponding commodity. The excess demand is in turn derived from demand functions
obtained through utility maximization. In the examples shown, no equilibrium is approached
over time from any initial non-equilibrium set of prices.
In the foregoing studies equilibrium means competitive equilibrium, in the sense
that each producer, trader or consumer takes market prices as given to him, either because
he has no influence over them or because he does not use to his own advantage such
influence as he might have. This is in contrast with games of strategy where all available
moves may be used. It has been harder to analyze the implications of that more realistic
assumption. In studying a special class of market games first introduced by Edgeworth in
1881, Shubik proved (A) and interpreted (CFDP
107) a property which if extended to more general cases may become an important link
between game theory and the theory of competitive equilibrium. The core of a game is
defined as the set of all those outcomes (or "imputations") that cannot be
changed to its own collective advantage by any coalition of players within the rules of
the game. For the games in question, Shubik found that if the number of participants on
both sides of the market is increased in proportion without limit, the core of the game
converges to the corresponding competitive equilibrium.
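In the notation of the theory of games (a standard formulation, included here for clarity rather than taken from Shubik's paper), let $N$ be the set of players and $v(S)$ the amount that a coalition $S \subseteq N$ can secure for itself within the rules of the game. An imputation $x = (x_1, \ldots, x_n)$ belongs to the core if
\[ \sum_{i \in N} x_i = v(N) \qquad \text{and} \qquad \sum_{i \in S} x_i \ge v(S) \ \ \text{for every coalition } S \subseteq N, \]
so that no coalition can improve upon $x$ to its own collective advantage; Shubik's result is that, for the market games in question, this set shrinks to the competitive equilibrium as the number of traders on both sides grows in proportion.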
Another method for the study of game situations is through experimental play with a
moderate number of players. Shubik (CFDP
105) made some experiments with 10 Yale seniors as the players, to study the effect of
the structure of the game on behavior, and the learning processes that go on during the
play. He is also constructing a more detailed business game for both teaching and research
purposes (CFDP 115, Part 1, Part 2, Part 3).
10. Production Capability
In economy-wide planning or forecasting, the productive capacity of an economy as of a
certain year is often expressed by a Gross National Product (GNP) figure computed at the
prices of some base year. While adequate for some purposes, this procedure taken literally
would imply that production capability so measured is independent of the particular
product-mix that is desired. However, specific capacities, bottlenecks, and substitution
possibilities enter into the appraisal of whether any given product-mix that stays within the
total GNP estimate is actually feasible. Process Analysis, a technique for making more
detailed capability estimates that depend on the product-mix, was the topic of a
symposium, held April 24-26, 1961 at the Cowles Foundation, organized by Manne and
Markowitz. Participants in the symposium, other than Cowles Foundation staff members,
included Anne Carter, Harvard Research Project on the Structure of the American Economy;
Tibor Fabian, of Lybrand, Ross Bros. and Montgomery, New York City; Daniel Gallik, New
York City; Earl Heady, Center for Advanced Study in the Behavioral Sciences; Marvin
Hoffenberg, Operations Research Office, Bethesda, MD; Walter Isard, Wharton School,
University of Pennsylvania; Thomas Marschak, University of California; T.Y. Shen, Wayne
State University, Detroit; Thomas Vietorisz, United Nations; Marshall Wood, National
Planning Association, Washington, DC. Manne and Markowitz will also edit a forthcoming
Cowles Foundation Monograph, "Studies in Process Analysis," containing the
papers presented at the symposium. This collection of papers is intended to argue the
desirability and demonstrate the feasibility of constructing economy-wide interindustry
models on the basis of technological data, rather than of historical money flows alone.
Each of the individual papers is formulated in terms of an "activity analysis"
framework. The three substantive sections of the volume are devoted to the following
sectors: (1) the petroleum and chemical industries (manufacturing, transportation, and
plant location); (2) agriculture; and (3) primary metals and metalworking.
11. Inventory Fluctuations
Lovell has made a systematic study to relate inventory practices of individual firms to
cyclical fluctuations in the level of aggregate economic activity. The following chart
contrasts the course of Gross National Product (GNP) over the past thirty years with a
hypothetical GNP series representing what gross output would have been if there had been
no net inventory investment. The hypothetical series is obtained by subtracting from
actual GNP both inventory investment and an estimate of the consumption it generates. The
chart shows that reductions in inventory deepened the trough of the great depression of
the thirties. Both the 1937 and 1949 recessions appear to be explained by declines in
inventory investment. A sizable replenishment of stocks did smooth the task of
reconversion following World War II. At the close of the Korean emergency, on the other
hand, a reduction in inventory investment deepened the recession.
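The construction of the hypothetical series can be indicated schematically (our notation; the consumption adjustment actually used may differ in detail). If $Y_t$ is actual GNP, $H_t$ net inventory investment, and $m$ the multiplier used to estimate the consumption generated by that investment, the series plotted is approximately
\[ \hat Y_t = Y_t - m\,H_t, \qquad m \approx \frac{1}{1 - \text{MPC}}, \]
so that both inventory investment itself and the consumption induced by it are removed from gross output.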
The effect upon GNP of investment in plant and equipment is revealed by a second
hypothetical series plotted on the chart. (This second hypothetical series is derived
under the assumption that inventories and other categories of investment were not
affected. On the other hand, the full multiplier effects through consumption are taken
into account. The effects of zero investment in plant and equipment upon productivity are
excluded.) It appears that investment in plant and equipment generates a much larger
proportion of effective demand than inventory investment. Although a considerable
contribution to economic growth may be attributed to fixed investment,
fluctuations in inventory investment must bear primary responsibility for cyclical
movements in output. With the exception of the post World War II reconversion, inventory
investment has been perverse in timing and magnitude, contributing to the generation of
cyclical fluctuations, to booms and unemployment. An explanation of this perverse
characteristic of inventory behavior requires both empirical and theoretical research.
Lovell's investigations suggest that inventory investment can be explained by an
accelerator model complicated by delayed adjustment and by errors in forecasting sales
volume. In an investigation of manufacturing inventory behavior (CFDP 86) quarterly deflated time series
data for five durable goods industries and the nondurable manufacturing aggregate were
utilized. A more recent investigation based on deflated quarterly GNP data suggests that
the same model serves to explain the behavior of the non-farm business inventory
aggregate. The evidence suggests that inventory investment is insensitive to interest rate
changes. Speculative purchasing of inventories in advance of anticipated price changes
does not appear to be of major importance. Errors in anticipating future sales volume are
small in magnitude; while this result means that on the average firms forecast accurately,
this is no doubt partly due to a cancelling of individual errors. The empirical
investigations lead to the conjecture that inventory investment in the United States might
have been even more perverse in timing and magnitude during the postwar period if it were
not for errors of expectations and delayed adjustment behavior. This stands out clearly in
the Korean war period.
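The kind of relation estimated can be sketched as follows (our notation; a simplified version of the model rather than a transcription of the fitted equations). With $H_t$ the stock of inventories, $\hat S_t$ anticipated and $S_t$ realized sales, desired inventories a linear function of anticipated sales, and $\delta$ a partial-adjustment coefficient, the change in inventories is
\[ H_t - H_{t-1} = \delta\big(\alpha + \beta \hat S_t - H_{t-1}\big) + \big(\hat S_t - S_t\big), \]
the first term reflecting delayed adjustment toward the desired level and the second the unintended accumulation or depletion caused by errors in forecasting sales volume.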
A more complicated method of analysis was employed in the continued investigation
of this conjecture. It was necessary to construct a theoretical model of the inventory
cycle and to explore its stability properties for alternative values of the parameters of
the system (CFDP 89). The approach of traditional aggregative analysis was rejected in
favor of a framework of many sectors, as it was felt that interactions between individual
firms are a crucial factor in explaining the generation of cyclical fluctuations in total
inventories. Theoretical analysis of the stability properties of the many-sector model
revealed that small marginal desired inventory coefficients, delayed adjustment behavior,
and errors of expectations contribute to stability; on the other hand, it was found,
within the context of this quantity adjustment model in which production takes time, that
the assumption of correct if myopic anticipations implies that the economy is unstable. An
investigation of certain properties of linear stochastic difference equations has helped
determine the effects of random shocks upon the behavioral characteristics of the model.
In principle, the theoretical model is capable of empirical implementation; in practice,
only limited progress has been made thus far in assembling estimates of the various
parameters of the system into a complete system of equations describing the inventory
cycle.
12. Forecasting of Economic Activity
The cyclical character of fluctuations in national income has created particular
interest in the prediction of turning-points in the direction of economic activity. In CFP 144, Okun stresses the need for an
objective criterion of accuracy in evaluating and refining techniques for predicting
turning-points. During a slump, it is safe, and hence devoid of content, to assert
that an upturn is coming. The statement becomes meaningful only when a date is attached to
the prediction. The purpose of forecasting turning-points is to improve the estimate of
their timing. The paper advances a suggested method for evaluating the accuracy of the
predicted dating of business-cycle peaks and troughs.
An ideal criterion of predictive value would consider the benefits or costs due to the
influence of the forecast on decision-making. But no such attempt is made in this paper.
The suggested technique of appraisal, forecast-month scoring, simply counts correct
and incorrect predicted changes in the direction of economic activity. The operation of
forecast-month scoring is illustrated on historical data for naive forecasts, projections
based on the distribution of lengths of past business-cycle phases, leading indicators,
and diffusion indices of the National Bureau of Economic Research. Premature warnings of
recession appear to be the most serious danger in the mechanical use of leading
indicators.
Two earlier evaluations of forecasting procedures (CFDP 40, CFDP 45),
described in our previous report, were extended
to include more recent experience and were published during the present report period (CFP 153, CFP 135, respectively).
Okun also investigated relationships between output and employment in an effort to
improve prediction of unemployment and to estimate the potential output of the economy. A
number of statistical relationships suggest that each one-percent increment in real Gross
National Product is associated with a decline of about one-third of a percentage point in
the ratio of unemployment to the civilian labor force. The percentage gain in output far
exceeds the reduction in the unemployment ratio because increased economic activity leads
to longer hours of work, greater participation in the labor force, and higher
productivity. The retarding effect of recessions on the growth of productivity is clearly
in evidence.
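The relationship can be stated compactly (our notation): with $u$ the unemployment rate in per cent and $\Delta Y / Y$ the percentage change in real Gross National Product, the estimates imply roughly
\[ \Delta u \;\approx\; -\tfrac{1}{3}\,\frac{\Delta Y}{Y}\ (\text{in per cent}), \]
so that, for example, real output about three per cent higher than it would otherwise have been is associated with an unemployment rate about one percentage point lower.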
13. Terms of Bank Lending
Hester completed a doctoral dissertation (Yale 1961) which investigated commercial bank
lending. The terms at which a bank is willing to lend, such as the rate of interest,
maturity, amount, etc., were assumed to be a function of the financial strength of a loan
applicant, the portfolio position of the bank, and the "tightness" of the money
market. Multiple regression analyses of samples of term loans and samples from the 1955
and 1957 Federal Reserve surveys of business loans confirmed this hypothesis. Significant
variables included a borrower's profits, his profit rate, his ratio of current assets to
current liabilities, his demand deposit balances, and his total assets; the lending bank's
deposit instability and loan-deposit ratio; and the prime loan rate of interest in the
money market.
A bank may, of course, lend to firms in a similar financial position at very different
terms. Using canonical correlation, estimates of the degree of substitutability among
various terms of lending were made. For example, in one sample it was found that a
borrower with particular financial characteristics could expect to borrow $100,000 on an
unsecured basis for one month at an effective rate of 5%, for ten months at 5.43%, for
eight years at 5.86%, etc. Another borrower representing a greater risk to the lender
might be able to obtain on an unsecured basis $10,000 for three years at 8%, but he must
pay 10.4% for a similar loan of $20,000.
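Canonical correlation, the statistical device employed here, may be described briefly (a standard formulation, not specific to Hester's data). Let $x$ be the vector of borrower, bank, and money-market characteristics and $y$ the vector of loan terms; the method chooses weight vectors $a$ and $b$ so as to maximize the simple correlation between the composite variables $a'x$ and $b'y$,
\[ \max_{a,\,b}\ \operatorname{corr}\big(a'x,\ b'y\big), \]
and the relative weights within $b$ can then be read as estimates of the rates at which one term of lending substitutes for another for a borrower of given characteristics.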
The availability of credit doctrine states that as monetary authorities cause interest
rates to rise, lenders simultaneously increase their credit standards, thereby reinforcing
the effectiveness of monetary policy. Hester found, with one exception noted below, no
empirical evidence of such credit rationing by commercial banks. Although borrowers must
pay higher loan interest rates when other interest rates in the economy rise, no evidence
of increased collateral requirements, scaling down of loans, or shortening of maturities
was found, except in the case of term loans by large banks where maturities appear to
shorten.
It was found both in this study, and in similar research by Porter, that considerable
individual, random variation exists among different loans, bank officers, banks, and
borrowers. Hence statistical methods that process information about large numbers of loans
are essential to investigations in this field.
14. International Trade
Krause made an empirical investigation of the relationships between imports and other
sectors of the United States economy. It is evident from the dearth of quantifiable
evidence as to these relationships that commercial policy decisions have had to be made
without any real knowledge or reliable estimates of their economic consequences. Moreover,
the attempt to provide such estimates using aggregate relationships and time series data
is subject to inherent statistical difficulties. Finally, many of the problems of real
interest are of a structural rather than an aggregate nature and thus not subject to
investigation in terms of aggregates. Disaggregative studies using cross section
techniques over time therefore seem more promising. Krause's first effort (A) to study the
relationship between changes in our commercial policy and the United States economy was a
modest attempt to appraise the effect of a particular group of tariff reductions on the
level of imports. By comparing over time the growth of imports of a group of products that
have been the recipients of legislated tariff reductions to a similar group of products
without such reductions, one can test for the significance of the observed differences in
effects. While this work was aimed at a narrow question, it became clear that many of the
assumptions concerning non-included variables were unrealistic and thus the conclusions
were far from completely satisfactory. It was therefore deemed necessary to broaden the
approach and take directly into account the most important economic variables for which
data were available.
The next phase of the study was devoted to making statistical estimates of an
explanatory equation for the quantity of imports of the United States since World War II,
using as explanatory variables import prices relative to the competitive domestic price,
the height of the ad valorem tariff and the amount of domestic production of the
comparable product. The results of this part of the study are most encouraging and
indicate that some insight has been gained as to the determinants of U.S. imports from
1947 to 1958 (CFDP 102). Within a given
level of aggregate imports, mainly determined by aggregate income, relative prices are
particularly important in determining the patterns of trade. This is by no means a
surprising result but the indicated degree of responsiveness of imports to price changes
adds weight to other evidence against the 'elasticity pessimism' school of thought. Tariff
changes made after 1947, on the other hand, do not seem to have important effects. This
result is somewhat unexpected in view of the political heat that the tariff issue
generates. However, the large tariff reductions made in the year 1947 did lead to
significantly larger imports. These seemingly conflicting results can be explained by
recognizing that the observed range of variation of tariffs has been very narrow since
1947 as compared to the large change in 1947 itself, and that the procedure for choosing
the products for tariff reductions has been drastically altered since 1947. While imports
are likely to have replaced domestic production in some lines of activity, the results of
this study indicate that in general increasing imports are associated with growing
domestic production. This finding has major political importance in that adjustment to
freer trade is always easiest within a growth setting.
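The explanatory equation can be indicated schematically (the functional form shown is an assumption made for illustration; CFDP 102 gives the specification actually used). For commodity $i$, with $M_i$ the quantity of imports, $P^{m}_{i}/P^{d}_{i}$ the import price relative to the competing domestic price, $\tau_i$ the ad valorem tariff rate, and $Q_i$ domestic production of the comparable product, a log-linear version reads
\[ \ln M_i = \beta_0 + \beta_1 \ln\frac{P^{m}_{i}}{P^{d}_{i}} + \beta_2 \ln(1 + \tau_i) + \beta_3 \ln Q_i + u_i, \]
with the results described above corresponding to a sizable negative $\beta_1$, a small estimated effect of post-1947 tariff changes, and a positive association between imports and domestic production.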
The recently completed final phase of Krause's study (CFDP 119) is an attempt to investigate to what extent import
competition affects the pricing behavior of oligopolistic industries. A theoretical model
was considered with alternative possible links between imports and oligopolistic pricing.
The steel industry was selected for empirical study since it is a highly concentrated
industry with many products facing various conditions of import competition. The
statistical results indicate that for the period studied (1954-1958), domestic prices
were unresponsive to changes in the conditions of competitive imports. This can be
explained by institutional factors within the steel industry, and by the fact that imports
are small relative to domestic production. While this result may not hold for other
industries or even for the steel industry in some other time period, it is also clear that
the discipline of import competition will not always "restrain" administratively
determined domestic prices.
A study of the revenue received from foreign tourists in 58 countries in the years
1955-56 was made by Guthrie (CFDP 93)
in an attempt to discover some economic explanations for differences in receipts between
countries. Travel fares between countries, the volume of exports, and the amount of
emigration in the past are associated with differences in receipts from tourists. After
allowing for the effects of these variables in a regression equation, the residuals are
interpreted as a quantitative measure of the differences in qualitative attractiveness of
the various countries for tourists.
15. Labor Mobility in Agriculture
The tendency for agricultural income in some areas of the United States, notably the
Southeast, to be markedly below incomes in the non-farm sector has quite generally been
interpreted as a symptom of immobility of the farm labor force the counterpart in
agriculture of "frictional unemployment" in the industrial sector. It is not
clear, however, that the behavior of farm persons of all ages can be so simply assessed.
Berry made an empirical analysis by narrow age categories of the net migration of farm
labor from agriculture during the decade 1940-1950. The analysis considered white
rural farm male persons who were between the ages of fifteen and sixty-four in 1940. Rates
of net migration from the rural farm labor force were obtained for each of the ten
five-year age groups 15-19, 20-24, ..., 60-64. Explanatory variables included
measures of 1940 farm income, farm income change during the 1940-1950 decade,
distance to and size of the nearest major city, and the degree of farm ownership within
the farm labor force. Substantial differences were found in the behavior of farm persons of
different ages. For the younger age groups, the relationship between off-the-farm
migration and the 1940 level of farm income was both significant and negative as expected.
On the other hand, this relationship tended almost to disappear for the intermediate age
groups (such as 30-35), and became noticeably positive when older workers were
considered. Low farm income, which appropriately acts as an incentive to migration by
younger farm persons, is evidently a deterrent to the migration of older workers, who with
higher incomes would retire to nearby cities.
A high percentage of hired farm workers within the farm labor force was found generally
consistent with higher than average rates of migration. The greater the distance to the
nearest major city the less rapid was the migration of young farm workers, but the more
rapid the migration within all other age groups, reflecting perhaps that part-time nonfarm
employment is less readily available the more distant the nearest major nonfarm labor market.
Other findings also differ among the ten age groups.
Although necessarily preliminary, these results are suggestive. The decrease in
mobility with advancing age emphasizes the long run social cost of even temporary urban
unemployment which, by retarding the migration of younger persons, may more permanently
commit to agriculture a significant fraction of this group.
16. Managerial Economics
A multiple-purpose water development project has several identifiable products, such as
flood control, electric power, irrigation, etc. In such situations, it is a first problem
of managerial economics to design and operate the system in such a way that no less of any
of these benefits is obtained than is possible, in view of how much of each of the other
benefits is obtained. Once it is known how to achieve this, a second problem is to
determine quantitatively how much of any one of these benefits must be given up, if a
stated increase in another one is to be obtained. Manne made a study (CFDP 95) of such "trade-off"
ratios for a simplified and hypothetical project involving one storage reservoir and a
three-season annual cycle of operations. This study applies a probabilistic programming
technique (CFP 148) described under the
heading "mathematical tools."
Beckmann contributed a paper on "Principles of Optimal Location for Transportation
Networks" to a Symposium on Quantitative Problems in Geography, organized by W. Garrison in
Chicago and held on May 5-6, 1960. This paper is in part a further development of the
"continuous model of transportation" described in the Twenty Year Report 1932-1952 and of ideas proposed by W.
Prager of Brown University. Suppose that traffic is generated with a known continuous
density over an extended region and terminates at a given single point. The cost of
transportation per unit distance and unit volume is constant throughout the region. One
may, however, construct a trunk road passing through the point of destination on which the
cost of transportation is smaller per unit distance. The capacity of the road is unlimited
and the cost of construction and maintenance per unit distance is given. The problem of
determining the optimal extent and location of the trunk road (more generally of a road
network) may be analysed by means of the calculus of variations. The following necessary
condition is obtained: At any point along the trunk road its curvature should be
proportional to the excess of flow entering from the convex side over the flow entering
from the concave side. Conditions for the optimal termination points of the trunk line and
for the optimal angles between roads at junctions may also be found. At a junction the
"forces" composed of the costs of construction and of transporting the existing
flow per unit distance must be in equilibrium. (This is analogous to the well known
Weberian conditions for the optimal location of a plant shipping to and/or from a set of
given locations.) For a network the problem reduces to the combinatorial one of comparing
the optimal graphs resulting from the various possible topological configurations of
networks. This paper is to be published in the Proceedings of the Symposium mentioned
above.
Another study by Beckmann is concerned with production smoothing and inventory control.
Suppose that quarterly demand for a given product is a random variable, identically and
independently distributed; that the cost of changing the rate of production is
proportional to the absolute value of this change; and that production cost, inventory
carrying cost, and shortage penalty cost are all proportional to the amounts involved. For
a given inventory level and a given past production rate, what is the best new rate of
production? Analysis by "dynamic programming" has shown that the optimal policy
is of the following type: For each inventory level there exist two limiting rates of
production; if the existing rate of production falls between these two limits, do not
change it; if it is above, reduce it to the upper limit, if below raise it to the lower
limit. The limiting rates of production considered as a function of inventories may be
regarded as curves which border the set of points for which the production process is
"in control." The shape of these curves can not be determined analytically.
Subsequent computations done at Brown University for typical values of the various cost
parameters have shown that the boundary curves tend to be approximately linear with a
slope of almost 45°.
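The structure of such a policy can be illustrated by a crude value iteration in Python; the demand distribution, cost parameters, discount factor, and grids below are assumptions made for the purpose and are not taken from Beckmann's study or from the Brown University computations. For each inventory level the sketch prints the band of inherited production rates that the computed policy leaves unchanged, the region within which production is "in control."

    # Illustrative value iteration for a production-smoothing problem of the kind
    # described above.  Demands, costs, the discount factor, and the grids are
    # assumptions, not Beckmann's data.
    DEMANDS = [0, 1, 2, 3, 4]          # equally likely quarterly demands
    RATES = [0, 1, 2, 3, 4]            # feasible production rates
    INVENTORIES = list(range(-5, 11))  # inventory levels (negative = shortage)
    C_CHANGE, C_PROD, C_HOLD, C_SHORT = 2.0, 1.0, 0.5, 3.0
    BETA = 0.9                         # discount factor (assumed)

    def clamp(i):
        return max(min(i, INVENTORIES[-1]), INVENTORIES[0])

    def expected_cost(i, p_old, p_new, V):
        """Expected discounted cost of choosing rate p_new in state (i, p_old)."""
        fixed = C_CHANGE * abs(p_new - p_old) + C_PROD * p_new
        total = 0.0
        for d in DEMANDS:
            i_next = clamp(i + p_new - d)
            stage = C_HOLD * max(i_next, 0) + C_SHORT * max(-i_next, 0)
            total += (fixed + stage + BETA * V[(i_next, p_new)]) / len(DEMANDS)
        return total

    # state = (inventory level, production rate carried over from last quarter)
    V = {(i, p): 0.0 for i in INVENTORIES for p in RATES}
    for _ in range(300):
        V = {(i, p): min(expected_cost(i, p, q, V) for q in RATES)
             for i in INVENTORIES for p in RATES}

    # The inherited rates left unchanged by the optimal policy trace out the two
    # limiting curves bordering the "in control" region described in the text.
    for i in INVENTORIES:
        keep = [p for p in RATES
                if min(RATES, key=lambda q: expected_cost(i, p, q, V)) == p]
        print(i, keep)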
Hooper revised and did further work on a joint paper (A) with David S. Stoller of the
RAND Corporation. This paper is concerned with the problem of finding conditions under
which individual service facilities should be aggregated in order to perform a certain
workload in an optimal way. An example of the problem would be the following: Assume that
two persons are available to service customers in a grocery store and that servicing
consists of the two operations of totaling and receiving the money due from the customer
and of bagging groceries. Assume further that the decision has been made that either
separate parallel facilities shall be used or that the two clerks shall work as a team in
one facility. The problem is, which of these two arrangements, team or separate, is
optimal. This, although seemingly a simple problem, does not yield to intuitive reasoning,
since the decision is shown to depend on the value of the parameters of the distributions
involved as well as the efficiency of the team.
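The flavor of this result can be conveyed by a small numerical comparison under conventional Markovian queueing assumptions; the arrival rate, service rates, and team-efficiency factors below are invented for illustration, and the calculation is a generic one rather than Hooper and Stoller's own model.

    # Compare mean time in system for two clerks working as separate parallel
    # servers (an M/M/2 queue) versus as one pooled "team" facility whose combined
    # service rate is scaled by a team-efficiency factor (an M/M/1 queue).
    # All parameter values are assumptions made for illustration.
    from math import factorial

    def mm_c_time_in_system(lam, mu, c):
        """Mean time in system for an M/M/c queue (standard Erlang-C formulas)."""
        rho = lam / (c * mu)
        assert rho < 1, "queue must be stable"
        a = lam / mu
        p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                    + a**c / (factorial(c) * (1 - rho)))
        lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)
        return lq / lam + 1.0 / mu

    lam = 1.2          # customer arrivals per minute (assumed)
    mu = 1.0           # service rate of one clerk doing both operations (assumed)

    for efficiency in (0.7, 0.85, 1.0):
        separate = mm_c_time_in_system(lam, mu, 2)
        team = mm_c_time_in_system(lam, efficiency * 2 * mu, 1)
        better = "team" if team < separate else "separate"
        print(f"efficiency={efficiency:.2f}: separate={separate:.2f} min, "
              f"team={team:.2f} min -> {better}")

As the efficiency factor varies, the preferred arrangement switches, which is the sense in which the answer depends on the parameters of the distributions and on the efficiency of the team.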
17. Information and Organization
The economic decision-maker is usually uncertain about the outcome of his actions. By
seeking information he can diminish uncertainty and thus make his actions more effective
for the achievement of his goals. By and large, information is the more useful, the more
complete and accurate it is. But it is not costless. To be worthwhile, information must
contribute (on the average) more towards the achievement of the goal than the cost of
obtaining it. Marschak (CFP 146) asked
whether and how the "value of information" to its user (its "demand
price" if there is a market for information) and its cost (or "supply
price") are related to the "amount of information." This latter quantity is
used by communication engineers to measure the average amount of uncertainty that is
removed when one of a given set of messages is received. This measure is completely
determined by the number and the probabilities of possible messages, independently of
their contents and their potential uses: it is the negative of the weighted average of the
logarithms of those probabilities, a quantity also known to physicists as "entropy."
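In symbols (standard notation, introduced here for convenience rather than reproduced from the paper), if the possible messages have probabilities p1, ..., pn, the amount of information is

\[
H \;=\; -\sum_{i=1}^{n} p_i \log_2 p_i \quad \text{bits.}
\]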
For example, all sets of three equally probable messages have the same information
amount, which in turn is larger than the information amount contained in any set of two
equally probable messages. Yet consider a speculator who has to choose between buying and
selling a fixed amount of stock. Suppose he can ask either of two specialists, A and B,
for forecasts of the trend of next week's prices. A can tell him one of two things:
whether the stock will "rise" or "not rise." B can tell him one of
three things: whether the stock will "rise by 5 points or more," "fall by 5
points or more," or "change in either direction by less than 5 points." As
measured by the communication engineer, B's information amount is therefore larger than
A's if, as before, we assume in each case equal probability of the possible messages. It
is also plausible that, since B's kind of information is more detailed than A's, B has to
spend more effort or money to get it, and will therefore have to ask a higher "supply
price" as a minimum reward that would make his sacrifice worthwhile. Yet, A's
forecast, though less detailed, happens to have greater value for the speculator than B's.
Of course, whenever the future price change is large, both A and B are of equal help for
deciding whether to buy or sell stock. But, whenever the price change is moderate, B is of
no help while A is. For, ignoring commissions, the speculator will buy on rise, sell on
fall, moderate or not. He should consider, as a maximum offer for a specialist's services,
a higher "demand price" in the case of A than in the case of B. This example
illustrates the fact that the demand price, or the "value of information" to the
user depends, not only on the number and the probabilities of potential messages, but also
on the payoff that can be obtained by the user who wishes to choose the best decision in
response to a given message.
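The example can be checked numerically under assumed figures: one unit of stock; price changes of plus or minus 10 points for the large movements and plus or minus 2 points for the moderate ones; each large movement with probability 1/3 and each moderate one with probability 1/6, so that A's two messages and B's three messages are each equally probable. The Python sketch below computes both the engineer's "amount of information" and the speculator's expected payoff under the best response to each message.

    # Speculator example with assumed numbers: the price change is +10 or -10 with
    # probability 1/3 each, and +2 or -2 with probability 1/6 each.
    from math import log2

    outcomes = {+10: 1/3, -10: 1/3, +2: 1/6, -2: 1/6}

    def message_A(change):            # "rise" or "not rise"
        return "rise" if change > 0 else "not rise"

    def message_B(change):            # "up 5+", "down 5+", or "small change"
        if change >= 5:  return "up 5+"
        if change <= -5: return "down 5+"
        return "small change"

    def payoff(action, change):       # buy or sell one unit, ignoring commissions
        return change if action == "buy" else -change

    def entropy(structure):
        probs = {}
        for change, p in outcomes.items():
            probs[structure(change)] = probs.get(structure(change), 0) + p
        return -sum(p * log2(p) for p in probs.values())

    def value(structure):
        # For each message choose the action with the highest conditional expected
        # payoff; the value of the structure is the overall expected payoff.
        total = 0.0
        for m in set(structure(c) for c in outcomes):
            cond = {c: p for c, p in outcomes.items() if structure(c) == m}
            total += max(sum(p * payoff(a, c) for c, p in cond.items())
                         for a in ("buy", "sell"))
        return total

    for name, s in (("A", message_A), ("B", message_B)):
        print(f"{name}: amount = {entropy(s):.2f} bits, value = {value(s):.2f}")
    # B carries more "information" in the engineer's sense, yet A is worth more.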
Similar considerations apply also to the "noise" in an information channel,
that is, the added uncertainty due to error. If our specialist A were known to err
more often than B, comparison between the respective "amounts of information"
would be even less favorable for A. Yet his messages may be the more useful ones.
The subject of what constitutes useful information was further studied in McGuire's
"Comparison of Information Structures" (CFDP 71). "Information structure" (or "information
rule") specifies, for each state of the decision-maker's environment, the message
that he will receive about the state of his environment. To use his information structure
well, the decision-maker also has to fix some "action rule" (or
"strategy"): this will state the best response to any given message.
The Figure at the left is a "flow chart," like those used in making
programs for computers. In each "box" (function, "operator"), certain
variables ("inputs," represented by circles) are processed into certain other
variables ("outputs"). The decision-maker can choose among available information
rules and action rules. But he has to accept as given: the probability distribution of
external events; the "payoff function," which states the usefulness of each
action when performed in a given state of the environment; and the "organizational
cost function," which tells how much it costs to obtain certain kinds of information,
and to apply certain decision rules.
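The scheme in the flow chart can be illustrated by a deliberately small numerical example; the states, probabilities, payoff table, information rules, and costs below are invented for the purpose and correspond to nothing in the studies described here.

    # Toy version of the flow chart: the decision-maker chooses an information
    # rule (a partition of the states into messages) and an action rule (a map
    # from messages to actions); the probabilities, the payoff function, and the
    # organizational cost function are given.  All numbers are assumptions.
    states = ("low demand", "medium demand", "high demand")
    prob = {"low demand": 0.3, "medium demand": 0.5, "high demand": 0.2}
    actions = ("produce little", "produce much")

    payoff = {("produce little", "low demand"): 4, ("produce much", "low demand"): 0,
              ("produce little", "medium demand"): 2, ("produce much", "medium demand"): 3,
              ("produce little", "high demand"): 0, ("produce much", "high demand"): 6}

    # Three information rules: no information, a coarse report, a full report.
    info_rules = {
        "null": lambda s: "no report",
        "coarse": lambda s: "low" if s == "low demand" else "not low",
        "full": lambda s: s,
    }
    cost = {"null": 0.0, "coarse": 0.5, "full": 1.5}   # organizational cost function

    for name, rule in info_rules.items():
        gross = 0.0
        # Best action rule: for each message take the action with the highest
        # conditional expected payoff ("gross" value of the information structure).
        for m in set(rule(s) for s in states):
            cond = {s: p for s, p in prob.items() if rule(s) == m}
            gross += max(sum(p * payoff[(a, s)] for s, p in cond.items())
                         for a in actions)
        print(f"{name}: gross value = {gross:.2f}, net of cost = {gross - cost[name]:.2f}")

With these assumed payoffs the coarse report is worth as much as the full one, which is a reminder that the value of an information structure depends on the payoff function and not merely on the fineness of the partition.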
An organization of several men differs from a single decision-maker in that the goals
may vary from member to member, and that in general, the members must act on the basis of
different information. Marschak and Radner are continuing their work on a Cowles
Foundation monograph, Economic Theory of Teams. In the theory of teams one
abstracts from the divergence of goals. One may assume, for example, in a first analysis,
that "perfect incentives" (e.g., well-designed bonuses) make each member act in
the common interest while acting in his own. There remains the difference in information,
the central feature of the theory of teams. It surely does not pay to let all executives
of a firm, or all officers of a government agency, have identical information. But
precisely what kinds of messages should a given member, entrusted with given kinds of
actions, receive, either through his own observations, or from other members? What actions
should he perform, and what messages should he send, on the basis of information he
obtains? This will be described by the "controlled" boxes of our diagram. Which
information and decision rules are best will depend on the nature of the
"uncontrolled" boxes. One of these, the "organizational cost function"
has so far been neglected for purposes of analysis. Instead, the values of various
information structures were compared: the "gross" average payoffs obtained when
a given information structure is combined with the most appropriate decision rule. Such
comparisons were carried out in Radner's paper "The Evaluation of Information in
Organizations" presented at the 1960 Berkeley Symposium on Mathematical Statistics
and Probability (and to be incorporated in the monograph). He compared the values of
information structures such as "management by exception" (reporting exceptions
to headquarters, or holding emergency conferences); "partitioned communication"
(with some results on the effects of the number and size of "departments");
"dissemination of independent information"; and "erroneous observations and
erroneous instructions."
In each case, optimal decision rules had to be computed. It was possible to do this by
differential calculus, assuming smoothness of the payoff function. This assumption also
helped to measure the effect of varying certain important parameters, such as the degree
of "complementarity" between the members' actions. However, the smoothness
assumption was dropped, for example, in Marschak's CFP 150a and CFP 150b
and in Radner's CFP 128. In the latter,
linear programming methods were introduced.
Team problems that have arisen in practice were studied in a paper by Beckmann on
airline reservations described in our previous Reports; and in a study of "Team
Models of a Sales Organization" by McGuire (CFP 160) inspired by observing marketing practices in wholesale
bakeries. His solution calls for methods of non-linear programming.
Only towards the end of the period covered by this report did the group give
systematic attention to the cost of information and decision. The
"indivisibility" of each member of an organization, analogous to that of large
pieces of equipment, leads to the study of "fixed" costs, and of
"capacities" not of machines but of men. Decision and information rules
are associated with costs because their implementation requires efforts of problem-solving
and of effective observation and communication. The ability to do these things quickly and
well is limited, though varying from person to person. These limitations (neglected,
incidentally, in the theory of games in its original form) cannot be assumed away without
making useless any normative theory of organization, that is, any theory of how to
make an organization efficient. To assume all managers infinitely quick and wise is like
assuming all industrial plants to be infinitely large.
Some experimental pilot studies on measuring "managerial capacity" in a
rather narrow sense were started by Becker and Marschak, with the collaboration of Watts.
Subjects had to solve problems of a simple "operations research" type, and the
reward was the larger, the larger the profit that would be earned if the solution had been
applied. For example: "After learning the current price, decide to sell or to
postpone selling merchandise; there is no time limit on sale; storage costs are $20 per
day; the price on any day can be anything between $300 and $500, with equal
probabilities." The time spent on solution was measured. The solution arrived at
"intuitively" by the respondent was compared with the optimal solution that can
be obtained by precise mathematical reasoning of a kind that is not likely to be applied
in practice, given the present market for mathematically trained executives. A good
intuitive solution should at least grasp the salient qualitative features of the optimal
one. For example, in the case just described, the optimal solution is to sell when the
price is above a certain level; this level is constant, regardless of the time elapsed. In
another experiment (with no storage cost, and with limited time in which to sell) this
cut-off price should, on the contrary, decrease as the deadline approaches. How much
experience does a given person need to grasp the qualitative essence of solutions of such
problems?
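The first of these problems can be checked numerically. The sketch below assumes that the seller observes each day's price before deciding and pays the $20 storage charge only for days on which he waits, and it approximates the continuous price range by a grid of equally likely prices; these are assumptions of the illustration, not a description of the experimental instructions. A simple fixed-point iteration recovers the constant cut-off price mentioned above.

    # "Sell or postpone" with $20 daily storage cost, price drawn each day with
    # equal probabilities between $300 and $500, and no deadline.  The optimal
    # policy is a constant reservation price; this is a numerical check, not the
    # experimenters' own computation.
    prices = [300 + k for k in range(201)]        # equally likely daily prices

    V = 400.0                                      # initial guess for the value of the problem
    for _ in range(1000):
        # If the seller waits, he pays $20 and faces the same problem tomorrow,
        # so he sells today whenever the quoted price exceeds V - 20.
        V = sum(max(p, V - 20.0) for p in prices) / len(prices)

    cutoff = V - 20.0
    print(f"sell whenever the price is at least ${cutoff:.2f}")   # roughly $410 under these assumptions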
18. Decision Making under Uncertainty
Many studies described in the present Report take their point of departure in some
hypothesis or principle stating how decisions under uncertainty are made, or how it is
recommended that they be made. This includes some of the models of saving, the studies of
portfolio selection, of monetary theory, and some of those in managerial economics. In the
present section we describe a few studies concerned with such hypotheses themselves,
either through experimental test, or through theory construction facilitating such test.
One experimental study by Becker, another ramification of the concern with
managerial ability, deals with consistency over time of preferences with regard to risk.
The following experiment usually shows up a subject's inconsistency, except possibly after
several repetitions have given him the opportunity to "learn." The experimenter
finds the smallest amount that the subject would accept in exchange for a lottery ticket
that gives him a 50-50 chance of getting either 0 or 100 dollars. This cash value,
call it x, may be, for example, $40 or even $30 if the subject strongly
dislikes risk. If he likes a gamble he will evaluate the lottery ticket at more than $50.
In this way, let the subject evaluate the cash values (to him) of the following lottery
tickets:
Lottery Ticket     50-50 Chance of      Subject's Estimate of Cash Value
No. 1              $0 or $100           $x
No. 2              $0 or $x             $y
No. 3              $100 or $x           $z
No. 4              $y or $z             $u
Clearly lottery No. 2 should be equally acceptable, for a consistent
subject, as a lottery in which the chances of getting $0 or $100 are in the ratio of 3 to
1. Similarly, lottery No. 3 should be equivalent to one in which those chances are 1 to 3.
It follows that lottery No. 4 should be equivalent to lottery No. 1, and that therefore
the amount u should be the same as x. Few people exhibit this consistency (or, what is
really the same thing, the ability to think sufficiently clearly and fast) on the first
few trials. This and similar experiments by Becker suggest, however, that consistency
tends to increase (the difference between x and u diminishes) as experience is being
gained by the subject.
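The arithmetic of this argument can be verified directly. The short sketch below simply substitutes lottery No. 1 for its stated cash value x (and, in forming lottery No. 4, treats y and z as interchangeable with lotteries Nos. 2 and 3) and reduces the resulting compound lotteries.

    # Reduction of the compound lotteries in the table above.
    from fractions import Fraction

    half = Fraction(1, 2)
    lottery1 = {0: half, 100: half}                       # 50-50 chance of $0 or $100

    def substitute(ticket):
        """Replace the symbolic prize 'x' (= lottery No. 1) by its outcomes."""
        reduced = {}
        for prize, p in ticket.items():
            outcomes = lottery1 if prize == "x" else {prize: Fraction(1)}
            for o, q in outcomes.items():
                reduced[o] = reduced.get(o, Fraction(0)) + p * q
        return reduced

    lottery2 = substitute({0: half, "x": half})           # $0 or $x
    lottery3 = substitute({100: half, "x": half})         # $100 or $x
    lottery4 = {o: half * lottery2.get(o, 0) + half * lottery3.get(o, 0) for o in (0, 100)}

    print(lottery2)   # {0: 3/4, 100: 1/4} -- odds of $0 to $100 are 3 to 1
    print(lottery3)   # {0: 1/4, 100: 3/4} -- odds are 1 to 3
    print(lottery4)   # {0: 1/2, 100: 1/2} -- identical to lottery No. 1, so u should equal x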
Another set of hypotheses explicitly accepts a certain lack of consistency, or perhaps a
taste for variability, but permit one to predict behavior in the statistical sense. These
are hypotheses on the "stochastic man," a weakened, hence possibly more
realistic, variant of the "economic man." In a pre-campaign cartoon in the New
Yorker, a poll interviewer gets the answer: "I'd say I'm about 42% for Nixon, 39% for
Rockefeller, 19% for Kennedy." Suppose we interpret this by predicting that the
chances of the man's voting for these three candidates are as 42 to 39 to 19. Then the
strongest of the stochastic choice hypotheses advanced so far, that of R.D. Luce, would
suggest: if Rockefeller abandons the contest, the odds on our man's voting for each of the
two remaining candidates will still be 42 to 19: any third alternative is irrelevant
to the relative probabilities attached to any other two.
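The content of Luce's hypothesis in this example can be written down in a few lines; the figures 42, 39, and 19 come from the cartoon, and the code merely treats them as fixed weights and renormalizes when an alternative is removed.

    # Luce's strong hypothesis: choice probabilities behave as if each alternative
    # carried a fixed weight, so removing one alternative leaves the ratios among
    # the remaining ones unchanged ("independence of irrelevant alternatives").
    weights = {"Nixon": 42, "Rockefeller": 39, "Kennedy": 19}

    def luce_probabilities(available):
        total = sum(weights[c] for c in available)
        return {c: weights[c] / total for c in available}

    print(luce_probabilities(["Nixon", "Rockefeller", "Kennedy"]))
    print(luce_probabilities(["Nixon", "Kennedy"]))   # still in the ratio 42 to 19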
The weakest of the proposed stochastic choice hypotheses extends to pairwise
choices only. Let p(a,b) be the probability that the subject will prefer a to b.
The hypothesis says: if both p(a,b) and p(b,c) are larger than one-half,
then p(a,c) is also larger than one-half ("weak stochastic
transitivity").
The most important hypothesis of intermediate strength was proposed by Fechner a
century ago, and has been used by psychologists ever since for the scaling of subjective
"sensations" of various kinds. This hypothesis says that p(a,b), the
probability that the subject will name a rather than b as the better (or the brighter,
heavier, louder) object, is the larger, the larger the difference between two
corresponding numbers va and vb, called "sensations" (the economist is reminded
of "utilities"). The existence of such numbers (and thus, indeed, the validity
of the hypothesis) does not seem to have ever been tested statistically, presumably for
lack of a precise mathematical model. However, such models are now becoming available. Our
previous Report described an axiomatic study by
Debreu of preference structures of this kind. It also described another study by Debreu,
similar in kind but different in content, of the construction of a cardinal utility scale
if, for simplicity's sake, the alternatives from which experimental subjects choose are
limited to sure prospects and to even chances as between just two sure prospects. Both
studies, after some further improvements, were published in the period of this Report (CFP 125, CFP 141). The mathematical tool of both studies is described below.
Marschak (CFP 155; and earlier, with
Block, CFP 147) studied the logical
relations between the several stochastic hypotheses here discussed, and other ones found
in the literature. Some implausible implications of Luce's strong hypothesis were pointed
out. This was also done, from another point of view, by Debreu (B). Some further
experimental work on choices between lotteries, or "investment portfolios," was
started at the Cowles Foundation and continued in 1960-61 (by Becker, De Groot and
Marschak) at the Western Management Science Institute, University of California at Los
Angeles, with the support of the Behavioral Research Service of the General Electric
Company. One provisional result is similar to the one obtained in the experiments on
consistency: At least the stronger stochastic hypotheses do not begin to predict well till
a learning period has elapsed. The subject's initial lack of consistency in his attitude
toward risk is better explained by an adherence to some arbitrary, as it were
"magical," patterns, chosen perhaps on grounds of symmetry, or apparent
simplicity. Sometimes, after a learning period, and often quite suddenly, behavior becomes
strictly consistent and hence predictable, rather than stochastic: "in a flash,"
the subject conceives what he really wants. This poses again the question: by what methods
of selection and training can society increase the number of organization leaders and
decision makers who are able to perceive organization goals and to grasp the essence of a
given risk situation?
19. Statistical Tools of Econometric Research
Many econometric models posit relationships (often linear) between a set of dependent
variables, a set of independent variables, and a set of unobservable random variables or
"disturbances." The "parameters" of such a model, i.e., the behavior
constants one wishes to study, are initially unknown and are estimated using sample
observations on the dependent and independent variables. Once the parameters have been
estimated the model can be used to predict the values of the dependent variables given
some values for the independent variables. Econometric models are evaluated on the basis
of how well they predict. The predicted values of the dependent variables will usually be
different from the true values which actually occur in the prediction period. Thus an
inevitable "prediction error" occurs. This error can be divided into two parts.
One part is due to the fact that estimated parameters rather than the true parameters are
used, and the other part is due to the random disturbance which occurs in the prediction
period. While the size of this forecasting error is of greatest interest at the time the
prediction is made, of course at that time only a probabilistic statement can be made
about it. Hooper and Arnold Zellner have discussed (CFDP 77R) the construction of a probabilistic forecast region for the
error of forecast when several dependent variables are predicted simultaneously from a
multi-equation regression model.
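For a single equation this decomposition can be written out explicitly (the notation is introduced here for illustration and is not Hooper and Zellner's): if the relationship in the forecast period is y_F = x_F'beta + u_F and the forecast uses the least-squares estimate of beta, then

\[
y_F - \hat{y}_F \;=\; x_F'\,(\beta - \hat{\beta}) \;+\; u_F ,
\]

the first term arising from the use of estimated rather than true parameters and the second from the disturbance of the forecast period; the multivariate regions discussed below treat several such errors jointly.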
As an example of such a multivariate forecast region we can consider the following
model, due to Haavelmo (Journal of the American Statistical Association, March
1947, pp. 105-122):
    ct = π11 x1t + π12 x2t + v1t
    yt = π21 x1t + π22 x2t + v2t
where the jointly dependent variables are deflated consumers' expenditures per capita (ct)
and deflated disposable income per capita (yt). The independent
variables are a constant x1t = 1 and gross investment dollars
per capita, deflated (x2t). The "disturbances" are v1t
and v2t. Two forecast regions are presented in the diagram for this model. The meaning of
ellipse A is that, on the average, 95% of the time the true values of c and y will be
contained in ellipse A when a prediction is made using as values for the independent
variables x1F = 1 and x2F = 100.
Ellipse B indicates a similar forecast region when the values of the independent variables
in the forecast period are x1F = 1 and x2F
= 200.
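One conventional way of constructing such a joint region can be sketched as follows; the code uses simulated data and a large-sample chi-square approximation, and it is not intended to reproduce the exact finite-sample region derived in CFDP 77R.

    # Joint 95% forecast region for a two-equation regression, using a
    # large-sample chi-square approximation; the data are simulated.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n = 40
    X = np.column_stack([np.ones(n), rng.uniform(80, 220, n)])   # constant and "investment"
    B_true = np.array([[60.0, 20.0], [0.55, 0.9]])               # columns: c-equation, y-equation
    err_cov = np.array([[25.0, 15.0], [15.0, 36.0]])
    Y = X @ B_true + rng.multivariate_normal([0, 0], err_cov, size=n)

    B_hat = np.linalg.solve(X.T @ X, X.T @ Y)                    # least-squares coefficients
    resid = Y - X @ B_hat
    Sigma_hat = resid.T @ resid / (n - X.shape[1])               # residual covariance

    x_f = np.array([1.0, 100.0])                                 # forecast-period regressors
    y_f_hat = B_hat.T @ x_f
    scale = 1.0 + x_f @ np.linalg.solve(X.T @ X, x_f)            # allows for estimation error
    cov_forecast = scale * Sigma_hat

    # Region: all (c, y) with (v - y_f_hat)' cov^{-1} (v - y_f_hat) <= chi-square quantile.
    c95 = chi2.ppf(0.95, df=2)
    def inside(point):
        d = point - y_f_hat
        return d @ np.linalg.solve(cov_forecast, d) <= c95

    y_future = B_true.T @ x_f + rng.multivariate_normal([0, 0], err_cov)
    print("forecast point:", y_f_hat.round(2))
    print("simulated future observation inside the 95% region?", inside(y_future))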
Hooper has also been concerned with the development of a criterion by which
multi-equation models can be evaluated on the basis of sample observations. Assume that an
investigator has two alternative multi-equation models that are equally plausible on
logical grounds; how does he decide on the basis of sample observations which model is to
be retained for further use? For single equation models one criterion which is used, among
others, for a choice between models is that one prefers the model which has the larger
multiple correlation coefficient R2, as computed from the sample. Analogously,
in a study (B) completed before joining the staff, Hooper developed a generalized
correlation coefficient for multi-equation models, called the trace correlation. A logical
further extension of the trace correlation is the concept of generalized partial
correlation coefficients. For a single equation, partial correlation coefficients have
been useful in indicating which independent variables should be used in a model.
Analogously, Hooper has developed generalized partial trace correlation coefficients for
multi-equation models (CFDP 97). These partial trace correlation coefficients give a
measure of how much of the variation in the set of jointly dependent variables is
explained by a particular exogenous variable, after the influence of the other exogenous
variables has been removed.
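A rough numerical illustration of a statistic of this general type is the average squared canonical correlation between the set of dependent variables and the set of regressors; the simulated data and the particular formula below are offered only to convey the idea of a multivariate analogue of R2 and should not be read as a definition of Hooper's own statistic.

    # Average squared canonical correlation between the jointly dependent
    # variables and the regressors in a simulated two-equation system.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    X = rng.normal(size=(n, 3))                       # three exogenous variables
    B = np.array([[1.0, 0.2], [0.5, -0.4], [0.0, 0.8]])
    Y = X @ B + rng.normal(scale=1.0, size=(n, 2))    # two jointly dependent variables

    Y = Y - Y.mean(axis=0)
    X = X - X.mean(axis=0)
    Y_hat = X @ np.linalg.solve(X.T @ X, X.T @ Y)     # fitted values from least squares

    # Squared canonical correlations = eigenvalues of (Y'Y)^{-1} (Y_hat' Y_hat).
    eigvals = np.linalg.eigvals(np.linalg.solve(Y.T @ Y, Y_hat.T @ Y_hat))
    print(f"average squared canonical correlation: {eigvals.real.mean():.3f}")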
Hooper is presently finishing a study of specification errors in multi-equation models.
His main result is that the trace correlation can be used to distinguish, on the basis of
sample observations, between correctly and incorrectly specified multi-equation models.
Both Hooper and Watts are turning to the methodological problems of econometric
studies based on "cross-section" data, obtained at one time or at successive
moments of time. Watts has been enabled, through a Social Science Research Council
Fellowship, to devote the year 1961-62 full-time to a systematic study of this topic.
Hooper is pursuing in particular the connections between cross-section studies and the
aggregation problem.
Summers continued his investigation of the small-sample properties of simultaneous
equations estimators by the use of Monte Carlo techniques described in our previous Report. Some tentative results from this
study were presented in CFDP 64.
Dhrymes, in CFDP 122, shows why the
least-squares estimators of the parameters of a Cobb-Douglas production function are
biased estimators. He then derives a set of estimators for these parameters which are
unbiased, sufficient, and consistent.
20. Mathematical Tools
Almost nothing is known on a theoretical level about the efficiency of the simplex
procedure for linear programming problems. Experience with such problems indicates that
the number of iterations required for the simplex method to converge is unexpectedly
small. On the other hand, the known theoretical bounds on that number are very much
higher. Scarf explored this problem and obtained small bounds for a quite restricted class
of programming problems. The type of analysis required in this problem is of a subtle
mathematical character, involving the relationships among the signs of the subdeterminants
of a given matrix. He intends to return to the problem in the future, not only because of
its inherent interest, but also because of the light it may shed on integer programming
problems.
Beckmann has considered a calculus of variations problem (A) arising when a commodity
is to be allocated over time subject to upper and lower bounds on both its stock and its
flow. Necessary and sufficient "efficiency" conditions are developed for a
piecewise continuous flow function to represent an optimal allocation. This model applies
to the problem of optimal water storage in a hydroelectric system, and to various problems
of production smoothing treated in the recent literature.
Manne has indicated (CFP 148) how
the methods of linear programming can be used to solve sequential decision problems in
probabilistic models usually studied by the method of "dynamic programming." The
essential idea is to adopt, as the unknowns of the problem, not the decision rule sought,
but the discrete joint probability distribution of the state of the system and of the
decision adopted, assuming that that decision rule is followed. The assumption of an
infinite horizon serves to make that distribution independent of time. Once it is found by
linear programming methods, the decision rule sought is easily obtained. One application
to a multiple-purpose water development project was discussed above. Another one lies in
the field of inventory control under probabilistic demand conditions.
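The device can be illustrated on a toy problem; the two-state, two-action inventory example below, its costs, and its transition probabilities are invented for the purpose and are not taken from CFP 148, but the linear program has the structure just described: the unknowns are the long-run joint probabilities of state and decision, and the decision rule is read off from the solution.

    # Linear-programming formulation of a small sequential decision problem.
    import numpy as np
    from scipy.optimize import linprog

    states, actions = 2, 2                 # e.g. inventory "low"/"high"; "order"/"do not order"
    # P[a][s, s'] = transition probability; cost[s, a] = one-period cost (assumed numbers)
    P = [np.array([[0.3, 0.7], [0.6, 0.4]]),     # action 0: order
         np.array([[0.9, 0.1], [0.5, 0.5]])]     # action 1: do not order
    cost = np.array([[4.0, 2.0],
                     [1.0, 3.0]])

    # Variables x[s, a], flattened; minimize the expected long-run average cost.
    c = cost.flatten()

    # Balance: for each state s', sum_a x[s', a] = sum_{s, a} P[a][s, s'] x[s, a].
    A_eq, b_eq = [], []
    for s_next in range(states):
        row = np.zeros(states * actions)
        for s in range(states):
            for a in range(actions):
                row[s * actions + a] += P[a][s, s_next]
                if s == s_next:
                    row[s * actions + a] -= 1.0
        A_eq.append(row)
        b_eq.append(0.0)
    A_eq.append(np.ones(states * actions))       # probabilities sum to one
    b_eq.append(1.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (states * actions), method="highs")
    x = res.x.reshape(states, actions)
    policy = x.argmax(axis=1)                    # decision rule recovered from the distribution
    print("stationary state-action probabilities:\n", x.round(3))
    print("decision rule (action chosen in each state):", policy)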
The two studies by Debreu on the axiomatics of cardinal utility, already discussed,
both employed as their principal mathematical tool a theorem of Thomsen and Blaschke that
gives a topological characterization of three families of parallel straight lines in a
plane. The same tool can also be applied to another problem in utility theory that was
previously treated by calculus techniques. The problem is that of independent commodities,
i.e., of finding sufficient conditions for the preferences of a consumer for n-commodity
bundles to be representable by a sum of n functions
u1(x1) + ... + un(xn),
where xi denotes the quantity of the ith commodity. Debreu showed (CFP 156) that, aside from continuity
conditions, it is not only necessary, but also sufficient, for the preferences to satisfy
the following independence condition: Fix any m of the xi and consider
the resulting preferences for the bundles of the n - m remaining commodities
whose quantities can still vary. Then these preferences should be independent of the
particular values chosen for the xi that have been fixed, and this
should be so regardless of how many and which variables were fixed. Since the method used
is independent of differentiability assumptions, the result is more natural, and can be
readily generalized to independent groups of commodities rather than single commodities.
In their work on the theory of stochastic choice, Block and Marschak had been led to
establish a certain identity arising from probability considerations and tying together
different aspects of stochastic behavior. Debreu gave a brief proof of that identity based
on a probability-theoretical argument (CFP
149).
Our Report for 1952-54 mentions a paper in which Debreu
has gathered several of the separation theorems for convex sets most useful in economic
theory, with proofs. This note has now been published as an appendix in CFP 136.
RESEARCH CONSULTANTS
A Research Consultant to the Cowles Foundation is a scholar at some other institution
who maintains an active interest in the research program of the Foundation, manifested in
exchanges of ideas and results with members of the Foundation's staff. Some Consultants
are previous members of the staff, and some are completing research begun at the Cowles
Commission or Foundation or pursuing further investigations stimulated by such research.
Where a real relationship exists between the work of a Consultant and the program of the
Cowles Foundation, the Foundation welcomes the opportunity to include the results in its
publications.
The following were Research Consultants during the whole or part of the period covered
by this report.
THEODORE W. ANDERSON
Dept. of Mathematical Statistics
Columbia University
New York, New York
KENNETH J. ARROW
Applied Mathematics & Statistics Laboratory
Serra House
Stanford University
Stanford, California
GORDON M. BECKER
Itek Corporation
700 Commonwealth Avenue
Boston 15, Massachusetts
MARTIN J. BECKMANN
Department of Economics
Brown University
Providence, Rhode Island
H. DAVID BLOCK
Department of Engineering
Cornell University
Ithaca, New York
CARL F. CHRIST
Department of Political Economy
Johns Hopkins University
Baltimore, Maryland
H.T. DAVIS
Radiation Laboratories
University of California
P.O. Box 808
Livermore, California
MORRIS H. DEGROOT
Department of Mathematics
Carnegie Institute of Technology
Pittsburgh 13, Pennsylvania
TRYGVE HAAVELMO
University Institute of Economics
Frederiksgate 3
Oslo, Norway
CLIFFORD G. HILDRETH
Dept. of Agricultural Economics
Michigan State University
East Lansing, Michigan
WILLIAM C. HOOD
Department of Political Economy
University of Toronto
Toronto, Canada
HENDRIK S. HOUTHAKKER
Department of Economics
Harvard University
Cambridge, Massachusetts
LEONID HURWICZ
School of Business Administration
University of Minnesota
Minneapolis, Minnesota
LAWRENCE R. KLEIN
Wharton School of Finance & Commerce
Department of Economics
University of Pennsylvania
Philadelphia 4, Pennsylvania
LIONEL W. MCKENZIE
Department of Economics
University of Rochester
Rochester, New York
HARRY MARKOWITZ
The RAND Corporation
1700 Main Street
Santa Monica, California
JACOB MARSCHAK
Graduate School of Business Administration
University of California
Los Angeles, California
ROY RADNER
Department of Economics
University of California
Berkeley, California
THOMAS C. SCHELLING
Harvard University Center for International Affairs
6 Divinity Avenue
Cambridge, Massachusetts
HERBERT A. SIMON
Graduate School of Industrial Administration
Carnegie Institute of Technology
Pittsburgh, Pennsylvania
ROBERT SUMMERS
Wharton School of Finance
Department of Economics
University of Pennsylvania
Philadelphia 4, Pennsylvania
GUESTS AND VISITS
The Cowles Foundation is pleased to have as guests advanced students and scholars from
other research centers in this country and abroad. Their presence both stimulates the work
of the staff and aids in spreading the results of its research. To the extent that its
resources permit, the Foundation has accorded office, library, and other research
facilities to guests who are in residence for an extended period. The following were
associated with the organization in this manner during the past three years.
RAGNAR BENTZEL (Sweden). October 1958. Sponsored by
Industriens Utredningsinstitut, Stockholm, Sweden.
ROBERT EISNER (USA). August-December 1958. Sponsored by the
Ford Foundation. Returned to Northwestern University, Illinois.
WALTER D. FISHER (USA). September 1960-August 1961.
Sponsored by the John Simon Guggenheim Memorial Foundation. Returned to Kansas State
University.
HOLGER GAD (Denmark). September 1958-January 1959. Sponsored
by the Rockefeller Foundation. Returned to University of Aarhus, Denmark.
BERNARD GOODMAN (USA). September 1958-August 1959. Sponsored
by the Ford Foundation. Returned to Wayne State University, Michigan.
GEORGE G. JUDGE (USA). September 1958-June 1959. Sponsored
by the Social Science Research Council. Returned to Oklahoma State University.
JOZEF LUKASZEWICZ (Poland). January-June 1960. Sponsored by
the Ford Foundation (through the Institute of International Education). Returned to
Mathematical Institute of the Polish Academy of Sciences, Wroclaw, Poland.
EDWIN S. MILLS (USA). February-August 1961. Sponsored by the
Ford Foundation. Returned to Johns Hopkins University, Maryland.
JEAN GERARD MORREAU (Netherlands). July-October 1959.
Sponsored by the Nederlandse Organisatie voor Zuiver Wetenschappelijk Onderzoek. Returned
to Amsterdam.
JOHN DAVID PITCHFORD (Australia). August-December 1959.
Sponsored by the Rockefeller Foundation. Returned to the University of New South Wales,
Australia.
D.V. RAJALAKSMAN (India). October 1959-May 1960. Sponsored
by the Rockefeller Foundation. Returned to University of Madras, India.
GEORGE J. STOLNITZ (USA). September 1959-September 1960.
Sponsored by the National Science Foundation. Returned to Indiana University.
BJORN THALBERG (Norway). September 1959-July 1960. Sponsored
by the Rockefeller Foundation. Returned to University of Oslo, Norway.
ARNOLD ZELLNER (USA). January-September 1959. Sponsored by
the National Science Foundation. Returned to University of Washington.
During the year 1958-59 the Director, James Tobin, was in Europe on sabbatical
leave. He used the opportunity to have a useful exchange of views and experience at major
centers of econometric research in Europe: Rotterdam, Econometric Institute and
Netherlands Economic Institute; London School of Economics; Oxford Institute of
Statistics; Department of Applied Economics, University of Cambridge; University of
Manchester; IFO-Institut für Wirtschaftsforschung, Munich; University of Stockholm, and
Konjunkturinstitutet, Stockholm; University of Uppsala; University of Oslo.
SEMINARS
1958
October 10. ROBERT EISNER, Northwestern University, "Capital Expenditures and Expectations."
October 14. RAGNAR BENTZEL, University of Uppsala, "An Investigation of the Swedish Consumption Patterns"
November 7. ERICH SCHNEIDER, University of Kiel, "On the Influence of Changing Exchange Rates on the Balance of Payments"
December 12. LAWRENCE R. KLEIN, University of Pennsylvania, "An Econometric Model of the United Kingdom Postwar Quarters"

1959
January 9. DANIEL ELLSBERG, Harvard University, "The Theory and Practice of Blackmail."
January 23. GERALD THOMPSON, Ohio Wesleyan University, "A Further Generalization of the von Neumann Dynamic Model"
February 13. ROBERT STROTZ, Massachusetts Institute of Technology (on leave from Northwestern University), "Price Expectations, Optimality, and Equilibrium."
March 13. H.S. HOUTHAKKER, Stanford and Harvard Universities, "International Comparisons of Consumers' Preferences"
March 26. MARCEL BOITEUX, Electricite de France, "La Tarification Marginaliste de l'Electricite de France"
April 10. CARL KAYSEN, Harvard University, "Some New Data on Plants and Firms"
April 17. ALAIN ENTHOVEN, RAND Corporation, "The Neoclassical Theory of Money and Economic Growth"
April 24. MANFRED KOCHEN, IBM Research Center, "Some Problems in Organizational Structure"
May 25. MAURICE ALLAIS, Institute of Statistics, University of Paris, "Influence of the Capital-Output Ratio on Real National Income"
October 30. MICHAEL FARRELL, University of Cambridge and Carnegie Institute of Technology, "Some Remarks on the British Capital Market"
November 13. JAMES DURBIN, London School of Economics, "Estimation of Parameters in Time Series Regression Models"
December 4. ROBERT SOLOW, Massachusetts Institute of Technology, "Estimation of Distributed Lags"
December 11. BENOIT MANDELBROT, IBM Research Center, "A New Family of Stochastic Models of Income Distribution: The Pareto-Levy Random Variables and Processes"

1960
January 8. RICHARD R. NELSON, RAND Corporation, "Uncertainty, Learning, and Research and Development Decision-Making"
January 22. JOHN LINTNER, Harvard University, "Research on Earnings, Dividends, and Stock Prices"
February 12. ZVI GRILICHES, National Bureau of Economic Research, Inc., "Is Aggregation Necessarily Bad?"
April 8. FRANCIS M. BATOR, Massachusetts Institute of Technology, "On 'Balanced' Growth"
April 15. WILLIAM C. HOOD, University of Toronto, "Problems in the Regulation of Privately Owned Public Utilities"
April 29. JOHN DENIS SARGAN, Leeds University and University of Chicago, "Towards a More Realistic Theory of Stability"
May 13. HENRI THEIL, Director of the Econometric Institute, Netherlands School of Economics, and Harvard University, "The Design of Socially Optimal Decisions"
October 19. GUY H. ORCUTT, University of Wisconsin, "Simulation of Social Systems"
December 16. ROBERT SUMMERS, University of Pennsylvania, "An Econometric Look at Military Cost Estimates"

1961
January 13. EDWARD B. BERMAN, Operations Evaluation Group, Navy Department, "The Normative Interest Rate"
March 10. GEORGE J. FEENEY, General Electric Company, "Oligopolistic Behavior in a Markovian Market"
March 17. RICHARD ROSETT, University of Rochester, "Models of the Stock Options Market"
March 24. MARTIN J. BECKMANN, Brown University, "Wicksell's Cumulative Process and Some Models of Economic Growth"
April 14. HENDRIK S. HOUTHAKKER, Harvard University, "Short-term Price Movements as a Stochastic Process"
April 21. WASSILY LEONTIEF, Harvard University, "Welfare Analysis as Applied to Public Enterprise"
April 28. EDWIN MANSFIELD, Carnegie Institute of Technology, "Acceptance of Technological Change"
MANAGEMENT SEMINARS
These seminars, initiated in 1956, are aimed at promoting knowledge in the management
sciences. The meetings serve as a medium for the two-way exchange of ideas between members
of the Yale academic community and management people in Connecticut industries.
1958
February 5. MARTIN SHUBIK, General Electric Company, "Maximization Aims in Business Enterprises."
April 22. GEORGE B. DANTZIG, RAND Corporation, "Linear Programming."
May 28. JACOB MARSCHAK, Yale University, "The Theory of Organization."
October 21. W. REED SMITH, U.S. Rubber Company, "Applications of Experimental Design."
November 10. ERICH SCHNEIDER, University of Kiel, "On the Realism of Marginalist Thinking in Business Problems."
November 25. RALPH GOMORY and E.M.L. BEALE, Princeton University, "Integer Solutions to Linear Programs."

1959
January 28. JULIUS ARONOFSKY, Socony Mobil Company, "Linear Programming Applications in an Integrated Oil Company."
March 11. ROBERT FETTER, Yale University, "Production Planning for a Multi-Product Facility."
May 6. WILLIAM S. STAPAKIS and KENNETH R. BLAKE, United Aircraft Corporation, "Some Theoretical Results on the Job Shop Scheduling Problem."
October 20. HARRY MARKOWITZ, General Electric Company, "Computer Simulation of Production Processes."
November 12. ARTHUR YASPAN, Lybrand, Ross Bros., and Montgomery, "Inventory Policies."
November 30. GEORGE FEENEY, General Electric Company, "Operational Games as Marketing Experiments."

1960
January 14. JOHN GESSFORD, International Paper Company, "Some Inventory Models and Their Optimal Policies."
LIBRARY
MERLE E. HOCHMAN, Librarian
The principal goal of the Cowles Foundation library is to make readily accessible to
staff members important past and current literature in economics, especially quantitative
economics, and related works in mathematics and statistics. The library also accommodates
other members of the Department of Economics and graduate students in their research and
study programs.
The library collection includes some 3,860 books, 150 journals, thousands of pamphlets,
and much recent unpublished material. About 890 of the books were acquired during the
three-year period covered by this report. These can be divided by subject into the
following categories: economics, 64%; collections of statistical data, 10%; statistical
theory, 8%; mathematics, 7%; reference books, 4%; all others, 7%. Current books, ordered
shortly after their publication, accounted for 88% of the new acquisitions.
Books circulate for a period of one month and journals for two days. They may be
renewed by staff members only. Some 250 books which are in demand for graduate economics
courses are kept on reserve, circulating overnight and weekends only.
THE ECONOMETRIC SOCIETY
The Econometric Society is an international society for the advancement of economic
theory in its relation to statistics and mathematics. Its main object is the promotion of
studies directed toward unification of the theoretical quantitative and the empirical
quantitative approaches to economic problems and penetrated by the kind of constructive
and rigorous thinking that has come to dominate the natural sciences. Any activity which
promises ultimately to further such a unification of theoretical and factual studies in
economics is considered to be within the sphere of interest of the Society.
At the present time the Econometric Society publishes a quarterly journal, Econometrica.
It holds one European and one or two North American meetings each year. As an
international organization, the officers of the Econometric Society represent many
different countries. The major governing body of the Society is its Fellows. At the
present time these number 126, and a maximum of six additional fellows are elected each
year. Membership in the Society is open to anyone seriously interested in the objectives
of the Society. Institutional memberships are also available in order to solicit the
support of interested business firms and research organizations. In addition to the 1,530
members, there are 1,732 non-member subscribers to the journal, mainly libraries, business
firms, and research organizations.
Three individuals, Irving Fisher, Professor of Economics at Yale, Ragnar Frisch,
Professor of Economics at the University of Oslo, and Charles Roos, a research fellow at
Princeton, were instrumental in the founding of the Society in 1930, two years prior to
the establishment of the Cowles Commission. Initially the Society had less than 200
members, and its activities were restricted to the arrangement of small meetings at which
papers were read and discussed. Because of the small membership and the minimal dues, it
was not possible to publish a journal. With the founding of the Cowles Commission in 1932,
a proposal was made that the Commission support the activities of the Econometric Society,
and enable it, among other things, to publish a journal. After due consideration this
proposal was adopted, and the first issue of the journal Econometrica was published
in 1933. In the following years the Society grew, and with the increase in membership and
subscriptions it became somewhat more self-supporting. But costs were also rising, and the
Cowles Commission continued to bear a considerable portion of the administrative expenses
of the Society. The two organizations were administered jointly.
With the establishment of the Cowles Foundation at Yale University, it was decided to
separate the administrative functions of the Econometric Society from those of the Cowles
Foundation, and if possible to draw the financial support of the Society more fully from
its membership than had been done to date. A gradual reduction in the financial
contribution of the Cowles Commission, begun while the Society was still located in
Chicago, has been continued. At present the Society receives a contribution of $2,000 a
year from the Cowles Foundation; and it is expected that this level will be maintained in
the future. In 1960 the Cowles Foundation gave an additional $2,000 to the Society to help
cover the cost of publishing a special issue of Econometrica in honor of Ragnar
Frisch in the year of his sixty-fifth birthday. Efforts are being made to offset the
reduction in the Cowles Foundation contribution with income from such sources as institutional
memberships and an increase in individual memberships.
RICHARD RUGGLES
Professor of Economics, Yale University
Secretary
PUBLICATIONS AND PAPERS
Monographs (1934-1961)
Cowles Commission (Nos. 1-15) and Cowles Foundation (Nos. 16-17).
See Complete LISTING OF
MONOGRAPHS
Two further monographs are in preparation: Studies in Process Analysis:
Economy-Wide Production Capabilities edited by Alan S. Manne and Harry M. Markowitz,
and Economic Theory of Teams by Jacob Marschak and Roy Radner.
Special Publications
Economic Aspects of Atomic Power, an exploratory study under the direction of
SAM H. SCHURR and JACOB MARSCHAK. 1950. 289 pages. An analysis of the potential
applicability of atomic power in selected industries and its economic effects in both
industrialized and underdeveloped areas. Orders should be sent to Princeton University
Press, Princeton, New Jersey.
Income, Employment, and the Price Level, notes on class lectures by JACOB
MARSCHAK. Autumn 1948 and 1949. 95 pages. Orders should be sent to Kelley and Millman, 80
East Eleventh Street, New York City.
Studies in the Economics of Transportation, by MARTIN J. BECKMANN, C. B.
MCGUIRE, and CHRISTOPHER B. WINSTEN, introduction by TJALLING C. KOOPMANS. 1956. 232
pages. This exploratory study of highway and railroad systems examines their theoretical
aspects and develops concepts and methods for assessing the capabilities and efficiency of
existing and projected traffic systems. Orders should be sent to Yale University Press,
New Haven, Connecticut.
Cowles Commission New Series Papers (to June 30, 1958) / Cowles Foundation Papers (1958-1961)
This series includes articles published by members of the research staff or by others
working in close association with them (available on-line).
See complete LISTING OF COWLES FOUNDATION
PAPERS
Special Papers
No. 1. JOHN R. MENKE, "Nuclear Fission as a Source of
Power," Econometrica, Vol. 15, October, 1947, pp. 314333.
No. 2. JACOB MARSCHAK, SAM H. SCHURR, and PHILLIP SPORN,
"The Economic Aspects of Atomic Power," Bulletin of the Atomic Scientists,
Vol. 2, Nos. 5 and 6, September, 1946, pp. 14; Proceedings Supplement of
American Economic Review, Vol. 37, No. 2, May, 1947, pp. 98-117.
No. 3. TJALLING C. KOOPMANS, "Uses of Prices," Proceedings
of the Conference on Operations Research in Production and Inventory Control, pp.
17, Cleveland: Case Institute of Technology, 1954.
Cowles Foundation Discussion Papers
Discussion Papers are preliminary materials given limited circulation in mimeographed
form to stimulate private discussion and critical comment. The contributions contained in
Discussion Papers appear in more mature form in published papers and are reprinted as
Cowles Foundation Papers (available on-line).
See complete LISTING OF COWLES FOUNDATION
DISCUSSION PAPERS
Other Publications
MARTIN J. BECKMANN
- "Variational Programming," (Abstract), Econometrica, Vol. 27, April,
1959, pp. 269270.
- "An Inventory Model for Repair Parts Approximations in the Case of Variable
Delivery Time" (Letter to the Editor), Operations Research, Vol. 7,
MarchApril, 1959, pp. 464471.
- "Das Gleichgewicht des Verkehrs" Jahrbuch fur die Ordnung von Wirtschaft
and Gesellschaft, Vol. 11, 1959, pp. 133147.
- "Production Smoothing and Inventory Control," Operations Research, Vol. 9, No.
4, JulyAugust 1961, pp. 456467.
GERARD DEBREU
- Theory of Value, Monograph No. 17, John Wiley and Sons (1959).
- Review of R. Duncan Luce, "Individual Choice Behavior: A Theoretical
Analysis," American Economic Review, Vol. L, No. 1, March 1960, pp.
186-188.
- "L'incertitude et l'action," Economie Appliquee, Vol. 13 (1960), pp.
111-116.
- "Separation Theorems for Convex Sets," Appendix to CFP 136, pp. 95-98.
JOHN W. HOOPER
- "The Aggregation of Servicing Facilities In Queueing Processes," with David S.
Stoller, circulated as Preprint No. 66 by The Institute of Management Science, September
1959.
- "Simultaneous Equations and Canonical Correlation Theory," Econometrica,
Vol. 27, No. 2, April 1959, pp. 245-256.
LAWRENCE B. KRAUSE
- "United States Imports and the Tariff," Papers and Proceedings of the
American Economic Association, Vol. XLIX, May, 1959, pp. 542-551.
HERBERT SCARF
- "Some Examples of Global Instability of the Competitive Equilibrium," International
Economic Review, Vol. I, September 1960, pp. 157-172.
MARTIN SHUBIK
- "Edgeworth Market Games," R.D. Luce and A.W. Tucker, eds., Contributions to
the Theory of Games, IV, Princeton, Princeton University Press, 1958, pp.
267-278.
JAMES TOBIN
- "Towards a General Kaldorian Theory of Distribution," a note, Review of
Economic Studies, Vol. XXVII, No. 73, pp. 119-120, February 1960.
- "Reply to Professor Eisner," The Economic Journal, Vol. LXIX, pp.
599-600, September 1959.
- Rejoinder to Professor Katona, "A Final Remark," The Review of Economics
and Statistics, Vol. XLI, No. 3, p. 319, August 1959.
- "Towards Improving the Efficiency of the Monetary Mechanism," The Review of
Economics and Statistics, August 1960, pp. 276-279.
- "Money, Capital, and Other Stores of Value," American Economic Review
(Papers and Proceedings), Vol. LI, No. 2, May 1961, pp. 26-37.
HAROLD W. WATTS
- "Agricultural Supply Analysis: Discussion," Journal of Farm Economics,
Vol. 42, May 1960, pp. 477-478.
- "Discussion," Household Decision-Making, ed. Nelson N. Foote, New York
University Press, 1961, pp. 109-113.