The Sciences of the Artificial
Third edition
Herbert A. Simon
title: The Sciences of the Artificial
author: Simon, Herbert Alexander
publisher: MIT Press
isbn10 | asin: 0262193744
print isbn13: 9780262193740
ebook isbn13: 9780585360102
language: English
subject: Science--Philosophy
publication date: 1996
lcc: Q175.S564 1996eb
ddc: 300.1/1
© 1996 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form by any
electronic or mechanical means (including photocopying, recording, or
information storage and retrieval) without permission in writing from the
publisher.
This book was set in Sabon by Graphic Composition, Inc.
Printed and bound in the United States of America.
Library of Congress Cataloging-in-Publication Data
Simon, Herbert Alexander, 1916-
The sciences of the artificial / Herbert A. Simon. -- 3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-262-19374-4 (alk. paper). -- ISBN 0-262-69191-4 (pbk.: alk. paper)
1. Science--Philosophy. I. Title.
Q175.S564 1996
300.1'1--dc20 96-12633
CIP
To Allen Newell
in memory of a friendship
Contents
Preface to Third Edition
Preface to Second Edition
1 Understanding the Natural and Artificial Worlds
2 Economic Rationality: Adaptive Artifice
3 The Psychology of Thinking: Embedding Artifice in Nature
4 Remembering and Learning: Memory As Environment for Thought
5 The Science of Design: Creating the Artificial
6 Social Planning: Designing the Evolving Artifact
7 Alternative Views of Complexity
8 The Architecture of Complexity: Hierarchic Systems
Name Index
Subject Index
Preface to Third Edition
As the Earth has made more than 5,000 rotations since The Sciences of the
Artificial was last revised, in 1981, it is time to ask what changes in our
understanding of the world call for changes in the text.
Of particular relevance is the recent vigorous eruption of interest in complexity
and complex systems. In the previous editions of this book I commented only
briefly on the relation between general ideas about complexity and the particular
hierarchic form of complexity with which the book is chiefly concerned. I now
introduce a new chapter to remedy this deficit. It will appear that the devotees of
complexity (among whom I count myself) are a rather motley crew, not at all
unified in our views on reductionism. Various among us favor quite different
tools for analyzing complexity and speak nowadays of "chaos," "adaptive
systems," and "genetic algorithms." In the new chapter 7, "Alternative Views of
Complexity'' ("The Architecture of Complexity" having become chapter 8), I sort
out these themes and draw out the implications of artificiality and hierarchy for
complexity.
Most of the remaining changes in this third edition aim at updating the text. In
particular, I have taken account of important advances that have been made since
1981 in cognitive psychology (chapters 3 and 4) and the science of design
(chapters 5 and 6). It is gratifying that continuing rapid progress in both of these
domains has called for numerous new references that record the advances, while
at the same time confirming and extending the book's basic theses about the artificial
sciences. Changes in emphases in chapter 2 reflect progress in my thinking about
the respective roles of organizations and markets in economic systems.
This edition, like its predecessors, is dedicated to my friend of half a lifetime,
Allen Newell, but now, alas, to his memory. His final book, Unified Theories of
Cognition, provides a powerful agenda for advancing our understanding of
intelligent systems.
I am grateful to my assistant, Janet Hilf, both for protecting the time I have
needed to carry out this revision and for assisting in innumerable ways in getting
the manuscript ready for publication. At the MIT Press, Deborah Cantor-Adams
applied a discerning editorial pencil to the manuscript and made communication
with the Press a pleasant part of the process. To her, also, I am very grateful.
In addition to those others whose help, counsel, and friendship I acknowledged
in the preface to the earlier editions, I want to single out some colleagues whose
ideas have been especially relevant to the new themes treated here. These include
Anders Ericsson, with whom I explored the theory and practice of protocol
analysis; Pat Langley, Gary Bradshaw, and Jan Zytkow, my co-investigators of
the processes of scientific discovery; Yuichiro Anzai, Fernand Gobet, Yumi
Iwasaki, Deepak Kulkarni, Jill Larkin, Jean-Louis Le Moigne, Anthony
Leonardo, Yulin Qin, Howard Richman, Weimin Shen, Jim Staszewski, Hermina
Tabachneck, Guojung Zhang, and Xinming Zhu. In truth, I don't know where to
end the list or how to avoid serious gaps in it, so I will simply express my deep
thanks to all of my friends and collaborators, both the mentioned and the
unmentioned.
In the first chapter I propose that the goal of science is to make the wonderful
and the complex understandable and simple, but not less wonderful. I will be
pleased if readers find that I have achieved a bit of that in this third edition of
The Sciences of the Artificial.
HERBERT A. SIMON
PITTSBURGH, PENNSYLVANIA
JANUARY 1, 1996
Preface to Second Edition
This work takes the shape of fugues, whose subject and countersubject were first
uttered in lectures on opposite sides of a continent and at the two ends of a
decade but are now woven together as the alternating chapters of the whole.
The invitation to deliver the Karl Taylor Compton lectures at the Massachusetts
Institute of Technology in the spring of 1968 provided me with a welcome
opportunity to make explicit and to develop at some length a thesis that has been
central to much of my research, at first in organization theory, later in economics
and management science, and most recently in psychology.
In 1980 another invitation, this one to deliver the H. Rowan Gaither lectures at
the University of California, Berkeley, permitted me to amend and expand this
thesis and to apply it to several additional fields.
The thesis is that certain phenomena are "artificial" in a very specific sense: they
are as they are only because of a system's being molded, by goals or purposes,
to the environment in which it lives. If natural phenomena have an air of
"necessity" about them in their subservience to natural law, artificial phenomena
have an air of "contingency" in their malleability by environment.
The contingency of artificial phenomena has always created doubts as to whether
they fall properly within the compass of science. Sometimes these doubts refer to
the goal-directed character of artificial systems and the consequent difficulty of
disentangling prescription from description. This seems to me not to be the real
difficulty. The genuine problem is to show how empirical propositions can be
made at all about systems that, given different circumstances, might be quite
other than they are.
Almost as soon as I began research on administrative organizations, some forty
years ago, I encountered the problem of artificiality in almost its pure form:
. . . administration is not unlike play-acting. The task of the good actor is to know and play his role,
although different roles may differ greatly in content. The effectiveness of the performance will
depend on the effectiveness of the play and the effectiveness with which it is played. The
effectiveness of the administrative process will vary with the effectiveness of the organization and
the effectiveness with which its members play their parts. [Administrative Behavior, p. 252]
How then could one construct a theory of administration that would contain more
than the normative rules of good acting? In particular, how could one construct
an empirical theory? My writing on administration, particularly in Administrative
Behavior and part IV of Models of Man, has sought to answer those questions by
showing that the empirical content of the phenomena, the necessity that rises
above the contingencies, stems from the inabilities of the behavioral system to
adapt perfectly to its environment, that is, from the limits of rationality, as I have called
them.
As research took me into other areas, it became evident that the problem of
artificiality was not peculiar to administration and organizations but that it
infected a far wider range of subjects. Economics, since it postulated rationality
in economic man, made him the supremely skillful actor, whose behavior could
reveal something of the requirements the environment placed on him but nothing
about his own cognitive makeup. But the difficulty must then extend beyond
economics into all those parts of psychology concerned with rational behavior:
thinking, problem solving, learning.
Finally, I thought I began to see in the problem of artificiality an explanation of
the difficulty that has been experienced in filling engineering and other
professions with empirical and theoretical substance distinct from the substance
of their supporting sciences. Engineering, medicine, business, architecture, and
painting are concerned not with the necessary but with the contingent, not with
how things are but with how they might be; in short, with design. The possibility
of creating a science or sciences of design is exactly as great as the possibility of
creating any science of the artificial. The two possibilities stand or fall together.
These essays then attempt to explain how a science of the artificial is possible
and to illustrate its nature. I have taken as my main examples the
fields of economics (chapter 2), the psychology of cognition (chapters 3 and 4), and
planning and engineering design (chapters 5 and 6). Since Karl Compton was a
distinguished engineering educator as well as a distinguished scientist, I thought it not
inappropriate to apply my conclusions about design to the question of reconstructing
the engineering curriculum (chapter 5). Similarly Rowan Gaither's strong interest in
the uses of systems analysis in public policy formation is reflected especially in
chapter 6.
The reader will discover in the course of the discussion that artificiality is interesting
principally when it concerns complex systems that live in complex environments. The
topics of artificiality and complexity are inextricably interwoven. For this reason I
have included in this volume (chapter 8) an earlier essay, "The Architecture of
Complexity," which develops at length some ideas about complexity that I could touch
on only briefly in my lectures. The essay appeared originally in the December 1962
Proceedings of the American Philosophical Society.
I have tried to acknowledge some specific debts to others in footnotes at appropriate
points in the text. I owe a much more general debt to Allen Newell, whose partner I
have been in a very large part of my work for more than two decades and to whom I
have dedicated this volume. If there are parts of my thesis with which he disagrees,
they are probably wrong; but he cannot evade a major share of responsibility for the
rest.
Many ideas, particularly in the third and fourth chapters, had their origins in work that
my late colleague, Lee W. Gregg, and I did together; and other colleagues, as well as
numerous present and former graduate students, have left their fingerprints on various
pages of the text. Among the latter I want to mention specifically L. Stephen Coles,
Edward A. Feigenbaum, John Grason, Pat Langley, Robert K. Lindsay, David Neves,
Ross Quillian, Laurent Siklóssy, Donald S. Williams, and Thomas G. Williams, whose
work is particularly relevant to the topics discussed here.
Previous versions of chapter 8 incorporated valuable suggestions and data contributed
by George W. Corner, Richard H. Meier, John R. Platt, Andrew Schoene, Warren
Weaver, and William Wise.
A large part of the psychological research reported in this book was supported by the
Public Health Service Research Grant MH-07722 from the National Institute of
Mental Health, and some of the research on
design reported in the fifth and sixth chapters, by the Advanced Research
Projects Agency of the Office of the Secretary of Defense (SD-146). These
grants, as well as support from the Carnegie Corporation, the Ford Foundation,
and the Alfred P. Sloan Foundation, have enabled us at Carnegie-Mellon to
pursue for over two decades a many-pronged exploration aimed at deepening our
understanding of artificial phenomena.
Finally, I am grateful to the Massachusetts Institute of Technology and to the
University of California, Berkeley, for the opportunity to prepare and present
these lectures and for the occasion to become better acquainted with the research
in the sciences of the artificial going forward on these two stimulating campuses.
I want to thank both institutions also for agreeing to the publication of these
lectures in this unified form. The Compton lectures comprise chapters 1, 3, and
5, and the Gaither lectures, chapters 2, 4, and 6. Since the first edition of this
book (The MIT Press, 1969) has been well received, I have limited the changes
in chapters 1, 3, 5, and 8 to the correction of blatant errors, the updating of a few
facts, and the addition of some transitional paragraphs.
1
Understanding the Natural and the Artificial Worlds
About three centuries after Newton we are thoroughly familiar with the concept
of natural science, most unequivocally with physical and biological science. A
natural science is a body of knowledge about some class of things (objects or
phenomena) in the world: about the characteristics and properties that they have;
about how they behave and interact with each other.
The central task of a natural science is to make the wonderful commonplace: to
show that complexity, correctly viewed, is only a mask for simplicity; to find
pattern hidden in apparent chaos. The early Dutch physicist Simon Stevin
showed by an elegant drawing (figure 1) that the law of the inclined plane
follows in "self-evident fashion" from the impossibility of perpetual motion, for
experience and reason tell us that the chain of balls in the figure would rotate
neither to right nor to left but would remain at rest. (Since rotation changes
nothing in the figure, if the chain moved at all, it would move perpetually.) Since
the pendant part of the chain hangs symmetrically, we can snip it off without
disturbing the equilibrium. But now the balls on the long side of the plane
balance those on the shorter, steeper side, and their relative numbers are in
inverse ratio to the sines of the angles at which the planes are inclined.
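In modern notation Stevin's equilibrium can be written out in one line. If the two slopes make angles $\alpha$ and $\beta$ with the horizontal and carry $n_1$ and $n_2$ balls of equal weight $w$, the components of weight along the two slopes balance when

\[
n_1 w \sin\alpha = n_2 w \sin\beta ,
\qquad\text{i.e.,}\qquad
\frac{n_1}{n_2} = \frac{\sin\beta}{\sin\alpha}.
\]

And the condition holds automatically: the number of balls on each slope is proportional to the slope's length $L_i$, while both slopes rise to the same height, so $L_1 \sin\alpha = L_2 \sin\beta$. That is why the chain has no tendency to turn.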
Stevin was so pleased with his construction that he incorporated it into a
vignette, inscribing above it
Wonder, en is gheen wonder
that is to say: "Wonderful, but not incomprehensible."
This is the task of natural science: to show that the wonderful is not
incomprehensible, to show how it can be comprehended but not to
destroy wonder. For when we have explained the wonderful, unmasked the
hidden pattern, a new wonder arises at how complexity was woven out of
simplicity. The aesthetics of natural science and mathematics is at one with the
aesthetics of music and painting; both inhere in the discovery of a partially
concealed pattern.

Figure 1: The vignette devised by Simon Stevin to illustrate his derivation of the
law of the inclined plane
The world we live in today is much more a man-made,1
or artificial, world than it
is a natural world. Almost every element in our environment shows evidence of
human artifice. The temperature in which we spend most of our hours is kept
artificially at 20 degrees Celsius; the humidity is added to or taken from the air
we breathe; and the impurities we inhale are largely produced (and filtered) by
man.
Moreover for most of us, the white-collared ones, the significant part of the
environment consists mostly of strings of artifacts called "symbols" that we
receive through eyes and ears in the form of written and spoken language and
that we pour out into the environment, as I am now doing, by mouth or hand. The
laws that govern these strings of symbols, the laws that govern the occasions on
which we emit and receive them, and the determinants of their content are all
consequences of our collective artifice.

1. I will occasionally use "man" as an androgynous noun, encompassing both sexes, and "he," "his," and "him" as androgynous pronouns including women and men equally in their scope.
One may object that I exaggerate the artificiality of our world. Man must obey
the law of gravity as surely as does a stone, and as a living organism man must
depend for food, and in many other ways, on the world of biological phenomena.
I shall plead guilty to overstatement, while protesting that the exaggeration is
slight. To say that an astronaut, or even an airplane pilot, is obeying the law of
gravity, hence is a perfectly natural phenomenon, is true, but its truth calls for
some sophistication in what we mean by "obeying" a natural law. Aristotle did
not think it natural for heavy things to rise or light ones to fall (Physics, Book
IV); but presumably we have a deeper understanding of "natural" than he did.
So too we must be careful about equating "biological" with "natural." A forest
may be a phenomenon of nature; a farm certainly is not. The very species upon
which we depend for our food, our corn and our cattle, are artifacts of our
ingenuity. A plowed field is no more part of nature than an asphalted street, and
no less.
These examples set the terms of our problem, for those things we call artifacts
are not apart from nature. They have no dispensation to ignore or violate natural
law. At the same time they are adapted to human goals and purposes. They are
what they are in order to satisfy our desire to fly or to eat well. As our aims
change, so too do our artifacts, and vice versa.
If science is to encompass these objects and phenomena in which human purpose
as well as natural law are embodied, it must have means for relating these two
disparate components. The character of these means and their implications for
certain areas of knowledge (economics, psychology, and design in particular) are
the central concern of this book.
The Artificial
Natural science is knowledge about natural objects and phenomena. We ask
whether there cannot also be "artificial" science: knowledge about artificial
objects and phenomena. Unfortunately the term "artificial" has a pejorative air
about it that we must dispel before we can proceed.
My dictionary defines "artificial" as "Produced by art rather than by nature; not
genuine or natural; affected; not pertaining to the essence of the matter." It
proposes, as synonyms: affected, factitious, manufactured, pretended, sham,
simulated, spurious, trumped up, unnatural. As antonyms, it lists: actual, genuine,
honest, natural, real, truthful, unaffected. Our language seems to reflect man's
deep distrust of his own products. I shall not try to assess the validity of that
evaluation or explore its possible psychological roots. But you will have to
understand me as using "artificial" in as neutral a sense as possible, as meaning
man-made as opposed to natural.2
In some contexts we make a distinction between "artificial" and "synthetic." For
example, a gem made of glass colored to resemble sapphire would be called
artificial, while a man-made gem chemically indistinguishable from sapphire
would be called synthetic. A similar distinction is often made between "artificial"
and "synthetic" rubber. Thus some artificial things are imitations of things in
nature, and the imitation may use either the same basic materials as those in the
natural object or quite different materials.
As soon as we introduce "synthesis" as well as "artifice," we enter the realm of
engineering. For "synthetic" is often used in the broader sense of "designed" or
"composed.'' We speak of engineering as concerned with "synthesis," while
science is concerned with "analysis." Synthetic or artificial objects, and more
specifically prospective artificial objects having desired properties, are the central
objective of engineering activity and skill. The engineer, and more generally the
designer, is concerned with how things ought to be, that is, how they ought to be
in order to attain goals,
2. I shall disclaim responsibility for this particular choice of terms. The phrase "artificial intelligence," which led me to it, was coined, I think, right on the Charles River, at MIT. Our own research group at Rand and Carnegie Mellon University has preferred phrases like "complex information processing" and "simulation of cognitive processes." But then we run into new terminological difficulties, for the dictionary also says that "to simulate" means "to assume or have the mere appearance or form of, without the reality; imitate; counterfeit; pretend." At any rate, "artificial intelligence" seems to be here to stay, and it may prove easier to cleanse the phrase than to dispense with it. In time it will become sufficiently idiomatic that it will no longer be the target of cheap rhetoric.
and to function. Hence a science of the artificial will be closely akin to a science
of engineering, but very different, as we shall see in my fifth chapter, from what
goes currently by the name of "engineering science."
With goals and "oughts" we also introduce into the picture the dichotomy
between normative and descriptive. Natural science has found a way to exclude
the normative and to concern itself solely with how things are. Can or should we
maintain this exclusion when we move from natural to artificial phenomena,
from analysis to synthesis?3
We have now identified four indicia that distinguish the artificial from the
natural; hence we can set the boundaries for sciences of the artificial:
1. Artificial things are synthesized (though not always or usually with full
forethought) by human beings.
2. Artificial things may imitate appearances in natural things while lacking, in
one or many respects, the reality of the latter.
3. Artificial things can be characterized in terms of functions, goals, adaptation.
4. Artificial things are often discussed, particularly when they are being
designed, in terms of imperatives as well as descriptives.
The Environment As Mold
Let us look a little more closely at the functional or purposeful aspect of artificial
things. Fulfillment of purpose or adaptation to a goal involves a relation among
three terms: the purpose or goal, the character of the artifact, and the
environment in which the artifact performs. When we think of a clock, for
example, in terms of purpose we may use the child's definition: "a clock is to tell
time." When we focus our attention on the clock itself, we may describe it in
terms of arrangements of gears and the
3. This issue will also be discussed at length in my fifth chapter. In order not to keep readers in suspense, I may say that I hold to the pristine empiricist's position of the irreducibility of "ought" to "is," as in chapter 3 of my Administrative Behavior (New York: Macmillan, 1976). This position is entirely consistent with treating natural or artificial goal-seeking systems as phenomena, without commitment to their goals. Ibid., appendix. See also the well-known paper by A. Rosenblueth, N. Wiener, and J. Bigelow, "Behavior, Purpose, and Teleology," Philosophy of Science, 10 (1943):18-24.
application of the forces of springs or gravity operating on a weight or pendulum.
But we may also consider clocks in relation to the environment in which they are
to be used. Sundials perform as clocks in sunny climates; they are more useful in
Phoenix than in Boston, and of no use at all during the Arctic winter. Devising a
clock that would tell time on a rolling and pitching ship, with sufficient accuracy
to determine longitude, was one of the great adventures of eighteenth-century
science and technology. To perform in this difficult environment, the clock had
to be endowed with many delicate properties, some of them largely or totally
irrelevant to the performance of a landlubber's clock.
Natural science impinges on an artifact through two of the three terms of the
relation that characterizes it: the structure of the artifact itself and the
environment in which it performs. Whether a clock will in fact tell time depends
on its internal construction and where it is placed. Whether a knife will cut
depends on the material of its blade and the hardness of the substance to which it
is applied.
The Artifact As "Interface"
We can view the matter quite symmetrically. An artifact can be thought of as a
meeting point, an "interface" in today's terms, between an "inner" environment,
the substance and organization of the artifact itself, and an "outer" environment,
the surroundings in which it operates. If the inner environment is appropriate to
the outer environment, or vice versa, the artifact will serve its intended purpose.
Thus, if the clock is immune to buffeting, it will serve as a ship's chronometer.
(And conversely, if it isn't, we may salvage it by mounting it on the mantel at
home.)
Notice that this way of viewing artifacts applies equally well to many things that
are not man-made, to all things in fact that can be regarded as adapted to some
situation; and in particular it applies to the living systems that have evolved
through the forces of organic evolution. A theory of the airplane draws on natural
science for an explanation of its inner environment (the power plant, for
example), its outer environment (the character of the atmosphere at different
altitudes), and the relation between its inner and outer environments (the
movement of an airfoil
through a gas). But a theory of the bird can be divided up in exactly the same
way.4
Given an airplane, or given a bird, we can analyze them by the methods of
natural science without any particular attention to purpose or adaptation, without
reference to the interface between what I have called the inner and outer
environments. After all, their behavior is governed by natural law just as fully as
the behavior of anything else (or at least we all believe this about the airplane,
and most of us believe it about the bird).
Functional Explanation
On the other hand, if the division between inner and outer environment is not
necessary to the analysis of an airplane or a bird, it turns out at least to be highly
convenient. There are several reasons for this, which will become evident from
examples.
Many animals in the Arctic have white fur. We usually explain this by saying
that white is the best color for the Arctic environment, for white creatures escape
detection more easily than do others. This is not of course a natural science
explanation; it is an explanation by reference to purpose or function. It simply
says that these are the kinds of creatures that will "work," that is, survive, in this
kind of environment. To turn the statement into an explanation, we must add to it
a notion of natural selection, or some equivalent mechanism.
An important fact about this kind of explanation is that it demands an
understanding mainly of the outer environment. Looking at our snowy
surroundings, we can predict the predominant color of the creatures we are likely
to encounter; we need know little about the biology of the creatures themselves,
beyond the facts that they are often mutually hostile, use visual clues to guide
their behavior, and are adaptive (through selection or some other mechanism).
4. A generalization of the argument made here for the separability of "outer" from "inner" environment shows that we should expect to find this separability, to a greater or lesser degree, in all large and complex systems, whether they are artificial or natural. In its generalized form it is an argument that all nature will be organized in "levels." My essay "The Architecture of Complexity," included in this volume as chapter 8, develops the more general argument in some detail.
Analogous to the role played by natural selection in evolutionary biology is the
role played by rationality in the sciences of human behavior. If we know of a
business organization only that it is a profit-maximizing system, we can often
predict how its behavior will change if we change its environment: how it will
alter its prices if a sales tax is levied on its products. We can sometimes make
this prediction, and economists do make it repeatedly, without detailed
assumptions about the adaptive mechanism, the decision-making apparatus that
constitutes the inner environment of the business firm.
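A worked miniature (my illustration, not Simon's): give the firm a linear demand curve $q = a - bp$ and unit cost $c$, and levy a tax of $t$ per unit sold. Then

\[
\pi(p) = (p - c - t)(a - bp), \qquad
\frac{d\pi}{dp} = a - 2bp + b(c + t) = 0
\;\Longrightarrow\;
p^{*} = \frac{a}{2b} + \frac{c + t}{2}.
\]

The profit-maximizing price rises by exactly $t/2$: a prediction obtained from rationality and the outer environment (the demand curve and the tax) alone, with no assumptions about the firm's internal decision-making machinery.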
Thus the first advantage of dividing outer from inner environment in studying an
adaptive or artificial system is that we can often predict behavior from
knowledge of the system's goals and its outer environment, with only minimal
assumptions about the inner environment. An instant corollary is that we often
find quite different inner environments accomplishing identical or similar goals
in identical or similar outer environments: airplanes and birds, dolphins and tuna
fish, weight-driven clocks and battery-driven clocks, electrical relays and
transistors.
There is often a corresponding advantage in the division from the standpoint of
the inner environment. In very many cases whether a particular system will
achieve a particular goal or adaptation depends on only a few characteristics of
the outer environment and not at all on the detail of that environment. Biologists
are familiar with this property of adaptive systems under the label of
homeostasis. It is an important property of most good designs, whether biological
or artifactual. In one way or another the designer insulates the inner system from
the environment, so that an invariant relation is maintained between inner system
and goal, independent of variations over a wide range in most parameters that
characterize the outer environment. The ship's chronometer reacts to the pitching
of the ship only in the negative sense of maintaining an invariant relation of the
hands on its dial to the real time, independently of the ship's motions.
Quasi-independence from the outer environment may be maintained by various
forms of passive insulation, by reactive negative feedback (the most frequently
discussed form of insulation), by predictive adaptation, or by various
combinations of these.
Functional Description and Synthesis
In the best of all possible worlds (at least for a designer) we might even hope to
combine the two sets of advantages we have described that derive from factoring
an adaptive system into goals, outer environment, and inner environment.
might hope to be able to characterize the main properties of the system and its
behavior without elaborating the detail of either the outer or inner environments.
We might look toward a science of the artificial that would depend on the
relative simplicity of the interface as its primary source of abstraction and
generality.
Consider the design of a physical device to serve as a counter. If we want the
device to be able to count up to one thousand, say, it must be capable of
assuming any one of at least a thousand states, of maintaining itself in any given
state, and of shifting from any state to the "next" state. There are dozens of
different inner environments that might be used (and have been used) for such a
device. A wheel notched at each twenty minutes of arc, and with a ratchet device
to turn and hold it, would do the trick. So would a string of ten electrical
switches properly connected to represent binary numbers. Today instead of
switches we are likely to use transistors or other solid-state devices.5
Our counter would be activated by some kind of pulse, mechanical or electrical,
as appropriate, from the outer environment. But by building an appropriate
transducer between the two environments, the physical character of the interior
pulse could again be made independent of the physical character of the exterior
pulse; the counter could be made to count anything.
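The functional equivalence of such inner environments is easy to exhibit in code. A minimal sketch (the class names and the modulo-1,000 read-out are my illustrative assumptions, not anything in the text): a notched wheel and a bank of binary switches present exactly the same counting interface to the outer environment.

```python
class NotchedWheel:
    """A wheel with 1,000 notches; a ratchet advances it one notch per pulse."""
    def __init__(self):
        self.position = 0                  # the notch the ratchet now holds

    def pulse(self):
        self.position = (self.position + 1) % 1000

    def read(self):
        return self.position


class SwitchBank:
    """Ten two-state switches read as a binary number (2**10 = 1,024 states)."""
    def __init__(self):
        self.switches = [0] * 10           # each entry is one on/off switch

    def pulse(self):
        for i in range(10):                # binary increment with carry
            self.switches[i] ^= 1
            if self.switches[i] == 1:      # no carry needed; stop
                break

    def read(self):
        return sum(bit << i for i, bit in enumerate(self.switches)) % 1000


# Seen only through pulse() and read(), the two devices are indistinguishable.
for counter in (NotchedWheel(), SwitchBank()):
    for _ in range(257):
        counter.pulse()
    assert counter.read() == 257
```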
Description of an artifice in terms of its organization and functioning, its interface
between inner and outer environments, is a major objective of invention and
design activity. Engineers will find familiar the language of the following claim
quoted from a 1919 patent on an improved motor controller:
What I claim as new and desire to secure by Letters Patent is:
1. In a motor controller, in combination, reversing means, normally effective field-weakening means
and means associated with said reversing means for
5. The theory of functional equivalence of computing machines has had considerable development in recent years. See Marvin L. Minsky, Computation: Finite and Infinite Machines (Englewood Cliffs, N.J.: Prentice-Hall, 1967), chapters 1-4.
rendering said field-weakening means ineffective during motor starting and thereafter effective to
different degrees determinable by the setting of said reversing means . . .6
Apart from the fact that we know the invention relates to control of an electric
motor, there is almost no reference here to specific, concrete objects or
phenomena. There is reference rather to "reversing means" and "field-weakening
means," whose further purpose is made clear in a paragraph preceding the patent
claims:
The advantages of the special type of motor illustrated and the control thereof will be readily
understood by those skilled in the art. Among such advantages may be mentioned the provision of a
high starting torque and the provision for quick reversals of the motor.7
Now let us suppose that the motor in question is incorporated in a planing
machine (see figure 2). The inventor describes its behavior thus:
Referring now to [figure 2], the controller is illustrated in outline connection with a planer (100)
operated by a motor M, the controller being adapted to govern the motor M and to be automatically
operated by the reciprocating bed (101) of the planer. The master shaft of the controller is provided
with a lever (102) connected by a link (103) to a lever (104) mounted upon the planer frame and
projecting into the path of lugs (105) and (106) on the planer bed. As will be understood, the
arrangement is such that reverse movements of the planer bed will, through the connections
described, throw the master shaft of the controller back and forth between its extreme positions and
in consequence effect selective operation of the reversing switches (1) and (2) and automatic
operation of the other switches in the manner above set forth.8
In this manner the properties with which the inner environment has been
endowed are placed at the service of the goals in the context of the outer
environment. The motor will reverse periodically under the control of the
position of the planer bed. The "shape" of its behavior (the time path, say, of a
variable associated with the motor) will be a function of the "shape" of the
external environment (the distance, in this case, between the lugs on the planer
bed).
The device we have just described illustrates in microcosm the nature of artifacts.
Central to their description are the goals that link the inner to the outer system.
The inner system is an organization of natural phenomena capable of attaining
the goals in some range of environments, but ordinarily there will be many
functionally equivalent natural systems capable of doing this.

Figure 2: Illustrations from a patent for a motor controller

6. U.S. Patent 1,307,836, granted to Arthur Simon, June 24, 1919.
7. Ibid.
8. Ibid.
The outer environment determines the conditions for goal attainment. If the inner
system is properly designed, it will be adapted to the outer environment, so that
its behavior will be determined in large part by the
behavior of the latter, exactly as in the case of "economic man." To predict how
it will behave, we need only ask, "How would a rationally designed system
behave under these circumstances?" The behavior takes on the shape of the task
environment.9
Limits of Adaptation
But matters must be just a little more complicated than this account suggests. "If
wishes were horses, all beggars would ride." And if we could always specify a
protean inner system that would take on exactly the shape of the task
environment, designing would be synonymous with wishing. "Means for
scratching diamonds" defines a design objective, an objective that might be
attained with the use of many different substances. But the design has not been
achieved until we have discovered at least one realizable inner system obeying
the ordinary natural laws: one material, in this case, hard enough to scratch
diamonds.
Often we shall have to be satisfied with meeting the design objectives only
approximately. Then the properties of the inner system will "show through." That
is, the behavior of the system will only partly respond to the task environment;
partly, it will respond to the limiting properties of the inner system.
Thus the motor controls described earlier are aimed at providing for "quick"
reversal of the motor. But the motor must obey electromagnetic and mechanical
laws, and we could easily confront the system with a task where the environment
called for quicker reversal than the motor was capable of. In a benign
environment we would learn from the motor only what it had been called upon to
do; in a taxing environment we would learn something about its internal structure,
specifically about those aspects of the internal structure that were chiefly
instrumental in limiting performance.10
9. On the crucial role of adaptation or rationality and their limits for economics and organization theory, see the introduction to part IV, "Rationality and Administrative Decision Making," of my Models of Man (New York: Wiley, 1957); pp. 38-41, 80-81, and 240-244 of Administrative Behavior; and chapter 2 of this book.
10. Compare the corresponding proposition on the design of administrative organizations: "Rationality, then, does not determine behavior. Within the area of rationality behavior is perfectly flexible and adaptable to abilities, goals, and knowledge. Instead, behavior is determined by the irrational and non-rational elements that bound the area of rationality . . . administrative theory must be concerned with the limits of rationality, and the manner in which organization affects these limits for the person making a decision." Administrative Behavior, p. 241. For a discussion of the same issue as it arises in psychology, see my "Cognitive Architectures and Rational Analysis: Comment," in Kurt Van Lehn (ed.), Architectures for Intelligence (Hillsdale, NJ: Erlbaum, 1991).
A bridge, under its usual conditions of service, behaves simply as a relatively
smooth level surface on which vehicles can move. Only when it has been
overloaded do we learn the physical properties of the materials from which it is
built.
Understanding by Simulating
Artificiality connotes perceptual similarity but essential difference, resemblance
from without rather than within. In the terms of the previous section we may say
that the artificial object imitates the real by turning the same face to the outer
system, by adapting, relative to the same goals, to comparable ranges of external
tasks. Imitation is possible because distinct physical systems can be organized to
exhibit nearly identical behavior. The damped spring and the damped circuit
obey the same second-order linear differential equation; hence we may use either
one to imitate the other.
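Written out, the shared equation makes the imitation concrete: for a mass $m$ on a spring of stiffness $k$ with damping coefficient $c$, and for the charge $q$ in a series circuit of inductance $L$, resistance $R$, and capacitance $C$,

\[
m\ddot{x} + c\dot{x} + kx = 0,
\qquad
L\ddot{q} + R\dot{q} + \frac{q}{C} = 0,
\]

so the correspondence $x \leftrightarrow q$, $m \leftrightarrow L$, $c \leftrightarrow R$, $k \leftrightarrow 1/C$ lets either system stand in for the other.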
Techniques of Simulation
Because of its abstract character and its symbol-manipulating generality, the
digital computer has greatly extended the range of systems whose behavior can
be imitated. Generally we now call the imitation "simulation," and we try to
understand the imitated system by testing the simulation in a variety of
simulated, or imitated, environments.
Simulation, as a technique for achieving understanding and predicting the
behavior of systems, predates of course the digital computer. The model basin
and the wind tunnel are valued means for studying the behavior of large systems
by modeling them in the small, and it is quite certain that Ohm's law was
suggested to its discoverer by its analogy with simple hydraulic phenomena.
Simulation may even take the form of a thought experiment, never actually
implemented dynamically. One of my vivid memories of the Great Depression is
of a large multicolored chart in my father's study that represented a hydraulic
model of an economic system (with different fluids for money and goods). The
chart was devised by a technocratically inclined engineer named Dahlberg. The
model never got beyond the pen-and-paint stage at that time, but it could be used
to trace through the imputed consequences of particular economic measures or
events, provided the theory was right!11
As my formal education in economics progressed, I acquired a disdain for that
naive simulation, only to discover after World War II that a distinguished
economist, Professor A. W. Phillips, had actually built the Moniac, a hydraulic
model that simulated a Keynesian economy.12 Of course Professor Phillips's
simulation incorporated a more nearly correct theory than the earlier one and was
actually constructed and operated, two points in its favor. However, the Moniac,
while useful as a teaching tool, told us nothing that could not be extracted readily
from simple mathematical versions of Keynesian theory and was soon priced out
of the market by the growing number of computer simulations of the economy.
Simulation As a Source of New Knowledge
This brings me to the crucial question about simulation: How can a simulation
ever tell us anything that we do not already know? The usual implication of the
question is that it can't. As a matter of fact, there is an interesting parallelism,
which I shall exploit presently, between two assertions about computers and
simulation that one hears frequently:
1. A simulation is no better than the assumptions built into it.
2. A computer can do only what it is programmed to do.
I shall not deny either assertion, for both seem to me to be true. But despite both
assertions simulation can tell us things we do not already know.
11. For some published versions of this model, see A. O. Dahlberg, National Income Visualized (N.Y.: Columbia University Press, 1956).
12. A. W. Phillips, "Mechanical Models in Economic Dynamics," Economica, New Series, 17 (1950):283-305.
There are two related ways in which simulation can provide new knowledge, one
of them obvious, the other perhaps a bit subtle. The obvious point is that, even
when we have correct premises, it may be very difficult to discover what they
imply. All correct reasoning is a grand system of tautologies, but only God can
make direct use of that fact. The rest of us must painstakingly and fallibly tease
out the consequences of our assumptions.
Thus we might expect simulation to be a powerful technique for deriving, from
our knowledge of the mechanisms governing the behavior of gases, a theory of
the weather and a means of weather prediction. Indeed, as many people are
aware, attempts have been under way for some years to apply this technique.
Greatly oversimplified, the idea is that we already know the correct basic
assumptions, the local atmospheric equations, but we need the computer to work
out the implications of the interactions of vast numbers of variables starting from
complicated initial conditions. This is simply an extrapolation to the scale of
modern computers of the idea we use when we solve two simultaneous equations
by algebra.
This approach to simulation has numerous applications to engineering design.
For it is typical of many kinds of design problems that the inner system consists
of components whose fundamental laws of behavior (mechanical, electrical, or
chemical) are well known. The difficulty of the design problem often resides in
predicting how an assemblage of such components will behave.
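A minimal sketch of this use of simulation (the component values and step size are invented for the illustration): each component obeys an elementary law, Ohm's law for the resistors and i = C dV/dt for the capacitors, yet the step response of the assembled two-stage RC filter is most easily obtained by letting the computer trace out their interaction.

```python
R1 = R2 = 1_000.0      # resistances in ohms (assumed values)
C1 = C2 = 1e-6         # capacitances in farads (assumed values)
V_IN = 5.0             # step input in volts
dt = 1e-6              # integration step in seconds

v1 = v2 = 0.0          # capacitor voltages, initially discharged
for _ in range(20_000):
    i1 = (V_IN - v1) / R1        # Ohm's law: current through R1 into node 1
    i2 = (v1 - v2) / R2          # Ohm's law: current through R2 into node 2
    v1 += (i1 - i2) / C1 * dt    # capacitor law: dV/dt = i / C
    v2 += i2 / C2 * dt

print(f"after 20 ms: v1 = {v1:.2f} V, v2 = {v2:.2f} V")
```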
Simulation of Poorly Understood Systems
The more interesting and subtle question is whether simulation can be of any
help to us when we do not know very much initially about the natural laws that
govern the behavior of the inner system. Let me show why this question must
also be answered in the affirmative.
First, I shall make a preliminary comment that simplifies matters: we are seldom
interested in explaining or predicting phenomena in all their particularity; we are
usually interested only in a few properties abstracted from the complex reality.
Thus, a NASA-launched satellite is surely an artificial object, but we usually do
not think of it as "simulating" the moon or a planet. It simply obeys the same
laws of physics, which relate
only to its inertial and gravitational mass, abstracted from most of its other
properties. It is a moon. Similarly electric energy that entered my house from the
early atomic generating station at Shippingport did not "simulate" energy
generated by means of a coal plant or a windmill. Maxwell's equations hold for
both.
The more we are willing to abstract from the detail of a set of phenomena, the
easier it becomes to simulate the phenomena. Moreover we do not have to know,
or guess at, all the internal structure of the system but only that part of it that is
crucial to the abstraction.
It is fortunate that this is so, for if it were not, the top-down strategy that built the
natural sciences over the past three centuries would have been infeasible. We
knew a great deal about the gross physical and chemical behavior of matter
before we had a knowledge of molecules, a great deal about molecular chemistry
before we had an atomic theory, and a great deal about atoms before we had any
theory of elementary particles, if indeed we have such a theory today.
This skyhook-skyscraper construction of science from the roof down to the yet
unconstructed foundations was possible because the behavior of the system at
each level depended on only a very approximate, simplified, abstracted
characterization of the system at the level next beneath.13
This is lucky, else the
safety of bridges and airplanes might depend on the correctness of the "Eightfold
Way" of looking at elementary particles.
Artificial systems and adaptive systems have properties that make them
particularly susceptible to simulation via simplified models. The characterization
of such systems in the previous section of this chapter
explains why. Resemblance in behavior of systems without identity of the inner
systems is particularly feasible if the aspects in which we are interested arise out
of the organization of the parts, independently of all but a few properties of the
individual components. Thus for many purposes we may be interested in only
such characteristics of a material as its tensile and compressive strength. We may
be profoundly unconcerned about its chemical properties, or even whether it is
wood or iron.

13. This point is developed more fully in "The Architecture of Complexity," chapter 8 in this volume. More than fifty years ago, Bertrand Russell made the same point about the architecture of mathematics. See the "Preface" to Principia Mathematica: ". . . the chief reason in favour of any theory on the principles of mathematics must always be inductive, i.e., it must lie in the fact that the theory in question enables us to deduce ordinary mathematics. In mathematics, the greatest degree of self-evidence is usually not to be found quite at the beginning, but at some later point; hence the early deductions, until they reach this point, give reasons rather for believing the premises because true consequences follow from them, than for believing the consequences because they follow from the premises." Contemporary preferences for deductive formalisms frequently blind us to this important fact, which is no less true today than it was in 1910.
The motor control patent cited earlier illustrates this abstraction to organizational
properties. The invention consisted of a "combination" of "reversing means," of
"field-weakening means," that is to say, of components specified in terms of their
functioning in the organized whole. How many ways are there of reversing a
motor, or of weakening its field strength? We can simulate the system described
in the patent claims in many ways without reproducing even approximately the
actual physical device that is depicted. With a small additional step of
abstraction, the patent claims could be restated to encompass mechanical as well
as electrical devices. I suppose that any undergraduate engineer at Berkeley,
Carnegie Mellon University, or MIT could design a mechanical system
embodying reversibility and variable starting torque so as to simulate the system
of the patent.
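One way to make that point concrete is to write the claim's functional vocabulary as an interface. This rendering is mine, not the patent's, and the names are invented; any electrical or mechanical realization of the two "means" would serve the controller equally well.

```python
from abc import ABC, abstractmethod

class ReversingMeans(ABC):
    """Anything that can reverse the motor, however it is built."""
    @abstractmethod
    def reverse(self) -> None: ...

class FieldWeakeningMeans(ABC):
    """Anything that can weaken the field to a variable degree."""
    @abstractmethod
    def set_weakening(self, degree: float) -> None: ...

class Controller:
    """Combines the means as the claim describes: weakening is ineffective
    during starting and effective to varying degrees thereafter."""
    def __init__(self, reverser: ReversingMeans, weakener: FieldWeakeningMeans):
        self.reverser, self.weakener = reverser, weakener
        self.starting = True

    def start(self) -> None:
        self.weakener.set_weakening(0.0)         # ineffective during starting
        self.starting = False

    def reverse(self, degree: float) -> None:
        self.reverser.reverse()
        if not self.starting:
            self.weakener.set_weakening(degree)  # degree set by the reversal
```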
The Computer As Artifact
No artifact devised by man is so convenient for this kind of functional
description as a digital computer. It is truly protean, for almost the only ones of
its properties that are detectable in its behavior (when it is operating properly!)
are the organizational properties. The speed with which it performs its basic
operations may allow us to infer a little about its physical components and their
natural laws; speed data, for example, would allow us to rule out certain kinds of
"slow" components. For the rest, almost no interesting statement that one can
make about an operating computer bears any particular relation to the specific
nature of the hardware. A computer is an organization of elementary functional
components in which, to a high approximation, only the function
performed by those components is relevant to the behavior of the whole system.14
Computers As Abstract Objects
This highly abstractive quality of computers makes it easy to introduce
mathematics into the study of their theory and has led some to the erroneous
conclusion that, as a computer science emerges, it will necessarily be a
mathematical rather than an empirical science. Let me take up these two points in
turn: the relevance of mathematics to computers and the possibility of studying
computers empirically.
Some important theorizing, initiated by John von Neumann, has been done on
the topic of computer reliability. The question is how to build a reliable system
from unreliable parts. Notice that this is not posed as a question of physics or
physical engineering. The components engineer is assumed to have done his best,
but the parts are still unreliable! We can cope with the unreliability only by our
manner of organizing them.
To turn this into a meaningful problem, we have to say a little more about the
nature of the unreliable parts. Here we are aided by the knowledge that any
computer can be assembled out of a small array of simple, basic elements. For
instance, we may take as our primitives the so-called Pitts-McCulloch neurons.
As their name implies, these components were devised in analogy to the
supposed anatomical and functional characteristics of neurons in the brain, but
they are highly abstracted. They are formally isomorphic with the simplest kinds
of switching circuits: "and," "or," and "not" circuits. We postulate, now, that we
are to build a system from such elements and that each elementary part has a
specified probability of functioning correctly. The problem is to arrange the
elements and their interconnections in such a way that the complete system will
perform reliably.
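A toy version of the problem can be run as a simulation. This is not von Neumann's actual multiplexing construction, only the simplest redundancy scheme, and the majority organ here is assumed perfect, which his analysis does not permit itself; still, it shows organization buying reliability that no single part possesses.

```python
import random

P_CORRECT = 0.9   # assumed probability that one elementary part works

def unreliable_and(a, b):
    """An 'and' element that answers correctly only with probability P_CORRECT."""
    correct = a and b
    return correct if random.random() < P_CORRECT else not correct

def voted_and(a, b):
    """Three unreliable copies feeding a (here, perfect) majority organ."""
    return sum(unreliable_and(a, b) for _ in range(3)) >= 2

trials = 100_000
single = sum(unreliable_and(True, True) for _ in range(trials)) / trials
triple = sum(voted_and(True, True) for _ in range(trials)) / trials
print(f"single part: {single:.3f} correct; majority of three: {triple:.3f}")
# analytically: p**3 + 3 * p**2 * (1 - p) = 0.972 for p = 0.9
```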
The important point for our present discussion is that the parts could as well be
neurons as relays, as well relays as transistors. The natural laws governing relays
are very well known, while the natural laws governing
neurons are known most imperfectly. But that does not matter, for all that is
relevant for the theory is that the components have the specified level of
unreliability and be interconnected in the specified way.

14. On the subject of this and the following paragraphs, see M. L. Minsky, op. cit.; and John von Neumann, "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," in C. E. Shannon and J. McCarthy (eds.), Automata Studies (Princeton: Princeton University Press, 1956).
This example shows that the possibility of building a mathematical theory of a
system or of simulating that system does not depend on having an adequate
microtheory of the natural laws that govern the system components. Such a
microtheory might indeed be simply irrelevant.
Computers As Empirical Objects
We turn next to the feasibility of an empirical science of computers as distinct
from the solid-state physics or physiology of their componentry.15
As a matter of
empirical fact almost all of the computers that have been designed have certain
common organizational features. They almost all can be decomposed into an
active processor (Babbage's "Mill") and a memory (Babbage's "Store") in
combination with input and output devices. (Some of the larger systems,
somewhat in the manner of colonial algae, are assemblages of smaller systems
having some or all of these components. But perhaps I may oversimplify for the
moment.) They are all capable of storing symbols (program) that can be
interpreted by a program-control component and executed. Almost all have
exceedingly limited capacity for simultaneous, parallel activity; they are basically
one-thing-at-a-time systems. Symbols generally have to be moved from the
larger memory components into the central processor before they can be acted
upon. The systems are capable of only simple basic actions: recoding symbols,
storing symbols, copying symbols, moving symbols, erasing symbols, and
comparing symbols.
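These common features are few enough to caricature in a few dozen lines. A sketch (the instruction set and layout are my illustrative inventions): one memory, one processor register, a stored program interpreted one step at a time, and nothing but elementary actions on symbols.

```python
memory = {"count": 0, "limit": 3}        # the "Store"
program = [                              # a stored, interpretable symbol structure
    ("copy", "count", "acc"),            # move a symbol into the processor
    ("add1",),                           # act on it there
    ("copy", "acc", "count"),            # move the result back to memory
    ("jump_if_less", "count", "limit", 0),
    ("halt",),
]

acc = 0                                  # the processor's register (the "Mill")
pc = 0                                   # program control component
while True:                              # one elementary action at a time
    op, *args = program[pc]
    if op == "copy":
        src, dst = args
        if dst == "acc":
            acc = memory[src]
        else:
            memory[dst] = acc            # src is the register here
    elif op == "add1":
        acc += 1
    elif op == "jump_if_less":
        a, b, target = args
        pc = target if memory[a] < memory[b] else pc + 1
        continue
    elif op == "halt":
        break
    pc += 1

print(memory["count"])                   # -> 3
```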
Since there are now many such devices in the world, and since the properties that
describe them also appear to be shared by the human central nervous system,
nothing prevents us from developing a natural history of them. We can study
them as we would rabbits or chipmunks and discover how they behave under
different patterns of environmental stimulation. Insofar as their behavior reflects
largely the broad functional
characteristics we have described, and is independent of details of their
hardware, we can build a general but empirical theory of them.

15. A. Newell and H. A. Simon, "Computer Science as Empirical Inquiry," Communications of the ACM, 19 (March 1976):113-126. See also H. A. Simon, "Artificial Intelligence: An Empirical Science," Artificial Intelligence, 77 (1995):95-127.
The research that was done to design computer time-sharing systems is a good
example of the study of computer behavior as an empirical phenomenon. Only
fragments of theory were available to guide the design of a time-sharing system
or to predict how a system of a specified design would actually behave in an
environment of users who placed their several demands upon it. Most actual
designs turned out initially to exhibit serious deficiencies, and most predictions
of performance were startlingly inaccurate.
Under these circumstances the main route open to the development and
improvement of time-sharing systems was to build them and see how they
behaved. And this is what was done. They were built, modified, and improved in
successive stages. Perhaps theory could have anticipated these experiments and
made them unnecessary. In fact it didn't, and I don't know anyone intimately
acquainted with these exceedingly complex systems who has very specific ideas
as to how it might have done so. To understand them, the systems had to be
constructed, and their behavior observed.16
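In that spirit, the behavior of such a system under user demand is something one samples rather than derives. A small sketch (every parameter here is invented): users alternate between thinking and submitting jobs to a round-robin processor, and we observe the response times instead of predicting them.

```python
import random

random.seed(1)
N_USERS, QUANTUM, SIM_TIME = 20, 0.05, 1_000.0        # assumed parameters
think = lambda: random.expovariate(1 / 10.0)          # mean think time, 10 s
demand = lambda: random.expovariate(1 / 0.5)          # mean job length, 0.5 s

clock, queue, responses = 0.0, [], []
arrivals = sorted([think(), demand()] for _ in range(N_USERS))
while clock < SIM_TIME:
    while arrivals and arrivals[0][0] <= clock:       # admit arriving jobs
        t, work = arrivals.pop(0)
        queue.append([t, work])                       # [submit time, work left]
    if not queue:
        clock = arrivals[0][0] if arrivals else SIM_TIME
        continue
    job = queue.pop(0)                                # round-robin service
    quantum = min(QUANTUM, job[1])
    clock += quantum
    job[1] -= quantum
    if job[1] > 1e-12:
        queue.append(job)                             # back of the queue
    else:
        responses.append(clock - job[0])              # job finished
        arrivals.append([clock + think(), demand()])  # user thinks, resubmits
        arrivals.sort()

print(f"mean response time: {sum(responses) / len(responses):.2f} s")
```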
In a similar vein computer programs designed to play games or to discover
proofs for mathematical theorems spend their lives in exceedingly large and
complex task environments. Even when the programs themselves are only
moderately large and intricate (compared, say, with the monitor and operating
systems of large computers), too little is known about their task environments to
permit accurate prediction of how well they will perform, how selectively they
will be able to search for problem solutions.
Here again theoretical analysis must be accompanied by large amounts of
experimental work. A growing literature reporting these experiments is
beginning to give us precise knowledge about the degree of heuristic power of
particular heuristic devices in reducing the size of the problem spaces that must
be searched. In theorem proving, for example, there has
been a whole series of advances in heuristic power based on and guided by
empirical exploration: the use of the Herbrand theorem, the resolution principle,
the set-of-support principle, and so on.17

16. The empirical, exploratory flavor of computer research is nicely captured by the account of Maurice V. Wilkes in his 1967 Turing Lecture, "Computers Then and Now," Journal of the Association for Computing Machinery, 15 (January 1968):1-7.
Computers and Thought
As we succeed in broadening and deepening our knowledge, theoretical and empirical, about computers, we discover that in large part their behavior is governed by simple general laws, that what appeared as complexity in the computer program was to a considerable extent complexity of the environment to which the program was seeking to adapt its behavior.
This relation of program to environment opened up an exceedingly important
role for computer simulation as a tool for achieving a deeper understanding of
human behavior. For if it is the organization of components, and not their
physical properties, that largely determines behavior, and if computers are
organized somewhat in the image of man, then the computer becomes an obvious
device for exploring the consequences of alternative organizational assumptions
for human behavior. Psychology could move forward without awaiting the
solutions by neurology of the problems of component design, however interesting and significant these components turn out to be.
Symbol Systems: Rational Artifacts
The computer is a member of an important family of artifacts called symbol
systems, or more explicitly, physical symbol systems.18
Another important
member of the family (some of us think, anthropomorphically, it is the most
important) is the human mind and brain. It is with this family
of artifacts, and particularly the human version of it, that we will be primarily concerned in this book. Symbol systems are almost the quintessential artifacts, for adaptivity to an environment is their whole raison d'être. They are goal-seeking, information-processing systems, usually enlisted in the service of the larger systems in which they are incorporated.

17. Note, for example, the empirical data in Lawrence Wos, George A. Robinson, Daniel F. Carson, and Leon Shalla, "The Concept of Demodulation in Theorem Proving," Journal of the Association for Computing Machinery, 14(October 1967):698-709, and in several of the earlier papers referenced there. See also the collection of programs in Edward Feigenbaum and Julian Feldman (eds.), Computers and Thought (New York: McGraw-Hill, 1963). It is common practice in the field to title papers about heuristic programs "Experiments with an XYZ Program."

18. In the literature the phrase information-processing system is used more frequently than symbol system. I will use the two terms as synonyms.
Basic Capabilities of Symbol Systems
A physical symbol system holds a set of entities, called symbols. These are
physical patterns (e.g., chalk marks on a blackboard) that can occur as
components of symbol structures (sometimes called "expressions"). As I have
already pointed out in the case of computers, a symbol system also possesses a
number of simple processes that operate upon symbol structures: processes that create, modify, copy, and destroy symbols. A physical symbol system is a
machine that, as it moves through time, produces an evolving collection of
symbol structures.19
Symbol structures can, and commonly do, serve as internal
representations (e.g., "mental images") of the environments to which the symbol
system is seeking to adapt. They allow it to model that environment with greater
or less veridicality and in greater or less detail, and consequently to reason about
it. Of course, for this capability to be of any use to the symbol system, it must
have windows on the world and hands, too. It must have means for acquiring
information from the external environment that can be encoded into internal
symbols, as well as means for producing symbols that initiate action upon the
environment. Thus it must use symbols to designate objects and relations and
actions in the world external to the system.
Symbols may also designate processes that the symbol system can interpret and
execute. Hence the programs that govern the behavior of a symbol system can be
stored, along with other symbol structures, in the system's own memory, and
executed when activated.
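The point can be caricatured in a few lines of Python: the "program" below is itself a symbol structure held in the same memory as the data, and a small control loop interprets each of its symbols as an action (the instruction set is invented for the sketch):

    memory = {"x": "RED", "y": "BLUE", "z": None,
              # The program is just another symbol structure in memory.
              "prog": [("copy", "x", "z"), ("erase", "x"), ("write", "z")]}

    def interpret(program_cell):
        # The program-control component: fetch and execute each instruction.
        for op, *args in memory[program_cell]:
            if op == "copy":
                src, dst = args
                memory[dst] = memory[src]
            elif op == "erase":
                memory[args[0]] = None
            elif op == "write":
                print(memory[args[0]])

    interpret("prog")                   # prints RED; cell "x" is now empty

Because "prog" is ordinary memory content, the system could modify it with the same operations it applies to any other symbol structure.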
Symbol systems are called "physical" to remind the reader that they exist as real-
world devices, fabricated of glass and metal (computers) or flesh and blood
(brains). In the past we have been more accustomed to thinking of the symbol
systems of mathematics and logic as abstract and disembodied, leaving out of
account the paper and pencil and human minds that were required actually to
bring them to life. Computers have transported symbol systems from the platonic heaven of ideas to the empirical world of actual processes carried out by machines or brains, or by the two of them working together.

19. Newell and Simon, "Computer Science as Empirical Inquiry," p. 116.
Intelligence As Computation
The three chapters that follow rest squarely on the hypothesis that intelligence is
the work of symbol systems. Stated a little more formally, the hypothesis is that a
physical symbol system of the sort I have just described has the necessary and
sufficient means for general intelligent action.
The hypothesis is clearly an empirical one, to be judged true or false on the basis
of evidence. One task of chapters 3 and 4 will be to review some of the evidence,
which is of two basic kinds. On the one hand, by constructing computer
programs that are demonstrably capable of intelligent action, we provide
evidence on the sufficiency side of the hypothesis. On the other hand, by
collecting experimental data on human thinking that tend to show that the human
brain operates as a symbol system, we add plausibility to the claims for
necessity, for such data imply that all known intelligent systems (brains and
computers) are symbol systems.
Economics: Abstract Rationality
As prelude to our consideration of human intelligence as the work of a physical
symbol system, chapter 2 introduces a heroic abstraction and idealization: the idealization of human rationality which is enshrined in modern economic
theories, particularly those called neoclassical. These theories are an idealization
because they direct their attention primarily to the external environment of
human thought, to decisions that are optimal for realizing the adaptive system's
goals (maximization of utility or profit). They seek to define the decisions that
would be substantively rational in the circumstances defined by the outer
environment.
Economic theory's treatment of the limits of rationality imposed by the inner environment (by the characteristics of the physical symbol system) tends to be pragmatic, and sometimes even opportunistic. In the more formal treatments of general equilibrium and in the so-called "rational expectations" approach to adaptation, the possibility that an information-processing system may have a very limited capability for adaptation is almost ignored. On the other hand, in discussions of the rationale
for market mechanisms and in many theories of decision making under
uncertainty, the procedural aspects of rationality receive more serious treatment.
In chapter 2 we will see examples both of neglect for and concern with the limits
of rationality. From the idealizations of economics (and some criticisms of these
idealizations) we will move, in chapters 3 and 4, to a more systematic study of
the inner environment of thought, of thought processes as they actually occur
within the constraints imposed by the parameters of a physical symbol system
like the brain.
2
Economic Rationality: Adaptive Artifice
Because scarcity is a central fact of life (land, money, fuel, time, attention, and many other things are scarce), it is a task of rationality to allocate scarce things.
Performing that task is the focal concern of economics.
Economics exhibits in purest form the artificial component in human behavior, in
individual actors, business firms, markets, and the entire economy. The outer
environment is defined by the behavior of other individuals, firms, markets, or
economies. The inner environment is defined by an individual's, firm's, market's,
or economy's goals and capabilities for rational, adaptive behavior. Economics
illustrates well how outer and inner environment interact and, in particular, how
an intelligent system's adjustment to its outer environment (its substantive
rationality) is limited by its ability, through knowledge and computation, to
discover appropriate adaptive behavior (its procedural rationality).
The Economic Actor
In the textbook theory of the business firm, an "entrepreneur" aims at
maximizing profit, and in such simple circumstances that the computational
ability to find the maximum is not in question. A cost curve relates dollar
expenditures to amount of product manufactured, and a revenue curve relates
income to amount of product sold. The goal (maximizing the difference between
income and expenditure) fully defines the firm's inner environment. The cost and
revenue curves define the outer environment.1
Elementary calculus shows how to find the profit-maximizing quantity by taking a derivative (the rate at which profit changes with change in quantity) and setting it equal to zero.

1. I am drawing the line between outer and inner environment not at the firm's boundary but at the skin of the entrepreneur, so that the factory is part of the external technology; the brain, perhaps assisted by computers, is the internal.
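As a worked illustration of this calculation (the particular cost and revenue curves are invented for the example), the whole procedure fits in a few lines of Python:

    import sympy as sp

    q = sp.symbols("q", positive=True)      # quantity produced and sold
    revenue = 100*q - 2*q**2                # hypothetical revenue curve
    cost = 20*q + 50                        # hypothetical cost curve
    profit = revenue - cost

    # Take the derivative of profit and set it equal to zero.
    q_star = sp.solve(sp.diff(profit, q), q)[0]
    print(q_star, profit.subs(q, q_star))   # q* = 20, profit = 750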
Here are all the elements of an artificial system adapting to an outer
environment, subject only to the goal defined by the inner environment. In
contrast to a situation where the adaptation process is itself problematic, we can
predict the system's behavior without knowing how it actually computes the
optimal output. We need consider only substantive rationality.2
We can interpret this bare-bones theory of the firm either positively (as
describing how business firms behave) or normatively (as advising them how to
maximize profits). It is widely taught in both senses in business schools and
universities, just as if it described what goes on, or could go on, in the real world.
Alas, the picture is far too simple to fit reality.
Procedural Rationality
The question of maximizing the difference between revenue and cost becomes
interesting when, in more realistic circumstances, we ask how the firm actually
goes about discovering that maximizing quantity. Cost accounting may estimate
the approximate cost of producing any particular output, but how much can be
sold at a specific price and how this amount varies with price (the elasticity of
demand) usually can be guessed only roughly. When there is uncertainty (as
there always is), prospects of profit must be balanced against risk, thereby
changing profit maximization to the much more shadowy goal of maximizing a
profit-vs.-risk "utility function" that is assumed to lurk somewhere in the
recesses of the entrepreneur's mind.
But in real life the business firm must also choose product quality and the
assortment of products it will manufacture. It often has to invent and design
some of these products. It must schedule the factory to produce a profitable
combination of them and devise marketing procedures and structures to sell
them. So we proceed step by step from the simple caricature of the firm depicted
in the textbooks to the complexities of real firms in the real world of business. At
each step toward realism, the problem gradually changes from choosing the right course of action (substantive rationality) to finding a way of calculating, very approximately, where a good course of action lies (procedural rationality). With this shift, the theory of the firm becomes a theory of estimation under uncertainty and a theory of computation, decidedly non-trivial theories as the obscurities and complexities of information and computation increase.

2. H. A. Simon, "Rationality as Process and as Product of Thought," American Economic Review, 68(1978):1-16.
Operations Research and Management Science
Today several branches of applied science assist the firm to achieve procedural
rationality.3
One of them is operations research (OR); another is artificial
intelligence (AI). OR provides algorithms for handling difficult multivariate
decision problems, sometimes involving uncertainty. Linear programming,
integer programming, queuing theory, and linear decision rules are examples of
widely used OR procedures.
To permit computers to find optimal solutions with reasonable expenditures of
effort when there are hundreds or thousands of variables, the powerful
algorithms associated with OR impose a strong mathematical structure on the
decision problem. Their power is bought at the cost of shaping and squeezing the
real-world problem to fit their computational requirements: for example,
replacing the real-world criterion function and constraints with linear
approximations so that linear programming can be used. Of course the decision
that is optimal for the simplified approximation will rarely be optimal in the real
world, but experience shows that it will often be satisfactory.
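A sketch of such a linearized decision problem, with made-up coefficients for a two-product firm, shows how mechanical the solution becomes once the problem has been squeezed into linear form:

    from scipy.optimize import linprog

    # Maximize profit 3x + 2y subject to two linearized resource constraints.
    # linprog minimizes, so the objective is negated.
    result = linprog(
        c=[-3, -2],                  # profit per unit of products x and y
        A_ub=[[1, 1],                # machine hours: x + y <= 100
              [2, 1]],               # labor hours:  2x + y <= 150
        b_ub=[100, 150],
        bounds=[(0, None), (0, None)],
    )
    print(result.x, -result.fun)     # optimal plan (50, 50), profit 250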
The alternative methods provided by AI, most often in the form of heuristic
search (selective search using rules of thumb), find decisions that are "good
enough," that satisfice. The AI models, like OR models, also only approximate
the real world, but usually with much more accuracy and detail than the OR
models can admit. They can do this because heuristic search can be carried out in
a more complex and less well-structured problem space than is required by OR
maximizing tools. The price paid for working with the more realistic but less regular models is that AI methods generally find only satisfactory solutions, not optima. We must trade off satisficing in a nearly-realistic model (AI) against optimizing in a greatly simplified model (OR). Sometimes one will be preferred, sometimes the other.

3. For a brief survey of these developments, see H. A. Simon, "On How to Decide What to Do," The Bell Journal of Economics, 9(1978):494-507. For an estimate of their impact on management, see H. A. Simon, The New Science of Management Decision, rev. ed. (Englewood Cliffs, NJ: Prentice-Hall, 1977), chapters 2 and 4.
AI methods can handle combinatorial problems (e.g., factory scheduling
problems) that are beyond the capacities of OR methods, even with the largest
computers. Heuristic methods provide an especially powerful problem-solving
and decision-making tool for humans who are unassisted by any computer other
than their own minds, hence must make radical simplifications to find even
approximate solutions. AI methods also are not limited, as most OR methods are,
to situations that can be expressed quantitatively. They extend to all situations
that can be represented symbolically, that is, verbally, mathematically or
diagrammatically.
OR and AI have been applied mainly to business decisions at the middle levels
of management. A vast range of top management decisions (e.g., strategic
decisions about investment, R&D, specialization and diversification, recruitment,
development, and retention of managerial talent) are still mostly handled
traditionally, that is, by experienced executives' exercise of judgment.
As we shall see in chapters 3 and 4, so-called "judgment" turns out to be mainly a non-numerical heuristic search that draws upon information stored in large expert memories. Today we have learned how to employ AI techniques in the form of so-called expert systems in a growing range of domains previously reserved for human expertise and judgment, for example, medical diagnosis and credit evaluation. Moreover, while classical OR tools could only choose among
predefined alternatives, AI expert systems are now being extended to the
generation of alternatives, that is, to problems of design. More will be said about
these developments in chapters 5 and 6.
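The flavor of such systems can be suggested by a toy forward-chaining loop over if-then rules; the rules and facts below are invented illustrations, not a real credit-evaluation knowledge base:

    # Each rule: if all condition symbols are present, assert the conclusion.
    rules = [
        ({"late_payments", "high_debt_ratio"}, "poor_credit_risk"),
        ({"steady_income", "low_debt_ratio"}, "good_credit_risk"),
        ({"poor_credit_risk"}, "decline_application"),
    ]

    facts = {"late_payments", "high_debt_ratio"}

    changed = True
    while changed:                   # keep firing rules until nothing new follows
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print("decline_application" in facts)    # True

Real expert systems add large rule bases, certainty handling, and explanation facilities, but their control structure is of this general kind.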
Satisficing and Aspiration Levels
What a person cannot do he or she will not do, no matter how strong the urge to
do it. In the face of real-world complexity, the business firm turns to procedures
that find good enough answers to questions whose best answers are unknowable.
Because real-world optimization, with or without computers, is impossible, the real economic actor is in fact a satisficer, a
person who accepts "good enough" alternatives, not because less is preferred to
more but because there is no choice.
Many economists, Milton Friedman being perhaps the most vocal, have argued
that the gap between satisfactory and best is of no great importance, hence the
unrealism of the assumption that the actors optimize does not matter; others,
including myself, believe that it does matter, and matters a great deal.4
But
reviewing this old argument would take me away from my main theme, which is
to show how the behavior of an artificial system may be strongly influenced by
the limits of its adaptive capacities, its knowledge and computational powers.
One requirement of optimization not shared by satisficing is that all alternatives
must be measurable in terms of a common utility function. A large body of
evidence shows that human choices are not consistent and transitive, as they
would be if a utility function existed.5
But even in a satisficing theory we need
some criteria of satisfaction. What realistic measures of human profit, pleasure,
happiness and satisfaction can serve in place of the discredited utility function?
Research findings on the psychology of choice indicate some properties a
thermometer of satisfaction should have. First, unlike the utility function, it is not
limited to positive values, but has a zero point (of minimal contentment). Above
zero, various degrees of satisfaction are experienced, and below zero, various
degrees of dissatisfaction. Second, if periodic readings are taken of people in
relatively stable life circumstances, we only occasionally find temperatures very
far from zero in either direction, and the divergent measurements tend to regress
over time back toward the zero mark. Most people consistently register either
slightly below zero (mild discontent) or a little above (moderate satisfaction).
4.
I have argued the case in numerous papers. Two recent examples are "Rationality in Psychology and Economics," The Journal of Business, 59(1986):S209-S224 (No. 4, Pt. 2); and "The State of Economic Science," in W. Sichel (ed.), The State of Economic Science (Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1989).
5.
See, for example, D. Kahneman and A. Tversky, "On the Psychology of Prediction," Psychological Review, 80(1973):237-251, and H. Kunreuther et al., Disaster Insurance Protection (New York: Wiley, 1978).
To deal with these phenomena, psychology employs the concept of aspiration
level. Aspirations have many dimensions: one can have aspirations for pleasant
work, love, good food, travel, and many other things. For each dimension,
expectations of the attainable define an aspiration level that is compared with the
current level of achievement. If achievements exceed aspirations, satisfaction is
recorded as positive; if aspirations exceed achievements, there is dissatisfaction.
There is no simple mechanism for comparison between dimensions. In general a
large gain along one dimension is required to compensate for a small loss along
another; hence the system's net satisfactions are history-dependent and it is
difficult for people to balance compensatory offsets.
Aspiration levels provide a computational mechanism for satisficing. An
alternative satisfices if it meets aspirations along all dimensions. If no such
alternative is found, search is undertaken for new alternatives. Meanwhile,
aspirations along one or more dimensions drift down gradually until a
satisfactory new alternative is found or some existing alternative satisfices. A
theory of choice employing these mechanisms acknowledges the limits on
human computation and fits our empirical observations of human decision
making far better than the utility maximization theory.6
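The mechanism is simple enough to state as a procedure. In the sketch below the dimensions, aspiration levels, decay rate, and random generator of alternatives are all invented for the illustration:

    import random

    def satisfices(alternative, aspirations):
        # Satisfactory = meets the aspiration level on every dimension.
        return all(alternative[d] >= level for d, level in aspirations.items())

    def choose(aspirations, draws=1000, decay=0.98):
        for _ in range(draws):
            # Search: generate another alternative (here, just random scores).
            alt = {d: random.random() for d in aspirations}
            if satisfices(alt, aspirations):
                return alt
            # Nothing satisfactory found yet: aspirations drift downward.
            aspirations = {d: level * decay for d, level in aspirations.items()}
        return None

    print(choose({"pay": 0.9, "interest": 0.95, "location": 0.8}))

No utility function, and no comparison between alternatives, is needed anywhere in the loop.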
Markets and Organizations
Economics has been concerned less with individual consumers or business firms
than with larger artificial systems: the economy and its major components,
markets. Markets aim to coordinate the decisions and behavior of multitudes of
economic actors to guarantee that the quantity of brussels sprouts shipped to
market bears some reasonable relation to the quantity that consumers will buy
and eat, and that the price at which brussels sprouts can be sold bears a
reasonable relation to the cost of producing them. Any society that is not a
subsistence economy, but has
substantial specialization and division of labor, needs mechanisms to perform this coordinative function.

6. H. A. Simon, "A Behavioral Model of Rational Choice," Quarterly Journal of Economics, 69(1955):99-118; I. N. Gallhofer and W. E. Saris, Foreign Policy Decision-Making: A Qualitative and Quantitative Analysis of Political Argumentation (New York: Praeger, in press).
Markets are only one, however, among the spectrum of mechanisms of
coordination on which any society relies. For some purposes, central planning
based on statistics provides the basis for coordinating behavior patterns.
Highway planning, for example, relies on estimates of road usage that reflect
statistically stable patterns of driving behavior. For other purposes, bargaining
and negotiation may be used to coordinate individual behaviors, for instance, to
secure wage agreements between employers and unions or to form legislative
majorities. For still other coordinative functions, societies employ hierarchic
organizations (business, governmental, and educational) with lines of formal
authority running from top to bottom and networks of communications lacing
through the structure. Finally, for making certain important decisions and for
selecting persons to occupy positions of public authority, societies employ a
wide variety of balloting procedures.
Although all of these coordinating techniques can be found somewhere in almost
any society, their mix and applications vary tremendously from one nation or
culture to another.7
We ordinarily describe capitalist societies as depending
mostly on markets for coordination and socialist societies as depending mostly
on hierarchic organizations and planning, but this is a gross oversimplification,
for it ignores the uses of voting in democratic societies of either kind, and it
ignores the great importance of large organizations in modern "market" societies.
The economic units in capitalist societies are mostly business firms, which are
themselves hierarchic organizations, some of enormous size, that make almost
negligible use of markets in their internal functioning. Roughly eighty percent of
the human economic activity in the American economy, usually regarded as
almost the epitome of a "market" economy, takes place in the internal
environments of business and other organizations and not in the external,
between-organization environments of markets.8
To avoid misunderstanding, it
would be appropriate to call such
a society an organization-&-market economy; for in order to give an account of it we have to pay as much attention to organizations as to markets.

7. R. A. Dahl and C. E. Lindblom, Politics, Economics, and Welfare (New York: Harper and Brothers, 1953).

8. H. A. Simon, "Organizations and Markets," Journal of Economic Perspectives, 5(1991):25-44.
The Invisible Hand
In examining the processes of social coordination, economics has given top billing, sometimes almost exclusive billing, to the market mechanism. It is indeed a remarkable mechanism which under many circumstances can bring it about that the producing, consuming, buying and selling behaviors of enormous numbers of people, each responding only to personal selfish interests, allocate resources so as to clear markets, that is, do in fact nearly balance the production with the consumption of brussels sprouts and all the other commodities the economy produces and uses.
Only relatively weak conditions need be satisfied to bring about such an
equilibrium. Achieving it mainly requires that prices drop in the face of an
excess supply, and that quantities produced decline when prices are lowered or
when inventories mount. Any number of dynamic systems can be formulated that
have these properties, and these systems will seek equilibrium and oscillate
stably around it over a wide range of conditions.
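One such system, in skeleton form with invented demand and supply schedules, is simply "lower the price when supply exceeds demand":

    def demand(p): return 100 - 2 * p       # hypothetical demand schedule
    def supply(p): return 3 * p             # hypothetical supply schedule

    price = 5.0
    for step in range(40):
        excess_supply = supply(price) - demand(price)
        price -= 0.1 * excess_supply        # prices drop under excess supply
    print(round(price, 2))                  # converges to the clearing price, 20.0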
There have been many recent laboratory experiments on market behavior,
sometimes with human subjects, sometimes with computer programs as
simulated subjects.9
Experimental markets in which the simulated traders are
"stupid" sellers, knowing only a minimum price below which they should not
sell, and "stupid" buyers, knowing only a maximum price above which they
should not buy, move toward equilibrium almost as rapidly as markets whose
agents are rational in the classical sense.10
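A loose sketch of such a market (values, costs, and the bargaining rule are invented; it follows the cited experiments only in outline) shows transaction prices clustering near the competitive level:

    import random

    buyer_values = [random.uniform(50, 150) for _ in range(100)]  # max buying prices
    seller_costs = [random.uniform(50, 150) for _ in range(100)]  # min selling prices
    trades = []

    for _ in range(5000):
        if not buyer_values or not seller_costs:
            break
        b = random.randrange(len(buyer_values))
        s = random.randrange(len(seller_costs))
        # Zero-intelligence offers: random, constrained only by reservation prices.
        bid = random.uniform(0, buyer_values[b])
        ask = random.uniform(seller_costs[s], 200)
        if bid >= ask:                       # a deal: trade between bid and ask
            trades.append((bid + ask) / 2)
            buyer_values.pop(b)              # each trader transacts only once
            seller_costs.pop(s)

    if trades:
        print(len(trades), sum(trades) / len(trades))   # mean price near 100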
Markets and Optimality
These findings undermine the much stronger claims that are made for the price
mechanism by contemporary neoclassical economics. Claims that it does more
than merely clear markets require the strong assumptions of perfect competition
and of maximization of profit or utility by the economic actors. With these assumptions, but not without them, the market equilibrium can be shown to be optimal in the sense that it could not be altered so as to make everyone simultaneously better off. These are the familiar propositions of Pareto optimality of competitive equilibrium that have been formalized so elegantly by Arrow, Debreu, Hurwicz, and others.11

9. V. L. Smith, Papers in Experimental Economics (New York: Cambridge University Press, 1991).

10. D. J. Gode and S. Sunder, "Allocative Efficiency of Markets with Zero Intelligence Traders," Journal of Political Economy, 101(1993):119-127.
The optimality theorems stretch credibility, so far as real-world markets are
concerned, because they require substantive rationality of the kinds we found
implausible in our examination of the theory of the firm. Markets populated by
consumers and producers who satisfice instead of optimizing do not meet the
conditions on which the theorems rest. But the experimental data on simulated
markets show that market clearing, the only property of markets for which there
is solid empirical evidence, can be achieved without the optimizing assumptions,
hence also without claiming that markets do produce a Pareto optimum. As
Samuel Johnson said of the dancing dog, "The marvel is not that it dances well, but that it dances at all": the marvel is not that markets optimize (they don't) but that they often clear.
Order Without a Planner
We have become accustomed to the idea that a natural system like the human
body or an ecosystem regulates itself. This is in fact a favorite theme of the
current discussion of complexity which we will take up in later chapters. We
explain the regulation by feedback loops rather than a central planning and
directing body. But somehow, untutored intuitions about self-regulation without
central direction do not carry over to the artificial systems of human society. I
retain vivid memories of the astonishment and disbelief expressed by the
architecture students to whom I taught urban land economics many years ago
when I pointed to medieval cities as marvelously patterned systems that had
mostly just "grown" in response to myriads of individual human decisions. To
my students a pattern implied a planner in whose mind it had been conceived and
by whose hand it had been implemented. The idea that a city could acquire its
pattern as naturally as a snowflake was
foreign to them. They reacted to it as many Christian fundamentalists responded to Darwin: no design without a Designer!

11. See Gerard Debreu, Theory of Value: An Axiomatic Analysis of Economic Equilibrium (New York: Wiley, 1959).
Marxist fundamentalists reacted in a similar way when, after World War I, they
undertook to construct the new socialist economies of eastern Europe. It took
them some thirty years to realize that markets and prices might play a
constructive role in socialist economies and might even have important
advantages over central planning as tools for the allocation of resources. My
sometime teacher, Oscar Lange, was one of the pioneers who carried this
heretical notion to Poland after the Second World War and risked his career and
his life for the idea.
With the collapse of the Eastern European economies around 1990, the simple
faith in central planning was replaced in some influential minds by an equally
simple faith in markets. The collapse taught that modern economies cannot
function well without smoothly operating markets. The poor performance of
these economies since the collapse has taught that they also cannot function well
without effective organizations.
If we focus on the equilibrating functions of markets and put aside the illusions
of Pareto optimality, market processes commend themselves primarily because
they avoid placing on a central planning mechanism a burden of calculation that
such a mechanism, however well buttressed by the largest computers, could not
sustain. Markets appear to conserve information and calculation by assigning
decisions to actors who can make them on the basis of information that is
available to them locally, that is, without knowing much about the rest of the
economy apart from the prices and properties of the goods they are purchasing
and the costs of the goods they are producing.
No one has characterized market mechanisms better than Friedrich von Hayek who, in the decades after World War II, was their leading interpreter and defender. His defense did not rest primarily upon the supposed optimum attained by them but rather upon the limits of the inner environment, the computational limits of human beings:12

The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action.

12. F. von Hayek, "The Use of Knowledge in Society," American Economic Review, 35(September 1945):519-530, at p. 520.
The experiments on simulated markets, described earlier, confirm his view. At
least under some circumstances, market traders using a very small amount of
mostly local information and extremely simple (and non-optimizing) decision
rules can balance supply and demand and clear markets.
It is time now that we turn to the role of organizations in an organization-&-
market economy and the reasons why all economic activities are not left to
market forces. In preparation for this topic, we need to look at the phenomena of
uncertainty and expectations.
Uncertainty and Expectations
Because the consequences of many actions extend well into the future, correct
prediction is essential for objectively rational choice. We need to know about
changes in the natural environment: the weather that will affect next year's
harvest. We need to know about changes in social and political environments
beyond the economic: the civil warfare of Bosnia or Sri Lanka. We need to know
about the future behaviors of other economic actors (customers, competitors, suppliers), which may be influenced in turn by our own behaviors.
In simple cases uncertainty arising from exogenous events can be handled by
estimating the probabilities of these events, as insurance companies do, but
usually at a cost in computational complexity and information gathering. An
alternative is to use feedback to correct for unexpected or incorrectly predicted
events. Even if events are imperfectly anticipated and the response to them less
than accurate, adaptive systems may remain stable in the face of severe jolts,
their feedback controls bringing them back on course after each shock that
displaces them. After we fail to predict the blizzard, snow plows still clear the
streets. Although the presence of uncertainty does not make intelligent choice
impossible, it places a premium on robust adaptive procedures instead of
optimizing strategies that work well only when finely tuned to precisely known
environments.13
13. A remarkable paper by Kenneth Arrow, reprinted in The New Palgrave: A Dictionary of Economics (London: Macmillan Press, 1987), v. 2, pp. 69-74, under the title of "Economic Theory and the Hypothesis of Rationality," shows that to preserve the Pareto optimality properties of markets when there is uncertainty about the future, we must impose information and computational requirements on economic actors that are exceedingly burdensome and unrealistic.
Expectations
A system can generally be steered more accurately if it uses feed forward, based
on prediction of the future, in combination with feedback, to correct the errors of
the past. However, forming expectations to deal with uncertainty creates its own
problems. Feed forward can have unfortunate destabilizing effects, for a system
can overreact to its predictions and go into unstable oscillations. Feed forward in
markets can become especially destabilizing when each actor tries to anticipate
the actions of the others (and hence their expectations).
The standard economic example of destabilizing expectations is the speculative
bubble. Bubbles that ultimately burst are observed periodically in the world's
markets (the Tulip Craze being one of many well-known historical examples).
Moreover, bubbles and their bursts have now been observed in experimental
markets, the overbidding occurring even though subjects know that the market
must again fall to a certain level on a specified and not too distant date.
Of course not all speculation blows bubbles. Under many circumstances market
speculation stabilizes the system, causing its fluctuations to become smaller, for
the speculator attempts to notice when particular prices are above or below their
"normal" or equilibrium levels in order to sell or buy, respectively. Such actions
push the prices closer to equilibrium.
Sometimes, however, a rising price creates the expectation that it will go higher
yet, hence induces buying rather than selling. There ensues a game of economic
"chicken," all the players assuming that they can get out just before the crash
occurs. There is general consensus in economics that destabilizing expectations
play an important role in monetary hyperinflation and in the business cycle.
There is less consensus as to whose expectations are the first movers in the chain
of reactions or what to do about it.
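The contrast between the two kinds of speculation can be mimicked in a few lines; the price-impact rule and its coefficients are invented for the illustration:

    def simulate(k, steps=12):
        normal, prices = 100.0, [100.0, 104.0]   # a small initial disturbance
        for _ in range(steps):
            p_prev, p = prices[-2], prices[-1]
            reverting = normal - p               # buy low, sell high
            extrapolating = k * (p - p_prev)     # buy because it is rising
            prices.append(p + 0.5 * reverting + extrapolating)
        return prices

    print(simulate(k=0.2)[-1])   # weak trend-following: price settles near 100
    print(simulate(k=2.0)[-1])   # strong trend-following: oscillations grow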
The difficulties raised by mutual expectations appear wherever markets are not
perfectly competitive. In perfect competition, each firm assumes that market
prices cannot be affected by its actions: prices are as much a part of the
external environment as are the laws of the physical world.
But in the world of imperfectly competitive markets, firms need not make this
assumption. If, for example, there are only a few firms in an industry, each may
try to outguess its competitors. If more than one plays this game, even the
definition of rationality comes into question.
The Theory of Games
A century and a half ago, Augustin Cournot undertook to construct a theory of
rational choice in markets involving two firms.14
He assumed that each firm,
with limited cleverness, formed an expectation of its competitor's reaction to its
actions, but that each carried the analysis only one move deep. But what if one of
the firms, or both, tries to take into account the reactions to the reactions? They
may be led into an infinite regress of outguessing.
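Cournot's one-move-deep reasoning is easy to render as an iteration; the linear demand and cost parameters are invented for the example:

    # Inverse demand p = a - b*(q1 + q2); both firms have unit cost c.
    a, b, c = 100.0, 1.0, 10.0

    def best_response(q_other):
        # Profit-maximizing output, taking the rival's output as fixed.
        return (a - c - b * q_other) / (2 * b)

    q1 = q2 = 0.0
    for _ in range(30):              # each firm reacts to the other's last move
        q1 = best_response(q2)
        q2 = best_response(q1)
    print(q1, q2)                    # both converge to (a - c) / (3 * b) = 30.0

Here the regress happens to converge; the difficulty Cournot evaded is that nothing guarantees it will, once each firm starts modeling the other's deliberations.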
A major step toward a clearer formulation of the problem was taken a century
later, in 1944, when von Neumann and Morgenstern published The Theory of
Games and Economic Behavior.15
But far from solving the problem, the theory of
games demonstrated how intractable a task it is to prescribe optimally rational
action in a multiperson situation where interests are opposed.
The difficulty of defining rationality exhibits itself well in the so-called
Prisoners' Dilemma game.16
In the Prisoners' Dilemma, each player has a choice
between two moves, one cooperative and one aggressive. If both choose the
cooperative move, both receive a moderate reward. If one chooses the
cooperative move, but the other the aggressive move, the co-operator is
penalized severely while the aggressor receives a larger reward. If both choose
the aggressive move, both receive lesser penalties. There is no obvious rational
strategy. Each player will gain from cooperation if and only if the partner does
not aggress, but each will gain even more from aggression if he can count on the
partner to cooperate. Treachery pays, unless it is met with treachery. The
mutually beneficial strategy is unstable.
14.
Researches into the Mathematical Principles of the Theory of Wealth (New York: Augustus M.
Kelley, 1960), first published in 1838.
15.
Princeton: Princeton University Press, 1944.
16.
R. D. Luce and H. Raiffa, Games and Decisions (New York: Wiley, 1957), pp. 94-102; R. M. Axelrod, The Evolution of Cooperation (New York: Basic Books, 1984).
Are matters improved by playing the game repetitively? Even in this case,
cleverly timed treachery pays off, inducing instability in attempts at cooperation.
However, in actual experiments with the game, it turns out that cooperative
behavior occurs quite frequently, and that a tit-for-tat strategy (behave
cooperatively until the other player aggresses; then aggress once but return to
cooperation if the other player also does) almost always yields higher rewards
than other strategies. Roy Radner has shown (personal communication) that if
players are striving for a satisfactory payoff rather than an optimal payoff, the
cooperative solution can be stable. Bounded rationality appears to produce better
outcomes than unbounded rationality in this kind of competitive situation.
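A small simulation displays the effect; the payoff numbers below are the conventional ones for the game, chosen here purely for illustration:

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(own, other):
        return other[-1] if other else "C"   # cooperate first, then mirror

    def always_defect(own, other):
        return "D"

    def play(p1, p2, rounds=100):
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(h1, h2), p2(h2, h1)
            r1, r2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2)
            s1 += r1; s2 += r2
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): stable cooperation
    print(play(tit_for_tat, always_defect))  # (99, 104): treachery gains little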
The Prisoners' Dilemma game, which has obvious real-world analogies in both
politics and business, is only one of an unlimited number of games that illustrate
the paradoxes of rationality wherever the goals of the different actors conflict
totally or partially. Classical economics avoided these paradoxes by focusing
upon the two situations (monopoly and perfect competition) where mutual
expectations play no role.
Market institutions are workable (but not optimal) well beyond that range of
situations precisely because the limits on human abilities to compute possible
scenarios of complex interaction prevent an infinite regress of mutual
outguessing. Game theory's most valuable contribution has been to show that
rationality is effectively undefinable when competitive actors have unlimited
computational capabilities for outguessing each other, but that the problem does
not arise as acutely in a world, like the real world, of bounded rationality.
Rational Expectations
A different view from the one just expressed was for a time popular in
economics: that the problem of mutual outguessing should be solved by
assuming that economic actors form their expectations "rationally."17 This is
interpreted to mean that the actors know (and agree on) the laws that govern the
economic system and that their predictions of the future are unbiased estimates of the equilibrium defined by these laws. These assumptions rule out most possibilities that speculation will be destabilizing.

17. The idea and the phrase "rational expectations" originated with J. F. Muth, "Rational Expectations and the Theory of Price Movements," Econometrica, 29(1961):315-335. The notion was picked up, developed, and applied systematically to macroeconomics by R. E. Lucas, Jr., E. C. Prescott, T. J. Sargent, and others.
Although the assumptions underlying rational expectations are empirical
assumptions, almost no empirical evidence supports them, nor is it obvious in
what sense they are "rational" (i.e., utility maximizing). Business firms,
investors, or consumers do not possess even a fraction of the knowledge or the
computational ability required for carrying out the rational expectations strategy.
To do so, they would have to share a model of the economy and be able to
compute its equilibrium.
Today, most rational expectationists are retreating to more realistic schemes of
"adaptive expectations," in which actors gradually learn about their environments
from the unfolding of events around them.18
But most approaches to adaptive
expectations give up the idea of outguessing the market, and instead assume that
the environment is a slowly changing "given" whose path will not be
significantly affected by the decisions of any one actor.
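The core of most such schemes is a one-line error-correcting update; the smoothing weight here is arbitrary:

    def update(expected, observed, weight=0.3):
        # Revise the expectation part of the way toward what actually happened.
        return expected + weight * (observed - expected)

    expected = 100.0
    for observed in [102, 105, 103, 108, 110]:   # hypothetical realized prices
        expected = update(expected, observed)
    print(round(expected, 2))                    # 105.78: drifts toward recent experience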
In sum, our present understanding of the dynamics of real economic systems is
grossly deficient. We are especially lacking in empirical information about how
economic actors, with their bounded rationality, form expectations about the
future and how they use such expectations in planning their own behavior.
Economics could do worse than to return to the empirical methods proposed (and
practiced) by George Katona for studying expectation formation,19
and to an
important extent, the current interest in experimental economics represents such
a return. In the face of the current gaps in our empirical knowledge there is little
empirical basis for choosing among the competing models currently proposed by
economics to account for business cycles, and consequently, little rational basis
for choosing among the competing policy recommendations that flow from those
models.
18.
T. J. Sargent, Bounded Rationality in Macroeconomics (Oxford: Clarendon Press, 1993). Note that
Sargent even borrows the label of "bounded rationality" for his version of adaptive expectations, but,
regrettably, does not borrow the empirical methods of direct observation and experimentation that
would have to accompany it in order to validate the particular behavioral assumptions he makes.
19.
G. Katona, Psychological Analysis of Economic Behavior (New York: McGraw-Hill, 1951).
MacrosystemExosystemMicrosystemIndividualChild.docxMacrosystemExosystemMicrosystemIndividualChild.docx
MacrosystemExosystemMicrosystemIndividualChild.docxwkyra78
 
Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...
Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...
Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...Lauren Davis
 
Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...
Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...
Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...Donna Baun
 
Essay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.com
Essay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.comEssay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.com
Essay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.comAshley Champs
 
Cultural Anthropology Essay.pdf
Cultural Anthropology Essay.pdfCultural Anthropology Essay.pdf
Cultural Anthropology Essay.pdfDana French
 
Notational systems and cognitive evolution
Notational systems and cognitive evolutionNotational systems and cognitive evolution
Notational systems and cognitive evolutionJeff Long
 

Similar to Simon, Herbert A. (1969). The Science Of The Artificial. (20)

Argumentation in Artificial Intelligence.pdf
Argumentation in Artificial Intelligence.pdfArgumentation in Artificial Intelligence.pdf
Argumentation in Artificial Intelligence.pdf
 
Syllabus
SyllabusSyllabus
Syllabus
 
Escobar2017 cultural dynamics
Escobar2017 cultural dynamicsEscobar2017 cultural dynamics
Escobar2017 cultural dynamics
 
Philosophyactivity
PhilosophyactivityPhilosophyactivity
Philosophyactivity
 
John Hassard - Sociology and Organization Theory Positivism, Paradigms and Po...
John Hassard - Sociology and Organization Theory Positivism, Paradigms and Po...John Hassard - Sociology and Organization Theory Positivism, Paradigms and Po...
John Hassard - Sociology and Organization Theory Positivism, Paradigms and Po...
 
Big questions come in bundles
Big questions come in bundlesBig questions come in bundles
Big questions come in bundles
 
The laboratoryandthemarketinee bookchapter10pdf_merged
The laboratoryandthemarketinee bookchapter10pdf_mergedThe laboratoryandthemarketinee bookchapter10pdf_merged
The laboratoryandthemarketinee bookchapter10pdf_merged
 
3.4.1 taxonomìa de beer
3.4.1 taxonomìa de beer3.4.1 taxonomìa de beer
3.4.1 taxonomìa de beer
 
Module 3 -Critical and Conspiracy Theories (Contemporary Philosophies).pdf
Module 3 -Critical and Conspiracy Theories (Contemporary Philosophies).pdfModule 3 -Critical and Conspiracy Theories (Contemporary Philosophies).pdf
Module 3 -Critical and Conspiracy Theories (Contemporary Philosophies).pdf
 
Essay On Gun Control. essay examples: Gun Control Essays
Essay On Gun Control. essay examples: Gun Control EssaysEssay On Gun Control. essay examples: Gun Control Essays
Essay On Gun Control. essay examples: Gun Control Essays
 
Behavioral Therapy Critique
Behavioral Therapy CritiqueBehavioral Therapy Critique
Behavioral Therapy Critique
 
Mickey,Mouse,90Jane,Doe,50Minnie,Mouse,95Donald,Duck,80.docx
Mickey,Mouse,90Jane,Doe,50Minnie,Mouse,95Donald,Duck,80.docxMickey,Mouse,90Jane,Doe,50Minnie,Mouse,95Donald,Duck,80.docx
Mickey,Mouse,90Jane,Doe,50Minnie,Mouse,95Donald,Duck,80.docx
 
Interviews about STS interventions (iSTS)
Interviews about STS interventions (iSTS)Interviews about STS interventions (iSTS)
Interviews about STS interventions (iSTS)
 
MacrosystemExosystemMicrosystemIndividualChild.docx
MacrosystemExosystemMicrosystemIndividualChild.docxMacrosystemExosystemMicrosystemIndividualChild.docx
MacrosystemExosystemMicrosystemIndividualChild.docx
 
MacrosystemExosystemMicrosystemIndividualChild.docx
MacrosystemExosystemMicrosystemIndividualChild.docxMacrosystemExosystemMicrosystemIndividualChild.docx
MacrosystemExosystemMicrosystemIndividualChild.docx
 
Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...
Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...
Environmental Essay Contest. Town Launches Environmental Poster Contest for 4...
 
Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...
Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...
Environmental Essay Contest.pdfEnvironmental Essay Contest. National Environm...
 
Essay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.com
Essay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.comEssay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.com
Essay On Trifles. Trifles: Gender and Mrs. Hale - reportspdf819.web.fc2.com
 
Cultural Anthropology Essay.pdf
Cultural Anthropology Essay.pdfCultural Anthropology Essay.pdf
Cultural Anthropology Essay.pdf
 
Notational systems and cognitive evolution
Notational systems and cognitive evolutionNotational systems and cognitive evolution
Notational systems and cognitive evolution
 

More from Robert Louis Stevenson

Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...
Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...
Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...Robert Louis Stevenson
 
Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.
Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.
Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.Robert Louis Stevenson
 
Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.
Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.
Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.Robert Louis Stevenson
 
Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.
Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.
Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.Robert Louis Stevenson
 
Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...
Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...
Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...Robert Louis Stevenson
 
Gill, Eric (1931). An Essay On Typography.
Gill, Eric (1931). An Essay On Typography.Gill, Eric (1931). An Essay On Typography.
Gill, Eric (1931). An Essay On Typography.Robert Louis Stevenson
 
Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...
Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...
Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...Robert Louis Stevenson
 
De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...
De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...
De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...Robert Louis Stevenson
 
Birren, Faber (1956). Selling with Color.
Birren, Faber (1956). Selling with Color.Birren, Faber (1956). Selling with Color.
Birren, Faber (1956). Selling with Color.Robert Louis Stevenson
 

More from Robert Louis Stevenson (9)

Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...
Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...
Gladwell, Malcolm (2000). The Tipping Point. How Little Things can make a Big...
 
Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.
Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.
Coats, Emmar (2012). The 22 Rules of Storytelling as by Pixar.
 
Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.
Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.
Papert, Seymour (1980). MINDSTORMS. Children, Computers and Powerful Ideas.
 
Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.
Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.
Mijksenaar, Paul (1997). Visual Function. An Introduction To Information Design.
 
Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...
Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...
Koberg, Don And Bagnall, Jim (1971). The Universal Traveler. A Soft-systems G...
 
Gill, Eric (1931). An Essay On Typography.
Gill, Eric (1931). An Essay On Typography.Gill, Eric (1931). An Essay On Typography.
Gill, Eric (1931). An Essay On Typography.
 
Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...
Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...
Gerstner, Karl (1964). Designing Programmes. Instead Of Solutions For Problem...
 
De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...
De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...
De Bono, Edward (1995). Serious Creativity. Using The Power Of Lateral Thinki...
 
Birren, Faber (1956). Selling with Color.
Birren, Faber (1956). Selling with Color.Birren, Faber (1956). Selling with Color.
Birren, Faber (1956). Selling with Color.
 

Recently uploaded

Call In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCR
Call In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCRCall In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCR
Call In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCRdollysharma2066
 
Design Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William VickeryDesign Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William VickeryWilliamVickery6
 
西北大学毕业证学位证成绩单-怎么样办伪造
西北大学毕业证学位证成绩单-怎么样办伪造西北大学毕业证学位证成绩单-怎么样办伪造
西北大学毕业证学位证成绩单-怎么样办伪造kbdhl05e
 
办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一
办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一
办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一Fi L
 
Revit Understanding Reference Planes and Reference lines in Revit for Family ...
Revit Understanding Reference Planes and Reference lines in Revit for Family ...Revit Understanding Reference Planes and Reference lines in Revit for Family ...
Revit Understanding Reference Planes and Reference lines in Revit for Family ...Narsimha murthy
 
办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一
办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一
办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一A SSS
 
NATA 2024 SYLLABUS, full syllabus explained in detail
NATA 2024 SYLLABUS, full syllabus explained in detailNATA 2024 SYLLABUS, full syllabus explained in detail
NATA 2024 SYLLABUS, full syllabus explained in detailDesigntroIntroducing
 
Call Girls Meghani Nagar 7397865700 Independent Call Girls
Call Girls Meghani Nagar 7397865700  Independent Call GirlsCall Girls Meghani Nagar 7397865700  Independent Call Girls
Call Girls Meghani Nagar 7397865700 Independent Call Girlsssuser7cb4ff
 
Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)
Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)
Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)jennyeacort
 
定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一
定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一
定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一lvtagr7
 
在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证
在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证
在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证nhjeo1gg
 
Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025Rndexperts
 
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130Suhani Kapoor
 
shot list for my tv series two steps back
shot list for my tv series two steps backshot list for my tv series two steps back
shot list for my tv series two steps back17lcow074
 
Untitled presedddddddddddddddddntation (1).pptx
Untitled presedddddddddddddddddntation (1).pptxUntitled presedddddddddddddddddntation (1).pptx
Untitled presedddddddddddddddddntation (1).pptxmapanig881
 
Passbook project document_april_21__.pdf
Passbook project document_april_21__.pdfPassbook project document_april_21__.pdf
Passbook project document_april_21__.pdfvaibhavkanaujia
 
Call Girls Satellite 7397865700 Ridhima Hire Me Full Night
Call Girls Satellite 7397865700 Ridhima Hire Me Full NightCall Girls Satellite 7397865700 Ridhima Hire Me Full Night
Call Girls Satellite 7397865700 Ridhima Hire Me Full Nightssuser7cb4ff
 
8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR
8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR
8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCRdollysharma2066
 

Recently uploaded (20)

Call In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCR
Call In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCRCall In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCR
Call In girls Bhikaji Cama Place 🔝 ⇛8377877756 FULL Enjoy Delhi NCR
 
Design Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William VickeryDesign Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William Vickery
 
西北大学毕业证学位证成绩单-怎么样办伪造
西北大学毕业证学位证成绩单-怎么样办伪造西北大学毕业证学位证成绩单-怎么样办伪造
西北大学毕业证学位证成绩单-怎么样办伪造
 
办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一
办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一
办理学位证(TheAuckland证书)新西兰奥克兰大学毕业证成绩单原版一比一
 
Revit Understanding Reference Planes and Reference lines in Revit for Family ...
Revit Understanding Reference Planes and Reference lines in Revit for Family ...Revit Understanding Reference Planes and Reference lines in Revit for Family ...
Revit Understanding Reference Planes and Reference lines in Revit for Family ...
 
办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一
办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一
办理学位证(NTU证书)新加坡南洋理工大学毕业证成绩单原版一比一
 
NATA 2024 SYLLABUS, full syllabus explained in detail
NATA 2024 SYLLABUS, full syllabus explained in detailNATA 2024 SYLLABUS, full syllabus explained in detail
NATA 2024 SYLLABUS, full syllabus explained in detail
 
Call Girls Meghani Nagar 7397865700 Independent Call Girls
Call Girls Meghani Nagar 7397865700  Independent Call GirlsCall Girls Meghani Nagar 7397865700  Independent Call Girls
Call Girls Meghani Nagar 7397865700 Independent Call Girls
 
Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)
Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)
Call Us ✡️97111⇛47426⇛Call In girls Vasant Vihar༒(Delhi)
 
定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一
定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一
定制(RMIT毕业证书)澳洲墨尔本皇家理工大学毕业证成绩单原版一比一
 
在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证
在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证
在线办理ohio毕业证俄亥俄大学毕业证成绩单留信学历认证
 
Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025
 
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
 
Call Girls in Pratap Nagar, 9953056974 Escort Service
Call Girls in Pratap Nagar,  9953056974 Escort ServiceCall Girls in Pratap Nagar,  9953056974 Escort Service
Call Girls in Pratap Nagar, 9953056974 Escort Service
 
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
 
shot list for my tv series two steps back
shot list for my tv series two steps backshot list for my tv series two steps back
shot list for my tv series two steps back
 
Untitled presedddddddddddddddddntation (1).pptx
Untitled presedddddddddddddddddntation (1).pptxUntitled presedddddddddddddddddntation (1).pptx
Untitled presedddddddddddddddddntation (1).pptx
 
Passbook project document_april_21__.pdf
Passbook project document_april_21__.pdfPassbook project document_april_21__.pdf
Passbook project document_april_21__.pdf
 
Call Girls Satellite 7397865700 Ridhima Hire Me Full Night
Call Girls Satellite 7397865700 Ridhima Hire Me Full NightCall Girls Satellite 7397865700 Ridhima Hire Me Full Night
Call Girls Satellite 7397865700 Ridhima Hire Me Full Night
 
8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR
8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR
8377877756 Full Enjoy @24/7 Call Girls in Nirman Vihar Delhi NCR
 

Simon, Herbert A. (1969). The Science Of The Artificial.

  • 1.
  • 2. The Sciences of the Artificial Third edition Herbert A. Simon title : The Sciences of the Artificial author : Simon, Herbert Alexander. publisher : MIT Press isbn10 | asin : 0262193744 print isbn13 : 9780262193740 ebook isbn13 : 9780585360102 language : English subject Science--Philosophy. publication date : 1996 lcc : Q175.S564 1996eb ddc : 300.1/1 subject : Science--Philosophy.
  • 3. Page iv © 1996 Massachusetts Institute of Technology All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. This book was set in Sabon by Graphic Composition, Inc. Printed and bound in the United States of America. Library of Congress Cataloging-in-Publication Data
  • 4. Page v Simon, Herbert Alexander, 1916 The sciences of the artificial / Herbert A. Simon.3rd ed. p. cm. Includes bibliographical references and index. ISBN 0-262-19374-4 (alk. paper).ISBN 0-262-69191-4 (pbk.: alk. paper) 1. Science Philosophy. I. Title. Q175.S564 1996 300.1'1dc20 96-12633 CIP
  • 5. Page vi To Allen Newell in memory of a friendship
  • 6. Page vii Contents Preface to Third Edition ix Preface to Second Edition xi 1 Understanding the Natural and Artificial Worlds 1 2 Economic Rationality: Adaptive Artifice 25 3 The Psychology of Thinking: Embedding Artifice in Nature 51 4 Remembering and Learning: Memory As Environment for Thought 85 5 The Science of Design: Creating the Artificial 111 6 Social Planning: Designing the Evolving Artifact 139 7 Alternative Views of Complexity 169 8 The Architecture of Complexity: Hierarchic Systems 183 Name Index 217 Subject Index 221
  • 7. Page ix Preface to Third Edition As the Earth has made more than 5,000 rotations since The Sciences of the Artificial was last revised, in 1981, it is time to ask what changes in our understanding of the world call for changes in the text. Of particular relevance is the recent vigorous eruption of interest in complexity and complex systems. In the previous editions of this book I commented only briefly on the relation between general ideas about complexity and the particular hierarchic form of complexity with which the book is chiefly concerned. I now introduce a new chapter to remedy this deficit. It will appear that the devotees of complexity (among whom I count myself) are a rather motley crew, not at all unified in our views on reductionism. Various among us favor quite different tools for analyzing complexity and speak nowadays of "chaos," "adaptive systems," and "genetic algorithms." In the new chapter 7, "Alternative Views of Complexity'' ("The Architecture of Complexity" having become chapter 8), I sort out these themes and draw out the implications of artificiality and hierarchy for complexity. Most of the remaining changes in this third edition aim at updating the text. In particular, I have taken account of important advances that have been made since 1981 in cognitive psychology (chapters 3 and 4) and the science of design (chapters 5 and 6). It is gratifying that continuing rapid progress in both of these domains has called for numerous new references that record the advances, while at the same time confirm and extend the book's basic theses about the artificial sciences. Changes in emphases in chapter 2 reflect progress in my thinking about the respective roles of organizations and markets in economic systems.
Page x

Allen Newell, but now, alas, to his memory. His final book, Unified Theories of Cognition, provides a powerful agenda for advancing our understanding of intelligent systems.

I am grateful to my assistant, Janet Hilf, both for protecting the time I have needed to carry out this revision and for assisting in innumerable ways in getting the manuscript ready for publication. At the MIT Press, Deborah Cantor-Adams applied a discerning editorial pencil to the manuscript and made communication with the Press a pleasant part of the process. To her, also, I am very grateful.

In addition to those others whose help, counsel, and friendship I acknowledged in the preface to the earlier editions, I want to single out some colleagues whose ideas have been especially relevant to the new themes treated here. These include Anders Ericsson, with whom I explored the theory and practice of protocol analysis; Pat Langley, Gary Bradshaw, and Jan Zytkow, my co-investigators of the processes of scientific discovery; Yuichiro Anzai, Fernand Gobet, Yumi Iwasaki, Deepak Kulkarni, Jill Larkin, Jean-Louis Le Moigne, Anthony Leonardo, Yulin Qin, Howard Richman, Weimin Shen, Jim Staszewski, Hermina Tabachneck, Guojung Zhang, and Xinming Zhu. In truth, I don't know where to end the list or how to avoid serious gaps in it, so I will simply express my deep thanks to all of my friends and collaborators, both the mentioned and the unmentioned.

In the first chapter I propose that the goal of science is to make the wonderful and the complex understandable and simple, but not less wonderful. I will be pleased if readers find that I have achieved a bit of that in this third edition of The Sciences of the Artificial.

HERBERT A. SIMON
PITTSBURGH, PENNSYLVANIA
JANUARY 1, 1996
Page xi

Preface to Second Edition

This work takes the shape of fugues, whose subject and countersubject were first uttered in lectures on the opposite sides of a continent and the two ends of a decade but are now woven together as the alternating chapters of the whole. The invitation to deliver the Karl Taylor Compton lectures at the Massachusetts Institute of Technology in the spring of 1968 provided me with a welcome opportunity to make explicit and to develop at some length a thesis that has been central to much of my research, at first in organization theory, later in economics and management science, and most recently in psychology. In 1980 another invitation, this one to deliver the H. Rowan Gaither lectures at the University of California, Berkeley, permitted me to amend and expand this thesis and to apply it to several additional fields.

The thesis is that certain phenomena are "artificial" in a very specific sense: they are as they are only because of a system's being moulded, by goals or purposes, to the environment in which it lives. If natural phenomena have an air of "necessity" about them in their subservience to natural law, artificial phenomena have an air of "contingency" in their malleability by environment.

The contingency of artificial phenomena has always created doubts as to whether they fall properly within the compass of science. Sometimes these doubts refer to the goal-directed character of artificial systems and the consequent difficulty of disentangling prescription from description. This seems to me not to be the real difficulty. The genuine problem is to show how empirical propositions can be made at all about systems that, given different circumstances, might be quite other than they are.
Page xii

Almost as soon as I began research on administrative organizations, some forty years ago, I encountered the problem of artificiality in almost its pure form:

    . . . administration is not unlike play-acting. The task of the good actor is to know and play his role, although different roles may differ greatly in content. The effectiveness of the performance will depend on the effectiveness of the play and the effectiveness with which it is played. The effectiveness of the administrative process will vary with the effectiveness of the organization and the effectiveness with which its members play their parts. [Administrative Behavior, p. 252]

How then could one construct a theory of administration that would contain more than the normative rules of good acting? In particular, how could one construct an empirical theory?

My writing on administration, particularly in Administrative Behavior and part IV of Models of Man, has sought to answer those questions by showing that the empirical content of the phenomena, the necessity that rises above the contingencies, stems from the inabilities of the behavioral system to adapt perfectly to its environment, from the limits of rationality, as I have called them.

As research took me into other areas, it became evident that the problem of artificiality was not peculiar to administration and organizations but that it infected a far wider range of subjects. Economics, since it postulated rationality in economic man, made him the supremely skillful actor, whose behavior could reveal something of the requirements the environment placed on him but nothing about his own cognitive makeup. But the difficulty must then extend beyond economics into all those parts of psychology concerned with rational behavior: thinking, problem solving, learning.

Finally, I thought I began to see in the problem of artificiality an explanation of the difficulty that has been experienced in filling engineering and other professions with empirical and theoretical substance distinct from the substance of their supporting sciences. Engineering, medicine, business, architecture, and painting are concerned not with the necessary but with the contingent, not with how things are but with how they might be, in short, with design. The possibility of creating a science or sciences of design is exactly as great as the possibility of creating any science of the artificial. The two possibilities stand or fall together.

These essays then attempt to explain how a science of the artificial is possible and to illustrate its nature. I have taken as my main examples the
Page xiii

fields of economics (chapter 2), the psychology of cognition (chapters 3 and 4), and planning and engineering design (chapters 5 and 6). Since Karl Compton was a distinguished engineering educator as well as a distinguished scientist, I thought it not inappropriate to apply my conclusions about design to the question of reconstructing the engineering curriculum (chapter 5). Similarly Rowan Gaither's strong interest in the uses of systems analysis in public policy formation is reflected especially in chapter 6.

The reader will discover in the course of the discussion that artificiality is interesting principally when it concerns complex systems that live in complex environments. The topics of artificiality and complexity are inextricably interwoven. For this reason I have included in this volume (chapter 8) an earlier essay, "The Architecture of Complexity," which develops at length some ideas about complexity that I could touch on only briefly in my lectures. The essay appeared originally in the December 1962 Proceedings of the American Philosophical Society.

I have tried to acknowledge some specific debts to others in footnotes at appropriate points in the text. I owe a much more general debt to Allen Newell, whose partner I have been in a very large part of my work for more than two decades and to whom I have dedicated this volume. If there are parts of my thesis with which he disagrees, they are probably wrong; but he cannot evade a major share of responsibility for the rest.

Many ideas, particularly in the third and fourth chapters, had their origins in work that my late colleague, Lee W. Gregg, and I did together; and other colleagues, as well as numerous present and former graduate students, have left their fingerprints on various pages of the text. Among the latter I want to mention specifically L. Stephen Coles, Edward A. Feigenbaum, John Grason, Pat Langley, Robert K. Lindsay, David Neves, Ross Quillian, Laurent Siklóssy, Donald S. Williams, and Thomas G. Williams, whose work is particularly relevant to the topics discussed here. Previous versions of chapter 8 incorporated valuable suggestions and data contributed by George W. Corner, Richard H. Meier, John R. Platt, Andrew Schoene, Warren Weaver, and William Wise.

A large part of the psychological research reported in this book was supported by the Public Health Service Research Grant MH-07722 from the National Institute of Mental Health, and some of the research on
Page xiv

design reported in the fifth and sixth chapters, by the Advanced Research Projects Agency of the Office of the Secretary of Defense (SD-146). These grants, as well as support from the Carnegie Corporation, the Ford Foundation, and the Alfred P. Sloan Foundation, have enabled us at Carnegie-Mellon to pursue for over two decades a many-pronged exploration aimed at deepening our understanding of artificial phenomena.

Finally, I am grateful to the Massachusetts Institute of Technology and to the University of California, Berkeley, for the opportunity to prepare and present these lectures and for the occasion to become better acquainted with the research in the sciences of the artificial going forward on these two stimulating campuses. I want to thank both institutions also for agreeing to the publication of these lectures in this unified form. The Compton lectures comprise chapters 1, 3, and 5, and the Gaither lectures, chapters 2, 4, and 6.

Since the first edition of this book (The MIT Press, 1969) has been well received, I have limited the changes in chapters 1, 3, 5, and 8 to the correction of blatant errors, the updating of a few facts, and the addition of some transitional paragraphs.
Page 1

1
Understanding the Natural and the Artificial Worlds

About three centuries after Newton we are thoroughly familiar with the concept of natural science, most unequivocally with physical and biological science. A natural science is a body of knowledge about some class of things (objects or phenomena) in the world: about the characteristics and properties that they have; about how they behave and interact with each other.

The central task of a natural science is to make the wonderful commonplace: to show that complexity, correctly viewed, is only a mask for simplicity; to find pattern hidden in apparent chaos.

The early Dutch physicist Simon Stevin showed by an elegant drawing (figure 1) that the law of the inclined plane follows in "self-evident fashion" from the impossibility of perpetual motion, for experience and reason tell us that the chain of balls in the figure would rotate neither to right nor to left but would remain at rest. (Since rotation changes nothing in the figure, if the chain moved at all, it would move perpetually.) Since the pendant part of the chain hangs symmetrically, we can snip it off without disturbing the equilibrium. But now the balls on the long side of the plane balance those on the shorter, steeper side, and their relative numbers are in inverse ratio to the sines of the angles at which the planes are inclined. Stevin was so pleased with his construction that he incorporated it into a vignette, inscribing above it "Wonder, en is gheen wonder", that is to say: "Wonderful, but not incomprehensible."

This is the task of natural science: to show that the wonderful is not incomprehensible, to show how it can be comprehended but not to destroy wonder. For when we have explained the wonderful, unmasked the hidden pattern, a new wonder arises at how complexity was woven out of simplicity. The aesthetics of natural science and mathematics is at one with the aesthetics of music and painting; both inhere in the discovery of a partially concealed pattern.
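Stevin's argument can be restated in modern notation (a reconstruction for the reader's convenience, not Stevin's own formulation). Let the two planes have lengths L_1 and L_2 and a common height h, and let a uniform chain rest on them, so that the weight on each side is proportional to its length:

    \sin\theta_1 = \frac{h}{L_1}, \qquad
    \sin\theta_2 = \frac{h}{L_2}, \qquad
    \frac{W_1}{W_2} = \frac{L_1}{L_2} .

The component of each weight along its plane is then W_i \sin\theta_i = \lambda L_i (h / L_i) = \lambda h, where \lambda is the chain's weight per unit length. The two components are equal, so the chain is in equilibrium, and

    \frac{W_1}{W_2} = \frac{L_1}{L_2} = \frac{\sin\theta_2}{\sin\theta_1} ,

the inverse ratio of the sines of the angles of inclination, exactly as the text states.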
Page 2

Figure 1. The vignette devised by Simon Stevin to illustrate his derivation of the law of the inclined plane.

The world we live in today is much more a man-made,[1] or artificial, world than it is a natural world. Almost every element in our environment shows evidence of human artifice. The temperature in which we spend most of our hours is kept artificially at 20 degrees Celsius; the humidity is added to or taken from the air we breathe; and the impurities we inhale are largely produced (and filtered) by man.

Moreover, for most of us (the white-collared ones) the significant part of the environment consists mostly of strings of artifacts called "symbols" that we receive through eyes and ears in the form of written and spoken language and that we pour out into the environment, as I am now doing, by mouth or hand. The laws that govern these strings of

[1] I will occasionally use "man" as an androgynous noun, encompassing both sexes, and "he," "his," and "him" as androgynous pronouns including women and men equally in their scope.
Page 3

symbols, the laws that govern the occasions on which we emit and receive them, the determinants of their content are all consequences of our collective artifice.

One may object that I exaggerate the artificiality of our world. Man must obey the law of gravity as surely as does a stone, and as a living organism man must depend for food, and in many other ways, on the world of biological phenomena. I shall plead guilty to overstatement, while protesting that the exaggeration is slight. To say that an astronaut, or even an airplane pilot, is obeying the law of gravity, hence is a perfectly natural phenomenon, is true, but its truth calls for some sophistication in what we mean by "obeying" a natural law. Aristotle did not think it natural for heavy things to rise or light ones to fall (Physics, Book IV); but presumably we have a deeper understanding of "natural" than he did.

So too we must be careful about equating "biological" with "natural." A forest may be a phenomenon of nature; a farm certainly is not. The very species upon which we depend for our food, our corn and our cattle, are artifacts of our ingenuity. A plowed field is no more part of nature than an asphalted street, and no less.

These examples set the terms of our problem, for those things we call artifacts are not apart from nature. They have no dispensation to ignore or violate natural law. At the same time they are adapted to human goals and purposes. They are what they are in order to satisfy our desire to fly or to eat well. As our aims change, so too do our artifacts, and vice versa.

If science is to encompass these objects and phenomena in which human purpose as well as natural law are embodied, it must have means for relating these two disparate components. The character of these means and their implications for certain areas of knowledge, economics, psychology, and design in particular, are the central concern of this book.

The Artificial

Natural science is knowledge about natural objects and phenomena. We ask whether there cannot also be "artificial" science, knowledge about artificial objects and phenomena. Unfortunately the term "artificial" has a pejorative air about it that we must dispel before we can proceed.
Page 4

My dictionary defines "artificial" as "Produced by art rather than by nature; not genuine or natural; affected; not pertaining to the essence of the matter." It proposes as synonyms: affected, factitious, manufactured, pretended, sham, simulated, spurious, trumped up, unnatural. As antonyms it lists: actual, genuine, honest, natural, real, truthful, unaffected. Our language seems to reflect man's deep distrust of his own products. I shall not try to assess the validity of that evaluation or explore its possible psychological roots. But you will have to understand me as using "artificial" in as neutral a sense as possible, as meaning man-made as opposed to natural.[2]

In some contexts we make a distinction between "artificial" and "synthetic." For example, a gem made of glass colored to resemble sapphire would be called artificial, while a man-made gem chemically indistinguishable from sapphire would be called synthetic. A similar distinction is often made between "artificial" and "synthetic" rubber. Thus some artificial things are imitations of things in nature, and the imitation may use either the same basic materials as those in the natural object or quite different materials.

As soon as we introduce "synthesis" as well as "artifice," we enter the realm of engineering. For "synthetic" is often used in the broader sense of "designed" or "composed." We speak of engineering as concerned with "synthesis," while science is concerned with "analysis." Synthetic or artificial objects, and more specifically prospective artificial objects having desired properties, are the central objective of engineering activity and skill. The engineer, and more generally the designer, is concerned with how things ought to be, how they ought to be in order to attain goals,

[2] I shall disclaim responsibility for this particular choice of terms. The phrase "artificial intelligence," which led me to it, was coined, I think, right on the Charles River, at MIT. Our own research group at Rand and Carnegie Mellon University has preferred phrases like "complex information processing" and "simulation of cognitive processes." But then we run into new terminological difficulties, for the dictionary also says that "to simulate" means "to assume or have the mere appearance or form of, without the reality; imitate; counterfeit; pretend." At any rate, "artificial intelligence" seems to be here to stay, and it may prove easier to cleanse the phrase than to dispense with it. In time it will become sufficiently idiomatic that it will no longer be the target of cheap rhetoric.
Page 5

and to function. Hence a science of the artificial will be closely akin to a science of engineering, but very different, as we shall see in my fifth chapter, from what goes currently by the name of "engineering science."

With goals and "oughts" we also introduce into the picture the dichotomy between normative and descriptive. Natural science has found a way to exclude the normative and to concern itself solely with how things are. Can or should we maintain this exclusion when we move from natural to artificial phenomena, from analysis to synthesis?[3]

We have now identified four indicia that distinguish the artificial from the natural; hence we can set the boundaries for sciences of the artificial:

1. Artificial things are synthesized (though not always or usually with full forethought) by human beings.
2. Artificial things may imitate appearances in natural things while lacking, in one or many respects, the reality of the latter.
3. Artificial things can be characterized in terms of functions, goals, adaptation.
4. Artificial things are often discussed, particularly when they are being designed, in terms of imperatives as well as descriptives.

The Environment As Mold

Let us look a little more closely at the functional or purposeful aspect of artificial things. Fulfillment of purpose or adaptation to a goal involves a relation among three terms: the purpose or goal, the character of the artifact, and the environment in which the artifact performs. When we think of a clock, for example, in terms of purpose we may use the child's definition: "a clock is to tell time." When we focus our attention on the clock itself, we may describe it in terms of arrangements of gears and the

[3] This issue will also be discussed at length in my fifth chapter. In order not to keep readers in suspense, I may say that I hold to the pristine empiricist's position of the irreducibility of "ought" to "is," as in chapter 3 of my Administrative Behavior (New York: Macmillan, 1976). This position is entirely consistent with treating natural or artificial goal-seeking systems as phenomena, without commitment to their goals. Ibid., appendix. See also the well-known paper by A. Rosenbluth, N. Wiener, and J. Bigelow, "Behavior, Purpose, and Teleology," Philosophy of Science, 10 (1943): 18-24.
Page 6

application of the forces of springs or gravity operating on a weight or pendulum. But we may also consider clocks in relation to the environment in which they are to be used. Sundials perform as clocks in sunny climates; they are more useful in Phoenix than in Boston, and of no use at all during the Arctic winter. Devising a clock that would tell time on a rolling and pitching ship, with sufficient accuracy to determine longitude, was one of the great adventures of eighteenth-century science and technology. To perform in this difficult environment, the clock had to be endowed with many delicate properties, some of them largely or totally irrelevant to the performance of a landlubber's clock.

Natural science impinges on an artifact through two of the three terms of the relation that characterizes it: the structure of the artifact itself and the environment in which it performs. Whether a clock will in fact tell time depends on its internal construction and where it is placed. Whether a knife will cut depends on the material of its blade and the hardness of the substance to which it is applied.

The Artifact As "Interface"

We can view the matter quite symmetrically. An artifact can be thought of as a meeting point, an "interface" in today's terms, between an "inner" environment, the substance and organization of the artifact itself, and an "outer" environment, the surroundings in which it operates. If the inner environment is appropriate to the outer environment, or vice versa, the artifact will serve its intended purpose. Thus, if the clock is immune to buffeting, it will serve as a ship's chronometer. (And conversely, if it isn't, we may salvage it by mounting it on the mantel at home.)

Notice that this way of viewing artifacts applies equally well to many things that are not man-made, to all things in fact that can be regarded as adapted to some situation; and in particular it applies to the living systems that have evolved through the forces of organic evolution. A theory of the airplane draws on natural science for an explanation of its inner environment (the power plant, for example), its outer environment (the character of the atmosphere at different altitudes), and the relation between its inner and outer environments (the movement of an air foil
Page 7

through a gas). But a theory of the bird can be divided up in exactly the same way.[4]

Given an airplane, or given a bird, we can analyze them by the methods of natural science without any particular attention to purpose or adaptation, without reference to the interface between what I have called the inner and outer environments. After all, their behavior is governed by natural law just as fully as the behavior of anything else (or at least we all believe this about the airplane, and most of us believe it about the bird).

Functional Explanation

On the other hand, if the division between inner and outer environment is not necessary to the analysis of an airplane or a bird, it turns out at least to be highly convenient. There are several reasons for this, which will become evident from examples.

Many animals in the Arctic have white fur. We usually explain this by saying that white is the best color for the Arctic environment, for white creatures escape detection more easily than do others. This is not of course a natural science explanation; it is an explanation by reference to purpose or function. It simply says that these are the kinds of creatures that will "work," that is, survive, in this kind of environment. To turn the statement into an explanation, we must add to it a notion of natural selection, or some equivalent mechanism.

An important fact about this kind of explanation is that it demands an understanding mainly of the outer environment. Looking at our snowy surroundings, we can predict the predominant color of the creatures we are likely to encounter; we need know little about the biology of the creatures themselves, beyond the facts that they are often mutually hostile, use visual clues to guide their behavior, and are adaptive (through selection or some other mechanism).

[4] A generalization of the argument made here for the separability of "outer" from "inner" environment shows that we should expect to find this separability, to a greater or lesser degree, in all large and complex systems, whether they are artificial or natural. In its generalized form it is an argument that all nature will be organized in "levels." My essay "The Architecture of Complexity," included in this volume as chapter 8, develops the more general argument in some detail.
Page 8

Analogous to the role played by natural selection in evolutionary biology is the role played by rationality in the sciences of human behavior. If we know of a business organization only that it is a profit-maximizing system, we can often predict how its behavior will change if we change its environment, how it will alter its prices if a sales tax is levied on its products. We can sometimes make this prediction, and economists do make it repeatedly, without detailed assumptions about the adaptive mechanism, the decision-making apparatus that constitutes the inner environment of the business firm.

Thus the first advantage of dividing outer from inner environment in studying an adaptive or artificial system is that we can often predict behavior from knowledge of the system's goals and its outer environment, with only minimal assumptions about the inner environment. An instant corollary is that we often find quite different inner environments accomplishing identical or similar goals in identical or similar outer environments: airplanes and birds, dolphins and tuna fish, weight-driven clocks and battery-driven clocks, electrical relays and transistors.

There is often a corresponding advantage in the division from the standpoint of the inner environment. In very many cases whether a particular system will achieve a particular goal or adaptation depends on only a few characteristics of the outer environment and not at all on the detail of that environment. Biologists are familiar with this property of adaptive systems under the label of homeostasis. It is an important property of most good designs, whether biological or artifactual. In one way or another the designer insulates the inner system from the environment, so that an invariant relation is maintained between inner system and goal, independent of variations over a wide range in most parameters that characterize the outer environment. The ship's chronometer reacts to the pitching of the ship only in the negative sense of maintaining an invariant relation of the hands on its dial to the real time, independently of the ship's motions. Quasi independence from the outer environment may be maintained by various forms of passive insulation, by reactive negative feedback (the most frequently discussed form of insulation), by predictive adaptation, or by various combinations of these.
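The sales-tax prediction above can be made concrete by a standard textbook calculation (my illustration, not Simon's; linear demand and a constant unit cost are assumptions of the example). A firm facing demand p = a - bq, unit cost c, and a per-unit tax t maximizes

    \pi(q) = (a - bq)\,q - (c + t)\,q, \qquad
    \frac{d\pi}{dq} = a - 2bq - (c + t) = 0
    \;\Rightarrow\; q^* = \frac{a - c - t}{2b}, \qquad
    p^* = a - bq^* = \frac{a + c + t}{2}, \qquad
    \frac{dp^*}{dt} = \frac{1}{2} .

Nothing in the derivation refers to the firm's internal decision apparatus. Under these assumptions, exactly half the tax is passed on into price, a prediction obtained from the goal and the outer environment alone.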
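Likewise, the reactive negative feedback mentioned above can be sketched in a few lines of code (a minimal sketch of my own; the thermostat framing, gains, and numbers are assumptions for illustration, not an example from the text):

    import random

    def thermostat_step(inner_temp, target, outer_temp, gain=0.5, leak=0.2):
        """One tick of a negative-feedback loop: the controller pushes the
        inner temperature toward the target, while the outer environment
        leaks heat in or out."""
        error = target - inner_temp                      # deviation from the goal
        control = gain * error                           # corrective action
        disturbance = leak * (outer_temp - inner_temp)   # outer environment intrudes
        return inner_temp + control + disturbance

    # The inner system holds a roughly invariant relation to its goal even
    # as the outer environment varies over a wide range.
    temp = 20.0
    for hour in range(48):
        outside = 10.0 + 15.0 * random.random()          # fluctuating outer world
        temp = thermostat_step(temp, target=20.0, outer_temp=outside)
    print(round(temp, 1))  # stays in the neighborhood of 20.0

The design point is the one in the text: the invariance comes from the feedback relation, not from knowing the outer environment in detail.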
Page 9

Functional Description and Synthesis

In the best of all possible worlds, at least for a designer, we might even hope to combine the two sets of advantages we have described that derive from factoring an adaptive system into goals, outer environment, and inner environment. We might hope to be able to characterize the main properties of the system and its behavior without elaborating the detail of either the outer or inner environments. We might look toward a science of the artificial that would depend on the relative simplicity of the interface as its primary source of abstraction and generality.

Consider the design of a physical device to serve as a counter. If we want the device to be able to count up to one thousand, say, it must be capable of assuming any one of at least a thousand states, of maintaining itself in any given state, and of shifting from any state to the "next" state. There are dozens of different inner environments that might be used (and have been used) for such a device. A wheel notched at each twenty minutes of arc, and with a ratchet device to turn and hold it, would do the trick. So would a string of ten electrical switches properly connected to represent binary numbers. Today instead of switches we are likely to use transistors or other solid-state devices.[5]

Our counter would be activated by some kind of pulse, mechanical or electrical, as appropriate, from the outer environment. But by building an appropriate transducer between the two environments, the physical character of the interior pulse could again be made independent of the physical character of the exterior pulse: the counter could be made to count anything.

[5] The theory of functional equivalence of computing machines has had considerable development in recent years. See Marvin L. Minsky, Computation: Finite and Infinite Machines (Englewood Cliffs, N.J.: Prentice-Hall, 1967), chapters 1-4.
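The functional equivalence of the counter's possible inner environments can be made vivid in code (a sketch of my own devising, not from the text): two implementations with entirely different "substance," indistinguishable at the interface.

    class NotchedWheel:
        """Inner environment 1: a wheel with 1,080 notches (one per twenty
        minutes of arc); the count is the number of ratchet advances."""
        def __init__(self):
            self.position = 0
        def pulse(self):
            self.position = (self.position + 1) % 1080
        def reading(self):
            return self.position

    class BinarySwitches:
        """Inner environment 2: ten on/off switches read as a binary number
        (2**10 = 1,024 states, enough to count to one thousand)."""
        def __init__(self):
            self.switches = [False] * 10
        def pulse(self):
            for i in range(10):              # ripple-carry increment
                self.switches[i] = not self.switches[i]
                if self.switches[i]:         # no carry needed; done
                    break
        def reading(self):
            return sum(1 << i for i, s in enumerate(self.switches) if s)

    # At the interface the two devices behave identically: each pulse from
    # the outer environment advances the count by one.
    for counter in (NotchedWheel(), BinarySwitches()):
        for _ in range(742):
            counter.pulse()
        print(counter.reading())   # 742 in both cases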
Page 10

Description of an artifice in terms of its organization and functioning, its interface between inner and outer environments, is a major objective of invention and design activity. Engineers will find familiar the language of the following claim quoted from a 1919 patent on an improved motor controller:

    What I claim as new and desire to secure by Letters Patent is: 1. In a motor controller, in combination, reversing means, normally effective field-weakening means and means associated with said reversing means for rendering said field-weakening means ineffective during motor starting and thereafter effective to different degrees determinable by the setting of said reversing means . . .[6]

Apart from the fact that we know the invention relates to control of an electric motor, there is almost no reference here to specific, concrete objects or phenomena. There is reference rather to "reversing means" and "field-weakening means," whose further purpose is made clear in a paragraph preceding the patent claims:

    The advantages of the special type of motor illustrated and the control thereof will be readily understood by those skilled in the art. Among such advantages may be mentioned the provision of a high starting torque and the provision for quick reversals of the motor.[7]

Now let us suppose that the motor in question is incorporated in a planing machine (see figure 2). The inventor describes its behavior thus:

    Referring now to [figure 2], the controller is illustrated in outline connection with a planer (100) operated by a motor M, the controller being adapted to govern the motor M and to be automatically operated by the reciprocating bed (101) of the planer. The master shaft of the controller is provided with a lever (102) connected by a link (103) to a lever (104) mounted upon the planer frame and projecting into the path of lugs (105) and (106) on the planer bed. As will be understood, the arrangement is such that reverse movements of the planer bed will, through the connections described, throw the master shaft of the controller back and forth between its extreme positions and in consequence effect selective operation of the reversing switches (1) and (2) and automatic operation of the other switches in the manner above set forth.[8]

In this manner the properties with which the inner environment has been endowed are placed at the service of the goals in the context of the outer environment. The motor will reverse periodically under the control of the position of the planer bed. The "shape" of its behavior, the time path, say, of a variable associated with the motor, will be a function of the "shape" of the external environment, the distance, in this case, between the lugs on the planer bed.

The device we have just described illustrates in microcosm the nature of artifacts. Central to their description are the goals that link the inner to the outer system.

[6] U.S. Patent 1,307,836, granted to Arthur Simon, June 24, 1919.
[7] Ibid.
[8] Ibid.
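The point that the shape of the behavior follows the shape of the environment can be seen in a toy simulation (my own, and deliberately schematic; the patent's controller is electromechanical, and the function and parameter names here are invented for illustration):

    def planer_run(lug_left, lug_right, speed=1, steps=60):
        """Toy model of the patent's arrangement: the bed reciprocates,
        and hitting a lug throws the (simulated) master shaft, reversing
        the motor. The motor's time path is shaped by the lug spacing."""
        position, direction = lug_left, +1
        trace = []
        for _ in range(steps):
            position += direction * speed
            if position >= lug_right or position <= lug_left:
                direction = -direction       # lug trips the reversing switch
            trace.append(direction)
        return trace

    # Move the lugs closer together and the reversal period shrinks: the
    # behavior takes on the shape of the task environment.
    print(planer_run(0, 20)[:30])
    print(planer_run(0, 5)[:30])

Nothing about the motor's internal construction appears in the trace; only the lug spacing does, which is exactly the sense in which the outer environment molds the behavior.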
Page 11

Figure 2: Illustrations from a patent for a motor controller

to the outer system. The inner system is an organization of natural phenomena capable of attaining the goals in some range of environments, but ordinarily there will be many functionally equivalent natural systems capable of doing this.

The outer environment determines the conditions for goal attainment. If the inner system is properly designed, it will be adapted to the outer environment, so that its behavior will be determined in large part by the
Page 12

behavior of the latter, exactly as in the case of "economic man." To predict how it will behave, we need only ask, "How would a rationally designed system behave under these circumstances?" The behavior takes on the shape of the task environment.9

Limits of Adaptation

But matters must be just a little more complicated than this account suggests. "If wishes were horses, all beggars would ride." And if we could always specify a protean inner system that would take on exactly the shape of the task environment, designing would be synonymous with wishing. "Means for scratching diamonds" defines a design objective, an objective that might be attained with the use of many different substances. But the design has not been achieved until we have discovered at least one realizable inner system obeying the ordinary natural laws: one material, in this case, hard enough to scratch diamonds.

Often we shall have to be satisfied with meeting the design objectives only approximately. Then the properties of the inner system will "show through." That is, the behavior of the system will only partly respond to the task environment; partly, it will respond to the limiting properties of the inner system.

Thus the motor controls described earlier are aimed at providing for "quick" reversal of the motor. But the motor must obey electromagnetic and mechanical laws, and we could easily confront the system with a task where the environment called for quicker reversal than the motor was capable of. In a benign environment we would learn from the motor only what it had been called upon to do; in a taxing environment we would learn something about its internal structure, specifically about those aspects of the internal structure that were chiefly instrumental in limiting performance.10

9. On the crucial role of adaptation or rationality and their limits for economics and organization theory, see the introduction to part IV, "Rationality and Administrative Decision Making," of my Models of Man (New York: Wiley, 1957); pp. 38-41, 80-81, and 240-244 of Administrative Behavior; and chapter 2 of this book.

10. Compare the corresponding proposition on the design of administrative organizations: "Rationality, then, does not determine behavior. Within the area of rationality behavior is perfectly flexible and adaptable to abilities, goals, and knowledge. Instead, behavior is determined by the irrational and non-rational elements that bound the area of rationality . . . administrative theory must be concerned with the limits of rationality, and the manner in which organization affects these limits for the person making a decision." Administrative Behavior, p. 241. For a discussion of the same issue as it arises in psychology, see my "Cognitive Architectures and Rational Analysis: Comment," in Kurt VanLehn (ed.), Architectures for Intelligence (Hillsdale, NJ: Erlbaum, 1991).
Page 13

A bridge, under its usual conditions of service, behaves simply as a relatively smooth level surface on which vehicles can move. Only when it has been overloaded do we learn the physical properties of the materials from which it is built.

Understanding by Simulating

Artificiality connotes perceptual similarity but essential difference, resemblance from without rather than within. In the terms of the previous section we may say that the artificial object imitates the real by turning the same face to the outer system, by adapting, relative to the same goals, to comparable ranges of external tasks. Imitation is possible because distinct physical systems can be organized to exhibit nearly identical behavior. The damped spring and the damped circuit obey the same second-order linear differential equation; hence we may use either one to imitate the other.

Techniques of Simulation

Because of its abstract character and its symbol-manipulating generality, the digital computer has greatly extended the range of systems whose behavior can be imitated. Generally we now call the imitation "simulation," and we try to understand the imitated system by testing the simulation in a variety of simulated, or imitated, environments.

Simulation, as a technique for achieving understanding and predicting the behavior of systems, predates of course the digital computer. The model basin and the wind tunnel are valued means for studying the behavior of large systems by modeling them in the small, and it is quite certain that Ohm's law was suggested to its discoverer by its analogy with simple hydraulic phenomena.
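The shared equation of the damped spring and the damped circuit mentioned above can be written out explicitly. The correspondence of constants given here is the standard one from physics texts, not spelled out in the passage itself:

```latex
\begin{aligned}
m\,\ddot{x} + c\,\dot{x} + k\,x &= 0 && \text{(damped spring)}\\
L\,\ddot{q} + R\,\dot{q} + \tfrac{1}{C}\,q &= 0 && \text{(damped circuit)}
\end{aligned}
```

Identifying $m \leftrightarrow L$, $c \leftrightarrow R$, $k \leftrightarrow 1/C$, and displacement $x$ with charge $q$ turns either system, suitably scaled, into an analog simulation of the other.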
Page 14

Simulation may even take the form of a thought experiment, never actually implemented dynamically. One of my vivid memories of the Great Depression is of a large multicolored chart in my father's study that represented a hydraulic model of an economic system (with different fluids for money and goods). The chart was devised by a technocratically inclined engineer named Dahlberg. The model never got beyond the pen-and-paint stage at that time, but it could be used to trace through the imputed consequences of particular economic measures or events, provided the theory was right!11

As my formal education in economics progressed, I acquired a disdain for that naive simulation, only to discover after World War II that a distinguished economist, Professor A. W. Phillips, had actually built the Moniac, a hydraulic model that simulated a Keynesian economy.12 Of course Professor Phillips's simulation incorporated a more nearly correct theory than the earlier one and was actually constructed and operated: two points in its favor. However, the Moniac, while useful as a teaching tool, told us nothing that could not be extracted readily from simple mathematical versions of Keynesian theory and was soon priced out of the market by the growing number of computer simulations of the economy.

Simulation As a Source of New Knowledge

This brings me to the crucial question about simulation: How can a simulation ever tell us anything that we do not already know? The usual implication of the question is that it can't. As a matter of fact, there is an interesting parallelism, which I shall exploit presently, between two assertions about computers and simulation that one hears frequently:

1. A simulation is no better than the assumptions built into it.
2. A computer can do only what it is programmed to do.

I shall not deny either assertion, for both seem to me to be true. But despite both assertions simulation can tell us things we do not already know.

11. For some published versions of this model, see A. O. Dahlberg, National Income Visualized (N.Y.: Columbia University Press, 1956).
12. A. W. Phillips, "Mechanical Models in Economic Dynamics," Economica, New Series, 17(1950):283-305.
Page 15

There are two related ways in which simulation can provide new knowledge: one of them obvious, the other perhaps a bit subtle.

The obvious point is that, even when we have correct premises, it may be very difficult to discover what they imply. All correct reasoning is a grand system of tautologies, but only God can make direct use of that fact. The rest of us must painstakingly and fallibly tease out the consequences of our assumptions.

Thus we might expect simulation to be a powerful technique for deriving, from our knowledge of the mechanisms governing the behavior of gases, a theory of the weather and a means of weather prediction. Indeed, as many people are aware, attempts have been under way for some years to apply this technique. Greatly oversimplified, the idea is that we already know the correct basic assumptions, the local atmospheric equations, but we need the computer to work out the implications of the interactions of vast numbers of variables starting from complicated initial conditions. This is simply an extrapolation to the scale of modern computers of the idea we use when we solve two simultaneous equations by algebra.

This approach to simulation has numerous applications to engineering design. For it is typical of many kinds of design problems that the inner system consists of components whose fundamental laws of behavior (mechanical, electrical, or chemical) are well known. The difficulty of the design problem often resides in predicting how an assemblage of such components will behave.
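A minimal sketch of this kind of deduction-by-simulation, in Python. The Lorenz convection equations (a drastically simplified model of local atmospheric dynamics, not mentioned in the text) are completely known premises, yet tracing their consequences by hand is hopeless; stepping them forward numerically is exactly the "working out of implications" described above. The step size and initial conditions are illustrative assumptions.

```python
# Known local equations (the premises), integrated by crude Euler steps
# to work out their implications over time (the conclusions).
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (1.0, 1.0, 1.0)  # assumed initial conditions
for _ in range(5000):    # 50 simulated time units
    state = lorenz_step(*state)
print(state)  # a consequence of the premises we could not foresee unaided
```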
Page 16

Simulation of Poorly Understood Systems

The more interesting and subtle question is whether simulation can be of any help to us when we do not know very much initially about the natural laws that govern the behavior of the inner system. Let me show why this question must also be answered in the affirmative.

First, I shall make a preliminary comment that simplifies matters: we are seldom interested in explaining or predicting phenomena in all their particularity; we are usually interested only in a few properties abstracted from the complex reality. Thus, a NASA-launched satellite is surely an artificial object, but we usually do not think of it as "simulating" the moon or a planet. It simply obeys the same laws of physics, which relate only to its inertial and gravitational mass, abstracted from most of its other properties. It is a moon. Similarly electric energy that entered my house from the early atomic generating station at Shippingport did not "simulate" energy generated by means of a coal plant or a windmill. Maxwell's equations hold for both.

The more we are willing to abstract from the detail of a set of phenomena, the easier it becomes to simulate the phenomena. Moreover we do not have to know, or guess at, all the internal structure of the system but only that part of it that is crucial to the abstraction.

It is fortunate that this is so, for if it were not, the top-down strategy that built the natural sciences over the past three centuries would have been infeasible. We knew a great deal about the gross physical and chemical behavior of matter before we had a knowledge of molecules, a great deal about molecular chemistry before we had an atomic theory, and a great deal about atoms before we had any theory of elementary particles, if indeed we have such a theory today. This skyhook-skyscraper construction of science from the roof down to the yet unconstructed foundations was possible because the behavior of the system at each level depended on only a very approximate, simplified, abstracted characterization of the system at the level next beneath.13 This is lucky, else the safety of bridges and airplanes might depend on the correctness of the "Eightfold Way" of looking at elementary particles.

Artificial systems and adaptive systems have properties that make them particularly susceptible to simulation via simplified models. The characterization of such systems in the previous section of this chapter

13. This point is developed more fully in "The Architecture of Complexity," chapter 8 in this volume. More than fifty years ago, Bertrand Russell made the same point about the architecture of mathematics. See the "Preface" to Principia Mathematica: ". . . the chief reason in favour of any theory on the principles of mathematics must always be inductive, i.e., it must lie in the fact that the theory in question enables us to deduce ordinary mathematics. In mathematics, the greatest degree of self-evidence is usually not to be found quite at the beginning, but at some later point; hence the early deductions, until they reach this point, give reasons rather for believing the premises because true consequences follow from them, than for believing the consequences because they follow from the premises." Contemporary preferences for deductive formalisms frequently blind us to this important fact, which is no less true today than it was in 1910.
Page 17

explains why. Resemblance in behavior of systems without identity of the inner systems is particularly feasible if the aspects in which we are interested arise out of the organization of the parts, independently of all but a few properties of the individual components.

Thus for many purposes we may be interested in only such characteristics of a material as its tensile and compressive strength. We may be profoundly unconcerned about its chemical properties, or even whether it is wood or iron.

The motor control patent cited earlier illustrates this abstraction to organizational properties. The invention consisted of a "combination" of "reversing means," of "field-weakening means," that is to say, of components specified in terms of their functioning in the organized whole. How many ways are there of reversing a motor, or of weakening its field strength? We can simulate the system described in the patent claims in many ways without reproducing even approximately the actual physical device that is depicted. With a small additional step of abstraction, the patent claims could be restated to encompass mechanical as well as electrical devices. I suppose that any undergraduate engineer at Berkeley, Carnegie Mellon University, or MIT could design a mechanical system embodying reversibility and variable starting torque so as to simulate the system of the patent.

The Computer As Artifact

No artifact devised by man is so convenient for this kind of functional description as a digital computer. It is truly protean, for almost the only ones of its properties that are detectable in its behavior (when it is operating properly!) are the organizational properties. The speed with which it performs its basic operations may allow us to infer a little about its physical components and their natural laws; speed data, for example, would allow us to rule out certain kinds of "slow" components. For the rest, almost no interesting statement that one can make about an operating computer bears any particular relation to the specific nature of the hardware. A computer is an organization of elementary functional components in which, to a high approximation, only the function
Page 18

performed by those components is relevant to the behavior of the whole system.14

Computers As Abstract Objects

This highly abstractive quality of computers makes it easy to introduce mathematics into the study of their theory and has led some to the erroneous conclusion that, as a computer science emerges, it will necessarily be a mathematical rather than an empirical science. Let me take up these two points in turn: the relevance of mathematics to computers and the possibility of studying computers empirically.

Some important theorizing, initiated by John von Neumann, has been done on the topic of computer reliability. The question is how to build a reliable system from unreliable parts. Notice that this is not posed as a question of physics or physical engineering. The components engineer is assumed to have done his best, but the parts are still unreliable! We can cope with the unreliability only by our manner of organizing them.

To turn this into a meaningful problem, we have to say a little more about the nature of the unreliable parts. Here we are aided by the knowledge that any computer can be assembled out of a small array of simple, basic elements. For instance, we may take as our primitives the so-called Pitts-McCulloch neurons. As their name implies, these components were devised in analogy to the supposed anatomical and functional characteristics of neurons in the brain, but they are highly abstracted. They are formally isomorphic with the simplest kinds of switching circuits: "and," "or," and "not" circuits. We postulate, now, that we are to build a system from such elements and that each elementary part has a specified probability of functioning correctly. The problem is to arrange the elements and their interconnections in such a way that the complete system will perform reliably.

The important point for our present discussion is that the parts could as well be neurons as relays, as well relays as transistors. The natural laws governing relays are very well known, while the natural laws governing neurons are known most imperfectly.

14. On the subject of this and the following paragraphs, see M. L. Minsky, op. cit.; and John von Neumann, "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," in C. E. Shannon and J. McCarthy (eds.), Automata Studies (Princeton: Princeton University Press, 1956).
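Von Neumann's actual construction (multiplexed "majority organs") is more elaborate than anything that fits here, but the simplest case already shows how organization alone can buy reliability. A minimal sketch in Python, with an assumed per-component reliability of 0.9: replicate a component an odd number of times and let a strict majority determine the system's output.

```python
from math import comb

def majority_reliability(p, n):
    """Probability that a strict majority of n independent components,
    each functioning correctly with probability p, function correctly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.9  # assumed reliability of a single unreliable part
for n in (1, 3, 5, 9):
    print(n, majority_reliability(p, n))
# The system's reliability climbs with redundancy even though no part
# improves: 0.9 for one component, 0.972 for three, 0.99144 for five.
```

Nothing in the calculation depends on whether the parts are neurons, relays, or transistors, only on their specified unreliability and their interconnection, which is just the point of the passage above.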
Page 19

But that does not matter, for all that is relevant for the theory is that the components have the specified level of unreliability and be interconnected in the specified way.

This example shows that the possibility of building a mathematical theory of a system or of simulating that system does not depend on having an adequate micro theory of the natural laws that govern the system components. Such a micro theory might indeed be simply irrelevant.

Computers As Empirical Objects

We turn next to the feasibility of an empirical science of computers, as distinct from the solid-state physics or physiology of their componentry.15

As a matter of empirical fact almost all of the computers that have been designed have certain common organizational features. They almost all can be decomposed into an active processor (Babbage's "Mill") and a memory (Babbage's "Store") in combination with input and output devices. (Some of the larger systems, somewhat in the manner of colonial algae, are assemblages of smaller systems having some or all of these components. But perhaps I may oversimplify for the moment.) They are all capable of storing symbols (program) that can be interpreted by a program-control component and executed. Almost all have exceedingly limited capacity for simultaneous, parallel activity; they are basically one-thing-at-a-time systems. Symbols generally have to be moved from the larger memory components into the central processor before they can be acted upon. The systems are capable of only simple basic actions: recoding symbols, storing symbols, copying symbols, moving symbols, erasing symbols, and comparing symbols.

Since there are now many such devices in the world, and since the properties that describe them also appear to be shared by the human central nervous system, nothing prevents us from developing a natural history of them. We can study them as we would rabbits or chipmunks and discover how they behave under different patterns of environmental stimulation. Insofar as their behavior reflects largely the broad functional

15. A. Newell and H. A. Simon, "Computer Science as Empirical Inquiry," Communications of the ACM, 19(March 1976):113-126. See also H. A. Simon, "Artificial Intelligence: An Empirical Science," Artificial Intelligence, 77(1995):95-127.
Page 20

characteristics we have described, and is independent of details of their hardware, we can build a general but empirical theory of them.

The research that was done to design computer time-sharing systems is a good example of the study of computer behavior as an empirical phenomenon. Only fragments of theory were available to guide the design of a time-sharing system or to predict how a system of a specified design would actually behave in an environment of users who placed their several demands upon it. Most actual designs turned out initially to exhibit serious deficiencies, and most predictions of performance were startlingly inaccurate.

Under these circumstances the main route open to the development and improvement of time-sharing systems was to build them and see how they behaved. And this is what was done. They were built, modified, and improved in successive stages. Perhaps theory could have anticipated these experiments and made them unnecessary. In fact it didn't, and I don't know anyone intimately acquainted with these exceedingly complex systems who has very specific ideas as to how it might have done so. To understand them, the systems had to be constructed, and their behavior observed.16

In a similar vein computer programs designed to play games or to discover proofs for mathematical theorems spend their lives in exceedingly large and complex task environments. Even when the programs themselves are only moderately large and intricate (compared, say, with the monitor and operating systems of large computers), too little is known about their task environments to permit accurate prediction of how well they will perform, how selectively they will be able to search for problem solutions.

Here again theoretical analysis must be accompanied by large amounts of experimental work. A growing literature reporting these experiments is beginning to give us precise knowledge about the degree of heuristic power of particular heuristic devices in reducing the size of the problem spaces that must be searched. In theorem proving, for example, there has

16. The empirical, exploratory flavor of computer research is nicely captured by the account of Maurice V. Wilkes in his 1967 Turing Lecture, "Computers Then and Now," Journal of the Association for Computing Machinery, 15(January 1968):1-7.
Page 21

been a whole series of advances in heuristic power based on and guided by empirical exploration: the use of the Herbrand theorem, the resolution principle, the set-of-support principle, and so on.17

Computers and Thought

As we succeed in broadening and deepening our knowledge, theoretical and empirical, about computers, we discover that in large part their behavior is governed by simple general laws, that what appeared as complexity in the computer program was to a considerable extent complexity of the environment to which the program was seeking to adapt its behavior.

This relation of program to environment opened up an exceedingly important role for computer simulation as a tool for achieving a deeper understanding of human behavior. For if it is the organization of components, and not their physical properties, that largely determines behavior, and if computers are organized somewhat in the image of man, then the computer becomes an obvious device for exploring the consequences of alternative organizational assumptions for human behavior. Psychology could move forward without awaiting the solutions by neurology of the problems of component design, however interesting and significant these components turn out to be.

Symbol Systems: Rational Artifacts

The computer is a member of an important family of artifacts called symbol systems, or more explicitly, physical symbol systems.18 Another important member of the family (some of us think, anthropomorphically, it is the most important) is the human mind and brain. It is with this family

17. Note, for example, the empirical data in Lawrence Wos, George A. Robinson, Daniel F. Carson, and Leon Shalla, "The Concept of Demodulation in Theorem Proving," Journal of the Association for Computing Machinery, 14(October 1967):698-709, and in several of the earlier papers referenced there. See also the collection of programs in Edward Feigenbaum and Julian Feldman (eds.), Computers and Thought (New York: McGraw-Hill, 1963). It is common practice in the field to title papers about heuristic programs, "Experiments with an XYZ Program."

18. In the literature the phrase information-processing system is used more frequently than symbol system. I will use the two terms as synonyms.
Page 22

of artifacts, and particularly the human version of it, that we will be primarily concerned in this book. Symbol systems are almost the quintessential artifacts, for adaptivity to an environment is their whole raison d'être. They are goal-seeking, information-processing systems, usually enlisted in the service of the larger systems in which they are incorporated.

Basic Capabilities of Symbol Systems

A physical symbol system holds a set of entities, called symbols. These are physical patterns (e.g., chalk marks on a blackboard) that can occur as components of symbol structures (sometimes called "expressions"). As I have already pointed out in the case of computers, a symbol system also possesses a number of simple processes that operate upon symbol structures: processes that create, modify, copy, and destroy symbols. A physical symbol system is a machine that, as it moves through time, produces an evolving collection of symbol structures.19

Symbol structures can, and commonly do, serve as internal representations (e.g., "mental images") of the environments to which the symbol system is seeking to adapt. They allow it to model that environment with greater or less veridicality and in greater or less detail, and consequently to reason about it. Of course, for this capability to be of any use to the symbol system, it must have windows on the world and hands, too. It must have means for acquiring information from the external environment that can be encoded into internal symbols, as well as means for producing symbols that initiate action upon the environment. Thus it must use symbols to designate objects and relations and actions in the world external to the system.

Symbols may also designate processes that the symbol system can interpret and execute. Hence the programs that govern the behavior of a symbol system can be stored, along with other symbol structures, in the system's own memory, and executed when activated.

Symbol systems are called "physical" to remind the reader that they exist as real-world devices, fabricated of glass and metal (computers) or flesh and blood (brains). In the past we have been more accustomed to thinking of the symbol systems of mathematics and logic as abstract and disembodied, leaving out of account the paper and pencil and human minds that were required actually to bring them to life.

19. Newell and Simon, "Computer Science as Empirical Inquiry," p. 116.
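These basic capabilities are concrete enough to sketch in a few lines of Python. Everything here is illustrative rather than canonical: a memory of symbol structures, the simple processes the passage lists, and a stored "program" that is itself a symbol structure the system interprets and executes.

```python
# Memory holds named symbol structures (here, tuples of symbols).
memory = {}

def create(name, *symbols):
    memory[name] = tuple(symbols)

def copy(source, target):
    memory[target] = memory[source]

def modify(name, position, symbol):
    structure = list(memory[name])
    structure[position] = symbol
    memory[name] = tuple(structure)

def compare(a, b):
    return memory[a] == memory[b]

def erase(name):
    del memory[name]

def interpret(name):
    """Symbols can designate processes: a stored 'program' is a symbol
    structure whose elements the system interprets and executes."""
    operations = {"create": create, "copy": copy,
                  "modify": modify, "erase": erase}
    for op, *args in memory[name]:
        operations[op](*args)

create("program",
       ("create", "greeting", "hello", "world"),
       ("copy", "greeting", "reply"),
       ("modify", "reply", 1, "brain"))
interpret("program")
print(memory["reply"])  # ('hello', 'brain')
```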
Page 23

Computers have transported symbol systems from the platonic heaven of ideas to the empirical world of actual processes carried out by machines or brains, or by the two of them working together.

Intelligence As Computation

The three chapters that follow rest squarely on the hypothesis that intelligence is the work of symbol systems. Stated a little more formally, the hypothesis is that a physical symbol system of the sort I have just described has the necessary and sufficient means for general intelligent action.

The hypothesis is clearly an empirical one, to be judged true or false on the basis of evidence. One task of chapters 3 and 4 will be to review some of the evidence, which is of two basic kinds. On the one hand, by constructing computer programs that are demonstrably capable of intelligent action, we provide evidence on the sufficiency side of the hypothesis. On the other hand, by collecting experimental data on human thinking that tend to show that the human brain operates as a symbol system, we add plausibility to the claims for necessity, for such data imply that all known intelligent systems (brains and computers) are symbol systems.

Economics: Abstract Rationality

As prelude to our consideration of human intelligence as the work of a physical symbol system, chapter 2 introduces a heroic abstraction and idealization, the idealization of human rationality, which is enshrined in modern economic theories, particularly those called neoclassical. These theories are an idealization because they direct their attention primarily to the external environment of human thought, to decisions that are optimal for realizing the adaptive system's goals (maximization of utility or profit). They seek to define the decisions that would be substantively rational in the circumstances defined by the outer environment.

Economic theory's treatment of the limits of rationality imposed by the inner environment, by the characteristics of the physical symbol system, tends to be pragmatic, and sometimes even opportunistic. In the more formal treatments of general equilibrium and in the so-called "rational expectations" approach to adaptation, the possibilities that an information-processing system may have a very limited capability for
Page 24

adaptation are almost ignored. On the other hand, in discussions of the rationale for market mechanisms and in many theories of decision making under uncertainty, the procedural aspects of rationality receive more serious treatment. In chapter 2 we will see examples both of neglect for and concern with the limits of rationality.

From the idealizations of economics (and some criticisms of these idealizations) we will move, in chapters 3 and 4, to a more systematic study of the inner environment of thought, of thought processes as they actually occur within the constraints imposed by the parameters of a physical symbol system like the brain.
Page 25

2
Economic Rationality: Adaptive Artifice

Because scarcity is a central fact of life (land, money, fuel, time, attention, and many other things are scarce), it is a task of rationality to allocate scarce things. Performing that task is the focal concern of economics. Economics exhibits in purest form the artificial component in human behavior, in individual actors, business firms, markets, and the entire economy. The outer environment is defined by the behavior of other individuals, firms, markets, or economies. The inner environment is defined by an individual's, firm's, market's, or economy's goals and capabilities for rational, adaptive behavior. Economics illustrates well how outer and inner environment interact and, in particular, how an intelligent system's adjustment to its outer environment (its substantive rationality) is limited by its ability, through knowledge and computation, to discover appropriate adaptive behavior (its procedural rationality).

The Economic Actor

In the textbook theory of the business firm, an "entrepreneur" aims at maximizing profit, and in such simple circumstances that the computational ability to find the maximum is not in question. A cost curve relates dollar expenditures to amount of product manufactured, and a revenue curve relates income to amount of product sold. The goal (maximizing the difference between income and expenditure) fully defines the firm's inner environment. The cost and revenue curves define the outer environment.1 Elementary calculus shows how to find the profit-maximizing

1. I am drawing the line between outer and inner environment not at the firm's boundary but at the skin of the entrepreneur, so that the factory is part of the external technology; the brain, perhaps assisted by computers, is the internal.
Page 26

quantity by taking a derivative (rate at which profit changes with change in quantity) and setting it equal to zero. Here are all the elements of an artificial system adapting to an outer environment, subject only to the goal defined by the inner environment. In contrast to a situation where the adaptation process is itself problematic, we can predict the system's behavior without knowing how it actually computes the optimal output. We need consider only substantive rationality.2

We can interpret this bare-bones theory of the firm either positively (as describing how business firms behave) or normatively (as advising them how to maximize profits). It is widely taught in both senses in business schools and universities, just as if it described what goes on, or could go on, in the real world. Alas, the picture is far too simple to fit reality.

Procedural Rationality

The question of maximizing the difference between revenue and cost becomes interesting when, in more realistic circumstances, we ask how the firm actually goes about discovering that maximizing quantity. Cost accounting may estimate the approximate cost of producing any particular output, but how much can be sold at a specific price and how this amount varies with price (the elasticity of demand) usually can be guessed only roughly. When there is uncertainty (as there always is), prospects of profit must be balanced against risk, thereby changing profit maximization to the much more shadowy goal of maximizing a profit-vs.-risk "utility function" that is assumed to lurk somewhere in the recesses of the entrepreneur's mind.

But in real life the business firm must also choose product quality and the assortment of products it will manufacture. It often has to invent and design some of these products. It must schedule the factory to produce a profitable combination of them and devise marketing procedures and structures to sell them. So we proceed step by step from the simple caricature of the firm depicted in the textbooks to the complexities of real firms in the real world of business.

2. H. A. Simon, "Rationality as Process and as Product of Thought," American Economic Review, 68(1978):1-16.
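The textbook exercise is easy to make concrete. A minimal sketch in Python, with assumed linear demand and cost curves (the parameter values are illustrative, not from the text): profit is revenue minus cost, and setting its derivative to zero locates the maximizing quantity.

```python
# Assumed outer environment: inverse demand p(q) = a - b*q and
# cost C(q) = c*q + F.  All parameter values are illustrative.
a, b = 100.0, 2.0
c, F = 20.0, 50.0

def profit(q):
    return (a - b * q) * q - (c * q + F)

# d(profit)/dq = a - 2*b*q - c; setting the derivative to zero:
q_star = (a - c) / (2.0 * b)
print(q_star, profit(q_star))  # 20.0 units, profit 750.0

# A coarse numeric check that no nearby output does better:
best = max((q_star + d for d in (-2.0, -0.5, 0.0, 0.5, 2.0)), key=profit)
print(best)                    # 20.0
```

In this simple inner environment the computation is trivial; the chapter's point is precisely that real firms never face anything this clean.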
Page 27

At each step toward realism, the problem gradually changes from choosing the right course of action (substantive rationality) to finding a way of calculating, very approximately, where a good course of action lies (procedural rationality). With this shift, the theory of the firm becomes a theory of estimation under uncertainty and a theory of computation, decidedly non-trivial theories as the obscurities and complexities of information and computation increase.

Operations Research and Management Science

Today several branches of applied science assist the firm to achieve procedural rationality.3 One of them is operations research (OR); another is artificial intelligence (AI). OR provides algorithms for handling difficult multivariate decision problems, sometimes involving uncertainty. Linear programming, integer programming, queuing theory, and linear decision rules are examples of widely used OR procedures.

To permit computers to find optimal solutions with reasonable expenditures of effort when there are hundreds or thousands of variables, the powerful algorithms associated with OR impose a strong mathematical structure on the decision problem. Their power is bought at the cost of shaping and squeezing the real-world problem to fit their computational requirements: for example, replacing the real-world criterion function and constraints with linear approximations so that linear programming can be used. Of course the decision that is optimal for the simplified approximation will rarely be optimal in the real world, but experience shows that it will often be satisfactory. (A small example of the linear-programming approach follows below.)

The alternative methods provided by AI, most often in the form of heuristic search (selective search using rules of thumb), find decisions that are "good enough," that satisfice. The AI models, like OR models, also only approximate the real world, but usually with much more accuracy and detail than the OR models can admit. They can do this because heuristic search can be carried out in a more complex and less well-structured problem space than is required by OR maximizing tools.

3. For a brief survey of these developments, see H. A. Simon, "On How to Decide What to Do," The Bell Journal of Economics, 9(1978):494-507. For an estimate of their impact on management, see H. A. Simon, The New Science of Management Decision, rev. ed. (Englewood Cliffs, NJ: Prentice-Hall, 1977), chapters 2 and 4.
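A minimal sketch of the OR style just described, using SciPy's linear-programming routine. The product-mix numbers (profits per unit and capacity constraints) are invented for illustration; the point is only how the problem must be squeezed into linear form before the algorithm can optimize it.

```python
from scipy.optimize import linprog

# Maximize profit 40*x1 + 30*x2 over two products.  linprog minimizes,
# so the objective is negated; the constraints are the linear
# "squeezing" of the shop's capacities described above.
objective = [-40.0, -30.0]
hours_per_unit = [[2.0, 1.0],   # machine hours used by one unit of each
                  [1.0, 3.0]]   # labor hours used by one unit of each
hours_available = [100.0, 90.0]

result = linprog(objective, A_ub=hours_per_unit, b_ub=hours_available,
                 bounds=[(0.0, None), (0.0, None)])
print(result.x)     # optimal mix: 42 units of product 1, 16 of product 2
print(-result.fun)  # profit at the optimum: 2160.0
```

The optimum is exact, but only for the simplified linear world the model defines; whether it is even good in the real world is the trade-off the text goes on to weigh against AI's satisficing methods.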
Page 28

The price paid for working with the more realistic but less regular models is that AI methods generally find only satisfactory solutions, not optima. We must trade off satisficing in a nearly-realistic model (AI) against optimizing in a greatly simplified model (OR). Sometimes one will be preferred, sometimes the other.

AI methods can handle combinatorial problems (e.g., factory scheduling problems) that are beyond the capacities of OR methods, even with the largest computers. Heuristic methods provide an especially powerful problem-solving and decision-making tool for humans who are unassisted by any computer other than their own minds, hence must make radical simplifications to find even approximate solutions. AI methods also are not limited, as most OR methods are, to situations that can be expressed quantitatively. They extend to all situations that can be represented symbolically, that is, verbally, mathematically, or diagrammatically.

OR and AI have been applied mainly to business decisions at the middle levels of management. A vast range of top management decisions (e.g., strategic decisions about investment, R&D, specialization and diversification, recruitment, development, and retention of managerial talent) are still mostly handled traditionally, that is, by experienced executives' exercise of judgment. As we shall see in chapters 3 and 4, so-called "judgment" turns out to be mainly a non-numerical heuristic search that draws upon information stored in large expert memories. Today we have learned how to employ AI techniques in the form of so-called expert systems in a growing range of domains previously reserved for human expertise and judgment, for example, medical diagnosis and credit evaluation. Moreover, while classical OR tools could only choose among predefined alternatives, AI expert systems are now being extended to the generation of alternatives, that is, to problems of design. More will be said about these developments in chapters 5 and 6.

Satisficing and Aspiration Levels

What a person cannot do he or she will not do, no matter how strong the urge to do it. In the face of real-world complexity, the business firm turns to procedures that find good enough answers to questions whose best answers are unknowable. Because real-world optimization, with or without computers, is impossible, the real economic actor is in fact a satisficer, a person who accepts "good enough" alternatives, not because less is preferred to more but because there is no choice.

Page 29

Many economists, Milton Friedman being perhaps the most vocal, have argued that the gap between satisfactory and best is of no great importance, hence the unrealism of the assumption that the actors optimize does not matter; others, including myself, believe that it does matter, and matters a great deal.4 But reviewing this old argument would take me away from my main theme, which is to show how the behavior of an artificial system may be strongly influenced by the limits of its adaptive capacities: its knowledge and computational powers.

One requirement of optimization not shared by satisficing is that all alternatives must be measurable in terms of a common utility function. A large body of evidence shows that human choices are not consistent and transitive, as they would be if a utility function existed.5 But even in a satisficing theory we need some criteria of satisfaction. What realistic measures of human profit, pleasure, happiness and satisfaction can serve in place of the discredited utility function?

Research findings on the psychology of choice indicate some properties a thermometer of satisfaction should have. First, unlike the utility function, it is not limited to positive values, but has a zero point (of minimal contentment). Above zero, various degrees of satisfaction are experienced, and below zero, various degrees of dissatisfaction. Second, if periodic readings are taken of people in relatively stable life circumstances, we only occasionally find temperatures very far from zero in either direction, and the divergent measurements tend to regress over time back toward the zero mark. Most people consistently register either slightly below zero (mild discontent) or a little above (moderate satisfaction).

4. I have argued the case in numerous papers. Two recent examples are "Rationality in Psychology and Economics," The Journal of Business, 59(1986):S209-S224 (No. 4, Pt. 2); and "The State of Economic Science," in W. Sichel (ed.), The State of Economic Science (Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1989).

5. See, for example, D. Kahneman and A. Tversky, "On the Psychology of Prediction," Psychological Review, 80(1973):237-251, and H. Kunreuther et al., Disaster Insurance Protection (New York: Wiley, 1978).
Page 30

To deal with these phenomena, psychology employs the concept of aspiration level. Aspirations have many dimensions: one can have aspirations for pleasant work, love, good food, travel, and many other things. For each dimension, expectations of the attainable define an aspiration level that is compared with the current level of achievement. If achievements exceed aspirations, satisfaction is recorded as positive; if aspirations exceed achievements, there is dissatisfaction. There is no simple mechanism for comparison between dimensions. In general a large gain along one dimension is required to compensate for a small loss along another; hence the system's net satisfactions are history-dependent, and it is difficult for people to balance compensatory offsets.

Aspiration levels provide a computational mechanism for satisficing. An alternative satisfices if it meets aspirations along all dimensions. If no such alternative is found, search is undertaken for new alternatives. Meanwhile, aspirations along one or more dimensions drift down gradually until a satisfactory new alternative is found or some existing alternative satisfices. A theory of choice employing these mechanisms acknowledges the limits on human computation and fits our empirical observations of human decision making far better than the utility maximization theory.6

6. H. A. Simon, "A Behavioral Model of Rational Choice," Quarterly Journal of Economics, 69(1955):99-118; I. N. Gallhofer and W. E. Saris, Foreign Policy Decision-Making: A Qualitative and Quantitative Analysis of Political Argumentation (New York: Praeger, in press).
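The mechanism just described is simple enough to state as a loop. A minimal sketch in Python; the dimensions, aspiration thresholds, and drift rate are invented for illustration.

```python
import random
random.seed(1)

# Aspiration levels on each dimension of an alternative (invented numbers).
aspirations = {"pay": 0.8, "interest": 0.7, "location": 0.6}

def satisfices(alternative):
    """An alternative satisfices if it meets the aspiration on *every*
    dimension; there is no common utility scale across dimensions."""
    return all(alternative[d] >= aspirations[d] for d in aspirations)

def search_for_alternative():
    # Stand-in for costly search in the real environment.
    return {d: random.random() for d in aspirations}

choice = None
while choice is None:
    candidate = search_for_alternative()
    if satisfices(candidate):
        choice = candidate            # good enough: stop searching
    else:
        for d in aspirations:         # nothing found yet, so aspirations
            aspirations[d] *= 0.98    # drift gradually downward
print(choice)
```

Note what is absent: no derivative, no optimum, no comparison of every alternative against every other. The stopping rule is the aspiration itself.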
Markets and Organizations

Economics has been concerned less with individual consumers or business firms than with larger artificial systems: the economy and its major components, markets. Markets aim to coordinate the decisions and behavior of multitudes of economic actors, to guarantee that the quantity of brussels sprouts shipped to market bears some reasonable relation to the quantity that consumers will buy and eat, and that the price at which brussels sprouts can be sold bears a reasonable relation to the cost of producing them. Any society that is not a subsistence economy, but has

Page 31

substantial specialization and division of labor, needs mechanisms to perform this coordinative function. Markets are only one, however, among the spectrum of mechanisms of coordination on which any society relies.

For some purposes, central planning based on statistics provides the basis for coordinating behavior patterns. Highway planning, for example, relies on estimates of road usage that reflect statistically stable patterns of driving behavior. For other purposes, bargaining and negotiation may be used to coordinate individual behaviors, for instance, to secure wage agreements between employers and unions or to form legislative majorities. For still other coordinative functions, societies employ hierarchic organizations (business, governmental, and educational) with lines of formal authority running from top to bottom and networks of communications lacing through the structure. Finally, for making certain important decisions and for selecting persons to occupy positions of public authority, societies employ a wide variety of balloting procedures.

Although all of these coordinating techniques can be found somewhere in almost any society, their mix and applications vary tremendously from one nation or culture to another.7 We ordinarily describe capitalist societies as depending mostly on markets for coordination and socialist societies as depending mostly on hierarchic organizations and planning, but this is a gross oversimplification, for it ignores the uses of voting in democratic societies of either kind, and it ignores the great importance of large organizations in modern "market" societies.

The economic units in capitalist societies are mostly business firms, which are themselves hierarchic organizations, some of enormous size, that make almost negligible use of markets in their internal functioning. Roughly eighty percent of the human economic activity in the American economy, usually regarded as almost the epitome of a "market" economy, takes place in the internal environments of business and other organizations and not in the external, between-organization environments of markets.8 To avoid misunderstanding, it would be appropriate to call such

7. R. A. Dahl and C. E. Lindblom, Politics, Economics, and Welfare (New York: Harper and Brothers, 1953).

8. H. A. Simon, "Organizations and Markets," Journal of Economic Perspectives, 5(1991):25-44.
Page 32

a society an organization-&-market economy; for in order to give an account of it we have to pay as much attention to organizations as to markets.

The Invisible Hand

In examining the processes of social coordination, economics has given top billing, sometimes almost exclusive billing, to the market mechanism. It is indeed a remarkable mechanism, which under many circumstances can bring it about that the producing, consuming, buying and selling behaviors of enormous numbers of people, each responding only to personal selfish interests, allocate resources so as to clear markets: do in fact nearly balance the production with the consumption of brussels sprouts and all the other commodities the economy produces and uses.

Only relatively weak conditions need be satisfied to bring about such an equilibrium. Achieving it mainly requires that prices drop in the face of an excess supply, and that quantities produced decline when prices are lowered or when inventories mount. Any number of dynamic systems can be formulated that have these properties, and these systems will seek equilibrium and oscillate stably around it over a wide range of conditions.

There have been many recent laboratory experiments on market behavior, sometimes with human subjects, sometimes with computer programs as simulated subjects.9 Experimental markets in which the simulated traders are "stupid" sellers, knowing only a minimum price below which they should not sell, and "stupid" buyers, knowing only a maximum price above which they should not buy, move toward equilibrium almost as rapidly as markets whose agents are rational in the classical sense.10

9. V. L. Smith, Papers in Experimental Economics (New York: Cambridge University Press, 1991).

10. D. K. Gode and S. Sunder, "Allocative Efficiency of Markets with Zero Intelligence Traders," Journal of Political Economy, 101(1993):119-127.
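A minimal sketch in the spirit of Gode and Sunder's "zero-intelligence" traders, in Python. It is deliberately simplified relative to the actual experiments (which use a continuous double auction with persisting traders); here each round draws one random buyer-seller pair, and the only constraint on the random quotes is that nobody trades at a loss.

```python
import random
random.seed(0)

def zero_intelligence_market(n_rounds=10000):
    """Traders bid and ask at random, constrained only so that a buyer
    never bids above a private valuation and a seller never asks below
    a private cost (roughly Gode and Sunder's budget constraint)."""
    prices = []
    for _ in range(n_rounds):
        valuation = random.uniform(0.0, 1.0)  # buyer's limit
        cost = random.uniform(0.0, 1.0)       # seller's limit
        bid = random.uniform(0.0, valuation)  # "stupid" but loss-free
        ask = random.uniform(cost, 1.0)
        if bid >= ask:                        # quotes cross: a trade
            prices.append((bid + ask) / 2.0)
    return prices

prices = zero_intelligence_market()
print(sum(prices) / len(prices))
# With symmetric valuations and costs the competitive equilibrium price
# is 0.5, and the average trade price lands close to it: the clearing
# comes from the market's organization, not from trader intelligence.
```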
Page 33

Markets and Optimality

These findings undermine the much stronger claims that are made for the price mechanism by contemporary neoclassical economics. Claims that it does more than merely clear markets require the strong assumptions of perfect competition and of maximization of profit or utility by the economic actors. With these assumptions, but not without them, the market equilibrium can be shown to be optimal in the sense that it could not be altered so as to make everyone simultaneously better off. These are the familiar propositions of Pareto optimality of competitive equilibrium that have been formalized so elegantly by Arrow, Debreu, Hurwicz, and others.11

The optimality theorems stretch credibility, so far as real-world markets are concerned, because they require substantive rationality of the kinds we found implausible in our examination of the theory of the firm. Markets populated by consumers and producers who satisfice instead of optimizing do not meet the conditions on which the theorems rest. But the experimental data on simulated markets show that market clearing, the only property of markets for which there is solid empirical evidence, can be achieved without the optimizing assumptions, hence also without claiming that markets do produce a Pareto optimum. As Samuel Johnson said of the dancing dog, "The marvel is not that it dances well, but that it dances at all": the marvel is not that markets optimize (they don't) but that they often clear.

Order Without a Planner

We have become accustomed to the idea that a natural system like the human body or an ecosystem regulates itself. This is in fact a favorite theme of the current discussion of complexity, which we will take up in later chapters. We explain the regulation by feedback loops rather than a central planning and directing body. But somehow, untutored intuitions about self-regulation without central direction do not carry over to the artificial systems of human society.

I retain vivid memories of the astonishment and disbelief expressed by the architecture students to whom I taught urban land economics many years ago when I pointed to medieval cities as marvelously patterned systems that had mostly just "grown" in response to myriads of individual human decisions. To my students a pattern implied a planner in whose mind it had been conceived and by whose hand it had been implemented. The idea that a city could acquire its pattern as naturally as a snowflake was

11. See Gerard Debreu, Theory of Value: An Axiomatic Analysis of Economic Equilibrium (New York: Wiley, 1959).
Page 34

foreign to them. They reacted to it as many Christian fundamentalists responded to Darwin: no design without a Designer!

Marxist fundamentalists reacted in a similar way when, after World War I, they undertook to construct the new socialist economies of eastern Europe. It took them some thirty years to realize that markets and prices might play a constructive role in socialist economies and might even have important advantages over central planning as tools for the allocation of resources. My sometime teacher, Oscar Lange, was one of the pioneers who carried this heretical notion to Poland after the Second World War and risked his career and his life for the idea.

With the collapse of the Eastern European economies around 1990 the simple faith in central planning was replaced in some influential minds by an equally simple faith in markets. The collapse taught that modern economies cannot function well without smoothly operating markets. The poor performance of these economies since the collapse has taught that they also cannot function well without effective organizations.

If we focus on the equilibrating functions of markets and put aside the illusions of Pareto optimality, market processes commend themselves primarily because they avoid placing on a central planning mechanism a burden of calculation that such a mechanism, however well buttressed by the largest computers, could not sustain. Markets appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally, that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

No one has characterized market mechanisms better than Friedrich von Hayek who, in the decades after World War II, was their leading interpreter and defender. His defense did not rest primarily upon the supposed optimum attained by them but rather upon the limits of the inner environment, the computational limits of human beings:12

The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action.

12. F. von Hayek, "The Use of Knowledge in Society," American Economic Review, 35(September 1945):519-530, at p. 520.
Page 35

The experiments on simulated markets, described earlier, confirm his view. At least under some circumstances, market traders using a very small amount of mostly local information and extremely simple (and non-optimizing) decision rules can balance supply and demand and clear markets.

It is time now that we turn to the role of organizations in an organization-&-market economy and the reasons why all economic activities are not left to market forces. In preparation for this topic, we need to look at the phenomena of uncertainty and expectations.

Uncertainty and Expectations

Because the consequences of many actions extend well into the future, correct prediction is essential for objectively rational choice. We need to know about changes in the natural environment: the weather that will affect next year's harvest. We need to know about changes in social and political environments beyond the economic: the civil warfare of Bosnia or Sri Lanka. We need to know about the future behaviors of other economic actors (customers, competitors, suppliers), which may be influenced in turn by our own behaviors.

In simple cases uncertainty arising from exogenous events can be handled by estimating the probabilities of these events, as insurance companies do, but usually at a cost in computational complexity and information gathering. An alternative is to use feedback to correct for unexpected or incorrectly predicted events. Even if events are imperfectly anticipated and the response to them less than accurate, adaptive systems may remain stable in the face of severe jolts, their feedback controls bringing them back on course after each shock that displaces them. After we fail to predict the blizzard, snow plows still clear the streets. Although the presence of uncertainty does not make intelligent choice impossible, it places a premium on robust adaptive procedures instead of optimizing strategies that work well only when finely tuned to precisely known environments.13

13. A remarkable paper by Kenneth Arrow, reprinted in The New Palgrave: A Dictionary of Economics (London: Macmillan Press, 1987), v. 2, pp. 69-74, under the title of "Economic Theory and the Hypothesis of Rationality," shows that to preserve the Pareto optimality properties of markets when there is uncertainty about the future, we must impose information and computational requirements on economic actors that are exceedingly burdensome and unrealistic.
Page 36

Expectations

A system can generally be steered more accurately if it uses feed forward, based on prediction of the future, in combination with feedback, to correct the errors of the past. However, forming expectations to deal with uncertainty creates its own problems. Feed forward can have unfortunate destabilizing effects, for a system can overreact to its predictions and go into unstable oscillations. Feed forward in markets can become especially destabilizing when each actor tries to anticipate the actions of the others (and hence their expectations).

The standard economic example of destabilizing expectations is the speculative bubble. Bubbles that ultimately burst are observed periodically in the world's markets (the Tulip Craze being one of many well-known historical examples). Moreover, bubbles and their bursts have now been observed in experimental markets, the overbidding occurring even though subjects know that the market must again fall to a certain level on a specified and not too distant date.

Of course not all speculation blows bubbles. Under many circumstances market speculation stabilizes the system, causing its fluctuations to become smaller, for the speculator attempts to notice when particular prices are above or below their "normal" or equilibrium levels in order to sell or buy, respectively. Such actions push the prices closer to equilibrium. Sometimes, however, a rising price creates the expectation that it will go higher yet, hence induces buying rather than selling. There ensues a game of economic "chicken," all the players assuming that they can get out just before the crash occurs. There is general consensus in economics that destabilizing expectations play an important role in monetary hyperinflation and in the business cycle. There is less consensus as to whose expectations are the first movers in the chain of reactions or what to do about it.
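The contrast between stabilizing and destabilizing speculation can be caricatured in a few lines. A minimal sketch in Python, with invented coefficients and an invented "normal" price of 100: expectations that revert toward the normal level damp a disturbance, while expectations that extrapolate the recent trend feed it.

```python
def simulate(trend_gain, reversion_gain, steps=20):
    """Each period the price moves with the extrapolated trend and
    back toward the assumed normal level of 100.  All numbers are
    illustrative; only the qualitative contrast matters."""
    prev, price = 105.0, 110.0
    for _ in range(steps):
        change = (trend_gain * (price - prev)
                  - reversion_gain * (price - 100.0))
        prev, price = price, price + change
    return price

print(simulate(0.0, 0.25))  # reversion only: settles back near 100
print(simulate(1.2, 0.0))   # trend-chasing only: the bubble feeds itself
```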
Page 37

The difficulties raised by mutual expectations appear wherever markets are not perfectly competitive. In perfect competition, each firm assumes that market prices cannot be affected by its actions: prices are as much a part of the external environment as are the laws of the physical world. But in the world of imperfectly competitive markets, firms need not make this assumption. If, for example, there are only a few firms in an industry, each may try to outguess its competitors. If more than one plays this game, even the definition of rationality comes into question.

The Theory of Games

A century and a half ago, Augustin Cournot undertook to construct a theory of rational choice in markets involving two firms.14 He assumed that each firm, with limited cleverness, formed an expectation of its competitor's reaction to its actions, but that each carried the analysis only one move deep. But what if one of the firms, or both, tries to take into account the reactions to the reactions? They may be led into an infinite regress of outguessing.

A major step toward a clearer formulation of the problem was taken a century later, in 1944, when von Neumann and Morgenstern published The Theory of Games and Economic Behavior.15 But far from solving the problem, the theory of games demonstrated how intractable a task it is to prescribe optimally rational action in a multiperson situation where interests are opposed.

The difficulty of defining rationality exhibits itself well in the so-called Prisoners' Dilemma game.16 In the Prisoners' Dilemma, each player has a choice between two moves, one cooperative and one aggressive. If both choose the cooperative move, both receive a moderate reward. If one chooses the cooperative move, but the other the aggressive move, the cooperator is penalized severely while the aggressor receives a larger reward. If both choose the aggressive move, both receive lesser penalties. There is no obvious rational strategy. Each player will gain from cooperation if and only if the partner does not aggress, but each will gain even more from aggression if he can count on the partner to cooperate. Treachery pays, unless it is met with treachery. The mutually beneficial strategy is unstable.

14. Researches into the Mathematical Principles of the Theory of Wealth (New York: Augustus M. Kelley, 1960), first published in 1838.

15. Princeton: Princeton University Press, 1944.

16. R. D. Luce and H. Raiffa, Games and Decisions (New York: Wiley, 1957), pp. 94-102; R. M. Axelrod, The Evolution of Cooperation (New York: Basic Books, 1984).
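The game's structure, and the tit-for-tat strategy discussed next, fit in a few lines of Python. The payoff numbers are illustrative assumptions, not taken from the text; only their ordering matters.

```python
# Payoffs (row player, column player); 'C' = cooperate, 'D' = aggress.
# The ordering is what defines the dilemma: the aggressor's temptation
# exceeds the mutual reward, which exceeds the mutual penalty, which
# exceeds the exploited cooperator's payoff.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter echo the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_aggress(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each sees the other's record
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): stable cooperation
print(play(tit_for_tat, always_aggress))  # (9, 14): treachery pays, once
```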
Page 38

Are matters improved by playing the game repetitively? Even in this case, cleverly timed treachery pays off, inducing instability in attempts at cooperation. However, in actual experiments with the game, it turns out that cooperative behavior occurs quite frequently, and that a tit-for-tat strategy (behave cooperatively until the other player aggresses; then aggress once, but return to cooperation if the other player also does) almost always yields higher rewards than other strategies. Roy Radner has shown (personal communication) that if players are striving for a satisfactory payoff rather than an optimal payoff, the cooperative solution can be stable. Bounded rationality appears to produce better outcomes than unbounded rationality in this kind of competitive situation.

The Prisoners' Dilemma game, which has obvious real-world analogies in both politics and business, is only one of an unlimited number of games that illustrate the paradoxes of rationality wherever the goals of the different actors conflict totally or partially. Classical economics avoided these paradoxes by focusing upon the two situations (monopoly and perfect competition) where mutual expectations play no role. Market institutions are workable (but not optimal) well beyond that range of situations, precisely because the limits on human abilities to compute possible scenarios of complex interaction prevent an infinite regress of mutual outguessing. Game theory's most valuable contribution has been to show that rationality is effectively undefinable when competitive actors have unlimited computational capabilities for outguessing each other, but that the problem does not arise as acutely in a world, like the real world, of bounded rationality.
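The repeated game is easy to simulate. The sketch below is loosely inspired by Axelrod's tournaments rather than drawn from the text; the payoff numbers are the same illustrative assumptions used earlier, and the strategy functions are invented for the example. Tit-for-tat, exactly as described above, sustains cooperation against its own kind, while against constant treachery it loses only the opening round before the mutual penalties set in.

```python
# A minimal sketch of repeated play, with illustrative payoffs as before.

PAYOFFS = {("C", "C"): (3, 3), ("C", "A"): (-4, 5),
           ("A", "C"): (5, -4), ("A", "A"): (-1, -1)}

def tit_for_tat(own_moves, partner_moves):
    # Cooperate first; thereafter echo the partner's previous move.
    return partner_moves[-1] if partner_moves else "C"

def always_aggress(own_moves, partner_moves):
    return "A"

def play(strategy_a, strategy_b, rounds=20):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        moves_a.append(a); moves_b.append(b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (60, 60): cooperation is sustained
print(play(tit_for_tat, always_aggress))  # (-23, -14): treachery hurts everyone
```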
Page 39

Rational Expectations

A different view from the one just expressed was for a time popular in economics: that the problem of mutual outguessing should be solved by assuming that economic actors form their expectations "rationally."17 This is interpreted to mean that the actors know (and agree on) the laws that govern the economic system and that their predictions of the future are unbiased estimates of the equilibrium defined by these laws. These assumptions rule out most possibilities that speculation will be destabilizing.

Although the assumptions underlying rational expectations are empirical assumptions, almost no empirical evidence supports them, nor is it obvious in what sense they are "rational" (i.e., utility maximizing). Business firms, investors, and consumers do not possess even a fraction of the knowledge or the computational ability required for carrying out the rational expectations strategy. To do so, they would have to share a model of the economy and be able to compute its equilibrium.

Today most rational expectationists are retreating to more realistic schemes of "adaptive expectations," in which actors gradually learn about their environments from the unfolding of events around them.18 But most approaches to adaptive expectations give up the idea of outguessing the market and instead assume that the environment is a slowly changing "given" whose path will not be significantly affected by the decisions of any one actor.

In sum, our present understanding of the dynamics of real economic systems is grossly deficient. We are especially lacking in empirical information about how economic actors, with their bounded rationality, form expectations about the future and how they use such expectations in planning their own behavior. Economics could do worse than to return to the empirical methods proposed (and practiced) by George Katona for studying expectation formation,19 and to an important extent the current interest in experimental economics represents such a return. In the face of the current gaps in our empirical knowledge, there is little empirical basis for choosing among the competing models currently proposed by economics to account for business cycles, and consequently little rational basis for choosing among the competing policy recommendations that flow from those models.

17. The idea and the phrase "rational expectations" originated with J. F. Muth, "Rational Expectations and the Theory of Price Movements," Econometrica, 29(1961):315-335. The notion was picked up, developed, and applied systematically to macroeconomics by R. E. Lucas, Jr., E. C. Prescott, T. J. Sargent, and others.
18. T. J. Sargent, Bounded Rationality in Macroeconomics (Oxford: Clarendon Press, 1993). Note that Sargent even borrows the label of "bounded rationality" for his version of adaptive expectations but, regrettably, does not borrow the empirical methods of direct observation and experimentation that would have to accompany it in order to validate the particular behavioral assumptions he makes.
19. G. Katona, Psychological Analysis of Economic Behavior (New York: McGraw-Hill, 1951).
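The adaptive-expectations scheme mentioned above reduces to a simple error-correction rule. The sketch below is a minimal illustration with an assumed learning rate and price series, not anything from the cited literature: the actor never computes an equilibrium, but merely revises a forecast by a fraction of each observed forecast error.

```python
# A minimal sketch of an adaptive-expectations rule (parameters assumed):
# E_{t+1} = E_t + lambda * (observed_t - E_t), with 0 < lambda <= 1.

def revise(expected, observed, learning_rate=0.3):
    """Move the forecast part of the way toward the latest observation."""
    return expected + learning_rate * (observed - expected)

expectation = 50.0            # initial forecast of, say, a price level
for price in [80.0] * 8:      # the environment has shifted to 80
    expectation = revise(expectation, price)
print(round(expectation, 1))  # 78.3: learned gradually, never "computed"
```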