CAUSATION, CONTROL, AND THE EVOLUTION OF COMPLEXITY

by H. H. Pattee
Department of Systems Science and Industrial Engineering
T. J. Watson School of Engineering and Applied Science
State University of New York at Binghamton
Binghamton, NY 13902-6000

Is causation a useful concept?

It is not obvious that the concept of causation, in any of its many forms, has ever played a necessary role in the discovery of the laws of nature. Causation has a tortuous philosophical literature with no consensus in sight (e.g., Hart & Honoré, 1958; Bunge, 1959; Taylor, 1972), and modern physics has little interest in the concept. Nevertheless, causation is so ingrained in both the syntax and semantics of our natural language that we usually feel that events are somehow causally explained by almost any grammatically correct declarative statement that relates a noun and a verb phrase to the event: Why did the ball roll? Because John kicked the ball. Why did the ball bounce? Because the ball hit the post. In Aristotelian terms, the verb is a form of efficient cause, and either the subject or object can act as a material cause. If the subject happens to have a large brain we may also attribute a formal, teleological, or intentional cause to the event: Why did John kick the ball? Because John wanted a goal. As children we figure out that these linguistic forms are transitive and always lead to a vicious circle or an infinite regress, but we are usually told that it is rude to keep asking Why? once presented with one proximal cause. The major weakness of the concept of causation is this Whorfian dependence on natural language. Thus, the richness and ambiguity of causal forms arise more from the richness and ambiguities of language than from any empirical necessity or from natural laws.

Naive causation requires a direction in time

This naive concept of causation is formed from our perception of certain sequences of events. One condition is temporal antisymmetry. That is, when we say an event B is caused by an event A, it must be the case that A occurred before B. If temporal order were reversed any cause and effect relation would also be reversed, although some philosophers have questioned this assertion (e.g., Dummett, 1964). The concept of causation therefore presupposes a model of time, usually a tacit model. Our everyday concept of time is directed in one dimension, and so we ascribe causation to events that can be decomposed into simple strings of ordered events or actions. The high-dimensional and diffuse concurrent influences that are ubiquitous are seldom viewed as causes. However, like the concept of time, the meaning of causation does not easily lend itself to deeper analysis. When we try to define more precisely the concepts of time and causation we find they are entirely context or model dependent. Furthermore, these concepts are often not consistent between contexts and levels. To make matters worse, they usually appear as irreducibly primitive concepts at all levels.

Causation is gratuitous in modern physics

The Newtonian paradigm of state-determined rate laws derived from a scalar time variable and explicit forces only strengthens the naive concept of one-dimensional, focal causation. Reductionists take the microscopic physical laws as the ultimate source of order. At this lowest level, causation was classically associated with the concept of force. According to one statement of Newton's laws, a force is the cause of an object's change of motion. The concept of force can also be interpreted in many ways, but in practice most physical models are of systems with a very small number of forces, or more precisely, of systems where the equations of motion can be easily integrated or computationally iterated. However, in the case of the famous n-body problem (n > 2), which is generally nonintegrable, the forces are so interdependent that no focal causes exist. The motion of one body in an n-body model might be seen as a case of downward causation, but this does not add anything to our understanding of the fundamental problem.
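
To make this interdependence concrete, consider the standard gravitational n-body equations of motion (a textbook form, not tied to any particular cited model):

$$m_i\,\ddot{\mathbf{r}}_i = \sum_{j \neq i} \frac{G\, m_i m_j\,(\mathbf{r}_j - \mathbf{r}_i)}{|\mathbf{r}_j - \mathbf{r}_i|^3}, \qquad i = 1, \ldots, n.$$

Every acceleration depends simultaneously on the positions of all the other bodies, and for n > 2 no general closed-form integration exists that would decompose the motion into separable causal strands; there is no single term one can point to as "the" cause of any one body's motion.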

The fundamental problem is that the microscopic equations of physics are time symmetric and therefore conceptually reversible. Consequently the irreversible concept of causation is not formally supportable by microphysical laws, and if it is used at all it is a purely subjective linguistic interpretation of the laws. Hertz (1894) argued that even the concept of force was unnecessary. This does not mean that the concepts of cause and force should be eliminated, because we cannot escape the use of natural language even in our use of formal models. We still interpret some variables in the rate-of-change laws as forces, but formally these dynamical equations define only an invertible mapping on a state space. Because of this time symmetry, systems described by such reversible dynamics cannot formally (syntactically) generate intrinsically irreversible properties such as measurement, records, memories, controls, or causes. Furthermore, as Bridgman (1964) pointed out, "The mathematical concept of time appears to be particularly remote from the time of experience." Consequently, no concept of causation, especially downward causation, can have much fundamental explanatory value at the level of microscopic physical laws.
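
The time symmetry itself can be checked in one line (a standard textbook observation, added here for concreteness): for a position-dependent force, substituting $t \to -t$ into Newton's second law leaves the law unchanged, since only a second time derivative appears:

$$m\,\frac{d^2\mathbf{r}}{d(-t)^2} = m\,\frac{d^2\mathbf{r}}{dt^2} = \mathbf{F}(\mathbf{r}).$$

Every solution traversed backward is therefore also a solution, and nothing in the formalism distinguishes a "cause" that must precede its "effect".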

Do statistical laws give a direction to time?

The answer to this question is still controversial. It is a near tautology to state that on the average the more likely events will occur sooner than the less likely events. In the more precise form of the second law of thermodynamics this is still a useful near-tautology. Here the word "sooner" appears to give time a direction, as does the second law's increasing entropy or disorder with time in an isolated system. But on careful thought we see that sooner and later are concepts that presuppose a direction of time. This statement, and the second law, would still be true if time were reversed since sooner and later would also be reversed. Assuming an isolated system with less than maximum entropy, the plot of entropy vs. time would show increasing entropy in both directions of time without favoring either direction (e.g., Tolman, 1950). Nevertheless, it has been argued on many grounds that the observer's psychological time must be consistent with the second law, and furthermore, using the weak anthropic principle, both must correspond to the cosmological arrow of time (e.g., Hawking, 1988).
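
Tolman's point can be illustrated numerically (a minimal sketch using a symmetric random walk as a stand-in for reversible microdynamics; the model and all parameters are illustrative assumptions, not taken from the cited texts):

```python
import math, random

# Walkers all start at the origin: a low-entropy initial condition.
# Because the step distribution is symmetric, replacing every step s
# by -s (the time-reversed run) is statistically identical, so the
# coarse-grained entropy grows in *both* directions of time.

def coarse_entropy(positions, bin_width=5):
    """Shannon entropy (nats) of walker positions coarse-grained into bins."""
    counts = {}
    for x in positions:
        b = x // bin_width
        counts[b] = counts.get(b, 0) + 1
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

walkers = [0] * 10000
for t in range(1, 101):
    walkers = [x + random.choice((-1, 1)) for x in walkers]
    if t % 25 == 0:
        print(f"t = {t:3d}   coarse entropy = {coarse_entropy(walkers):.3f}")
```

The printed entropy increases with step count, but an identical increase would be observed if every step sequence were sign-reversed; the direction singled out as "later" is supplied by the observer, not by the walk.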

What is important to recognize is that the concepts of causation have completely different meanings in statistical models and in deterministic models. A reductionist will assume that cause refers to events in a lower level model. That is, if we ask what is the cause of temperature, the reductionist will answer that it is caused by molecules exchanging their kinetic energy by collisions. But notice that the measurement of temperature is practical only because measuring devices effectively average this exchange without requiring measurement of the detailed initial conditions of all the molecules. Averaging is not part of the microscopic model but is a statistical process of a higher level model. A deterministic microscopic model cannot cause an average to be an observable. Consider also the model of flipping a coin. Here the reductionist will again say that it is the detailed initial conditions that determine the result, but in this case precise enough measurement of initial conditions is not practical, and therefore flipping a coin is modeled as a random event.
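
The temperature example can be sketched in a few lines (an illustrative toy with assumed argon-like parameters; not a model from the text). The observable is produced by the averaging step, which has no counterpart in the microscopic description of any single molecule:

```python
import random

K_B = 1.380649e-23    # Boltzmann constant, J/K
MASS = 6.63e-26       # mass of an argon-like atom, kg (illustrative)

def temperature(velocities):
    """Kinetic temperature from equipartition: <KE> = (3/2) k_B T in 3D."""
    mean_ke = sum(0.5 * MASS * (vx*vx + vy*vy + vz*vz)
                  for vx, vy, vz in velocities) / len(velocities)
    return 2.0 * mean_ke / (3.0 * K_B)

# Draw velocity components from a Maxwell-Boltzmann-like Gaussian whose
# width corresponds to a nominal 300 K (sigma^2 = k_B T / m).
sigma = (K_B * 300.0 / MASS) ** 0.5
molecules = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
             for _ in range(100_000)]

print(f"measured temperature: {temperature(molecules):.1f} K")
# No individual molecule "has" a temperature; the observable exists only
# at the level of the averaging process, i.e., the measuring device.
```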

Measurement gives a direction to time

Many people are satisfied by the reductionist's detailed "causes" and feel that these microscopic models have explained the macroscopic observations. However, a skeptic will observe that averages, coin equilibria, dissipation, measurement, and all other irreversible or stochastic events cannot be derived from reversible, deterministic models, and therefore cannot be adequately reduced to, or explained by, such models (e.g., Coveney and Highfield, 1991). In the two examples above, what forces an asymmetric direction of time in our models is not the microscopic behavior of the system, but the measurement process. In the case of temperature, the irreversible process of averaging is done by the measuring device, the thermometer, not the reversible dynamics of the molecules of the system being measured. For the same reason, the macroscopic observables of heads or tails of a coin appear only after the reversible dynamics of the coin have been dissipated and the coin has come irreversibly to rest. Dissipation here simply means that the useful details of the motion have become unmeasurable.

From this line of argument we conclude that our concepts of the direction of time and hence our concepts of causation arise from our being observers of events, not from the events themselves. Consequently concepts of causation are subjective in so far as they cannot be separated from the observer's choice of observables and the choice of measuring devices. According to this model one might be tempted to say that it is the observer who causes a direction to time, not physical laws, but this would overstate the causal powers of the observer. Physical explanations require an epistemic cut between the knower and the known, and a model of the observer on one side of the cut makes no sense without the complementary model of the laws of the observed system on the other side (e.g., von Neumann, 1955).

Universal causes are not explanatory

The reductionist's answers above are examples of universal causes. It is a metaphysical precondition for physical laws that they must hold everywhere for all observers. Laws are inexorable. That is, we expect every event at any level of complexity to satisfy these laws no matter what higher level observables may also be needed for a useful model. Therefore, just as it is correct to say that the temperature in this room is caused by atoms following the laws of physics, it is equally correct to say that the cause of my writing this paper is the atoms of my brain following the laws of physics. But since such statements hold in all conceivable cases they give no clue to the level of observables necessary for a useful model in each case. It is only our familiarity with this linguistic form that often leads us to accept uncritically such universal causes as explanations.

Complementary models require complementary causes

We know from the two fundamental levels of physical models, the microscopic laws and the statistical laws, that it is a wasteful exercise to try to abstract away the differences between these models since they are complementary. I am using complementary here in Boltzmann's and Bohr's sense of logical irreducibility. That is, complementary models are formally incompatible but both necessary. One model cannot be derived from, or reduced to, the other. Chance cannot be derived from necessity, nor necessity from chance, but both concepts are necessary. In his essay on dynamical and statistical laws, Planck (1960) emphasizes this point: "For it is clear to everybody that there must be an unfathomable gulf between a probability, however small, and an absolute impossibility... . Thus dynamics and statistics cannot be regarded as interrelated." Weyl (1949) agrees: " ... we cannot help recognizing the statistical concepts, besides those appertaining to strict laws, as truly original." And similarly, von Neumann (1955) in his discussion of measurement says: "In other words, we admit: Probability logics cannot be reduced to strict logics." It is for this reason that our concept of a deterministic cause is completely different from our concept of a statistical cause. Determinism and chance arise from two formally complementary models of the world. We should also not waste time arguing whether the world itself is deterministic or stochastic since this is a metaphysical question that is not empirically decidable.

These examples show the extreme forms and model-dependencies of our many uses of causation. Notice that both complete determinism and complete chance can be invoked as causal "explanations" of events. These extreme forms of causation are often combined to describe what we see as emergent events that require new levels of description as in symmetry-breaking and dissipative structures in physical models (e.g., Anderson and Stein, 1988), or what Crick called "frozen accidents" in biological models.

Useful causation requires control

As I noted above, the use of causation at the level of physical laws is now considered only a gratuitous manner of speech with no fundamental explanatory value. Naturally the question arises: At what level of organization does the concept of causation become useful? To explain my answer to this question let me first jump up several levels of complexity. Clearly it is valuable to know that malaria is not a disease produced by "bad air" but results from Plasmodium parasites that are transmitted by Anopheles mosquitoes. It is also valuable to know that the lack of vitamin C will result in scurvy. What more do we gain in these examples by saying that malaria is caused by a parasite and scurvy is caused by lack of vitamin C?

I believe the common, everyday meaning of the concept of causation is entirely pragmatic. In other words, we use the word cause for events that might be controllable. In the philosophical literature controllable is the equivalent of the idea of power. Bishop Berkeley thought it obvious that cause cannot be thought of apart from the idea of power (e.g., Taylor, 1972). In other words, the value of the concept of causation lies in its identification of where our power and control can be effective. For example, while it is true that parasites and mosquitoes follow the laws of physics, we do not usually say that malaria is caused by the laws of physics (the universal cause). That is because we can hope to control parasites and mosquitoes, but not the laws of physics. When we say that the lack of vitamin C is a cause of scurvy, all we mean is that vitamin C controls scurvy. A fundamental understanding or explanation of malaria or scurvy is an entirely different type of problem.

Similarly, when we seek the cause of an accident, we are looking for those particular focal events over which we might have had some control. We are not interested in all those parallel, subsidiary conditions that were also necessary for the accident to occur but that we could not control, or did not wish to control. For example, when an aircraft crashes there are innumerable subsidiary but necessary conditions for the accident to occur. When we look for "the cause" of the accident we are not looking for these multitudes of necessary conditions, but for a focal event that, by itself, might have prevented the accident while maintaining all other expected outcomes.

In our artificial technologies and in engineering practice we also think of causes in terms of control. For example, the electrical power that provides the light in my room is ultimately caused by nuclear fusion in the sun that drives the water cycle and photosynthesis, or by nuclear fission on earth. Many complex machines and complex power distribution systems are also necessary in the causal chain of events lighting my room. So why do I think that the cause of the light in my room is my turning the switch on the wall? Because that is where I have proximal, focal control, and also because switching is a simple act that is easy to model, as contrasted with the complexities of nuclear reactions and power distribution networks.

We view the causal aspects of all our machines in this way. We do not think of any very complex system or diffuse network of stochastic influences as a cause. This is one reason that downward causation is problematic. In other words, we think of causes in terms of the simplest proximal control structures in what would otherwise turn into an endless chain or network of concurrent, distributed causes. A computer is a useful modeling device because the simple, controllable steps of a program are the pragmatic cause of the computer's behavior. It is also significant that at the cultural level of jurisprudence it is only those causes that are focal, explicit, and believed to be controllable that are admissible in determining guilt or innocence. No jury will acquit by reason of downward causation.

The origin of control

The lack of any obvious explanatory power or utility of the concept of causation at the level of physical laws led to the question of at what level of complexity causation does become useful. I supported the classical philosophical view that causation is a useful concept only when associated with power and control. This leads to the next question: At what level of organization does the concept of control become useful? The concept of control does not enter physical theory because it is a fundamental condition for physical laws that they describe only those relations between events which are invariant with respect to different observers, and consequently those relations between events over which the observer has no control.

At the least, control requires, in addition to the laws, some form of local, structural constraint on the lawful dynamics. Pragmatic control also requires some measure of utility. To say the riverbed controls the flow of the river is a gratuitous use of control since there is no utility, and the simpler term constraint serves just as well. Following the pragmatic requirement that concepts of causation and control must have some utility, I would say that utility makes sense only in terms of some form of fitness or function of a system that is separate from, but embedded in, an environment. Just as the concept of measurement requires an epistemic cut between the measuring device and the system being measured, so the concept of control requires an epistemic cut between the controller and the controlled.

Living organisms are the first natural level of organization where we know these concepts of functional control and fitness in an environment clearly make sense, and in fact are necessary for a useful model. Of course artifacts are also functional, but these are products of living organisms. While there must be intermediate levels of organization from which our present forms of life arose, the fact is that present life requires semiotic control by coded gene strings. There are many theories of self-organization that try to fill in these intermediate levels (e.g., Eigen & Schuster, 1982; Nicolis & Prigogine, 1989; Kauffman, 1993; Langton, 1989), but at present there exists an enormous gap between these statistical physics and artificial computer-life models and the complex, coded, semiotic control of life as we know it. It is arguable whether the concepts of causation and control are necessary or useful in these intermediate level models. Often the use of such high-level concepts of natural language to describe simple models obscures the real problem.

Why do most of us first think of the gene as the primary causal structure of the organism even though we know that some form of downward causation from the organism level is essential to control which genes are expressed? Again, one answer is that the gene's control activities are local, sequential, and relatively easy to model, as contrasted with the organism's downward control which is diffuse, parallel, and complex. However, there is a more fundamental reason: Genetic control is heritable - it is stored in a relatively simple, localized, semiotic memory that is easy to transmit. The organism's downward controls are not stored in memory, but are part of the time-dependent dynamics of the phenotype. Phenotypic dynamics are neither simple, localized, nor heritable.

Levels of control match models of causation

The pragmatic view of causation implies that different levels of causation will be associated with different levels of control. Downward causation is a difficult concept to define precisely because it describes the collective, concurrent, distributed behavior at the system level where control is usually impractical, rather than at the parts level where focal control is possible. Downward causation is ubiquitous and occurs continuously at all levels, but it is usually ignored simply because it is not under our control. For example, even in relatively simple artificial neural nets we know that collectively the hidden nodes exert downward control on the output. Yet while we have some control over training at the level of the entire net, we rarely know how to exert explicit control at the level of individual hidden nodes.
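
A minimal sketch of this point (a toy two-layer network trained on XOR; the architecture, learning rate, and data are illustrative assumptions, not from any cited model). One global error signal adjusts every hidden weight at once; nothing in the procedure specifies what any individual hidden node comes to represent:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)      # hidden activations: not individually controlled
    out = sigmoid(h @ W2 + b2)
    # One scalar loss is backpropagated; the gradient distributes the
    # "blame" across all hidden nodes concurrently.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
# e.g. [0.01 0.99 0.99 0.01]: control is exercised on the whole net;
# what each hidden node "does" is an uncontrolled collective byproduct.
```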

In real life the problem is much worse. In the real brain we may exert some control by drugs at the coarse level of awareness and moods, or somewhat finer control by brain surgery, but the firing of individual neurons is not controllable in any useful way. The same situation occurs at all levels, in ecosystems, social systems, economic systems, and even in systems that are designed to be controllable but that have grown excessively complex. Some catastrophic system failures, including cancer, aging, death, and species extinctions, that might be viewed as a form of downward causation could just as well be described as a loss of detailed control.

Evolution requires semiotic control of construction

This fundamental problem of how the dynamics of life maintains, or increases, its control of complexity while most nonliving dynamics tend to decay was one of Boltzmann's deepest concerns, but he found no satisfactory answer. The first hint of the answer was suggested by von Neumann (1966) in his discussion of complication and his theory of self-reproducing automata. Von Neumann was also motivated by the apparent conflict between structures that decay and structures that evolve. He focused on automata models, but it is clear that he had the contrast between thermodynamics and biological evolution in mind. He saw in Turing's universal automaton an example of a simple, fixed symbol system that could generate open-ended complexity. In order to translate this open-endedness to a physical system, von Neumann first postulated a universal constructor that could interpret symbolic descriptions. The universal constructor, like Turing's universal machine, was relatively simple, but the descriptions could grow indefinitely and consequently the resulting constructions could grow in complexity. The essential property of semiotic description is that it can be read in two ways: it can be read syntactically to be transmitted, and it can be read semantically to control construction.
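
A toy sketch of the two readings (vastly simpler than von Neumann's actual construction; the symbols and the parts dictionary are invented for illustration). The same string is transmitted uninterpreted and, separately, interpreted to control construction:

```python
def construct(description):
    """Semantic reading: interpret each symbol as a construction instruction."""
    parts = {"A": "arm", "B": "body", "C": "controller"}
    return [parts[symbol] for symbol in description]

def copy(description):
    """Syntactic reading: transmit the string without interpreting it."""
    return str(description)

parent_description = "ABCA"
offspring_machine = construct(parent_description)  # built from the description
offspring_description = copy(parent_description)   # passed on, unread

print(offspring_machine)      # ['arm', 'body', 'controller', 'arm']
print(offspring_description)  # 'ABCA', available to the next generation
# A change in the string alters all future constructions while the fixed
# copying and constructing machinery stays simple: the description, not
# the constructor, is what grows in complexity.
```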

Today we know in great detail how cells reproduce and evolve using this fundamental description-construction strategy. Over evolutionary time scales the cell's construction machinery (tRNA, aminoacyl synthetases, ribosomes) remains more or less constant, but the gene grows in length and the organism grows in complexity. This dependence of life on the separation of genotype and phenotype has been implicit in evolution theory since Darwin, but it is only recently that the adaptive power of genetic search in sequence space and its redundant mapping to structure has begun to be understood. This power has been discovered largely by empirical exploration of adaptive systems by computer models of maps from sequence space to structure space (e.g., Schuster, 1994), and sequence space search using genetic algorithms (e.g., Holland, 1992; Goldberg, 1989). The combination of crossover and mutation has been shown to be surprisingly powerful for finding solutions of certain classes of problems that are otherwise intractable. It is not yet clear why genetic algorithms work well in some cases and not in others. The building-block hypothesis and schema theorem are part of the answer.
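
A minimal genetic-algorithm sketch (the string-matching fitness function is a toy invented for illustration, not a model from Holland or Goldberg), showing selection, crossover, and mutation searching a sequence space:

```python
import random

TARGET = "CAUSATION REQUIRES CONTROL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(seq):
    """Number of positions matching the target sequence."""
    return sum(a == b for a, b in zip(seq, TARGET))

def crossover(a, b):
    """Splice two parent sequences at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.02):
    """Replace each symbol with a random one at a small per-site rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    parents = population[:50]           # truncation selection
    population = [mutate(crossover(random.choice(parents),
                                    random.choice(parents)))
                  for _ in range(200)]

print(f"generation {generation}: {best}")
```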

What is clear is that successful evolution depends both on the structure of the sequence space of the gene for efficient search, and on how sequence space maps to function space by control of constructions (e.g., Conrad, 1990). The details of this mapping from genetic description to physical rate dynamics are a difficult empirical problem, but the fundamental requirement for open-ended evolvability is the interdependence of the semiotic domain of the heritable genetic memory and the dynamic domain of construction and function.

Artificial dynamics and self-organization

It is by now well-known history how semiotic rule-based systems dominated artificial intelligence models until the rediscovery of the potential of nonlinear dynamics and concurrent, distributed network models. With the rediscovery of the adaptive power of networks, the study of nonlinear dynamic behavior has now largely replaced the rule-based symbolic models of artificial intelligence. In evolution theory there has also been a shift in interest toward dynamical models of self-organization as a non-exclusive alternative to the traditional heritable genetic variation and selection theory of evolution. The current controversy is over how much of evolution and development results from genetic control and natural selection and how much from self-organizing nonlinear dynamics. At the cognitive level the corresponding controversy is over how much of our thinking is the result of sequential semiotic rules and how much is the result of distributed, coherent neural dynamics.

These questions will not be resolved by either-or answers: first, because semiotics and dynamics must be intricately related at all levels of organization, precisely because it is this semiotic-dynamic interaction that is responsible for evolving levels; and second, because semiotic and dynamic models are complementary, both conceptually and formally. Conceptually, dynamical models describe how events change in time. Since time is viewed as continuous and one-dimensional, non-relativistic dynamical processes are conceptually viewed as concurrent, coherent, or parallel in time, no matter how many variables or other dimensions exist. Dynamical laws are state determined; we need only know the initial conditions; there is no memory. By contrast, semiotic models are based on discrete symbols and syntactic rules that have no direct relation to the laws of physics. One-dimensional strings of symbols are manipulated without regard to time, rates of change, or energy. Memory is fundamental for the existence of semiotic systems.
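
The contrast can be made concrete (both toy models below are invented for illustration): a state-determined dynamical update needs only the current state and a rate, while a semiotic rewrite system manipulates rate-independent symbol strings according to a stored rule table, which is its memory:

```python
# Dynamic model: the next state follows from the current state alone.
def dynamic_step(x, dt=0.01):
    return x + dt * (-x)              # e.g. dx/dt = -x, exponential decay

# Semiotic model: rewrite symbols by fixed stored rules; no time, no rates.
RULES = {"A": "AB", "B": "A"}         # the memory: a rule table
def rewrite(s):
    return "".join(RULES[c] for c in s)

x, s = 1.0, "A"
for _ in range(5):
    x = dynamic_step(x)               # a point moving on a continuous trajectory
    s = rewrite(s)                    # a discrete string being manipulated

print(x)   # a number; erase it and the dynamics cannot recover its history
print(s)   # 'ABAABABAABAAB': the string itself is a persistent record
```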

There is nothing wrong with trying to get as much self-organization as possible out of dynamical models, especially in the context of the origin of life before the genetic code existed. However, once a coded, semiotic description-construction process exists, it is not productive to minimize its significance as a heritable mechanism for harnessing dynamical laws. There is no competing model for efficient open-ended evolution.

One current computational approach to the problem of how semiotic behavior might arise from dynamics is the study of cellular automata that can be interpreted as both a dynamical system and as a semiotic computational system (e.g., Mitchell, Crutchfield, and Hraber, 1994). A cellular automaton is interpreted dynamically as a discrete mapping of the states of cells in a metrical space into the next state by a fixed rule that is a function of the states of neighboring cells. There are many ways to interpret the cellular automaton as a computer, but they all involve the initial state of cells interpreted as symbolic input and some later configuration of cells as the computed symbolic output. The emphasis in these models is on formal equivalences, and consequently the weakness of this approach is that there is no attempt to address how descriptions control actual physical construction, and how constructions relate to function.
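
A minimal sketch of the dual reading (an elementary one-dimensional cellular automaton; the cited work evolves larger-radius rules for a density-classification task, which this toy does not reproduce). Read dynamically, a fixed local rule maps each configuration to the next; read computationally, the initial configuration is the input and a later configuration is the output:

```python
RULE = 110   # Wolfram rule number; rule 110 is known to be computation-universal

def step(cells):
    """One synchronous update with periodic boundaries: each new state is a
    fixed function of the (left, center, right) neighborhood."""
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

state = [0] * 40
state[20] = 1                      # the "input": a single set cell
for t in range(20):
    print("".join(".#"[c] for c in state))
    state = step(state)            # the dynamics: local, state-determined
# the configuration reached after t steps is the "computed output"
```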

The complementary approach to artificial evolution is the study of sensorimotor control in situated robots by various learning networks (e.g., Brooks, 1992; Maes, 1992; Hasslacher and Tilden, 1995). This strategy couples the dynamics of artificial networks with the functional dynamics of sensors and actuators in contact with the real physical world. Although this strategy has no direct interest in semiotic control, it is possible that such experiments may give us some clues about the origin of symbolic memory. The weakness of this approach is that this dynamic form of learning is not heritable, and consequently there is no evolvable self-replication.

When is downward causation a useful concept?

I have argued that causation is a useful concept when it identifies controllable events or actions. Otherwise it is an empirically gratuitous linguistic form that is so universal that it results in nothing but endless philosophical controversy. The issue then is how useful the concept of downward causation is in the formation and evolution of complex systems. My conclusion would be that downward causation is useful insofar as it identifies the controllable observables of a system or suggests a new model of the system that is predictive. In what types of models are these conditions met?

One extreme model is natural selection. It might be considered the most complex case of downward causation since it is unlimited in its potential temporal span and affects every structural level of the organism as well as social populations. Similarly, the concept of fitness is a holistic concept that is not generally decomposable into simpler components. Because of the open-ended complexity of natural selection we know very little about how to control evolution, and consequently in this case the concept of downward causation does not add much to the explanatory power of evolution theory.

At the other extreme are simple statistical physics models. The n-body problem and certainly collective phenomena, such as phase transitions, are cases where the behavior of individual parts can be seen as resulting from the statistical behavior of the whole, but here again the concept of downward causation does not add to the model's ability to control or explain.

A better case might be made for downward causation at the level of organism development. Here, the semiotic genetic control can be viewed as upward causation, while the dynamics of organism growth controlling the expression of the genes can be viewed as downward causation. Present models of developmental control involve many variables, and there is clearly a disagreement among experts over how much control is semiotic or genetic and how much is intrinsic dynamics.

The best understood case of an essential relation of upward and downward causation is what I have called semantic closure (e.g., Pattee, 1995). It is an extension of von Neumann's logic of description and construction for open-ended evolution. Semantic closure is both physical and logical, and it is an apparently irreducible closure, which is why the origin of life is such a difficult problem. It is exhibited by the well-known genotype-phenotype mapping of description to construction that we know empirically is the way evolution works. It requires the gene to describe the sequence of parts forming enzymes, and reading that description, in turn, requires the very enzymes it describes.

This is understood at the logical and functional level, but looked at in detail this is not a simple process. Both the folding dynamics of the polypeptide string and specific catalytic dynamics of the enzyme are computationally intractable at the microscopic level. The folding process is crucial. It transforms a semiotic string into a highly parallel dynamic control. In its simplest logical form, the parts represented by symbols (codons) are, in part, controlling the construction of the whole (enzymes), but the whole is, in part, controlling the identification of the parts (translation) and the construction itself (protein synthesis).
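
A toy sketch of this closure (a two-codon "genetic code" invented for illustration; real translation machinery is incomparably more complex). The description can only be read by machinery that the description itself specifies:

```python
CODE = {"aug": "reader", "gcu": "builder"}   # hypothetical 2-codon code

def translate(gene, enzymes):
    """Translation is itself carried out by previously constructed enzymes."""
    assert "reader" in enzymes, "no reading machinery, no translation"
    return [CODE[gene[i:i + 3]] for i in range(0, len(gene), 3)]

gene = "auggcu"                    # the description of the parts
enzymes = ["reader"]               # bootstrap: some reader must already exist
enzymes = translate(gene, enzymes) # the description (re)constructs its own readers

print(enzymes)                     # ['reader', 'builder']
# Remove either the gene or the reader and the cycle cannot restart:
# the closure of description and construction is irreducible.
```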

Again, one still finds controversies over whether upward semiotic or downward dynamic control is more important, and which came first at the origin of life. There are extreme positions. One extreme sees the universe as a dynamics and the other extreme sees the universe as a computer. This is not only a useless argument, but it obscures the essential message. The message is that life and the evolution of complex systems is based on the semantic closure of semiotic and dynamic controls. Semiotic controls are most often perceived as discrete, local, and rate-independent. Dynamic controls are most often perceived as continuous, distributed and rate-dependent. But because there exists a necessary mapping between these complementary models it is all too easy to focus on one side or the other of the map and miss the irreducible complementarity.

Semantic closure at the cognitive level

Many comparisons have been made between the language of the genes and natural language (e.g., Jakobson, 1970; Pattee, 1980). Typically in both genes and natural language the symbol vehicles are discrete, small in number, and fixed but structurally largely arbitrary, yet they have the potential for an unlimited number of one-dimensional expressions. These expressions are held in a memory structure that is more or less random access, i.e., not significantly restricted by time, rate, energy, and position dependence. The basic elements of the language syntax are context free and unambiguous, but as the length of expressions increases the syntax and semantics become inseparable, and when taken as a whole the semantics of the text becomes context dependent and more ambiguous, with the organism exerting more downward controls. At the many pragmatic levels the entire organism and its environment exert strong stochastic influences on meaning, function and fitness.

We know the explicit steps required to map the semiotic gene strings into the dynamics of enzyme control of rates of reactions, but almost nothing is known about the details of how the brain generates or reads the semiotic strings of natural language to produce meaning or dynamic action. Consequently, while the essential complementarity and semantic closure of semiotics and dynamics is apparent in both cases, there are certainly major differences in the structure of the memory and the dynamics and how they are coupled. First, the discrete symbols of natural language appear to be surface structures in the sense that they appear only as output of dynamic speaking or writing acts. There is no evidence that symbols exist in the brain in any local, discrete form as in the case of the gene. On the other hand, if we look at the gene symbols as input constraints on the translation and the parallel dynamic folding process as producing the output action, this is not unlike symbols acting as constraints on the input layer of a neural network and the dynamics of network relaxation as producing the output action (Pattee, 1985).

Conclusion

To understand life as we know it, especially the continuous evolution of stable complex forms, it has proven essential to distinguish two complementary types of control models. One type is a semiotic model exerting upward control from a local, isolated memory; the other is a dynamic model exerting downward control from a global network of coherent, interactive components. The semiotic model explains how control can be inherited and provides a remarkably efficient search process for discovering adaptive and emergent structures. The dynamic model suggests how the many components constructed under semiotic control can be integrated in the course of development and coordinated into emergent functions.

Neither model has much explanatory value without the other. Dynamical control models do not explain the discrete, rate-independent, orderly, heritable sequences that form the individual protein molecules, nor do semiotic control models explain how these sequences fold or self-assemble and how coordinated enzymes control the rates of specific reactions. It is true that each model alone can account for a limited level of self-organization. For example, copolymers can self-assemble more or less randomly, and by chance form autocatalytic cycles. Dynamics can also generate innumerable complex autonomous patterns. But dynamics without an open-ended heritable memory or memory without dynamic coordination have very limited emergent and survival potential. The origin of life probably requires the coupling of both self-organizing processes, but in any case, present life certainly does.

References

Anderson, P. W., and Stein, D. L., 1988, Broken symmetry, emergent properties, dissipative structures, life: Are they related?, in Self-Organizing Systems: The Emergence of Order, F. E. Yates, ed., Pergamon, NY, pp. 445-457.

Aristotle, Metaphysics I.3. W. D. Ross, ed., Oxford University Press, 1924.

Bridgman, P. W., 1964, The Nature of Physical Theory, Wiley, NY, p. 58.

Brooks, R. A., 1992, Artificial life and real robots, in Toward a Practice of Autonomous Systems, F. J. Varela and P. Bourgine, eds., MIT Press, Cambridge, MA, pp. 3-10.

Bunge, M., 1959, Causality, Cambridge University Press.

Conrad, M., 1990, The geometry of evolution, BioSystems, 24, 61-81.

Coveney, P. and Highfield, R., 1991, The Arrow of Time, Ballantine, NY.

Dummett, M., 1964, Bringing about the past, Philosophical Review, 73, 338-359.

Eigen, M. and Schuster, P., 1982, Stages of emerging life - five principles of early organization, J. Molecular Evolution, 19, 47-61.

Goldberg, D. E., 1989, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA.

Hart, H. L. A. and Honoré, A. M., 1958, Causation in the Law, Oxford University Press.

Hasslacher, B. and Tilden, M. W., 1995, Living machines, in Robotics and Autonomous Systems: The Biology and Technology of Intelligent Autonomous Agents, L. Steels, ed., Elsevier.

Hawking, S., 1988, A Brief History of Time, Bantam Books, NY, pp.143-153.

Hertz, H., 1894, Die Prinzipien der Mechanik. English translation by D. E. Jones and J. T. Walley, The Principles of Mechanics, Dover, NY, 1956.

Holland, J. H., 1992, Adaptation in Natural and Artificial Systems, 2nd ed., MIT Press, Cambridge, MA.

Hume, D., An Enquiry Concerning Human Understanding, Sections 4-7.

Jakobson, R., 1970, Main Trends of Research in the Social and Human Sciences, Mouton/UNESCO, Paris, pp.437-440.

Kauffman, S., 1993, The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press.

Langton, C., 1989, Artificial life, in Artificial Life, C. Langton, ed., Addison-Wesley, Redwood City, CA, pp. 1-47.

Maes, P., 1992, Learning behavior networks from experience, in Toward a Practice of Autonomous Systems, F. J. Varela and P. Bourgine, eds., MIT Press, Cambridge, MA, pp. 48-57.

Mitchell, M., Crutchfield, J. P., and Hraber, P. T., 1994, Evolving cellular automata to perform computations: mechanisms and impediments, Physica D 75, 361-391.

Nicolis, G. and Prigogine, I., 1989, Exploring Complexity, Freeman, NY.

Pattee, H. H., 1979, Complementarity vs. reduction as explanation of biological complexity, Am. J. Physiology, 236(5): R241-R246.

Pattee, H. H., 1980, Clues from molecular symbol systems, in Signed and Spoken Language: Biological Constraints on Linguistic Form, U. Bellugi and M. Studdert-Kennedy, eds., Dahlem Konferenzen Report 19, Verlag Chemie, Weinheim, pp. 261-274.

Pattee, H. H., 1985, Universal principles of measurement and language function in evolving systems, in Language and Life: Mathematical Approaches, J. Casti and A. Karlqvist, eds., Springer-Verlag, Berlin, pp. 268-281.

Pattee, H. H., 1995, Evolving self-reference: matter, symbols, and semantic closure, Communication and Cognition - Artificial Intelligence, 12(1-2), 9-28.

Planck, M., 1960, A Survey of Physical Theory, Dover, NY, p. 64.

Schuster, P., 1994, Extended molecular evolutionary biology: Artificial life bridging the gap between chemistry and biology, Artificial Life 1, 39-60.

Taylor, R., 1972, Causation, in The Encyclopedia of Philosophy, vols. 1 & 2, P. Edwards, ed., Macmillan, NY, pp. 56-66.

Tolman, R. C., 1950, The Principles of Statistical Mechanics, Oxford University Press, p. 157.

von Neumann, J., 1955, The Mathematical Foundations of Quantum Mechanics, Princeton University Press, Princeton, NJ, p. 351, p. 420.

von Neumann, J., 1966, Theory of Self-reproducing Automata, A. Burks, ed., University of Illinois Press, Urbana, IL.

Weyl, H., 1949, Philosophy of Mathematics and Natural Science, Princeton University Press, Princeton, NJ, p. 203.

