In Chapter 1 we saw that the new idea of fractal patterns can be applied to
everything in the universe around us, so that even things which aren't
obviously fractals on the surface can be understood by the same approach.
This gives us the opportunity to rethink our prior understanding, and see
fractal patterns as the underlying principle of everything around us. The
idea that we can "run the fractals" in the generating direction with deductive
thinking and in the compressing direction with inductive thinking then gave us
a solid base for understanding that these two types of thinking are different,
and that we need both. In Chapter 2 we saw how profoundly inductive thinking is
missing from most human cultures, and this enabled us to re-interpret what is
usually thought of as a form of mental handicap as people "falling out of step"
because they retain their faculties when everyone else goes to sleep
with their eyes open.
By understanding something of the way the magicians see the universe - as
fractal patterns - we have found our way to understanding why the magicians
think that most people in most societies don't see most of what is really going
on around them. It's because their inductive thinking ability is asleep. Now
we will complete the picture of how the magicians think differently to most
people, and this will make it possible to understand more of how they see the
universe.
Deduction All Alone
We evolved our ability to think deductively as a special tool, in addition to
our ability to think inductively. Most people's loss of inductive thinking
because of mass boredom addiction is a very recent thing (in evolutionary
timescales), which has left the deductive kind of thinking trying to run things
on its own. The trouble is, when we do deductive thinking on its own we
inevitably fall into a deep and subtle error that makes many of the logical
conclusions we reach profoundly wrong. Everything seems to check out as
perfectly logical - there don't seem to be any errors in the reasoning at all.
And in its own terms, the reasoning really is correct. The error happens
because when we lose the background that inductive thinking should provide,
whatever we are thinking about deductively becomes its own background. An
example is the best way to explain how this can happen.
CYC is an ambitious project to make computers more intelligent by creating a
vast network of true facts, all linked together. The idea is that when we read
something written by another person, we use lots of little "common sense" facts
that we know to make sense of what we read. If we want computers to read news
stories or scientific reports and tell us about the ones we might be interested
in, perhaps the computers need to know all the little "common sense" facts to
make sense of the stories. So the people building CYC have spent years telling
CYC little facts. At night they leave the computer running, looking through the
facts it knows, and trying to find connections. In effect they try to get it to
deduce things from the facts it already knows. One morning, they came in to
find that CYC had printed out this remarkable claim:

Most people are famous.
Of course it isn't true, which is not to say that Julie who runs the local
shop, or Sam who organises the school trip, or Terry who restores antique
furniture aren't important. They just aren't famous. So why did CYC deduce
(in its unconscious, machine way) that they are? Eventually the researchers
realised what the problem was. They'd told CYC about many, many people, and
all of them were famous! CYC didn't know that for every "Albert Einstein" who
is famous, there are thousands and thousands of Julies and Sams and Terrys
who aren't famous. Without the background of Julie and Sam and Terry, Albert
Einstein had become his own background, and CYC seemed to be in a world where
everyone is famous. CYC had got itself logically inside out. No matter how
carefully the CYC program checked the logic, it could never find the flaw - but
it would still be wrong.
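We can get the flavour of what happened with a few lines of C. This sketch has
nothing to do with CYC's real software - it's just an illustration of how a
fact base that only contains famous people becomes its own background:

// cyc_error.c - a toy illustration (not CYC's real code) of a closed fact
//               base becoming its own background.

#include <stdio.h>

struct person { const char *name; int famous; };

int main(void)
{
    // The only people ever entered into the fact base.
    struct person known[] = {
        { "Albert Einstein", 1 },
        { "Isaac Newton", 1 },
        { "Marie Curie", 1 },
    };
    int n = (int)(sizeof known / sizeof known[0]);
    int famous = 0;
    int i;

    // Survey the whole world - as far as the program can tell, these three
    // people are everyone who exists.
    for(i = 0; i < n; i++)
        if(known[i].famous) famous++;

    // The Julies, Sams and Terrys were never entered, so they don't count.
    if(famous * 2 > n)
        printf("Deduction: most people are famous.\n");
    return 0;
}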
The deductive mind's problem happens because it works like CYC. It has to
select the elements of anything it wants to think about and represent them in
its own internal thinking space, where it proceeds to push them around.
Inductive thinking works differently. It evolved to take everything
available to the person's senses and look for patterns in the whole. So when a
person has their inductive thinking ability working together with their
deductive ability, it's natural to remember that the elements being pushed
around have been taken out of their real context, and to bear that in mind at
every stage of thinking. The deductive mind never developed the
ability to maintain its own grounding in the context of whatever it's thinking
about because the inductive mind always took care of that. With a mind that's
an integrated mixture of induction and deduction, we are prevented from making
mistakes that the deductive mind working alone can't see.
This error is something which every person trapped in deductive thinking must
make over and over again. Even worse, the kind of thinking this creates means
that the whole culture ends up making the same logical error. Unlike boredom
addiction, where it's possible for some people to have a fortunate genetic
immunity and retain their full faculties, everyone in a society where inside
out thinking is common is in danger of being tricked. The effect is so powerful
and so hard to spot that people keep on making mistakes in what they are
convinced is logical thinking, and can't see where the problem is, even when
they can see that the result doesn't match what happens in reality. This is why
inside out thinking needs a chapter of its own to explain where the problem
is, even though at root it's just a consequence of most people being caught in
deductive thinking, which we've already looked at. The good news is that once
people get the hang of what goes wrong and start to break free of robotic
fixation, they naturally start to remember context and see new possibilities
or solutions to problems opening up that they didn't notice before. Unlike
boredom addiction, there's nothing like withdrawal stress locking inside out
thinking into place.
The Universe Knows - We Don't
Having seen how the CYC computer made its silly mistake, we can look at two
classic errors where humans do exactly the same thing. The first one still
puzzles mathematicians (the very people who were so surprised when Kurt Gödel
proved that deductive thinking can't do everything). Imagine you are told two
facts, like this:
1) Some liontamers are women.
2) Some women are redheads.
The question is, are there any redheaded liontamers? People usually think about
two possibilities (and if they've ever done sets at school they even draw the
actual diagrams shown below):

(Two diagrams of see-through circles: in both, the liontamers circle overlaps
the women circle and the women circle overlaps the redheads circle; on the left
the liontamers and redheads circles also overlap each other, while on the right
they don't.)
They look at the possibility on the left, and decide that there is no reason,
from the information available, to assume that the set of liontamers and the
set of redheads don't overlap like the diagram shows. So that can't be correct.
Then they look at the possibility on the right, and decide that there is no
reason to assume the two sets do overlap either. So they say that the answer
is "undefined". The problem as stated doesn't allow them to answer the
question. And that, most professional mathematicians would agree, is the
correct answer.
If we bear in mind the trap of inside out thinking, we find that there is
actually another possibility, which no-one ever thinks of. The deductive mind
is a control freak. Like CYC, its awareness is completely limited to its
internal thinking space, and it only knows about things that it has copied
into its internal space. Like CYC, it has complete knowledge of everything in
its internal space, and it doesn't know that anything except its internal
space exists, so it thinks it knows everything! This is the error we saw
Ouspensky and his fellow students making in the last chapter. The deductive
mind doesn't really know everything of course - in fact it hardly knows
anything. So from the outside we see that it only pretends it knows everything,
and also is so silly that it thinks anything it doesn't know about doesn't
exist, or doesn't matter. So to indicate its perfect knowledge of all things,
it has to draw see-through circles for the three sets - showing that it knows
everything about every little corner of its internal universe - and has the
problem of deciding if the set of liontamers overlaps the set of redheads
before it does anything else. Both of the choices using see-through circles
actually misrepresent the information available, which doesn't say anything at
all about the relationship between liontamers and redheads. Let's draw the
diagram in a more accurate way:

(The same sets redrawn with the set of women opaque, so that the liontamers and
redheads sets disappear behind it, and we cannot see whether they meet.)
Now we've shown that although the universe certainly knows if it has any
redheaded liontamers in it, we don't know, because that information is hiding
behind the set of women, which we can't see into from the information given.
This more accurate diagram allows us to give a slightly (but importantly)
different answer to the question. We can answer "possibly".
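If you like, you can check that "possibly" is the honest answer with a short C
program. This sketch isn't part of the original argument - it simply builds two
tiny invented worlds which both satisfy the two facts, one containing a
redheaded liontamer and one not:

// possibly.c - a sketch (not from the original text) showing that the two
//              facts admit worlds both with and without redheaded liontamers,
//              so the honest answer is "possibly".

#include <stdio.h>

#define LIONTAMER 1
#define WOMAN     2
#define REDHEAD   4

// Check both facts against a world of n people.
int facts_hold(const int *world, int n)
{
    int fact1 = 0, fact2 = 0, i;
    for(i = 0; i < n; i++)
    {
        if((world[i] & (LIONTAMER | WOMAN)) == (LIONTAMER | WOMAN)) fact1 = 1;
        if((world[i] & (WOMAN | REDHEAD)) == (WOMAN | REDHEAD)) fact2 = 1;
    }
    return fact1 && fact2;
}

int has_redheaded_liontamer(const int *world, int n)
{
    int i;
    for(i = 0; i < n; i++)
        if((world[i] & (LIONTAMER | REDHEAD)) == (LIONTAMER | REDHEAD))
            return 1;
    return 0;
}

int main(void)
{
    // World A: the woman liontamer and the redheaded woman are different people.
    int a[] = { LIONTAMER | WOMAN, WOMAN | REDHEAD };
    // World B: one woman is both a liontamer and a redhead.
    int b[] = { LIONTAMER | WOMAN | REDHEAD };

    printf("World A: facts %s, redheaded liontamer: %s\n",
           facts_hold(a, 2) ? "hold" : "fail",
           has_redheaded_liontamer(a, 2) ? "yes" : "no");
    printf("World B: facts %s, redheaded liontamer: %s\n",
           facts_hold(b, 1) ? "hold" : "fail",
           has_redheaded_liontamer(b, 1) ? "yes" : "no");
    return 0;
}

Both worlds check out, so the facts really do leave the question open.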
What's the difference between "undefined" and "possibly"? It's that "undefined"
tells us absolutely nothing, while "possibly" tells us that we do possess some
information about the situation, and so far there is nothing to tell us that
there aren't any redheaded liontamers. After all, if our total knowledge
consisted of the contents of this box:

(an empty box)
We would not know that liontamers do exist, we would not know that redheads do
exist, and we would not know that there is a possible meeting ground between
liontamers and redheads, because some members of both groups are also women. If
we want to meet some redheaded liontamers, that's quite a lot of information -
and so far, all of it is positive. We're like a football team that's reached
the semifinals. We haven't won the cup yet, but our chances of winning are
better than they were when we started. Since we started with no information at
all and our chances of meeting redheaded liontamers at the start were
"undefined", our chances now that we've got some information and the
possibility still exists must be something better than "undefined". Because the
deductive mind needs to work in its own internal thinking space and can't deal
with an open situation like the inductive mind can, it has to pretend that
everything in that space is concrete and well defined. By being greedy about
certainty in this way, it actually ends up throwing away useful information.
This is a habit that is deeply ingrained in deductively fixated culture, which
is why you've never seen anyone draw a set diagram like the one above, with the
set of women hiding the relationship between the sets of liontamers and
redheads instead of joining or not joining them.
Some people who are not trapped in deductive thinking, and naturally live in
the universe that their inductively capable minds are showing them, look at
people doing deductive thinking in this wrong, inside out kind of way, and come
to the conclusion that logic is bunk. They're out looking for the redheaded
liontamers and sometimes finding them, while the "logical" people are sitting
at home, quite convinced that there is no point even trying. Dismissing logic
like this, just because some people do it wrong, is itself an error. The
whole universe is logical. It all fits together, and we evolved the deductive
mind to make use of this fact. When we recognise the error of inside out
thinking, we have to be careful to only throw out the error, and not the good
stuff. This is why George Gurdjieff always talked about developing the human
mind as "perfecting our Objective Reason", and Rudolph Steiner (a magician best
known for founding the Waldorf School movement) warned about the danger of
developing inductive thinking without developing deductive thinking at the
same time, and so becoming fuzzy brained "one sided mystics".
In the example we've just looked at, inside out thinking confuses people's
ability to make judgements based on some simple facts because the deductive
mind acting alone has trouble understanding that the symbols it has in its
internal thinking space represent only partial knowledge of a different,
external reality which is complete. To get a sense of how tricky this error
can be, we can look at another puzzle (it's great fun to try this one on people
because they always get it wrong), and then look at how the error has led to
some serious miscarriages of justice.
Imagine you're a TV gameshow contestant. You've reached the final stage of the
show, where you try to win the big prize. The gameshow host stands you in
front of three doors:
You can't see what's behind the doors, although the host can because she keeps
running round behind them. The host explains that there's a car hidden behind
one of them (we assume you want a car), and lemons hidden behind the other two
(we assume you don't want a lemon). You have to pick a door, and tell the host
which one you've picked. Let's say you pick door A. The host then chooses and
opens one of the other doors, to reveal a lemon that has been hidden
behind it. (Because the host knows what's behind all three doors she can always
choose one that's got a lemon behind it.) Let's say she opens door C:
Now she gives you a choice. You can either stick with your original choice of
door A, or you can switch to door B, which is the other door that remains
unopened. After you've decided, the host will open the door you've settled on,
and whatever is behind it, is yours! The question is, should you stick, switch,
or doesn't it make any difference?
People always think that it makes no difference, and that's the wrong answer!
When you stick with door A you have a 1/3 chance of winning the car, but if
you switch to door B you have a 2/3 chance of winning the car, which is twice
as often! Let's look at why this is so, and why the confusion between what
the universe knows and our incomplete knowledge leads people astray. When you
pick a door at first, you have a simple 1/3 chance of picking the door that
happens to have the car behind it. You know no more than that, and you can't do
any better to improve your odds than just picking at random. When the host
opens one of the other doors though, she knows something you don't. She knows
exactly which door the car is behind, so she can always choose another door
with a lemon behind it to open. When you then decide to stick or change, you
are actually faced with a different problem to the one you started with. In the
first problem, only one out of three doors had a car behind it. Your first
choice is from a collection of two lemons and one car. Then the host removes
one lemon from the problem. After she has opened one of the doors, there is
only one car and one lemon left in play. So you can never switch from a lemon
hiding door to another lemon hiding door. All you can do is switch from a car
hiding door to a lemon hiding door (if you happened to pick the car in your
first choice), or from a lemon hiding door to a car hiding door (if you
happened to pick a lemon in your first choice). Since you had a 1/3 chance of
picking the car but a 2/3 chance of picking a lemon in your first choice, you
end up with a 1/3 chance of switching to a lemon and a 2/3 chance of switching
to the car!
Some people - particularly people with strong technical backgrounds - find this
result impossible to believe. So if you're one of these people, try it for
yourself. Here's a simple C program that you can use to do this:
///////////////////////////////////////////////////////////////////////////////
//
// gameshow.c - simulate the gameshow problem when the contestant always
//              switches doors.
//
///////////////////////////////////////////////////////////////////////////////

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int CarBehind;     // Door hiding the car: 0, 1 or 2.
    int MyFirst;       // The contestant's first choice.
    int Eliminated;    // The lemon door the host opens.
    int MySecond;      // The contestant's choice after switching.
    int Play;
    int Win;
    int Lose;

    Win = Lose = 0;
    srand((unsigned) time(NULL));  // Seed the random number generator.

    for(Play = 0; Play < 3000; Play++)
    {
        // Position the car.
        CarBehind = rand() % 3;

        // Make the initial door selection at random.
        MyFirst = rand() % 3;

        // The host eliminates a door - never the contestant's door, and never
        // the door with the car behind it.
        if(MyFirst == 0)
        {
            if(CarBehind == 1) Eliminated = 2;
            else Eliminated = 1;
        }
        else if(MyFirst == 1)
        {
            if(CarBehind == 0) Eliminated = 2;
            else Eliminated = 0;
        }
        else
        {
            if(CarBehind == 0) Eliminated = 1;
            else Eliminated = 0;
        }

        // Switch to the one door that is neither the first choice nor the
        // eliminated door.
        if(MyFirst != 0 && Eliminated != 0) MySecond = 0;
        else if(MyFirst != 1 && Eliminated != 1) MySecond = 1;
        else MySecond = 2;

        // Add to the results.
        if(MySecond == CarBehind) Win++;
        else Lose++;
    }

    printf("%d wins, %d losses\n", Win, Lose);
    return 0;
}
So what goes wrong here? The problem is that the deductive mind sets up the
initial problem in its internal thinking space, which it then treats as
"reality", instead of as a partially formed picture of external reality. When
the gameshow host adds information to what the deductive mind has by revealing
the location of one of the two lemons, the deductive mind doesn't recognise
that as an opportunity to improve its picture of external reality. The key
issue is that since the deductive mind hasn't pushed a car symbol around, it
doesn't believe that the car has "moved", and it can't take into account that
the information it started out with about where the car is has just been
improved. So to maintain its "grip on reality" it has to insist that there is
no point in switching. Because the deductive mind sets its own internal
thinking space up as reality, it can't trust the true, external reality to be
completely consistent and rely on that fact when it traces what is going on
through its own areas of ignorance - which is a point we'll come back to later.
Sometimes the effects of this kind of error can be very serious indeed. Cheap
mass DNA screening is now used to hunt for sex offenders and murderers, and
has exposed a serious confusion which the legal system has not yet managed to
cope with. Imagine you are on a jury, and you've heard the following evidence:
A horrible crime was committed, and the forensic scientists found a DNA sample.
DNA profiles are very specific (although they don't compare every single gene),
and only one person in a million will fit a given profile. The police then mass
sampled lots of people, and found a man who fit the DNA profile. There is only
a million to one chance of error, and so he is guilty. Would you convict him?
Most people say yes, and people are currently in prison for this reason. But
the logic is completely flawed. The problem is that there is no other evidence
to link the man with the crime. The police just sampled and sampled until they
found one person who matched the DNA profile. So the question isn't about all
the people who don't fit the profile. It's about the people who
do. If there are 50 million people in the country, there will be 50
people who fit the DNA found at the crime scene. If we just keep sampling until
we find one of them, we have one chance in 50 that the person we find first
happens to be the criminal. In 49 cases out of 50 we'll just pick up some other
unlucky person who happens to fit the DNA profile. So instead of having one
chance in a million of being wrong, we actually have one chance in 50 of being
right! When the deductive mind looks at the argument in the isolation of its
internal thinking space, the prosecution's argument seems very sensible. By
itself the deductive mind won't step outside of that closed internal space and
wonder about all the other people in the real situation of a country of 50
million people, in which the odds presented in court get switched inside out by
all the other possibilities that the prosecution never mentioned.
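The arithmetic is easy to check with a few lines of C, in the same style as
the gameshow program. The figures are the round numbers used above - 50
million people and a one in a million match rate - not real forensic
statistics:

// dna.c - a sketch of the prosecution's error, using the round numbers from
//         the text.

#include <stdio.h>

int main(void)
{
    double Population = 50000000.0;      // People the police can sample from.
    double MatchRate = 1.0 / 1000000.0;  // Chance that any one person fits
                                         // the profile.

    // With no other evidence, this is roughly how many people in the country
    // fit the profile - and only one of them is the criminal.
    double Matchers = Population * MatchRate;

    printf("People fitting the profile: about %.0f\n", Matchers);
    printf("Chance the first one found is the criminal: 1 in %.0f\n", Matchers);
    return 0;
}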
A similar problem is seen in cases where mothers of babies who have suffered
cot death are accused of murdering them. In several cases reputable doctors
have given evidence for the prosecution that the chances of a cot death are
very slight, so the chances of two babies suffering cot death in the same
family are so slight as to be ignored - and so the mothers must have smothered
the babies. In fact there's solid scientific evidence that although the cause
of cot death is still unknown, there is some genetic or environmental factor
that makes some families more likely to experience this terrible tragedy than
others, so it isn't like throwing dice, where each throw is completely
independent of every other throw. If a family has suffered cot death once, the
chances of it happening again are significantly greater. A second death does
not mean the mother must be a murderer.
This problem became even more worrying because of a case in the UK in 1996,
when the defence attempted to explain the logical error to the jury. The
mathematically correct way to cope with statistics like this is called Bayes'
Theorem, after the Reverend Thomas Bayes, who discovered it in the 18th
century. Reverend Bayes' idea says (in a formal mathematical way) that we can
start out
with our best guess at what will happen, but that as we learn more, we
have to adjust the probabilities that we guess for a thing happening or not. We
have to be like the gameshow contestant that learns from what the host does,
and not like the contestant who starts with an amount of "knowledge" which
doesn't change as more data arrive. We always have to be aware of the context
of the knowledge that we have. This caused a problem in court, because the
correct answer, and the wrong answer that the inside out deductive mind working
on its own tends to produce are different. The judge ended up telling the jury
that when they think about statistical, scientific evidence, they must not use
mathematics, but instead they must use something he called "judgement". In
effect, he said that mathematics has nothing to do with science, that they
mustn't use reason, and must instead give in to the error because it feels
right!
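For anyone who wants to see the Reverend's idea in action, here is a sketch
applying it to the DNA case from earlier. The prior and the match rate are
this chapter's illustrative round numbers, not figures from any real trial:

// bayes.c - Bayes' Theorem applied to the DNA matching case, using the round
//           numbers from the text.

#include <stdio.h>

int main(void)
{
    double PriorGuilty = 1.0 / 50000000.0;     // Before the test, any one of
                                               // 50 million people could be
                                               // the criminal.
    double MatchIfGuilty = 1.0;                // The criminal always matches.
    double MatchIfInnocent = 1.0 / 1000000.0;  // An innocent person matches
                                               // one time in a million.

    // P(guilty | match) = P(match | guilty) * P(guilty) / P(match)
    double PMatch = MatchIfGuilty * PriorGuilty
                  + MatchIfInnocent * (1.0 - PriorGuilty);
    double PosteriorGuilty = MatchIfGuilty * PriorGuilty / PMatch;

    printf("Chance that a matching person is the criminal: about 1 in %.0f\n",
           1.0 / PosteriorGuilty);
    return 0;
}

Run it, and the answer agrees with the counting argument above: about one
chance in 50, nothing like the million to one the jury was given.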
We've seen how the deductive mind working on its own can make serious mistakes
because it ends up confusing its understanding of reality with true reality,
and this means that it can get its sums wrong. This is not the worst effect
of inside out thinking. Things get worse when the loss of context causes people
to see everything around them in a dark and confused way. This means that the
universe that most people see is a much less interesting, fun and opportunity
filled place than the universe the magicians see - and this is something that
the magicians are right about.
Turning Inside Out
Every relationship between everything found in the real universe exists by
default. Things in the real universe don't need to have special provisions to
make their relationships possible. Simply by virtue of existing in the same
universe, things have the opportunity and context to relate to each other. For
example, every bit of matter in the entire universe exerts a gravitational pull
on every other bit. Right now, your feet are exerting a gravitational pull on
your nose, and both of these parts of yourself are gently tugging on the
Eiffel tower in Paris, the Taj Mahal in India, the Moon and the stars Rigel and
Sirius. Meanwhile, Rigel and Sirius are tugging on each other. We still don't
understand how this happens, but it does. Scientists talk about gravitational
fields, but all that does is explain what they see happening - it doesn't
explain how it happens. Real space is somehow an active medium, which connects
everything without anything else being required to allow the connection. In
real space, the exceptions are situations where connections are not possible
because special measures have been taken to stop the connection being possible.
For example, if there are some chickens in a coop and a fox in the woods, the
material substance of the coop will prevent the fox getting to the chickens.
For another example, if we don't want two electrical wires to make a short
circuit we must separate them with an insulator, to prevent a connection from
occurring by accident. Science is about discovering the connections that are
going on all on their own without any other help or permission required, and we
are always finding fascinating connections between things that we never
realised were going on. The internal thinking space of the deductive mind isn't
like this. It isn't an active space which enables everything to relate to
everything else by default. It's a passive space which things can get
transferred into, where they just hang, not connected to anything else
unless we make special provision to assume that a connection exists.
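To put a number on the feet and nose example, we can use Newton's law of
universal gravitation. The masses and distance in this sketch are invented
for illustration:

// gravity.c - Newton's law of universal gravitation, F = G * m1 * m2 / (r * r),
//             with invented masses and distance.

#include <stdio.h>

int main(void)
{
    double G = 6.674e-11;  // Gravitational constant, N m^2 / kg^2.
    double m1 = 2.0;       // Assumed mass of your feet, kg.
    double m2 = 0.05;      // Assumed mass of your nose, kg.
    double r = 1.7;        // Assumed distance between them, metres.

    double F = G * m1 * m2 / (r * r);

    printf("Your feet pull on your nose with a force of about %g newtons.\n", F);
    return 0;
}

The force is a couple of million-millionths of a newton - tiny, but never
zero, and the same law connects you to Rigel and Sirius.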
In reality relationships exist by default, and do not exist if special
provisions are made to stop them. In the internal thinking space relationships
do not exist by default, and exist if special provisions are made to enable
them. So when we take the things that we see in a part of the world and copy
them into the internal thinking space of the deductive mind, we turn the whole
picture inside out. Because we turn it all inside out at once, we are left with
a consistent picture. We can do all the logic we like in the inside out space,
and however we check our deductive thinking we'll find no errors. It's just
that when we compare our results with reality, we find that what we expect just
isn't what happens! The problem is that in our deductive thinking we've done
stuff that ignores the context of active real space. When the inductive mind is
turned on and doing its own correct job, we just don't let the deductive part
of our minds make that kind of mistake in the first place. It's
a sneaky problem though, because when they are brought up in a culture that
does logic in a deductively fixated way, even people with their inductive minds
turned on learn a kind of "thinking" that doesn't use the inductive mind to
run a constant sanity check on what the deductive mind is doing. Instead of
using both parts of their minds together in an integrated and direct way, they
effectively stop using their minds and engage in a robotic, computer-like
symbol pushing activity they call "logic".
One consequence of how this turning inside out of absolutely everything catches
people is found in how they identify mutually exclusive and mutually
inclusive opportunities. It's something the entire culture consistently gets
wrong! For example, every time anyone does the sums, they discover that
improving industrial quality and reducing costs go hand in hand. The two
activities are mutually inclusive. When we improve quality we reduce wastage,
improve worker morale, reduce production times, we don't have to cope with
returned faulty goods, we spend less on marketing promotions (because the stuff
sells itself) and so on. Improving quality always decreases costs. Yet over and
over again, people's automatic reaction when they have to cut costs is to
reduce quality. They switch the mutually inclusive relationship to a mutually
exclusive one, and then extend the faulty logic to the conviction that all they
need to do is reduce quality and costs will surely come tumbling down! In
recent years, a similar mistake has convinced some people that so long as they
are damaging the environment, they must be making a profit, leading to a
bizarre kind of anti-environmentalism for its own sake!
On the other hand, at every election we see another batch of politicians
promising to reduce taxation and increase public spending at the same time.
Everyone knows this just can't add up because the two options really are
mutually exclusive. The reason that politicians do it is they know that
despite all reason to the contrary, people will actually fall for it. At heart,
people don't believe that reducing taxation and increasing public spending are
connected in a mutually exclusive way.
This really is a deep logical effect, and not simply sloppiness. We can see
this by considering the kind of logic that engineers use to create logic
circuits, as used in all sorts of gadgets. When we think about things
"logically", we use the basic relationships AND and OR to connect ideas that
we think of as TRUE or FALSE. Everything works fine, everything makes sense,
until we want to create little electronic circuits to represent these
relationships. Then suddenly, we find that nature doesn't seem to want to play
by our simple rules! Instead, what engineers find they can construct most
simply are two related operations they call NAND and NOR. These are the same
AND and OR relationships that humans use, but with the results negated, so a
TRUE result becomes FALSE and a FALSE result becomes TRUE. Except that the way
nature does it, a NAND isn't made from an AND with an extra component to turn
it inside out - it's the AND that's made from what we call a NAND with an
extra component to turn it inside out! Just like in the diagram above, we have
a consistent way of doing things, and so does nature, and the one is the inside
out reversal of the other!
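Here is this point as a sketch in C (functions standing in for transistors -
an illustration, not from the original text). With NAND as the primitive, NOT
and AND come out as things built on top of it:

// nand.c - building NOT and AND out of the NAND primitive, the way round that
//          nature prefers.

#include <stdio.h>

// The natural primitive.
int nand_gate(int a, int b) { return !(a && b); }

// NOT is a NAND with its two inputs tied together.
int not_gate(int a) { return nand_gate(a, a); }

// AND is a NAND with an extra component bolted on to turn it inside out.
int and_gate(int a, int b) { return not_gate(nand_gate(a, b)); }

int main(void)
{
    int a, b;
    for(a = 0; a <= 1; a++)
        for(b = 0; b <= 1; b++)
            printf("a=%d b=%d  nand=%d  and=%d\n",
                   a, b, nand_gate(a, b), and_gate(a, b));
    return 0;
}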
Sometimes this business of turning things inside out because we lose the
context as we copy into the deductive mind's internal thinking space produces
some very odd results that just don't match what nature does. When this
happens, people tend to cope with the discrepancy by pretending that nature
does what their "logic" says it does - and carrying on regardless. An example
is the application of what is supposed to be a Darwinist philosophy of free
market economics as practiced in Europe and America. The Darwinist idea is that
living things (and in the analogy businesses too) compete, and in the
competition, the strongest survive by wiping out the weaker examples. People
who believe this point to animals preying on each other as a justification for
their philosophy, but then they have a problem. What tends to happen in this
model is that the economy starts out with a diverse range of businesses,
interacting with each other to create a diverse range of products for their
customers - a business ecology. Then during competition, more and more of the
businesses die off until only one monopoly (or artificially maintained duopoly)
remains. This dinosaur then becomes slow and bloated until it is finally killed
off by the evolutionary catastrophe of some new technology coming along which
the slow and bloated business is unable to adapt to. The new technology is
adopted by a number of new businesses and for a while the customers enjoy
choice and service again. Then the cycle repeats. The Darwinist rationale for
setting things up this way is that this occurs in nature. Except it doesn't!
What really happens in nature is that ecosystems start simple and over time
they get much more complex. We start with a ball of rock swinging through space
and end up with the Amazon rain forest (or at least we do until business models
supposedly inspired by nature deforest it). So what's gone wrong? Why does
competition do one thing in nature and another thing in business? The trick is
to recognise that although the deductive mind is correct when it copies
incidents of competition from the active real space into its passive internal
space, it fails to identify the interacting co-operation which is constantly
occurring between all elements of the real ecosystem because all the elements
are connected by the active space that they share. Animals keep breathing in
oxygen and breathing out carbon dioxide. Plants keep doing the opposite. Every
element of the ecosystem interacts with all the others in vast numbers of ways,
and this provides a co-operative context in which the isolated incidents of
competition take place. This co-operative context does not get copied into the
internal space of the deductive mind, which ends up seeing competition occurring
without any context. That is then the natural "model" that humans seek to
emulate, and in the resulting state of total war, it's hardly surprising that
desertification of the business ecology happens very quickly indeed.
As the deductive mind acting alone flips inclusive and exclusive relationships,
it also converts the open possibilities of the real universe into closed
possibilities. In reality, win - win relationships are the ecological norm, but
in the internal thinking space of the deductive mind, every winner implies a
loser. That's why after they've decided that quality and economy are mutually
exclusive, people go on to think they can control economy by reducing quality.
If quality loses, economy must win, because in the closed world, there's a
winner for every loser. This can lead to people thinking in negatives to a
quite remarkable degree, without ever realising what they are doing. For
example, very aggressive businesses that are trying to apply what they think of
as Darwinistic reasoning usually think that they are being selfish - and if
they really were being selfish there wouldn't be a problem. The trouble is,
they aren't being selfish at all. They are really being anti-altruistic.
Anti-altruism is the inside out version of selfishness, which works on the
assumption that if no-one gets a bean without paying for it, the company must
do well. On the other hand, quite sincere people who want to be altruistic end
up getting caught in the trap of anti-selfishness. This is the mistake of
thinking that if we deny ourselves then we must be benefiting others, and it's
as big a mistake. What is particularly sad about this example is that in
reality, where everyone is connected by active space, it isn't possible to
improve our own environment without improving everyone else's as well. This is
what every plant and animal in the rain forest unconsciously practices. In this
sense, altruism equals selfishness. On the other hand, the business that screws
its customers to the point where they'll do anything to find another supplier
has nothing in common with the charity worker who flogs herself around in a
frenzy of self denial but never actually accomplishes anything useful.
The same pattern of "two wrongs don't make a right" can be seen in the age-old
dispute between people who think they are rational and people who think
they are spiritual. In fact, many people who call themselves rational actually
police an attitude that is anti-spiritual. To them, rationality is what is left
after everything that they don't know a causal mechanism for, every holistic
style of thought, and every poetic sensibility has been discounted. A
particularly unfortunate example of this was some people's reaction to James
Lovelock's Gaia Hypothesis. Lovelock observed that an interesting
property of the Earth's ecosystem is that it is out of chemical equilibrium,
and stays that way. In this it is the same as any single animal, which stays
out of chemical equilibrium throughout its life. It's only when the animal dies
that a process of decay sets in, and a chain of chemical reactions occur which
eventually stop when chemical equilibrium is reached and decay is complete. In
order to maintain itself out of chemical equilibrium, the living animal is
composed of an interlocking network of active systems which compensate for the
changes thrust upon it by its environment, keeping it in a stable state which
is not chemically stable. From this reasoning, Lovelock argued that because the
Earth's ecosystem has survived for millions of years while being impacted by
meteorites, and subjected to variations in the amount of energy reaching it
from the sun, it too must be composed of interlocking active systems that
compensate for changes. Although we don't yet know what they are, we can deduce
that these systems must exist, and so go looking for them. In this way we can
greatly increase our understanding, which is the purpose of science. Lovelock's
thinking was science at its best. Starting with an overall, holistic, poetic
impression of what must be happening, he saw a way to structure detailed
scientific enquiries. To communicate his new idea in a simple way, he accepted
a suggestion made by the author William Golding and named the idea after the
Greek mother goddess, Gaia. This was enough to drive a generation of
deductivists demented with ideological rage. It was a poetic, holistic,
creative concept. Producing it required a spontaneous recognition of a simple
yet profound truth. It was as repugnant as anything could possibly be to people
trapped in robotic, reductionist, deductive fixation. Nearly 40 years on, a new
generation of biologists, geologists, meteorologists and other specialists
have grown up with the Gaia Hypothesis in play. They have not been faced with
the trauma of the idea's introduction, and indeed are busy mapping out the
interlocking active systems that maintain the Earth's ecosystem out of chemical
equilibrium.
On the other hand, many people who call themselves spiritual are actually
anti-rational. To them, spirituality is what is left after everything that the
deductive mind can handle has been discounted. Paralleling the example of
anti-selfishness and anti-altruism, rationality and spirituality are completely
compatible (as this book shows in great detail), but anti-spiritual people will
never find common ground with anti-rational people, because both groups are
thinking in negatives and so are dead wrong.
In the same way, there is a huge difference between not doing anything wrong
(the principal concern of most people in work or legal contexts) and getting
things right (which satisfies customers and leads to profits). It is the
cultural fixation on the inside out issue of not getting things wrong that
leads people to avoid problem ownership, so the problems just sit in the middle
of the floor, getting worse.
The problem of lost context fuels an unfortunate trait of the deductive mind
acting alone: false senses of fear and security. The chattering
mind wouldn't be so bad if most of what it chatters about wasn't such complete
nonsense! False fear occurs because the deductive mind acting alone doesn't
really believe that anything else exists outside the narrow confines of its
knowledge. People stumble into boring life situations purely by chance, and
get stuck there - because the idea that there might be anything else they could
do is then inconceivable. They forget that if they'd happened to go for the
washing up job instead of the car park attendant one, they'd still be alive,
still be doing things, but they would be different things. So they get stuck in
the car park until a hurricane comes along and knocks it over. This same closed
world also explains many people's strange unwillingness to experiment because
of fear of failure. A rational concern about spending scarce resources unwisely
is one thing, but fear of failure in itself is very odd. When we try something
new we can either succeed (great) or not succeed (hey ho, and we've learned
something new). But in the closed world of the deductive mind acting alone it
doesn't seem that way. There is either success which is good, or failure which
is the closed world alternative - bad. So just like the "logical" people who
won't go looking for redheaded liontamers because they aren't certain to find
them, people trapped in this way needlessly deny themselves opportunities for
success, even when they are zero cost.
False security comes from the mistaken belief that the only possible dangers
are the ones that people are conscious of. Because they've copied some specific
dangers that they already know about into their internal thinking space, they
come to believe that all they need to do is make explicit provisions to deal
with those dangers and they will be safe. A good example at the moment is the
growth of compulsory drugs testing in the workplace. Before this fashion caught
on, managers used to
monitor their employees' work. If the work deteriorated, the manager would
investigate and find out why. Perhaps the employee's health had deteriorated or
she had family problems and needed some compassionate leave to intelligently
return her to full effectiveness. Perhaps an underappreciated member of staff
had left and her remaining colleagues had found themselves snowed under (that
one's amazingly common, because no-one on the ground will ever admit to it).
Perhaps the needs of the customer base had changed and more training or
resources were needed. And perhaps the employee had a drugs problem. Quite
apart from the totalitarian aspects of compulsory drugs testing, the perception
that the only thing an employer has to watch out for is employees using drugs
in their own time breeds a dangerous sense of complacency. The same thing
happens with computer system security, where it really doesn't matter how many
passwords a system has if the latest security patches haven't been applied to
the web server, or the corporate network hasn't been designed to be robust in
the face of successful attacks, and with airport security where the number of
troops walking round the departure lounge with automatic weapons doesn't count
for anything if thousands of maintenance workers are entering and leaving the
hangars unchecked every day. Unfortunately in situations like this, the
totalitarianism and lack of imagination of herds trapped in deductive thinking
works together with the trap of losing context, to create situations which are
as useless as they are invasive of basic human rights.
Copyright Alan G. Carter 2003.