This correspondence from the progstone group investigates whether the Ghost Not causes our culture to see the concepts of selfishness and altruism, rationalism and spiritualism, in very peculiar ways.
From: Alan Carter
Hi All,
In "Star Trek", they have a problem, try loads of different things,
then they realign the forward detector array. It always works. Why
don't they just realign the forward detector array in the first place?
It is in this spirit of embarrassment that I offer the following
application of the Ghost Not to the selfishness question. It sheds
some light on the slippery definitions that seem to be at the
root of the confusion. It seems to work. YMMV.
We live in a universe which contains non-zero sum games. For example,
if I have a computer with no hard disk and a spare CD-ROM, and you
have a computer with no CD-ROM but a spare hard disk, we can swap
and both install Linux. Yippee! If packer super-hero Gordon Gekko
were to come along and ask who was the loser in the deal, we'd just
laugh and tell him he was, since he was not involved. This property
of containing non-zero sum games is not a trivial effect that is
only seen in special cases. There are no fossils in the hill that I'm
currently living on, because the strata pre-date the appearance of
life on the planet. Today, there are many living things on the planet.
From space, the presence of the biosphere is the most noticeable thing
about the planet, since it keeps the atmosphere out of chemical equilibrium. Life
has established this dominance by creating an ecosystem. In an
ecosystem, all members are symbiotic. That is what "ecosystem" means
if you think about it. And at cosmological scales, we've now got
superclusters, quasars, regular stars, planets, plucky little Pioneer 10
and so on where there used to be just clouds of hydrogen.
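To make the disk swap concrete, here is a minimal Python sketch of it as a
payoff matrix (my own illustration - the utilities are made up, with a
complete Linux box worth 1 and a box of parts worth 0):

  # The disk-swap game as a payoff matrix. Utilities are illustrative:
  # a complete, working Linux box = 1, an incomplete box of parts = 0.
  payoffs = {
      ("swap", "swap"): (1, 1),  # both machines complete: both gain
      ("swap", "keep"): (0, 0),  # no trade happens without both parties
      ("keep", "swap"): (0, 0),
      ("keep", "keep"): (0, 0),
  }
  for moves, (mine, yours) in payoffs.items():
      print(moves, "total:", mine + yours)  # totals vary: not zero sum

In a zero sum game every cell would total the same constant, so my gain
would have to be your loss. Here the totals range from 0 to 2, which is
exactly why Gekko's question gets a laugh.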
Ghost Not afflicted people have this horrible habit of pre-narratisation.
They represent what they have seen on an internal whiteboard, and deny
that anything else exists. So they are always constrained to working
with the things they know about, in the configurations they know about.
They cannot take opportunities because they cannot see them. Thus they
quickly achieve a state called "scarcity", and within this scarcity
they know that if one person acquires a resource, another must lose it.
The Ghost Not converts a non-zero sum game universe into a zero sum game.
Then, they set about teaching their young. With the universe turned
inside out they cannot teach them any good reasons for not maximising
their own well-being at the expense of others. Yet they must do so,
because otherwise their young become competitors for scarce resources.
Since they cannot teach positive reasons for social behaviour, they
teach negative reasons for avoiding anti-social behaviour. This is
where "morals" and "punishment" come from. Sadly, not-anti-social does
not equal social, since the universe is not a mass of interlocking
zero-sum games at all levels. Not-anti-social does not bring the
benefits of symbiotic interaction.
In this context, Ghost Not afflicted people then produce two philosophical
ideas. "Selfishness" is defined as doing well for self at the expense of
others, whereas "Altruism" is defined as doing well for others at the
expense of self. There are no other cases. When young Ghost Not afflicted
people come to engage in economic activity, they experience stress. They
have been trained out of "selfishness" by aversion therapy - punishment.
Yet they don't want to give all their possessions away since that would
be madness. There are no alternatives. So they get "clever". They start
to practice anti-altruism instead of selfishness. If Microsoft were smart,
if they were really selfish in a non-zero sum game universe, they'd up
the technical quality of the product, reduce locked-in APIs, and give
away key enabling technologies, just like Netscape and Sun do. They'd
create a user base who like them, who want to use their stuff when
appropriate, and who are already established "customers". They'd create
an ecology of value-added suppliers of other bits around them, exploiting
their own technologies in ways they themselves had not thought of, and
creating a more vibrant market.
But Microsoft just can't bring themselves to do it. Their anti-altruistic
model says that no-one gets anything unless they pay for it. Period.
In a zero sum game (where there is no leakage) this must, by definition,
be good for Microsoft. Except it isn't working. The latest scam, attempting
to subvert the US itself by forcing the Federal Government to slash Justice
Department funding, puts them about on a par with the Capone gang in terms
of overstepping the mark, and isn't likely to do anything for Gates'
popularity.
I once saw an extreme case of what I'm here calling "anti-altruism" in
the behaviour of a bunch of pointy-haired civil servants, who booked in
excess of 10,000 GBP of meetings to discuss the purchase of 100 GBP
worth of software, on the grounds of "public accountability". I'm sure
everyone has their favourite example of this kind of behaviour.
And why is it that investment types have such loathing of the so-called
"ethical trusts", which have no truck with arms sales, biochem dumping,
repressive regimes and the like, and in consequence outperform
anti-altruistic business practices (because the companies are free to get
on with productive work)? Don't stockbrokers want to make money? No! They
are addle brains and want to do it arse-about-face in a closed universe
that doesn't even exist. The higher aspiration is completely out of sight.
Which is sad, because it was called the "mercantile ethic" - doing well
while doing good. Having customers who are pleased to see you.
Even more tragic is what happens to people who see the effects of
anti-altruism, and recoil from them, but are trapped in the language of
the Ghost Not. These people reject anti-altruism, and replace it with
anti-selfishness. This is the even sillier idea that doing badly by
one's self must automatically benefit others. Think about the total
amount of human misery in the world. Or even just in India. Or just in
Calcutta. Then add the amount of extra suffering that would have been
there if Mother Teresa hadn't been anti-selfish. The addition is so tiny
it doesn't make any difference! Mother Teresa flogged herself stupid and
did not even dent the problem, because her strategy of anti-selfishness
was flawed. Having a jolly nice supper, looking at the problem afresh and
attacking the right pivot points would have been more productive.
So in a Ghost Not afflicted context we get anti-altruism and
anti-selfishness in a flawed model of a zero-sum game universe. Both are
losing strategies in reality. What is really amazing is that if we lose
the Ghost Not and turn things the right way around, there is no
distinction.
anti-selfishness != anti-altruism (1)
selfishness == altruism (2)
How can this be? The answer is that there is more to the universe than we
can directly see, although we know the extra bits are there. Someone who
believes in the "dark side of the moon" cannot take photographs of rock
formations on the far side of the moon when it's daytime there. We are
free to find new things, and we are free to combine the things we have in
new combinations. Thus we are free to find and exploit opportunities to
improve each others' environments symbiotically. And as explained above,
that is the main game of the universe, so the opportunities are there.
To put it another way, with the artificial system boundaries of the Ghost
Not out of the way, I can see that in order to improve my environment I
must also improve your environment, since we are actually embedded in
(nearly) the same environment. (You can get social behaviour for polluters
out of that if you ask how long the cycle time is for me to be as polluted
as you by my foul dioxin plumes. But that takes a more-than-one-shot mind.)
True selfishness on my part - actions intended to enhance my own well-being
and never mind anything else - will, persistently, in a non-zero sum
game universe, also enhance the well-being of others. So much so that
(even though I might not care about them) I can check the benefit to
self in my plan by finding the benefit to others. If the benefit to
others is there, I can assume that my plan is sound and my well-being
will be served as intended.
To roll up this bit, consider this from Friedenberg's critique, "Laing":
As a rough check on the importance of envy as a political force in any
society, consider the proportion of expenditure on amenities, social
services, or welfare which is actually used for administrative
safeguards against "abuses" rather than to further the ends presumably
sought. It is impossible to get an adequate welfare program through
any state legislature; cities are being driven bankrupt by the mounting
costs of programmes simply too skimpy to alleviate misery. Yet these
programmes are burdened by procedures to eliminate "chiselling" that,
it is clear in advance, will cost far more than the highest possible
reasonable estimate of the total amount being "chiselled" - and which
in any case can only serve to make the poor more miserable by added
harassment and delay.
It is in my selfish interest as a taxpayer to abolish the "safeguards"
described here. The morale and self esteem of the poor will improve, more
money will go exactly where it is needed, more poor will become rich,
my taxes will go down, and my nice car won't be torched while I'm parked
outside a restaurant. That's what I call selfishness. For it to work,
the poor must also be taught to be selfish in the real world.
They will also have to be talked out of their dopamine self-addiction.
See the stuff about administration and procedures in the quote? Again,
Ghost Not and dopamine self-addiction co-support one another, leading
to the strange state of mind descriptively called a "packer".
Look at those two equations up there - the ones that say that turning
all the components of a proposition upside down makes the answers come
out wrong. We've seen it before of course - it's the Ghost Not figure
and ground effect. There's a story in the life of Feynman that fits
very nicely.
Feynman the undergraduate was discussing garden sprinklers with friends.
The hose goes in at the centre, and the sprinkler consists of an S
shaped tube. Pressure from the hose drives water out of the two ends
of the S, and the whole thing rotates by jet action, watering the lawn.
What happens if one sucks on the hose instead of blowing? Most thought that
the thing would go around backwards, but Feynman disagreed. The tale is
he got into trouble because his experimental lash-up exploded, but not
before he had determined that it does not go backwards. Why? It's the
pressure. When blowing, the pressure of the water acts on the cross-section
of the end of the pipe only. When sucking, all we can do is reduce pressure,
which happens evenly for all possible angles of entry to the pipe. There
is a pressure shadow right behind it, but it's so tiny it doesn't count.
The S stays still.
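For those who like to see the sums, here is a back-of-envelope version of
that pressure argument (my notation, a rough sketch rather than anything
Feynman wrote down): with water density rho, nozzle cross-section A and
jet speed v, a nozzle that is blowing throws mass out at rate rho * A * v,
all in one direction, and so feels a reaction force

  F = (rho * A * v) * v = rho * A * v^2

giving a torque of 2 * F * r about the pivot for arms of radius r. When
sucking, the same mass flow arrives, but it converges on the nozzle from
(nearly) every direction at once, so there is no single jet for the pipe
to react against. The momentum contributions from all the angles of
approach cancel, and the net torque is (nearly) zero.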
Turn everything inside out. Answers are different because of figure and
ground effect. If anyone finds some element of this stuff that doesn't
already have Feynman's fingerprints all over it, let me know!
Alan
P. S. My motives for doing this stuff in the first place are to reduce the
number of packers giving me grief and increase the number of opportunities
to operate efficiently that I have. If in doing so I create a similar
environment for you lot, it doesn't hurt me, does it? Quite the reverse.
And you're all in this debate for reasons symmetrical to my own. Yippee!!!
From: William Wechtenhiser
On Thu, 28 Oct 1999, David Barrett wrote:
I'm confused, are you suggesting that my view is that we should all share
and be happy? If so, I would have to disagree -- I am all about selfish
actions, and I think that altruism is at the heart of most problems. I'm
just saying that I'd prefer people's motivation for not taking someone
else's crops to be "This would not be in my best interests, as the farmer
could shoot me, or he could get his government to shoot me" instead of "I
shouldn't take these crops because it's Wrong (implicitly) to do so."
Implicit belief in Right/Wrong/Good/Bad/Moral/Immoral leads to
inconsistency, and therefore I'd say is subjective and unreliable as a
basis for interpersonal conduct.
Is it in my best interests to steal the farmer's crops if I know
that he's a pacifist and will not call the cops? What about my interests
in having him continue to produce food for myself and others? What about
my interest in living in a world in which people don't steal? What about
my interest in knowing that I am self-reliant which would be undermined if
I found myself so incapable of providing for my own sustenance that I was
reduced to stealing to subsist (how did I allow myself to reach such a
miserable state of affairs anyway?)?
I would suggest that one's own best interests, taking all
conceivable factors into account, and with an eye toward the long term, is
a perfectly adequate basis for ethics (that is, for establishing what is
good/bad/&c.). In fact, I would argue that our use of reason is rooted in
this ethic. We use reason because it is in our best interests to, because
it provides us a good framework for judging what is in our best interests.
Seen from a broad perspective, this is ethics.
William Wechtenhiser
Perfection (in design) is achieved not when there is nothing more to add,
but rather when there is nothing more to take away.
-Antoine de Saint-Exupery
From: "Rob Harwood"
Good post Alan, it almost seems a shame to critique it:
We live in a universe which contains non-zero sum games.
Very true, but not always true. Many of the conclusions you come to are very
accurate, but there are a couple that are not. You seem to be coming from a
Game Theoretic point of view, which is good, but a lot of work has already
gone into this area, and I'm not sure if you're aware of it or not. The
Iterated Prisoner's Dilemma is a classic example from Game Theory of what
you're talking about. It spawned the study of Evolutionary Game Theory,
which is a very powerful tool in sociology, economics, and even biology. I
suggest you find the mass of work relating to the keywords Axelrod,
Evolutionary Game Theory, Iterated Prisoner's Dilemma, Tit For Tat, PAVLOV.
Actually, I've got a bit of time, I'll find some links for you:
http://www.constitution.org/prisdilm.htm
http://www.santafe.edu/sfi/publications/Working-Papers/97-12-094E.html
http://www.biozentrum.uni-wuerzburg.de/~brembs/ipd/ipd.html
http://www.biozentrum.uni-wuerzburg.de/~brembs/ipd/pavlov.html
Since they cannot teach positive reasons for social behaviour, they
teach negative reasons for avoiding anti-social behaviour. This is
where "morals" and "punishment" come from. Sadly, not-anti-social does
not equal social, since the universe is not a mass of interlocking
zero-sum games at all levels. Not-anti-social does not bring the
benefits of symbiotic interaction.
This brings to mind the distinction between not doing something because
'it's wrong' and not doing something because 'I might get caught'.
to practice anti-altruism instead of selfishness. If Microsoft were smart,
if they were really selfish in a non-zero sum game universe, they'd up
the technical quality of the product, reduce locked-in APIs, and give
away key enabling technologies, just like Netscape and Sun do. They'd
create a user base who like them, who want to use their stuff when
appropriate, and who are already established "customers". They'd create
an ecology of value-added suppliers of other bits around them, exploiting
their own technologies in ways they themselves had not thought of, and
creating a more vibrant market.
I hate Microsoft as much as the next guy, but I think this is an incorrect
assessment of the situation. Microsoft has a monopoly. That changes
_everything_. This is a case in which the optimal solution is not
cooperation at all! In fact, the optimal solution for Microsoft is to try to
maintain its monopoly. Hence the Java scandal. "Sure, we'll form an
alliance, and produce a Java tool for Windows. <tweak><tweak><tweak> Here it
is! Don't you like it? It does everything you asked us not to make it do.
You can kiss your cross-platform code goodbye. Java's ours now!" Anyhow, if
they hadn't been caught, we'd be a lot worse off, but Microsoft would be a
lot _better_ off. If you were talking about that lack-lustre software firm,
Run-of-the-Mill Inc., competing in a diverse and highly competitive market,
then your assessment would be correct; they should cooperate and form
alliances. But Microsoft is not in this league. Their environment is
significantly different than Netscape's or Sun's. They don't need to create
a vibrant market, they need to snuff out vibrancy so they can have it all
for themselves.
If you've read about the Iterated Prisoner's Dilemma at this point, then
picture a landscape with All Ds, and one really big All D called Microsoft.
If a few TFTs show up, they will start cooperating and building up a higher
score, thus threatening Microsoft's position. It is thus in Microsoft's best
interests to snuff out the TFTs before they have a chance to find each other
and form an alliance. Now, the real world is a little more complicated than
that (since Microsoft isn't best characterized as All D), but the analogy is
pretty good: It's not always in your best interests to cooperate, just as
it's not always in your best interests to defect. It depends on many factors
(let's lump it all into one term called the 'environment').
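To make that concrete, here is a minimal Python sketch of the Iterated
Prisoner's Dilemma (using Axelrod's standard payoffs T=5, R=3, P=1, S=0;
the strategies are the textbook ones, not a model of any real company):

  # Tit For Tat: cooperate first, then copy the opponent's last move.
  def tft(their_history):
      return "C" if not their_history or their_history[-1] == "C" else "D"

  # All D: always defect.
  def alld(their_history):
      return "D"

  # Axelrod's standard payoffs: (my score, your score) per (my move, your move).
  PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
            ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

  def play(a, b, rounds=100):
      ha, hb, sa, sb = [], [], 0, 0
      for _ in range(rounds):
          ma, mb = a(hb), b(ha)  # each sees only the other's history
          pa, pb = PAYOFF[(ma, mb)]
          ha.append(ma); hb.append(mb)
          sa += pa; sb += pb
      return sa, sb

  print("TFT  vs ALLD:", play(tft, alld))   # (99, 104)
  print("TFT  vs TFT: ", play(tft, tft))    # (300, 300)
  print("ALLD vs ALLD:", play(alld, alld))  # (100, 100)

Over 100 rounds ALLD beats a lone TFT 104 to 99, but a pair of TFTs score
300 each against 100 each for a pair of ALLDs - which is exactly why the
big All D wants the TFTs snuffed out before they find each other.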
I once saw an extreme case of what I'm here calling "anti-altruism" in
the behaviour of a bunch of pointy-haired civil servants, who booked in
excess of 10,000 GBP of meetings to discuss the purchase of 100 GBP
worth of software, on the grounds of "public accountability". I'm sure
everyone has their favourite example of this kind of behaviour.
This is a very good example of what you call 'anti-altruism'.
Even more tragic is what happens to people who see the effects of
anti-altruism, and recoil from them, but are trapped in the language of
the Ghost Not. These people reject anti-altruism, and replace it with
anti-selfishness. This is the even sillier idea that doing badly by
one's self must automatically benefit others. Think about the total
amount of human misery in the world. Or even just in India. Or just in
Calcutta. Then add the amount of extra suffering that would have been
there if Mother Teresa hadn't been anti-selfish. The addition is so tiny
it doesn't make any difference! Mother Teresa flogged herself stupid and
did not even dent the problem, because her strategy of anti-selfishness
was flawed. Having a jolly nice supper, looking at the problem afresh and
attacking the right pivot points would have been more productive.
This is a bad example, regardless of whether it's true or not. You're not
going to convince many people by saying Mother Teresa was acting
sub-optimally. :-)
How about those nosy people who do something 'for your own good', and say "I
was just trying to help"? I don't have any good suggestions, I just think
the Mother Teresa one isn't very convincing.
True selfishness on my part - actions intended to enhance my own well-being
and never mind anything else - will, persistently, in a non-zero sum
game universe, also enhance the well-being of others. So much so that
(even though I might not care about them) I can check the benefit to
self in my plan by finding the benefit to others. If the benefit to
others is there, I can assume that my plan is sound and my well-being
will be served as intended.
If by 'persistently', you mean 'in all cases in the long run', then this is
not accurate at all. Not every environment (or payoff matrix in Game Theory)
promotes the kind of cooperation you are proposing. The vast majority of
situations in our society? Yes. All of them? No. There are times when
cooperation is sub-optimal even in the long run, and even considering
genetics. Would you steal from your mother? I hope not. Would you cooperate
with a serial killer? Obviously not. There are less-extreme examples that
also hold. The above paragraph seems contradictory: First you define 'true
selfishness' as not minding anything but your own well-being, then you talk
about 'finding the benefit to others'. If you were in a room with twenty
cannibals, all with knives, all looking at you in hunger, there is
absolutely no point in 'finding the benefit to others' since their benefit
is your demise. Please follow the links I gave above. I think you'll find
that Evolutionary Game Theory has a lot to say about this kind of thing.
It's a fairly well-explored topic.
From: Alan Carter
On Tue, Nov 09, 1999 at 09:52:47PM -0500, Rob Harwood wrote:
We live in a universe which contains non-zero sum games.
Very true, but not always true. Many of the conclusions you come to are very
accurate, but there are a couple that are not. You seem to be coming from a
Game Theoretic point of view, which is good, but a lot of work has already
gone into this area, and I'm not sure if you're aware of it or not. The
Iterated Prisoner's Dilemma is a classic example from Game Theory of what
you're talking about. It spawned the study of Evolutionary Game Theory,
which is a very powerful tool in sociology, economics, and even biology. I
suggest you find the mass of work relating to the keywords Axelrod,
Evolutionary Game Theory, Iterated Prisoner's Dilemma, Tit For Tat, PAVLOV.
There's a very good introduction to this stuff in Hofstadter's
"Metamagical Themas", too. A part of my stress on non-zero sum games
comes from Reciprocal Cosmology, which I've fleshed out in "More
Selfishness", today.
Since they cannot teach positive reasons for social behaviour, they
teach negative reasons for avoiding anti-social behaviour. This is
where "morals" and "punishment" come from. Sadly, not-anti-social does
not equal social, since the universe is not a mass of interlocking
zero-sum games at all levels. Not-anti-social does not bring the
benefits of symbiotic interaction.
This brings to mind the distinction between not doing something because
'it's wrong' and not doing something because 'I might get caught'.
Absolutely! In the latter case we have only taught the young negative lessons.
They get nothing positive to take away and build on.
I hate Microsoft as much as the next guy, but I think this is an incorrect
assessment of the situation. Microsoft has a monopoly. That changes
_everything_. This is a case in which the optimal solution is not
cooperation at all! In fact, the optimal solution for Microsoft is to try to
maintain its monopoly. Hence the Java scandal. "Sure, we'll form an
alliance, and produce a Java tool for Windows.
But they can't snuff out vibrancy. Jeff Goldblum's character at the beginning
of Jurassic Park did a good rant on this. The bloody life just keeps
sneaking round the corner! Anyway, when Sun gave away NFS and Java they
could indeed have exploited a monopoly position by making them proprietary.
They were too smart for that.
If you've read about the Iterated Prisoner's Dilemma at this point, then
picture a landscape with All Ds, and one really big All D called Microsoft.
If a few TFTs show up, they will start cooperating and building up a higher
score, thus threatening Microsoft's position. It is thus in Microsoft's best
interests to snuff out the TFTs before they have a chance to find each other
and form an alliance. Now, the real world is a little more complicated than
that (since Microsoft isn't best characterized as All D), but the analogy is
pretty good: It's not always in your best interests to cooperate, just as
it's not always in your best interests to defect. It depends on many factors
(let's lump it all into one term called the 'environment').
Exactly the same negative customer loyalty that devastated IBM is about to
consume their successor.
The above paragraph seems contradictory: First you define 'true
selfishness' as not minding anything but your own well-being, then you talk
about 'finding the benefit to others'.
It's the current zero-sum game paradigm that's contradictory. In reality
one cannot do anything without benefiting everything, by ultimately increasing
complexity. Benefit self directly and the feedback is simply less convoluted,
that's all.
If you were in a room with twenty
cannibals, all with knives, all looking at you in hunger, there is
absolutely no point in 'finding the benefit to others' since their benefit
is your demise.
The poor things will get kuru. More seriously, we are looking at global
heuristics here, not linear, reductionist logic as in the benighted
"requirements tracing" tools discussed in PS. The very idea of robotic
guides to behaviour, the incessant debate about statements of rules that
absolve the organism from thinking, is a perversity caused by M0. In the
real universe, novelty and consciousness are very much entangled.
Alan
From: Alan Carter
Hi All,
A couple of interesting follow ups to the idea that selfishness and
altruism are the same thing in a non-zero sum game universe, but the
Ghost Not conceals this.
The idea says that because the universe is non-zero sum, any authentic
benefit that I obtain for myself as a result of my actions must also
be matched by authentic benefits for "others". Who are these "others"?
They are not necessarily other humans! Seeing only humans in the
universe, and then theorising by playing with words while ignoring the
reality the words represent, is a consequence of being brought up in
an M0 society. It is an effect of the Ghost Not logical component of M0.
What the universe of Reciprocal Cosmology is doing at the largest scales,
is increasing its complexity. It is doing this by adjusting its
configuration. When we take actions, we adjust the configuration. For
us, benefits actually mean increasing the richness or complexity we have
access to. The total complexity in any time period will increase. If
I wish to reduce my local complexity then there will be a concentration
of greater complexity somewhere near me in spacetime to fix up the sums.
This fixup is assured because the future, more complex state is already
assured. The mass energy around us is actually experiencing the creative
arrow and has already "done" it.
We've known about the thermodynamic arrow of time - the second law of
thermodynamics - for a couple of hundred years. In the last 50 we've
been becoming aware that self-organisation is also a way to "tell the
time" in the universe. Reciprocal Cosmology shows a way for the self-
org clock to be as rigorous as the second law clock.
So no matter what we do, there will be a nett increase in complexity.
From our point of view, assisting this by reducing our local complexity
produces unpleasant subjective experience. We might as well submit to
the inevitable and enjoy some of the complexity ourselves, by changing
the configuration so that it lands in our laps. To be sure, other
complexity (as in the PS self-cancelling piles of complexity that are
undesirable in programs) will arrive in something else's lap, but hey,
maybe it will buy us a beer and we'll be even better off.
Here and now, the primary beneficiary of our matched added authentic
benefit is humanity in total. The reason is that it is humanity in
total that is currently stacking up the most added complexity hereabouts
and there is a lot of added complexity to stack. There is a long way
to go. Humanity is not yet a discernible being composed of individual
cells but with its own agenda, as you are. But other complexity matching
is focussing on the growing complexity of individual humans (especially
creative ones), the ecosystem of the planet, and the odd photon is
zooming off to stimulate the thinking of Zarg of the planet Tharg, a long
way away. It is important to make this point, because this use of the
revised cosmology to illuminate an area previously treated by playing
with words without referents, in an erroneous paradigm, is an example
of global methods in use. Global methods will tell you what has to be
accounted for, not how to account for it.
In this view, we can see added complexity (richness) as much a kind
of cosmological pollutant as a scarce good. It is increasing, and we have
to help account for it. There is a similar situation in electric power
stations. The naive view would be that the station is pumping
out energy to houses and factories, so the least of its worries would
be an excess of energy. But this is exactly the problem that power
station designers must solve! They must maintain a temperature
differential between the input and output of the turbines, and they
find this hard, because of all the heat piling up on the output side. So
they erect vast cooling towers near the turbine house where they can
evaporate water and cool the turbine outputs.
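The sums behind that (textbook Carnot numbers, not any particular
station): the largest fraction of the heat flow that a turbine can turn
into work is bounded by

  efficiency = 1 - T_out / T_in    (temperatures in kelvin)

With steam in at 800 K and the output side held down to 300 K the bound is
1 - 300/800 = 0.625. Let the output side creep up to 400 K and it falls to
1 - 400/800 = 0.5. The station's hardest job really is getting rid of
energy.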
What we are doing here is using the Ghost Not, via the modifications it
allows us to make in how we understand physics, to demonstrate that a
seemingly eternal, unanswerable question isn't even a question at all!
Why did we ever think it was? Because of the Ghost Not of course!
The Ghost Not converts the non-zero sum game filled with growing
ecosystems, galactic superclusters and Zarg of the planet Tharg into
a zero sum game filled with humans and a few objects in fixed
configurations. It makes people believe that there can be such things
as "winners" and "losers". It makes them lose sight of objective reality,
so that what one person "thinks" is exactly as valueless as what another
person "thinks", since there is nothing to check the "thinking"
against.
So in this argument, the question is not how "terrible" it would be if
everyone "thought" like this, but whether or not the thinking has any
physical merit. If it does, then the conclusions (whatever they are)
are the case, and whether people "like" it, or "think" otherwise
doesn't matter. Scrying the "terribleness" of people's "thinking" is
a habit picked up during the millennia since the first Greek blokes
wearing dresses stuck their heads up their bottoms (added Ghost Not
to dopamine self-addiction), found that they were then Living Gods
(totally lost the bits of the map they could no longer understand),
and started making ridiculous pronouncements.
To give you an idea of the extent to which people have been brainwashed
by this style of making things up as they go along, and then bodging
when the nonsenses emerge, consider the following two statements:

1) I am helping others in order to help myself.

2) I am helping myself in order to help others.

In the bullshit speak of the false universe, statement 2) is "good"
whereas statement 1) is "bad". Yet they are identical in meaning! They
express a relationship between helping others and helping self that
goes both ways. It is impossible to determine if a person is helping
himself in order to help others or is helping others in order to help
himself, just by looking at his actions, since they are identical!
Once the bullshit speak is installed, instant denial a la Ghost Not
kicks in, and we see the characteristic development of anti-altruism
instead of selfishness, and anti-selfishness instead of altruism
(where altruism and selfishness are actually the same thing). Hence
the M0 dictionary definition of "selfishness" explicitly asserts that
helping self is a zero sum game, and actually describes anti-altruism.
Meanwhile M0 altruism is converted into anti-selfishness - the denying
of benefit to self on the false assumption that reducing benefit to
self automatically increases benefit to others (the reverse is true).
Perhaps the idea that no matter what you do, you are doomed to help
increase the complexity of the universe is a little odd sounding.
Consider Microsoft. If their fervent practice of anti-altruism
had been a little less intense, would there have been so many people
stacked up, pissed off, and keen to help make Linux work? That's the
hard way to do it. On the other hand, I doubt that Linus Torvalds is
ever going to find himself without employment prospects at a high
salary if he wants it! By helping others, he has (like a cosmological
waste product) got loads of wonderful, complex opportunity falling
right into his lap.
Sadly for Chairman Bill, the stepwise inside-out logic of M0 just
can't cope with the holistic outside-in logic of the real universe.
Because his Microserfs couldn't predict the exact name of the
Scandinavian computer science student who would be a bigger menace to
him than the US Government, he was unable to appreciate that, as surely
as eggs is eggs, there would be someone doing that job. Literally, bad
karma, man! There is nothing here that the Buddha didn't say. All I've
done is show that either term can be written on the left - or right -
of an "=" sign.
There were a couple of interesting examples of where the selfishness
argument might appear to break down offered on the list. In fact, it
doesn't and there are two important things to learn from this. Firstly,
this is a bigger argument. It does "ethics" as a part of cosmology.
If there seems to be a problem, what we have to do is avoid denouncing
the "thinks" as if they are built on nothing. The heuristic that all
"thinks" are built on nothing, so everything is as untrue as everything
else must be unlearned now we have a solid general theory. Instead
what we must do is follow back the reasoning and find the error. Maybe
we find en error in the theory, in which case it goes in the dustbin,
or maybe the theory illuminates our error. Either way it is a solid
development taken from a consistent way of seeing everything at once,
and not a game of "my factoid denies your factoid so there". Secondly,
we must always remember that we are talking about moving the
cognitive goalposts. We cannot change one bit of the picture, "forget"
to change another bit, and then claim to have found inconsistency!
Where the selfishness argument seemed to break down was always in cases
where the person has to think, "If there is no reason for me to avoid
action A, then everyone else will think the same thing, and I will suffer.
This is a reason for me to avoid action A." If the person is not capable
of doing this, then they will not understand the implications of taking
action A themselves, and so their selfishness will be thwarted by their
stupidity. Whatever could prevent people from performing this cognitive
operation? The operation is a "look again" operation. It is in the same
class as the operations that require the use of feedback in cognition
listed in the M0 paper. Such operations are so important to humans that
our brains implement them in hardware, using feedback loops that dopamine
self-addiction squelches. Then the Ghost Not constructs idiot ethics as
described above to cover up the stupidity. Here we see a wonderful example
of the neurochemical and logical components of M0 co-operating in ethics
just as they co-operate in the programming shop and school.
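The "look again" operation is easy enough to write down. Here is a toy
Python sketch (the payoffs and action names are entirely hypothetical,
chosen only to show the shape of the reasoning): the naive agent evaluates
action A with everyone else frozen, while the look-again agent feeds its
own reasoning back in as everyone else's reasoning - the move Hofstadter
calls superrationality in "Metamagical Themas", mentioned earlier.

  # Toy payoffs for a symmetric situation. Numbers are hypothetical,
  # picked only to contrast the two evaluations.
  payoff_if_only_i_do = {"grab": 5, "respect": 3}  # others assumed frozen
  payoff_if_all_do    = {"grab": 1, "respect": 3}  # everyone reasons as I do

  # Naive evaluation: no feedback, others are fixed background.
  naive_choice = max(payoff_if_only_i_do, key=payoff_if_only_i_do.get)

  # "Look again": one pass of feedback through my own reasoning.
  look_again_choice = max(payoff_if_all_do, key=payoff_if_all_do.get)

  print(naive_choice)       # "grab" - selfishness thwarted by stupidity
  print(look_again_choice)  # "respect" - selfishness after one feedback pass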
Finally, Mr. Gates wants another dollar. He always wants another dollar.
He will do anything, sacrifice anything, no matter what the cost to his
company, his employees, his own quality of life (he was asking a couple of
weeks ago why no-one likes him), just to get another dollar by the most
unsubtle route and only by the most unsubtle route. He already has loads
of dollars. He has more dollars than he could ever spend. Why does
Mr. Gates want another dollar so much? The idea that he is being
anti-altruistic and mistaking this for selfishness does make sense!
Alan
From: Alan Carter
Hi All,
Scanning one of the physics newsgroups yesterday, I saw a posting by
a chap who was objecting to an idea that had been posted by the
notorious Jack Sarfatti. The poster stated that he would require
very firm evidence that the idea was the case before he would consider
it, because he felt that he had detected a "mystical implication" in
Sarfatti's idea.
This got me thinking. Why did this poster (and he's far from alone in
this behaviour) feel that he would not consider an idea which (he felt)
had a "mystical implication" when people will quite cheerfully consider
ideas that explicitly state that (for example) there are umpteen
different versions of each of us, in umpteen different universes, each
only one quantum event different?
What is the problem?
Then an idea occurred to me: This person says he is rational, but he's
not. What he is, is anti-spiritual! And anything that gives his dormant
feedback loop a twinge of holistic perception is "spiritual"! In the same
way, the island of Ibiza where I spent 18 months working through
Reciprocality is stuffed full of expatriates - particularly English and
Germans - who I'd identify as natural immunes trapped in the
Ghost Not. They don't suffer from the neurochemical component of M0, so
the incessant whining and mutual micropolicing that jams coherent
thought is missing, but they suffer from the logical component and so
have their entire logical field inverted. These dippy New Age tossers
(spot the value judgement) have retreated from the highly ritual addicted
societies they were born into because they aren't members of the
dopamine economy and so do not see ritual fixing as an inherent good,
and yes they can exploit feedback in cognition so they have some
intuitive awareness, but when it comes down to it they say that they are
"spiritual" while in fact they are anti-rational!
So we get people who call themselves rational who are in fact
anti-spiritual, and people who call themselves spiritual who are in fact
anti-rational. Anti-rational and anti-spiritual people have a non-
resolvable dispute about the nature of reality. In fact, they are both
wrong. A true rational person and a true spiritual person are the same
thing, since the deep structure exists, and must be perceived holistically
using feedback in cognition. Once the feedback loop has identified a
pattern in the deep structure, focussed rational thinking can be used
to explore and express it. This is why Gurdjieff speaks of the importance
of "perfecting one's Objective Reason" and Steiner likewise talks about
the importance of developing "clear, raitional thinking" - with
particular emphasis on mathematics. Mathematics is still the only "hard"
subject that the Waldorf School movement excels at.
This is the pattern I was talking about a couple of weeks ago, wherein
Ghost Not afflicted people call themselves selfish when actually they
are anti-altruistic, or call themselves altruistic when actually they
are anti-selfish. I didn't seem to do a very good job of getting the
idea across at the time, but I'm convinced that all that a creative
can actually do is improve his or her environment, so any authentic
benefit to the individual also brings an authentic benefit to others.
The key is being smart enough to recognise an authentic benefit
to self. Thus a true selfish person and a true altruistic person are the
same thing, whereas anti-selfish and anti-altruistic people are always
diametrically opposed to one another.
A simpler way to put it is that people without the Ghost Not describe
themselves in terms of what they are, whereas people trapped in the
Ghost Not describe themselves in terms of what they are not.
I love it when an hypothesis comes together!
Alan