Date:      Fri, 30 Aug 2002 22:50:06 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        "Neal E. Westfall" <nwestfal@directvinternet.com>
Cc:        Dave Hayes <dave@jetcafe.org>, chat@FreeBSD.ORG
Subject:   Re: Why did evolution fail?
Message-ID:  <3D70590E.A1935AF3@mindspring.com>
References:  <20020830125515.I53482-100000@Tolstoy.home.lan>

"Neal E. Westfall" wrote:
[ ... the treating of treatable genetic problems ... ]
> But you've just dodged my question.  Is it a good thing that we have
> the ability to keep those traits in the gene pool?

Probably not.  I'd just as soon not be selected against, however.


> Or is the development of that ability also a result of evolution?

Probably not.  Both the cause and the effect are exogenous.  If I
remove a connection in a feedback circuit, it's not "part of the
feedback circuit".


> If so, how do we know that what we are talking about isn't actually
> de-evolution?

Technically, it probably is.  On the other hand, there is a larger
homeostatic system in effect.  Consider if a near-ELE happened next
Tuesday.  With a collapse of the technical infrastructure that is
necessary to the support of such people in opposition to environmental
pressures, the pressures reassert themselves.  The planet is certainly
well over its carrying capacity for a low technology civilization; but
the sudden conversion from a high to a low technology civilization would
remove the support for a percentage of the population which was itself
roughly proportional to the reduction in carrying capacity.

In effect, if it were suddenly 1875 again from a technical perspective,
then only people who could have survived with an 1875 technological
base will survive.  This is really minor, though, compared to the
reduction in population as a result of starvation, since distribution
and production would get hit pretty hard, too, and they have a much
higher immediate effect than the ability to replace your soft contact
lenses in one month's time.


> By the same token, is the development of light bulbs a good thing?
> Why?  What I'm trying to get at is your ultimate criteria.  Usefulness?
> But then pragmatism is only useful if you know what ends to pursue.

"What is the meaning of life?"  8-).

I don't claim to have an answer to that question.  What I do claim
is that resetting the clock to an earlier time does no good.  We know
this because, to borrow an analogy from Dave Hayes, there was no sudden heavenly
chorus announcing the amount of real, user, and system time elapsed
since the beginning of the universe, now that the run is complete.

There is also the slight problem that clocks run forward, meaning
that eventually you will get to the same point yet again, so you
might as well cross that bridge sooner rather than later.


> > All societies enforce standards of conduct upon their members,
> > and people are members of many societies.  Morality relationships
> > are generally hierarchical on one axis, and peering on another
> > (i.e. society condones the soldier that kills the soldier of the
> > enemy state, but not the clerk at the grocery store down the block,
> > even though both are human beings).
> 
> Are these standards of conduct arbitrary?

Humans have a certain amount of hard-wiring.  To draw a simplistic
parallel, maybe their "Add unity to memory" instruction takes 2 clock
cycles, and their load register X with immediate takes 1 cycle, but
their load register Y with immediate takes 50 cycles.  But any
Turing-complete machine can run any software; it's just that some
software runs better than other software on a given set of hardware.

So I think most of these standards of conduct are emergent, based
on their anti- or pro- species survival value.


> Why do societies go to war with each other, if not to enforce its
> own standards of morality on that other society?

Jealousy.  Trolls crossing the borders.  Inability to effectively
compete in the context of a given consensus rule set (e.g. there are
no radical Moslem First World countries, there are no First World
countries without some form of population controls, voluntary or
otherwise, there are no First World countries without immigration
controls, there are no dominant religions that favor birth control,
etc., etc.).

Consensus rule sets are interesting things, though.  One does not
really arbitrarily arrive at a consensus rule set; one accepts a
rule set and it becomes consensus because of the limitations of
the physical universe.  Murder is not tolerable, because it most
definitely interferes with the propagation of the genetic material
of those who tolerate it, for example.  A higher standard of
living leads to a longer reproductive life cycle, and limitations on
expansion of population lead to a higher standard of living.  There
are counter pressures, of course, but they are not overriding, and
so they do not lead to consensus rules, except in fractional
societies which exist as part of a larger whole.


> Why enforce those standards, if there is no ultimate criteria by
> which you could judge one society's standards as "right" over
> against the other societies standards which are wrong?

Richard Dawkins said it best, when he pointed out that this is all
an elaborate competition between selfish genes (genes which are
selfish, not genes which express as selfishness).  They even build
these huge hulking robots to carry them around, the better to
propagate (some of these robots are called "humans").

I guess in terms of conflicting societies, it comes down to whether
powerful society A can suffer less powerful society B to exist.


> > Morality is dictated by the larger society, in any given context.
> > It doesn't need to be transcendent, per se, it merely needs to
> > transcend the individual, or the smaller society within the larger.
> 
> Why does it need to transcend the individual, but not individual
> societies?

Marvin Minsky has a lot to say here that would be useful; even
though it has since been discredited in the AI community, I will
still recommend his book "Society of Mind".

The answer as to why it needs to transcend the individual is that
individuals share mutual boundaries.  And if you note, I said that
it *does* need to transcend individual societies ("the smaller
society within the larger").


> Or do you advocate a global society?

Not really.  I recognize it as emergent.  The Geneva Convention,
The World Court, The World Intellectual Property Organization,
Maritime Law, International Law, war, treaties, capitulation,
etc..


> If so, is whatever mores that society adopts right by definition?

Personally, I believe a global society is not possible, at least
until there are one or more additional globes involved.  Call it
a result of "Thalience".  8-).  There is an implicit need of "the
other", at least in all the societies we've so far managed to
construct.

Also, societies, like individuals, are homeostatic creatures.


> > You may say some activity (e.g. killing another human being) is
> > "not right".  What you really mean is "it's unethical"; to borrow
> > from Dave Hayes, you are actually saying that it would violate
> > your internal code of conduct.  What this actually means, however,
> > is that you will not tolerate it in yourself, and so you will also
> > not tolerate it in others.
> 
> Then it would not be an internal code of conduct, by definition.
> Just because you wouldn't engage in a particular activity doesn't
> mean that somebody else shouldn't.

It turns out that there is an escape hatch.  It has to do with the
semantics of "human being".  This is actually *why* it's OK to kill
the enemy, without having to make an explicit exception which leads
you to a slippery slope: you define them to not be a human being.
The Sioux understood this implicitly.  The translation of the Sioux
word for themselves is "human being".

In reality, there's no avoiding externalizing ethics; if it's wrong
to kill another human being, then it's wrong whether the act is
manifest by commission (performing the act) or omission (permitting
the act to be performed).  By not acting, you act.


> Okay, but then if there is general agreement in that society that it
> would be genetically beneficial to kill off a certain segment of
> society, say, the jews, or people with certain genetic defects, it
> is then moral by definition for that society to do so.

It is moral *within the context of that society*.  Whether neighboring
societies would tolerate the activity is another matter altogether.
Societies hold each other to consensual standards, as well, in the
context of the society of societies of which they are members.


> > Individuals do not have morals, though individuals may *be* moral
> > or *act* morally or *demonstrate* morality.
> 
> Act morally with regard to what?  You seem to think that a society
> cannot enshrine laws that are immoral.

They can't.  They can enact them, but they can't enshrine them
without the consent of the governed.  The police will refuse to
enforce them, or the citizens will ignore them.  That's the
difference between a law that has been enacted, and one that is
in effect.

In the case of a police state, where physical power is centralized,
there's always the possibility of subversion, infiltration, or, in
the limit, human wave assault.


> > If you want to boil down this whole discussion so far, it's that
> > Dave has an ethic which he would like to convert into a moral, by
> > getting other people to share it.  This ethic venerates the rights
> > of the individual over the rights of the state (the society to
> > which the individuals belong).
> 
> And you are making the opposite error, of venerating the rights of
> the state over the rights of the individual.  Such societies
> inevitably become tyrannical.

To have a society is to grant that society rights over individuals.
There is no such thing as a tyranny of one.  By your argument, all
jailed tyrants should be freed, because it's tyrannical to jail a
tyrant.  But in freeing a tyrant to act upon your society, are you
not therefore still tyrannical, this time by proxy?


> > My own objection to this is, first and foremost, that the rights
> > of the state take precedence of the rights of the individual, as
> > the state is composed of individuals, and the yardstick we must
> > therefore use is that of the greatest good for the greatest number.
> 
> I see.  And what exactly is "the greatest good for the greatest
> number"?  Weeding out inferior individuals from the gene pool?
> Why not?  Moreover, who makes these decisions?  Philosopher-kings?

Whoever the governed consent to have govern them.


> Yes, I agree that his ideas are self-refuting...but then ultimately
> so are yours, you just don't see it.

Pose it in terms of symbolic logic.  I promise I will see it, or
point out the error(s) in the formulation.


> > Self-organizing systems don't have to admit non-teleological basis.
> >
> > Science acknowledges "gosh numbers", such as "PI", "e", "G", or "The
> > Fine Structure Constant", etc., without needing to acknowledge a
> > non-teleological cause with a set of thermostats that can be adjusted,
> > one of which reads "Speed of Light" or another which reads "Planck Length".
> 
> Then I would have to ask to what end such "self-organizing systems"
> attain?  Organizing into what?  For what purpose?

Why does there have to be a purpose?


> > Does it matter if an action is wrong or not, if a penalty will
> > be assessed for the action regardless of your own personal views
> > of right and wrong?  If you want to avoid the penalty, you must
> > act as if you believed the action were wrong, regardless of your
> > personal beliefs in the matter.
> 
> Of course, my answer will be, "Yes it does."  I just think you are
> not thinking high enough on the ontological scale.

8-).

"Ontology recapitulates phylogeny".

It may matter to you, personally.  If it does, you will either act
within the system, to change the mechanism whereby the action results
in a penalty, or you will engage in civil disobedience to provide an
example to others (sacrificing yourself for the greater good), or you
will declare your separateness from society, in some way.

So you will change the rule, or you will be removed from the conflict
situation, or you will remove yourself from the conflict situation.
No matter what you do (or the actual outcome), the conflict will be
resolved to the satisfaction of the society.


> > > > > Members of society routinely and frequently violate these conditions,
> > > > > and That's The Way It Is.
> > > >
> > > > And we punish them, and That's The Way It Is.
> > >
> > > When we punish them, is our justification for doing so solely because
> > > we have the guns and the will to do so?
> >
> > Pretty much, yes.
> 
> So I take it you're not a libertarian...<g>

Actually, I am, or at least a Strict Constitutional Constructionist,
if you want to be technically accurate.

I was recently asked to run for public office in my district by the
Libertarian party, in fact (I declined; the suit, contacts, hair-cut,
kiss-hands-shake-babies drill was not my cup of tea; that, and the
party management procrastinated until too close to the registration
window for anyone they got to have a reasonable chance of winning).

Holding a philosophy, and forcing the larger society to hold a
philosophy, are two very different things, even if it's for the
larger society's Own Good(tm).  There's such a thing as social
inertia, and societies, being made up of people, are slow to
change.  Anyone who wants a "quick fix" for what they perceive as a
social ill is most likely deluding themselves.  Societies only ever
change one individual at a time.

-- Terry

