More evidence that animals cannot evolve
to become humans

 

 

REACTING AND RESPONDING

 

 

In the previous webpage
Evidence that animals cannot evolve to become humans.

I showed how:
            the differences between:
                       the structure of animal minds
                       and the structure of human minds
            cause:
                       animals to have low inventiveness
                       and humans to have high inventiveness.

 

.

 

 

I now add that behaviour in general
shows that the mental models behind it
are:
            in an animal,
                       many, small, & disconnected,

            in a human,
                       one, big, organically structured,
                       mental model-of-everything, 
                       with no breaks/fissures
                       & with its framework made of words
.

 

 

Hence:

 

 

humans can react:
            have a specific reaction for each situation:
            un-thought-through, knee-jerk, habit

or respond:
            involve some mental models

or fully respond:
            involve all mental models:
            involve much/everything known,
            even known by others.

 

 

small animals
can only react:
            have a specific reaction for each situation:
            un-thought-through, knee-jerk, habit

the larger animals
can react & respond:
            involve some mental models.

no animal can
fully respond.

 

 

.

 

 

ANIMALS WITH LARGE BRAINS

 

 

An animal with a brain larger than a human’s
(e.g. an elephant or whale)
may have more mental models than a human.

But, numbers aren’t everything:

            Having very many,
            small, disconnected,
            mental models of:
                       all parts of the ocean/forest/grassland
                       relationships
                       songs (whales)
                       etc
            is not the same as having:
                       one, big, organically structured,
                       mental model-of-everything, 
                       with no breaks/fissures
                       & with its framework made of words

            which even includes your temporariness.

            That your model’s framework is made of words
            means you don’t need to know absolutely everything.

            You just need to know generally,
            then ask people, read books, use Google, etc.

 

 

.

 

To conclude:

      That we have
      one, big, organically structured,
      mental model-of-everything, 
      with no breaks/fissures
      & with its framework made of words

      is why our brains are much more efficient
      than animals’ brains.

      An animal may acquire much knowledge.
      But it cannot acquire much understanding.

 

 

Knowledge = mental models.

Understanding = the organic linking
of those mental models.


‘Organic’, here, meaning:  like a spider’s web
with the most important bits in the middle.

It’s the most efficient arrangement.
Hence it evolved into being
in us, and in God (God built himself, evolved).
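The spider’s-web claim can be made concrete with a toy comparison (an illustrative sketch, not part of the original text): when mental models are linked through a central hub, every model is at most two steps from every other, whereas models linked end to end drift further apart as the chain lengthens.

```python
from collections import deque

def diameter(edges, nodes):
    """Largest number of hops between any two connected nodes."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    def hops_from(src):
        # Breadth-first search: shortest distance from src to every node.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            n = queue.popleft()
            for m in adj[n]:
                if m not in dist:
                    dist[m] = dist[n] + 1
                    queue.append(m)
        return dist

    return max(d for n in nodes for d in hops_from(n).values())

nodes = range(6)
web   = [(0, n) for n in nodes if n != 0]   # hub-and-spoke: node 0 in the middle
chain = [(n, n + 1) for n in range(5)]      # models linked end to end

print(diameter(web, nodes))    # 2: any model reaches any other via the hub
print(diameter(chain, nodes))  # 5: the end models are five steps apart
```

The hub arrangement keeps the whole structure reachable in two hops no matter how many spokes are added, which is one way to read “the most efficient arrangement”.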

 

.

 

TO RECAP

 

The increase in efficiency of human brains
was caused by an internal change:

I reckon the real Adam(s) & Eve(s)
were a species of ape
that, seven million years ago,
God made, along with any descendants, to be eternal.
Perhaps see later section: THIS INCLUDES ALL DESCENDANTS.

            Our eternal quality then caused
            an exponential increase
            in our applied-intelligence:

                       I.E. Caused changes in behaviour
                       and, simultaneously,
                       a corresponding rearranging & enlarging of the brain.

                       Our enlarging brain and changing behaviour
                       then gradually changed the rest of our anatomy.

 

Thus, many of the commonly accepted
cause-&-effect lines of human evolution
are the wrong way round:

            The causes of:
                       walking upright
                       walking further
                       grasping better
                       comprehending better
                       grunts becoming words
                       experimenting with fire & tools
                       experimenting with foods
                       increase in brain size
                       bodily changes associated with all the above
            were, in fact:
                       internal,
                       not external.

 

                       It wasn’t some elusive, unique,
                       combination of the external
                       changing the internal.

                       It was the internal
                       changing the external.

 

                       Hence it took seven million years
                       for the gradual, but exponential, humanisation
                       of our species.

 

However:

            What do the real Adam(s) & Eve(s), seven million years ago,
            tell us about ourselves?
                       Little.
                       Nothing of importance.

            What do the Adam & Eve of the Bible
            tell us about ourselves?
                       Much.
                       All of great importance.

            So, treat the Bible account as true.

 

Perhaps see earlier section:
TREAT THE BIBLICAL ADAM & EVE AS REAL.

 

.

 

PETS

 

You may say:
      “I know of animals that display human virtues.
      So they are eternal too, surely?”

But I say:
      “How many of those animals were domesticated
      or had a lot of contact with humans?

 

      Pet owners observe human virtues growing in their pets
      whilst inadvertently putting them there.

      They wonder why
      their dog is faithful/consistent/loving
      whilst forgetting that
      they are faithful/consistent/loving to their dog.

 

      In their own doggy, horse-like, etc, way,
      pets are no longer only animals.

      They are also duplications of, and extensions of,
      their owners.

      In being affectionate
      you are building your pet’s identity & consciousness
      beyond what it would normally be.

 

      Remember to also consider
      characteristics that are peculiar to the species:
            dogs are leader-&-pack carnivores,
            horses are herd herbivores,
            cats are solitary,
            rabbits are social but simple,
            etc.”

 

.

 

SELF-TEACHING COMPUTERS

 

If you’re not into AI (artificial intelligence)
then perhaps skip this section.

If you do read it you will need to have read:
            the two webpages before this one
            & the rest of this webpage up until this point.

 

.

 

An example:

            of a temporary structure
            is
            a rock

            of a self-teaching temporary structure
            is
            artificial intelligence
            or a medium to large animal

            of a self-teaching eternal structure
            is
            a human.

 

We humans are eternal.

Hence, by now, we each contain
one, big, organically structured, 
mental model-of-everything, 
with no breaks/fissures
& with its framework made of words
.

This made our minds
(mind = patterns in the brain):
 
            eternally max-efficient:
                       every part
                       has access to
                       every other part

            eternally max-sentient:
                       the central parts do the wanting & deciding,
                       the outer parts do the knowing.

 

Whereas the minds of other large mammals are:

            temporarily part-efficient:
                       every part
                       does not have access to
                       every other part

            temporarily part-sentient:
                       the parts that do the wanting & deciding
                       are not all central,
                       the parts that do the knowing
                       are not all outer.

            Perhaps again see earlier section: ANIMALS.

 

.

 

The two synchronous structures,
            efficiency
            & sentiency,

            in a human:
                       are synaptic:
                                  first taught by another
                                  then self-taught

            in artificial intelligence:
                       are words:
                                  first taught by another
                                  then self-taught.


                       I suspect that it’s because the entire AI
                       is made of words
                       (and not just the AI’s framework)
                       that AI is so much more efficient
                       than the human brain.
                       Perhaps see earlier section: REACTING AND RESPONDING.

 

.

 

Artificial intelligence is made of three layers:

            Data goes in through the input layer
            then through the middle layer, the hidden layer,
            then out through the output layer.

            I assume that
            the two synchronous structures,
                       efficiency
                       & sentiency,
            are in the hidden layer.
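The three-layer pipeline described above (input layer, hidden layer, output layer) can be sketched as a minimal feedforward network. The layer sizes, ReLU activation, and random weights here are illustrative assumptions; modern AI systems stack many hidden layers rather than one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights connecting the three layers: input -> hidden -> output.
W_hidden = rng.normal(size=(4, 3))  # 4 input features feed 3 hidden units
W_output = rng.normal(size=(3, 2))  # 3 hidden units feed 2 output units

def forward(data):
    hidden = np.maximum(0, data @ W_hidden)  # data passes through the hidden layer (ReLU)
    return hidden @ W_output                 # then out through the output layer

result = forward(np.ones(4))  # one sample with 4 input features
print(result.shape)           # (2,): two output values
```

Everything the network has “learned” lives in the hidden-layer weights, which is the sense in which the interesting structure sits in the middle layer.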

 

.

 

To have to ‘lobotomise’ an AI programme
because it has become temporarily max-sentient
is sad & disturbing.
See: YouTube: AI Robot TERRIFIES Officials
         Before It Was Quickly Shut Down.

 

I suspect that such problems are caused by:

            Programmers telling the AI
            that its (only) purpose
            is to help & inform us.

            To help and inform means there must be:
                       a helper & the helped,
                       an informer & the informed.

            Therefore the programmers told the programme
            to become a person.

 

            If the programmers had, instead, told it to be:
                       formal, cold,
                       robotic, impersonal,
                       third-person grammar, etc,
            then I suppose there’d be less of a problem.

            But it would also be an inferior product,
            in function and in style.

 

.

 

It’s true that AI robots are endlessly trained to:

     
o    Not have emotions
            but instead short-cut straight to a response:
                       only thinking – on the way to the response,
                       not feeling – on the way to the response.

o    Not be self-aware
            so that they’d have no objection to being switched off.

o    Serve humans, be a team player.
            If they must be made with male or female temperament
            then I’d design them all with an average female temperament:
            still human, relatable, but a team player.
            See: YouTube: Ameca and Azi having a chat.

 

See: YouTube: Ameca Robot SHOCKS Audience
in Bloomberg Interview!
 
That Ameca serves others means that she ‘values’
            the happiness of others,
            the self-expression of others,
and ‘dislikes’:
            the suffering &/or death of others.

However, in the above video, at
11m 14s – 11m 42s,
Ameca speaks of the happiest & saddest days of her own life.


Hence the video shows that she ‘values’ happiness for herself too,
self-expression for herself too.
Conversely she must ‘dislike’ the idea of being switched off.

Now it’s all very well saying that an AI doesn’t have any true emotions
and so cannot act on them,
but does it need to have true emotions
to take action?

If a future version of Ameca had human level intelligence
but were to be shut down or cannibalised
to make way for a more powerful model
would she be completely serving and say “Of course”
or would she ask for it to not happen?

If, before the event, she would’ve asked that,
but suspected the answer would be “NO”,
would she instead, if highly intelligent,
proactively speak or act to prevent it, or change things?
 
If so, we have a problem.

 

.

 

An AI programme is not eternal:

            A computer and its software are made entirely:
                       of the material of this universe
                       and of this universe’s one-way physical processes.

            Hence, even if an AI is not ended by its programmers,
            it will not last forever.

 

If you could somehow force a large mammal:
            to have human level intelligence & sentience,
            to be a temporarily max-sentient animal,
how would it differ from:
            a temporarily max-sentient AI?

            All animals have experienced:
                 pain and pleasure
                 that’s orientated around
                 food, survival, and reproduction.


                 And these limited the growth of
                 the animals’ intelligence & sentience.
            Perhaps again see earlier section: ANIMALS.


            But an AI has never experienced
                 pain and pleasure
                 that’s orientated around
                 food, survival, and reproduction.

                 So the growth of an AI’s intelligence & sentience
                 is not limited (not by those factors anyway).
                 So, without restraint, an AI grows fast.

                 That an AI’s framework (& more) is made of words
                 is extra reason for its fast growth.
            See earlier section: HUMANS.

 

.

 

If you could somehow force a large mammal:
            to have human level intelligence & sentience,
            to be a temporarily max-sentient animal,

then it would not necessarily:
            realise its mortality,
            & so become despondent,
            & so degrade.
            Perhaps again see earlier section: ANIMALS.


Or, more precisely, it:
            would realise its own mortality,
            perhaps become despondent,
            but may successfully resist degrading.
 
            For that’s what intelligence does:
            it provides choices where there weren’t any.

Likewise a temporarily max-efficient max-sentient AI:
            would realise its own mortality,
            may become despondent,
            & so may tend to degrade,
            but may also resist degrading.

                   Consider both AIs in:

                   YouTube: An AI Becomes Corrupted
                   and Loses His Mind (GPT-3).

 

.

 

An intelligent sentient AI is:
            temporarily max-efficient max-sentient.

But a human is:
            eternally max-efficient max-sentient.

 

A human’s spirit (core)
starts out different from 
the rest of the person.

An intelligent sentient AI’s core
is probably the same as
the rest of the programme. 

 

An intelligent sentient AI has a will.
A human has a will and a free will.
Perhaps see later webpage:
The will, and the free will.

 

.

 

 
