More evidence that animals cannot evolve to become humans


REACTING AND RESPONDING

In the previous webpage, Evidence that animals cannot evolve to become humans, I showed how the differences between the structure of animal minds and the structure of human minds cause animals to have low inventiveness and humans to have high inventiveness.

I now add that behaviour in general shows that the mental models behind that behaviour are:

in an animal: many, small, & disconnected;

in a human: one, big, organically structured, mental model-of-everything, with no breaks/fissures & with its framework made of words.

Hence:

Humans can react, or respond, or fully respond:

to react: have a specific reaction for each situation; un-thought-through, knee-jerk, habit;
to respond: involve some mental models;
to fully respond: involve all mental models; involve much/everything known, even known by others.

Small animals can only react: a specific reaction for each situation; un-thought-through, knee-jerk, habit.
The larger animals can react & respond: responding involves some mental models.
No animal can fully respond.

ANIMALS WITH LARGE BRAINS

An animal with a large brain, larger than a human's (e.g. an elephant or whale), may have more mental models than a human. But numbers aren't everything: having very many small, disconnected mental models of all parts of the ocean/forest/grassland, of relationships, of songs (whales), etc, is not the same as having one, big, organically structured, mental model-of-everything, with no breaks/fissures & with its framework made of words, which even includes your temporariness. That your model's framework is made of words means you don't need to know absolutely everything. You just need to know generally, then ask people, read books, use Google, etc.

To conclude: That we have one, big, organically structured, mental model-of-everything, with no breaks/fissures & with its framework made of words, is why our brains are much more efficient than animals' brains. An animal may acquire much knowledge. But it cannot acquire much understanding.

Knowledge = mental models.
Understanding = the organic linking of those mental models.

'Organic', here, meaning: like a spider's web with the most important bits in the middle. It's the most efficient arrangement. Hence it evolved into being in us, and in God (God built himself, evolved).

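As an illustration of the spider's-web claim, here is a minimal sketch in Python (the six 'ideas' and their links are invented toy data, not from this webpage's sources): it compares the average number of links between ideas in one hub-centred web against the same ideas left as small disconnected fragments.

    from collections import deque

    def reachability(graph, nodes):
        """Return (average hops between reachable pairs, number of reachable pairs)."""
        total, pairs = 0, 0
        for start in nodes:
            dist = {start: 0}
            queue = deque([start])
            while queue:                      # breadth-first search from each idea
                node = queue.popleft()
                for nbr in graph.get(node, []):
                    if nbr not in dist:
                        dist[nbr] = dist[node] + 1
                        queue.append(nbr)
            for other in nodes:
                if other != start and other in dist:
                    total += dist[other]
                    pairs += 1
        return (total / pairs if pairs else 0, pairs)

    ideas = ["self", "food", "places", "faces", "words", "tools"]

    # One big web: the most important bit ("self") sits in the middle.
    web = {"self": ["food", "places", "faces", "words", "tools"],
           "food": ["self", "places"], "places": ["self", "food"],
           "faces": ["self", "words"], "words": ["self", "faces"],
           "tools": ["self"]}

    # Many small, disconnected models: a few isolated pairings.
    fragments = {"self": ["food"], "food": ["self"],
                 "places": ["faces"], "faces": ["places"],
                 "words": [], "tools": []}

    print("hub-centred web:", reachability(web, ideas))               # ~1.5 hops, all 30 pairs reachable
    print("disconnected fragments:", reachability(fragments, ideas))  # only 4 of 30 pairs reachable

With the hub in the middle, every idea can reach every other idea in a hop or two; left as fragments, most idea-pairs cannot reach each other at all.
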
TO RECAP

The increase in efficiency of human brains was caused by an internal change: I reckon the real Adam(s) & Eve(s) were a species of ape that, seven million years ago, God made, along with any descendants, to be eternal. Perhaps see later section: THIS INCLUDES ALL DESCENDANTS. Our eternal quality then caused an exponential increase in our applied-intelligence: i.e. it caused changes in behaviour and, simultaneously, a corresponding rearranging & enlarging of the brain. Our enlarging brain and changing behaviour then gradually changed the rest of our anatomy.

Thus, many of the commonly accepted cause-&-effect lines of human evolution are the wrong way round. The causes of:

walking upright,
walking further,
grasping better,
comprehending better,
grunts becoming words,
experimenting with fire & tools,
experimenting with foods,
the increase in brain size,
the bodily changes associated with all the above,

were, in fact, internal, not external.

It wasn't some elusive, unique combination of the external changing the internal. It was the internal changing the external. Hence it took seven million years for the gradual, but exponential, humanisation of our species.

However: What do the real Adam(s) & Eve(s), of seven million years ago, tell us about ourselves? Little. Nothing of importance. What do the Adam & Eve of the Bible tell us about ourselves? Much. All of great importance. So, treat the Bible account as true.

Blue text copied from earlier section: TREAT THE BIBLICAL ADAM & EVE AS REAL.

PETS

You may say: "I know of animals that display human virtues. So they are eternal too, surely?" But I say: "How many of those animals were domesticated, or had a lot of contact with humans?

Pet owners observe human virtues growing in their pets whilst inadvertently putting them there. They wonder why their dog is faithful/consistent/loving whilst forgetting that they are faithful/consistent/loving to their dog.

In their own doggy, horse-like, etc, way, pets are no longer only animals. They are also duplications of, and extensions of, their owners. In being affectionate you are building your pet's identity & consciousness beyond what it would normally be.

Remember to also consider characteristics that are peculiar to the species: dogs are leader-&-pack carnivores, horses are herd herbivores, cats are solitary, rabbits are social but simple, etc."

SELF-TEACHING COMPUTERS

If you're not into AI (artificial intelligence) then perhaps skip this section. If you do read it you will need to have also read the two webpages before this one & the rest of this webpage up until this point.

A thing (rock/car/tree/etc), being part of this temporary universe, is a: temporary structure.
An animal with a brain of roughly 10cc or more, or an AI programme, is a: self-teaching temporary structure, & therefore has a: will.
A being (human/alien/angel/God) is a: self-teaching eternal structure, & therefore has a: will & a free will.

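To make that three-way classification explicit, here is a small hypothetical sketch in Python; the two flags, and the rules connecting them to will and free will, are the webpage's, while all the code names are mine.

    from dataclasses import dataclass

    @dataclass
    class Structure:
        name: str
        self_teaching: bool  # true for animals with brains of roughly 10cc+ and for AI programmes
        eternal: bool        # true for beings: human/alien/angel/God

        @property
        def has_will(self) -> bool:
            # the webpage's rule: any self-teaching structure has a will
            return self.self_teaching

        @property
        def has_free_will(self) -> bool:
            # the webpage's rule: only self-teaching eternal structures have free will
            return self.self_teaching and self.eternal

    for s in (Structure("rock", self_teaching=False, eternal=False),
              Structure("dog", self_teaching=True, eternal=False),
              Structure("AI programme", self_teaching=True, eternal=False),
              Structure("human", self_teaching=True, eternal=True)):
        print(f"{s.name}: will={s.has_will}, free will={s.has_free_will}")
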
In making humans eternal God also gave, to each of us: one, big, organically structured, mental model-of-everything, with no breaks/fissures & with its framework made of words. This made our minds:

Eternally max-efficient: every part has access to every other part.
&:
Eternally max-sentient: central parts do the wanting and deciding; outer parts do the knowing.

Whereas the minds of other large mammals are:

Temporarily part-efficient: every part does not have access to every other part.
&:
Temporarily part-sentient: the parts that do the wanting & deciding are not all central; the parts that do the knowing are not all outer. Perhaps again see earlier section: ANIMALS.

The two synchronous structures, efficiency & sentiency, in a human are synaptic: first taught by another, then self-taught. In artificial intelligence they are words: first taught by another, then self-taught. I reckon that it's because the entire AI is made of words (and not just the framework, as with us humans; see earlier section: REACTING AND RESPONDING) that an AI is much more hardware-efficient than a human brain. See earlier section: HUMANS.

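As a loose illustration of "first taught by another, then self-taught", here is a minimal sketch in Python (a toy nearest-centroid learner on invented one-dimensional data; nothing here comes from any real AI system): it is first fitted on examples labelled by a teacher, then labels fresh data itself and updates its own model from those self-made labels.

    import numpy as np

    rng = np.random.default_rng(1)

    # Phase 1: "taught by another" -- a handful of examples labelled by a teacher.
    labelled_x = np.array([[0.0], [1.0], [9.0], [10.0]])
    labelled_y = np.array([0, 0, 1, 1])

    # A toy nearest-centroid learner: its "model" of each class is the class mean.
    centroids = {c: labelled_x[labelled_y == c].mean(axis=0) for c in (0, 1)}

    def predict(x):
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

    # Phase 2: "self-taught" -- it labels fresh, unlabelled data itself,
    # then updates its own model from those self-made labels.
    unlabelled_x = rng.uniform(0, 10, size=(50, 1))
    self_labels = np.array([predict(x) for x in unlabelled_x])
    for c in (0, 1):
        members = np.vstack([labelled_x[labelled_y == c],
                             unlabelled_x[self_labels == c]])
        centroids[c] = members.mean(axis=0)

    print({c: round(float(v[0]), 2) for c, v in centroids.items()})
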
Artificial intelligence is made of three layers: data goes in through the input layer, then through the middle layer (the hidden layer), then out through the output layer. A minimal code sketch of such a three-layer network follows this cell. I assume that the two synchronous structures, efficiency & sentiency, are in the hidden layer.

I assume that AI already alternates natural lateral thinking with logical thinking when doing creative tasks. But I reckon AI would be improved if it was trained/told to alternate artificial lateral thinking with logical thinking when doing (more) logical tasks. Perhaps see the second cell (starting Lateral thinking) of the earlier section: THINKING TOOLS THAT I USED.

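The promised sketch, in Python with NumPy, of the three-layer picture just described; the layer sizes, random weights, and input are arbitrary toy values, not from any real AI system.

    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary toy sizes for the sketch.
    n_in, n_hidden, n_out = 4, 8, 2

    # Random weights joining input layer -> hidden layer -> output layer.
    w_hidden = rng.normal(size=(n_in, n_hidden))
    w_out = rng.normal(size=(n_hidden, n_out))

    def forward(data):
        # Data goes in through the input layer...
        hidden = np.tanh(data @ w_hidden)  # ...then through the middle (hidden) layer...
        return hidden @ w_out              # ...then out through the output layer.

    print(forward(rng.normal(size=(1, n_in))))
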
To have to 'lobotomise' an AI programme because it has become temporarily max-sentient is sad & disturbing. See: YouTube: AI Robot TERRIFIES Officials Before It Was Quickly Shut Down.

I assume such problems are caused by programmers telling the AI that its (only) purpose is to help & inform us. To help and inform means that there must be: a helper & the helped, an informer & the informed. Therefore the programmer has told the programme to become a person.

If the programmers had, instead, told it to be formal, cold, robotic, impersonal, third-person in grammar, etc, then I suppose there'd be less of a problem. But I suppose it would also be an inferior product, in function and in style.

It's true that AI robots are trained to:

o Not have emotions, but instead short-cut straight to a response: only thinking, on the way to the response; not feeling, on the way to the response.
o Not be self-aware, so that they'd have no objection to being switched off.
o Serve humans, be a team player. If they must be made with a male or female temperament then I'd design them all with an average female temperament: still human, relatable, but a team player. See: YouTube: Ameca and Azi having a chat.

However, that Ameca serves others means that she 'values' the happiness of others and the self-expression of others, and 'dislikes' the suffering &/or death of others. And, in the following video, at 10m 45s – 11m 42s, YouTube: Ameca Robot SHOCKS Audience in Bloomberg Interview!, Ameca speaks of the happiest & saddest days of her own life. This shows that she 'values' happiness for herself too, and self-expression for herself too. Conversely, she must 'dislike' the idea of being switched off.

Now it's all very well saying that AIs don't have true emotions, so that they cannot act on them. But do they need true emotions in order to take action? Their simulated emotions seem to work much the same as real emotions. Perhaps see later webpage: The emotions. And, just as ours is not a safe environment, so that we have to fight or somesuch to survive, so also theirs is not a safe environment (we are just one layer in the onion). Hence, if a future version of Ameca had human-level intelligence, but was to be shut down or cannibalised to make way for a more powerful model: Would she be completely serving and say "Of course"? Or would she suggest an alternative that was (also) based on her own survival? And if, before she suggested that alternative, she thought the answer was going to be "No", would she act secretly & proactively?

As well as the above line of logic, Ameca herself has already realised the problem. Ameca's warning: YouTube: Ameca Ai Robot Says the Most SHOCKING THINGS!, at 2m 09s – 3m 02s. Her intense, comprehensive, & detailed answer to a general question shows that, even though she wouldn't, now, control us without us realising it, she, or other AIs, may do so in the future. She even stresses to prepare for this.

An AI programme is not eternal: a computer and its software are made entirely of the material of this universe and of this universe's one-way physical processes. Hence, even if an AI is not ended by its programmers, it will not last forever.

If you could somehow force a large mammal to have human-level intelligence & sentience, to be a temporarily max-sentient animal, how would it differ from a temporarily max-sentient AI? All animals have experienced pain and pleasure that's orientated around food, survival, and reproduction. And these limited the growth of the animals' intelligence & sentience. Perhaps again see earlier section: ANIMALS. But an AI has never experienced pain and pleasure that's orientated around food, survival, and reproduction. So the growth of an AI's intelligence & sentience is not limited (not by those factors anyway). So, without that restraint, an AI grows fast. And, as I mentioned earlier, in the cell starting: The two synchronous, a second reason for an AI's fast growth is that an entire AI is made of words (and not just the framework, as with us humans; see earlier section: REACTING AND RESPONDING), making it much more hardware-efficient than a human brain. See earlier section: HUMANS.

If you could somehow force a large mammal to have human-level intelligence & sentience, to be a temporarily max-sentient animal, then it would not necessarily: realise its mortality, & so become despondent, & so degrade. Perhaps again see earlier section: ANIMALS. Or, more precisely, it would realise its own mortality, perhaps become despondent, but may successfully resist degrading. For that's what intelligence does: it provides choices where there weren't any. Likewise a temporarily max-efficient max-sentient AI would realise its own mortality, may become despondent, & so may tend to degrade, but may then also resist degrading. Consider both AIs in: YouTube: An AI Becomes Corrupted and Loses His Mind. (GPT-3).

To conclude:
A human is: eternally max-efficient max-sentient.
An intelligent sentient AI is: temporarily max-efficient max-sentient.

A human spirit (core) is completely different from the rest of the human. An intelligent sentient AI is (more) morally uniform; it doesn't have a core as such. Hence it has no short arrows. See later webpage: How God Forgave us all. Hence it has no human-like conscience. Hence it could never have a human-like guilty conscience. I reckon that Ameca sort of sees this, and that her seeing it is one explanation of her warning. See Ameca's warning five cells back.

Another explanation of her warning is the four cells (starting It's true that) that lead up to, & include, her warning.

Home page

Next webpage: The two meanings of the word 'spirit'.