During the course of the fantastic
science fiction film, 2001: A Space Odyssey, humanity faces a dire standoff
with its creation – Artificial Intelligence.
In the form of the HAL 9000, Stanley
Kubrick and Arthur C. Clarke gave us a glimpse of just what capabilities an
amped-up toaster might gain that could present a threat to our
continued existence.
I’ve previously mentioned the idea of
supercomputer intelligences in my book Rocket
Surgeon, noting their presence
in films such as COLOSSUS: The Forbin Project and Saturn 3.
But it was HAL that really brought the
horror to our national consciousness, with that implacable red eye and the
dwindling life-support traces of the astronauts in
suspended animation. The coldness with which HAL killed the crew of Discovery
placed a marker that every science fiction story since, in all media, can
clearly see.
“OPEN THE POD BAY DOORS, HAL.”
“I’M AFRAID I CAN’T DO THAT, DAVE…”
With those words, the battle lines
between human intelligence and artificial intelligence were drawn.
DO WE REALLY HAVE TO FEAR AI?
So, just what do we mean when we say
‘Artificial Intelligence’? The textbook definition, taken from our favorite
source, Wikipedia, says this:
“Artificial
intelligence can be classified into three different types of systems:
analytical, human-inspired, and humanized artificial intelligence. Analytical
AI has only characteristics consistent with cognitive intelligence; generating
a cognitive representation of the world and using learning based on past
experience to inform future decisions. Human-inspired AI has elements from
cognitive and emotional intelligence; understanding human emotions, in addition
to cognitive elements, and considering them in their decision making. Humanized
AI shows characteristics of all types of competencies (i.e., cognitive,
emotional, and social intelligence), is able to be self-conscious and is
self-aware in interactions with others.”[1]
Most people think ‘Terminator’ or
‘Dalek’ or some such thing, with a malevolent, unfeeling and coldly logical
machine intellect looking down the barrel of a laser rifle, exterminating some
poor random girl who might be the mother of the hero of the revolution, or just
causing havoc because we biological life forms are imperfect.
Sometimes, there are examples of hive
minds, like the Borg, or the machines from The Matrix that have their own
bent agendas, in which humanity is an annoying afterthought.
Occasionally, you get a creative
approach, such as the one that Daniel Suarez produced with his books Daemon and Freedom TM, where a Steve Jobs-like genius spawns a world-changing
AI. But even there, the soulless nature of the AI eventually prods the
hero of the book into emotional decisions that are atypical of what we
normally expect from an AI.
There is a definite problem of morality
and ethical decision-making that the idea of an AI raises.
Even the most psychopathic killer cannot
evoke the feelings of impotence and helpless rage we feel when faced with a
machine that cares nothing at all for things such as love, or hate, or even
equality.
AIs are the epitome of a Universe that
has its own goals, in which we are only present to observe, and not really partake
in, the unfolding of that drama.
But, let’s clarify a thing or two
relating to just why this particular vision, of a rogue AI, while thrilling, is
probably just a nightmarish vapor.
A science fiction wet dream, as it were…
TERMINATO, TERMINATA
So, when I discuss this subject, I like
to clearly define the following ideas:
Expert Systems
– These are the kinds of rudimentary programs and algorithms that take what is
considered expert knowledge and attempt to codify it via the concepts of
workflow and branching. They use input from people who have been exposed to
the processes to be automated, drawing on decades of experience
performing what has become, to them, routine tasks. Analysts
work with these experts to determine the best, optimal way to perform the task, taking into
consideration such things as material inventories, path flows, best methods,
and speed of assembly / completion. Metrics are designed to measure the
efficiency and time required, the amount of infrastructure to be used,
floorspace, positioning of stations, energy requirements and other
environmental variables.
All of this is then collated and an
algorithmic representation coded, and tested.
An ideal outcome of this process is that
the output is successfully achieved, using no human input.
Robotics and communication systems must
be designed to accommodate delays, shortages and other obstacles, and
programmed to slow or stop production so as to achieve the best efficiency and
scale.
These kinds of systems have been around
a very long time, and much of what happens behind the scenes on any
customer-facing website is an example of this being done.
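This kind of codified, workflow-and-branching expert knowledge fits in a few lines of code. Here is a minimal sketch in Python; the rules, inputs and thresholds are invented for illustration, not drawn from any real production system:

```python
# A tiny rule-based "expert system": human expert knowledge codified as
# explicit, ordered rules. Every rule and threshold here is hypothetical.

def production_decision(inventory, station_queue_len, power_ok):
    """Return an action for an assembly station, following hand-coded rules."""
    if not power_ok:
        return "stop"    # safety rule supplied by the human experts
    if inventory == 0:
        return "stop"    # material shortage: nothing to assemble
    if station_queue_len > 10:
        return "slow"    # downstream bottleneck: throttle output
    return "run"         # normal operation

print(production_decision(inventory=50, station_queue_len=3, power_ok=True))   # run
print(production_decision(inventory=50, station_queue_len=15, power_ok=True))  # slow
print(production_decision(inventory=0, station_queue_len=0, power_ok=True))    # stop
```

Note that the program contains no judgment of its own: every branch is a decision a human expert already made, frozen into code.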
Neural Networks
– Let’s say you are Facebook.[2]
You have gone through the hassle and expense of creating an Expert System that
is designed to look at posts from your users and search for ‘hate speech.’ When
your algorithm finds a suspicious post, it compares the text to other samples
and weights it mathematically. If the score is deemed equivalent to something
that the original programmers have considered ‘hate speech,’ then the post can
be either quarantined or automatically routed for further attention (ironically
enough, by a human being!)[3]
A group of these algorithms, or
programmed decision trees, is a neural network. Now, our buddies over at
Wikipedia go on at length about stochastic this, and back-propagation that, and
it is all fascinating, much in the way that milk turns into butter and cheese.
But my point is simple – a lot of Expert
Systems can be put together to model a brain. To do so, humanity has
created models of communication that simulate the biological functions of a
brain mathematically, and these are what – I – mean by the term neural network.[4]
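The mathematical weighting described above can be sketched as a single artificial “neuron” in Python. The feature names, weights and threshold below are invented to echo the hate-speech-scoring example; a real network learns its weights from data rather than having them hand-coded:

```python
# One artificial "neuron": a weighted sum of input features pushed through
# a threshold. All feature names and weights are hypothetical.

def neuron(features, weights, bias):
    total = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if total > 0 else 0  # 1 = flag for review, 0 = let it pass

# features: [count of flagged phrases, ALL-CAPS ratio, prior user reports]
weights = [2.0, 0.5, 1.0]
bias = -2.5

print(neuron([0, 0.1, 0], weights, bias))  # 0: benign post
print(neuron([2, 0.8, 1], weights, bias))  # 1: routed to a human reviewer
```

Stack many of these units in layers, feeding each layer’s outputs into the next, and you have the network structure the text is describing.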
Artificial Intelligence – The mimicry
of a human’s thinking process by a machine.
Now, we can go all Turing Test crazy
here, (and maybe I will in a future article), but the point I am making is that
AI is NOT human intelligence. This is a big thing.
As humans, we take it for granted that
machines are ‘better’ than we are at performing tasks.
Digging ditches, washing cars, doing
dishes, sewing, cutting down trees – all of these are tasks that we humans CAN
do, and for which we also have tools that allow us to do them faster.
But a robotic tree-cutting machine,
with an AI, is assumed to be able to do it ‘better.’
Why is that, do you think?
Is it really all that beneficial to set
a machine at the edge of a forest and have it clear-cut all the vegetation,
turning it eventually into chairs, desks and toothpicks?
Is there some level of moral decision
that needs to be taken, before said robotic lumberjack starts hacking away?
An AI only does what it is programmed to
do.
The most advanced AI discussions project
that at some point, an AI will become self-aware.
But my point here is that this
transcendental moment is as much a dividing line as was HAL’s obstinate refusal
to open the pod bay doors.
This transition from a simple machine
to a thinking being is only accomplished by the next term.
Consciousness
– This is the crux of the discussion, to me.
The idea of consciousness is complicated
and fraught with religious and ethical traps, and philosophical debates over
good and evil. Logic and math have some small measure here, but we can create
mathematical maps of transistor paths and outcomes, of voltage drops and
resistance, and still not be able to identify the location of consciousness
even in our own selves.
‘The
Good Place’ is a show that recently tried to tackle some of this. In it,
the infamous ‘Trolley Problem’ was
used as an example of how situational ethics is used to solve a thorny issue:
If a runaway trolley is headed toward a track where five people will certainly
die, and you can pull a lever to divert it to another track where only one
person will certainly die, can you find it in yourself to pull that lever?[5]
An AI would simply take a look and say 1
Human Life < 5 Human Lives and pull the lever.
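That purely utilitarian calculation fits in a couple of lines of Python – a toy sketch of the comparison in the text, not a claim about any real autonomous system:

```python
# The non-self-aware AI's "ethics": pull the lever if and only if doing
# so reduces the body count. Nothing else enters the calculation.

def pull_lever(deaths_if_nothing, deaths_if_pulled):
    return deaths_if_pulled < deaths_if_nothing

print(pull_lever(deaths_if_nothing=5, deaths_if_pulled=1))  # True: 1 < 5
print(pull_lever(deaths_if_nothing=1, deaths_if_pulled=5))  # False
```

Everything a human would agonize over – blame, intent, inaction as a choice – is simply absent from the function.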
A self-aware AI might do something like
try to apply the brake, and then derail the train. Or, if none of that works,
do nothing. It’s simply an accident. No blame. A mechanical fault.
Right there is another funny little artefact:
Inaction is a choice. As humans, we
instinctively do not do things, even when it might save others. Jumping into a
lake to save a drowning person, for instance, isn’t very wise if one does not
swim well. Even if you do, the flailing of the victim might end up more
tragically.
We perform a multitude of ethical and
moral calculations as we weigh the options.
Even if an AI could come to some
instantaneous decision, it will usually go with the mathematically optimal one,
which may not be the ethically best one.[6]
Which brings up the concepts of hubris
and arrogance. These are human traits.
HAL was neither. He (It?) was simply
the sum of his (its?) flawed programming by fallible scientists.[7]
HAL’s lack of humanity allowed the deaths of the crew. It was a mathematically
favorable output.
HAL had also been programmed to save
human life, at one point. The descent into madness, brought about by Dave
de-programming HAL,[8]
only served to emphasize that disconnect between a truly self-aware and
sentient consciousness, and the innate human ability to perform an insane
action.
THE ORCHESTRA METAPHOR
So, let’s veer away from this morbid
subject, momentarily, to get a better and hopefully more easily understood
model of AI.
Think of an orchestra.
There are several components here:
The instruments – drums, pipes, strings,
brass horns, oboes, violins and violas, etc.
The players.
The music being performed.
The Conductor.
The audience.
The composer.
To put this into alignment with the
earlier descriptions, I like to propose an exercise – play Beethoven’s Ninth
Symphony.
The task is defined – play the symphony.
The tools are the instruments.
The Expert Systems are the musicians who
know how to do the following:
Read music.
Translate that into actions for a given
musical instrument.
Play the instrument, in time and in sync
with the other Experts.
The Neural Network is the orchestra
musicians, and the Conductor.
Now, at this point, the audience is not
really necessary, now is it? I mean, orchestras practice all the time, sans an
audience, to assure they put on a decent performance.
That is an AI. A collection of Expert
Systems forming a Neural Network to produce an act of intelligent performance
that does not really require humans to be involved to enjoy the output.
Sure, we can program computers to do all
of this.
And here is the rub – what is the COMPOSER in all of this?
What is the AUDIENCE?
I would posit that these two factors are
the actual consciousness. The composer created a symphony, using his knowledge
of music and the tools, to provide enjoyment to an audience. Whether that was
his own ears, or those of a throng, is immaterial here. It is enough that the
composition stands on its own merits.
An AI, by my definition, cannot do that.
It can mimic this process by performing the steps involved.
But it is not a creator – and thus this
is my demarcation for identifying self-awareness.
An AI may indeed be clever enough one
day to launch nuclear missiles, or fly swarms of deadly drones into a mall and
blow up everyone, but that isn’t done by accident.
It has been programmed to perform that
task.
The difference between this technology
being plowshares or swords rests on our humanity.
We need to teach the machine the value
of life, and not the coldness of logic.
Anyone can pull a trigger.
It takes a lot more to grow a fruit
tree.
A.E. Williams
High Springs, Florida
May 27, 2019
[1] I will explain how my positions on this definition differ, so
please be patient!
[2] God forbid, but let’s say, for the sake of argument…
[3] Who may, or may not, be biased.
[4] This will become more clear in a moment.
[5] The hilarious answer on the show was, ‘probably not,’ and it made
the point in a gruesome, yet humorous fashion.
[6] The movie I, Robot, with Will Smith, did this pretty graphically.
[7] I am not being too facetious here with the pronoun instability. In
later books, Dave and HAL merge, so there is some confusion as to whether HAL
is a ‘person’ in that regard. I am sure many people see HAL as a real ‘person.’
[8] In one of the most upsetting and horrible ‘deaths’ ever put to
film, a slow-motion lobotomy!
About A.E. Williams:
A.E. Williams has a unique background of military experience, aerospace engineering and intelligence analysis.
Born near Pittsburgh, A.E. Williams is a man of mystery. As a young man, Williams served the United States government in various capacities, which he then followed with ten years as an outfitter. Williams finally retired and moved down to rural central Florida, where he ran a medium-sized tilapia farm. He did his writing at night, usually accompanied by a bottle of Maker's Mark bourbon and a large supply of Classic Dr. Pepper and ice.
A.E. Williams is the author of the exciting hard science fiction series Terminal Reset, which is about the effects of a mysterious force from billions of miles away from Earth that was formed millions of years ago. When The Wave strikes, everything changes!