from one thousand years remove ... instead of
twenty.
Can the Singularity be Avoided?
Well, maybe it won't happen at all: Sometimes I try to imagine the
symptoms that we should expect to see if the Singularity is not to
develop. There are the widely respected arguments of Penrose [19] and
Searle [22] against the practicality of machine sapience. In August of
1992, Thinking Machines Corporation held a workshop to investigate
the question "How We Will Build a Machine that Thinks" [27]. As you
might guess from the workshop's title, the participants were not
especially supportive of the arguments against machine intelligence. In
fact, there was general agreement that minds can exist on nonbiological
substrates and that algorithms are of central importance to the existence
of minds. However, there was much debate about the raw hardware
power that is present in organic brains. A minority felt that the largest
1992 computers were within three orders of magnitude of the power of
the human brain. The majority of the participants agreed with
Moravec's estimate [17] that we are ten to forty years away from
hardware parity. Yet another minority pointed to [7] and [21], conjecturing that the computational competence of single
neurons may be far higher than generally believed. If so, our present
computer hardware might be as much as ten orders of magnitude short
of the equipment we carry around in our heads. If this is true (or for
that matter, if the Penrose or Searle critique is valid), we might never
see a Singularity. Instead, in the early '00s we would find our hardware
performance curves beginning to level off -- this because of our
inability to automate the design work needed to support further
hardware improvements. We'd end up with some very powerful
hardware, but without the ability to push it further. Commercial digital
signal processing might be awesome, giving an analog appearance even
to digital operations, but nothing would ever "wake up" and there
would never be the intellectual runaway which is the essence of the
Singularity. It would likely be seen as a golden age ... and it would also
be an end of progress. This is very like the future predicted by Gunther
Stent. In fact, on page 137 of [25], Stent explicitly cites the
development of transhuman intelligence as a sufficient condition to
break his projections.
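One way to see what these competing estimates imply is a back-of-the-envelope calculation (a minimal sketch; the 18-month doubling time for raw hardware power is an assumption of mine, not a figure from the workshop):

    import math

    # Years of steady exponential growth needed to close a hardware gap of
    # k orders of magnitude (base 10), assuming one doubling of raw power
    # every 1.5 years -- an assumed rate, not the workshop's figure.
    def years_to_parity(orders_of_magnitude, doubling_time_years=1.5):
        doublings = orders_of_magnitude * math.log2(10)  # 10^k = 2^(k * log2 10)
        return doublings * doubling_time_years

    for gap in (3, 10):  # the two minority estimates of the remaining gap
        print(f"{gap} orders of magnitude short -> ~{years_to_parity(gap):.0f} years")

On that assumption, a three-order-of-magnitude gap closes in roughly fifteen years, while a ten-order gap takes something like half a century -- long enough that a levelling-off of the design curves could intervene first.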
But if the technological Singularity can happen, it will. Even if all the
governments of the world were to understand the "threat" and be in
deadly fear of it, progress toward the goal would continue. In fiction,
there have been stories of laws passed forbidding the construction of "a
machine in the likeness of the human mind" [13]. In fact, the
competitive advantage -- economic, military, even artistic -- of every
advance in automation is so compelling that passing laws, or having
customs, that forbid such things merely assures that someone else will
get them first.
Eric Drexler [8] has provided spectacular insights about how far
technical improvement may go. He agrees that superhuman
intelligences will be available in the near future -- and that such entities
pose a threat to the human status quo. But Drexler argues that we can
confine such transhuman devices so that their results can be examined
and used safely. This is I. J. Good's ultraintelligent machine, with a
dose of caution. I argue that confinement is intrinsically impractical.
For the case of physical confinement: Imagine yourself locked in your
home with only limited data access to the outside, to your masters. If
those masters thought at a rate -- say -- one million times slower than
you, there is little doubt that over a period of years (your time) you
could come up with "helpful advice" that would incidentally set you
free. (I call this "fast thinking" form of superintelligence "weak
superhumanity". Such a "weakly superhuman" entity would probably
burn out in a few weeks of outside time. "Strong superhumanity" would
be more than cranking up the clock speed on a human-equivalent mind.
It's hard to say precisely what "strong superhumanity" would be like,
but the difference appears to be profound. Imagine running a dog mind
at very high speed. Would a thousand years of doggy living add up to
any human insight? (Now if the dog mind were cleverly rewired and
then run at high speed, we might see something different....) Many
speculations about superintelligence seem to be based on the weakly
superhuman model. I believe that our best guesses about the
post-Singularity world can be obtained by thinking on the nature of
strong superhumanity. I will return to this point later in the paper.)
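To put numbers on the time ratios in this thought experiment (a sketch only; the million-fold factor comes from the scenario above, and reading "a few weeks" as three weeks is an illustrative assumption):

    SPEEDUP = 1_000_000                 # the confined mind runs 10^6 times faster
    SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7

    # One subjective year inside, measured in outside time:
    print(f"1 inside year ~= {SECONDS_PER_YEAR / SPEEDUP:.0f} outside seconds")  # ~32 s

    # "A few weeks of outside time" (taking three weeks), measured inside:
    inside_years = 3 * 7 * 24 * 3600 * SPEEDUP / SECONDS_PER_YEAR
    print(f"3 outside weeks ~= {inside_years:,.0f} inside years")  # ~57,500

At that ratio, "years of your time" pass in well under a minute for the masters outside, which is what makes patient, cumulative persuasion so hard to guard against.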
Another approach to confinement is to build rules into the mind of the
created superhuman entity (for example, Asimov's Laws [3]). I think
that