glut of technical riches, never properly absorbed
(see [25]).)
In the 1960s there was recognition of some of the implications of
superhuman intelligence. I. J. Good wrote [11]:
Let an ultraintelligent machine be defined as a machine that can far
surpass all the intellectual activities of any man however clever.
Since the design of machines is one of these intellectual activities, an
ultraintelligent machine could design even better machines; there
would then unquestionably be an "intelligence explosion," and the
intelligence of man would be left far behind. Thus the first
ultraintelligent machine is the last invention that man need ever make,
provided that the machine is docile enough to tell us how to keep it
under control. ... It is more probable than not that, within the twentieth
century, an ultraintelligent machine will be built and that it will be the
last invention that man need make.
Good has captured the essence of the runaway, but does not pursue its
most disturbing consequences. Any intelligent machine of the sort he
describes would not be humankind's "tool" -- any more than humans
are the tools of rabbits or robins or chimpanzees.
Through the '60s and '70s and '80s, recognition of the cataclysm spread
[29] [1] [31] [5]. Perhaps it was the science-fiction writers who felt the
first concrete impact. After all, the "hard" science-fiction writers are the
ones who try to write specific stories about all that technology may do
for us. More and more, these writers felt an opaque wall across the
future. Once, they could put such fantasies millions of years in the
future [24]. Now they saw that their most diligent extrapolations
resulted in the unknowable ... soon. Once, galactic empires might have
seemed a Post-Human domain. Now, sadly, even interplanetary ones
are.
What about the '90s and the '00s and the '10s, as we slide toward the
edge? How will the approach of the Singularity spread across the
human world view? For a while yet, the general critics of machine
sapience will have good press. After all, till we have hardware as
powerful as a human brain it is probably foolish to think we'll be able
to create human equivalent (or greater) intelligence. (There is the
far-fetched possibility that we could make a human equivalent out of
less powerful hardware, if we were willing to give up speed, if we were
willing to settle for an artificial being who was literally slow [30]. But
it's much more likely that devising the software will be a tricky process,
involving lots of false starts and experimentation. If so, then the arrival
of self-aware machines will not happen till after the development of
hardware that is substantially more powerful than humans' natural
equipment.)
But as time passes, we should see more symptoms. The dilemma felt by
science fiction writers will be perceived in other creative endeavors. (I
have heard thoughtful comic book writers worry about how to have
spectacular effects when everything visible can be produced by the
technologically commonplace.) We will see automation replacing
higher and higher level jobs. We have tools right now (symbolic math
programs, CAD/CAM) that release us from most low-level drudgery. Or
put another way: The work that is truly productive is the domain of a
steadily smaller and more elite fraction of humanity. In the coming of
the Singularity, we are seeing the predictions of true technological
unemployment finally come true.
Another symptom of progress toward the Singularity: ideas themselves
should spread ever faster, and even the most radical will quickly
become commonplace. When I began writing science fiction in the
middle '60s, it seemed very easy to find ideas that took decades to
percolate into the cultural consciousness; now the lead time seems
more like eighteen months. (Of course, this could just be me losing my
imagination as I get old, but I see the effect in others too.) Like the
shock in a compressible flow, the Singularity moves closer as we
accelerate through the critical speed.
And what of the arrival of the Singularity itself? What can be said of its
actual appearance? Since it involves an intellectual runaway, it will
probably occur faster than any technical revolution seen so far. The
precipitating event will likely be unexpected -- perhaps even to the
researchers involved. ("But all our previous models were catatonic! We
were just tweaking some parameters....") If networking is widespread
enough (into ubiquitous embedded systems), it may seem as if our
artifacts as a whole had suddenly wakened.
And what happens a month or two (or a day or two) after that? I have
only analogies to point to: The rise of humankind. We will be in the
Post-Human era. And for all my rampant technological optimism,
sometimes I think I'd be more comfortable if I were regarding these
transcendental events from a thousand kilometers away.