any rules strict enough to be effective would also produce a device
whose ability was clearly inferior to the unfettered versions (and so
human competition would favor the development of those more
dangerous models). Still, the Asimov dream is a wonderful one:
Imagine a willing slave, who has 1000 times your capabilities in every
way. Imagine a creature who could satisfy your every safe wish
(whatever that means) and still have 99.9% of its time free for other
activities. There would be a new universe we never really understood,
but filled with benevolent gods (though one of my wishes might be to
become one of them).
If the Singularity cannot be prevented or confined, just how bad could
the Post-Human era be? Well ... pretty bad. The physical extinction of
the human race is one possibility. (Or, as Eric Drexler put it of
nanotechnology: given all that such technology can do, perhaps
governments would simply decide that they no longer need citizens!)
Yet physical extinction may not be the scariest possibility. Again,
analogies: Think of the different ways we relate to animals. Some of
the crude physical abuses are implausible, yet.... In a Post-Human
world there would still be plenty of niches where human equivalent
automation would be desirable: embedded systems in autonomous
devices, self-aware daemons in the lower functioning of larger sentients.
(A strongly superhuman intelligence would likely be a Society of Mind
[16] with some very competent components.) Some of these human
equivalents might be used for nothing more than digital signal
processing. They would be more like whales than humans. Others
might be very human-like, yet with a one-sidedness, a dedication that
would put them in a mental hospital in our era. Though none of these
creatures might be flesh-and-blood humans, they might be the closest
things in the new environment to what we call human now. (I. J. Good
had something to say about this, though at this late date the advice may
be moot: Good [12] proposed a "Meta-Golden Rule", which might be
paraphrased as "Treat your inferiors as you would be treated by your
superiors." It's a wonderful, paradoxical idea (and most of my friends
don't believe it) since the game-theoretic payoff is so hard to articulate.
Yet if we were able to follow it, in some sense that might say
something about the plausibility of such kindness in this universe.)
I have argued above that we cannot prevent the Singularity, that its
coming is an inevitable consequence of humans' natural
competitiveness and the possibilities inherent in technology. And yet ...
we are the initiators. Even the largest avalanche is triggered by small
things. We have the freedom to establish initial conditions, make things
happen in ways that are less inimical than others. Of course (as with
starting avalanches), it may not be clear what the right guiding nudge
really is:
Other Paths to the Singularity: Intelligence Amplification
When people speak of creating superhumanly intelligent beings, they
are usually imagining an AI project. But as I noted at the beginning of
this paper, there are other paths to superhumanity. Computer networks
and human-computer interfaces seem more mundane than AI, and yet
they could lead to the Singularity. I call this contrasting approach
Intelligence Amplification (IA). IA is something that is proceeding
very naturally, in most cases not even recognized by its developers for
what it is. But every time our ability to access information and to
communicate it to others is improved, in some sense we have achieved
an increase over natural intelligence. Even now, the team of a PhD
human and good computer workstation (even an off-net workstation!)
could probably max any written intelligence test in existence.
And it's very likely that IA is a much easier road to the achievement of
superhumanity than pure AI. In humans, the hardest development
problems have already been solved. Building up from within ourselves
ought to be easier than figuring out first what we really are and then
building machines that are all of that. And there is at least conjectural
precedent for this approach. Cairns-Smith [6] has speculated that
biological life may have begun as an adjunct to still more primitive life
based on crystalline growth. Lynn Margulis (in [15] and elsewhere) has
made strong arguments that mutualism is a great driving force in
evolution.
Note that I am not proposing that AI research be ignored or given less funding.
What goes on with AI will often have applications in IA, and vice versa.
I am suggesting that we recognize that in network and interface
research there is something as profound (and potentially wild) as
Artificial Intelligence. With that insight, we may see projects that are
not as directly applicable as conventional interface and network design
work, but which serve to advance us toward the Singularity.