Apr. 18th, 2010

(Continued from the previous day's entry...)

The question I always ask about singularity fiction is this: why would we want to abandon being human? There are certainly design flaws in the human mind and body that we need to fix, but by and large, being human is a richly satisfying experience, and one worth preserving for future generations. Eliminate disease, suffering, poverty, and many of the other undesirable aspects of a design that evolved rather than being engineered for continuous upgrades, and life could be very pleasant indeed. (At least for those of us in the first world. People in developing countries have more pressing problems to solve before the notion of Humanity 2.0 becomes useful.)

I don't doubt that some day, some of us will take the leap into some sort of post-human condition for a variety of reasons: boredom, frustration with the limits of the human design, or a desire to throw off old limitations and seek something new. But many will stay behind, whether from fear of what they might lose (think of Leonard McCoy in Star Trek and his distrust of the transporter), fear of the spiritual consequences (what is this thing called a soul, and would it be preserved if we upload our consciousness into a computer?), or a simple lack of desire to experience something so different. We humans are a surprisingly conservative group; most of us have no desire to experience life as a robot, a genetically engineered organism with eight arms, or a disembodied brain.

I can imagine a situation in which people are required to live a life as Human 1.0 before they are allowed to upgrade to Human 2.0—possibly as a way of maintaining a common thread of humanity among the many diverging lines of human descent that might arise after a singularity, and possibly just as a prudent way of maintaining a "backup" in case we engineer our future selves into some kind of inescapable corner. Most strains of singularity fiction tend to be strongly utopian, assuming that although there may be problems, they will all be solvable by significantly smarter humans. The long and sad history of software engineering gives me little confidence in that notion.

"Complexity theory" comes in many flavors, but they all share a common thread: it's inordinately difficult to predict the behavior of complex systems, particularly since many properties of these systems are "emergent" (they arise from the behavior of the system rather than from its original design parameters). The underlying concept of a singularity is that technology becomes so radical that the nature of the singular change itself and its consequences are both unpredictable. In many ways, we're already there: nobody really understands all the behavior of even a program as simple as Word, let alone expert systems and future neural networks. Even systems that seem simpler, such as the structure of our electrical power supply grid, have complex and poorly understood operating mechanisms, so that problems such as cascade failure become inevitable.

Another trope of singularity fiction is the notion of artificial intelligence evolving in a computer without any conscious plan by its designers. I've always suspected that true artificial intelligence will arise without us ever noticing, whether because we aren't paying attention or because the intelligence is too alien to be comprehensible. It probably won't be the evil, human-hating Skynet of the Terminator movies and TV show*, but it will be something strange. Much fiction inverts this notion and posits a post-human artificial intelligence that benevolently tries to manage our affairs, but I can't think of any of it that is truly utopian; the notion is no more attractive to most adults (who value their freedom of action) than any other kind of paternalism. Some, though, may value the chance to surrender free will and independent thought, as adherents of most fundamentalist religions do. This kind of musing leads many to fear the singularity, or to write about it in intensely negative terms; it is, after all, human to fear what we don't understand.

* I don't think I've ever seen this noted explicitly, but Terminator may have its historical roots in Harlan Ellison's deeply creepy 1967 short story, I Have No Mouth, and I Must Scream. If the title alone doesn't send a chill along your spine, read the actual story. Among other virtues, it serves as a fascinating exercise in examining the cultural assumptions embedded in the author's thinking, some of which Ellison recognizes and explicitly challenges, and some of which he takes as unexamined "givens".

Jody Lynn Nye compared the singularity to the changes of adolescence and puberty. I'll have to think through the implications of that notion, though it's certainly a concept with a long literary history in science fiction—witness Arthur C. Clarke's literalization of it in the title of his 1953 novel Childhood's End.

One of my bigger concerns about singularity fiction is that it is often based on the attitude that rather than trying to solve our problems, we can simply leave them behind. This is problematic on several levels. On the ethical level, it's at best lazy and at worst an attempt to avoid responsibility for the consequences of our actions. On a practical level, it ignores the historical lesson of technology: the flaws of designers are inevitably embodied in their designs. This is, of course, true of many things beyond science and technology, including (most relevantly to this blog) the musings of fiction writers and essayists.
