"Where a calculator on the ENIAC is equpped with 18,000 vaccuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1 1/2 tons."—Popular Mechanics (March 1949)

"640K ought to be enough for anyone."—Bill Gates

I didn't try to track down the full sources of these quotes, because historical accuracy is less important here than the basic sense of the quotes, namely that even experts have a lousy track record when it comes to predicting technological progress. This is a significant problem for writers of science fiction, since setting a story in the future presupposes that you are making a prediction about what that future will be like. Surprisingly, it's also a problem for writers of fantasy and historical fiction. Why? Because most fantasy fiction is based on a bowdlerized version of some historical period in Earth's history, after which the author extrapolates certain changes to make the situation more fictional. Those extrapolations don't have to be as scientifically rigorous as the extrapolations in science fiction, but they must still be logical and believable.

That being the case, how can we non-experts do a better job of prediction than the experts? We can't, at least not reliably. But we can start with the recognition that there are two very different ways to predict the future that will arise from your chosen starting point.

First, we can do what most experts do: propose a straight-line projection of current trends. For example, Moore's law has predicted an ongoing doubling of the number of transistors on a computer chip roughly every 2 years, and it's been amazingly accurate since about 1971. But if you're writing science fiction, you need to recognize that Moore's law will soon lose its predictive power, because we're reaching certain physical limits (due to quantum mechanics) in how much circuitry we can pack into a given space.
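To make the straight-line projection concrete, here's a minimal Python sketch of what that kind of extrapolation looks like. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is the usual textbook figure, used here purely for illustration rather than taken from anything above.

    # Straight-line (i.e., exponential) extrapolation of Moore's law.
    # Assumed baseline: ~2,300 transistors in 1971, doubling every 2 years.

    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        """Project transistors per chip under a fixed doubling period."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1991, 2011, 2031):
        print(year, f"{transistors(year):,.0f}")

By 2031 the projection sails past two trillion transistors per chip, which is exactly the kind of number that makes those physical limits hard to ignore.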

For near-future science fiction, or for fantasies set soon after the historical period used as their initial model, it's reasonable to simply extrapolate from what's going on: technology will be faster, a bit more sophisticated, and less expensive (thus more ubiquitous), and current social trends will continue largely uninterrupted. If your focus is more on the story and characters than on the technological and social changes, that's a reasonable approach, and it still lets you tell interesting stories. Continuing the computer chip example, you might propose that the near-future evolution of computers will involve the use of multiple processors rather than ever smaller or faster ones. Cloud computing and distributed processing are examples of where we may be heading. Even for home computers, it will probably become cheaper to incorporate a dozen chips in a computer and run them in parallel than to keep fitting more processors into a single chip, which is Intel's current strategy.
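As a rough illustration of the "many chips running in parallel" idea, here's a minimal Python sketch that splits a job across a dozen worker processes using the standard multiprocessing module; the work function is just a placeholder for any CPU-bound task.

    # Spread a dozen chunks of work across a dozen worker processes,
    # rather than waiting on a single faster processor.
    from multiprocessing import Pool

    def crunch(n):
        """Stand-in for a CPU-bound task."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [1_000_000] * 12            # a dozen chunks of work...
        with Pool(processes=12) as pool:   # ...handled by a dozen workers
            results = pool.map(crunch, jobs)
        print(sum(results))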

Second, we can propose a radical discontinuity—not necessarily a true singularity in the sense of Vernor Vinge and Charles Stross, but something that is nonetheless a radical departure from past trends. For example, we might propose that quantum computers will replace conventional computers. A fully evolved quantum computer would be so fast that things would become possible that we can barely imagine with today's machines.

Computer software is a better example, since it's easier to understand. Broadly speaking, modern software development represents no real advance over the software written for ENIAC 61 years ago. It's still algorithmic, sequential, linear, and primitive; worse yet, it's probably far buggier than it ever was, in large part due to the ever-increasing complexity of the code and the refusal of marketing managers to prioritize quality over quantity. There are no intelligent agents to help us deal with computing tasks, and we still spend far too much time telling the computer what to do. Shouldn't computers learn from watching what we're doing and simply automate tasks? Programs like QuicKeys can do this, but if you want that capability in OS X, Windows, or Linux, you have to do all the heavy lifting yourself. Apple and Microsoft, among others, have been asleep behind the keyboard, and the Linux community is largely following in their footsteps rather than seeking a paradigm change.
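To show how modest the "learn by watching" idea really is, here's a toy Python sketch that notices when the user's last few actions repeat back-to-back and offers to replay them. It's purely hypothetical, with no connection to QuicKeys or any real operating-system facility.

    # Toy "watch and automate" helper: find the longest trailing sequence
    # of actions that the user has just repeated, and offer to replay it.
    def repeated_suffix(actions, max_len=5):
        """Return the longest trailing sequence that immediately repeats, or None."""
        for length in range(max_len, 0, -1):
            if len(actions) >= 2 * length and actions[-length:] == actions[-2 * length:-length]:
                return actions[-length:]
        return None

    history = ["open report", "copy totals", "paste into sheet",
               "open report", "copy totals", "paste into sheet"]
    pattern = repeated_suffix(history)
    if pattern:
        print("You seem to keep doing:", pattern, "- want to automate it?")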

If you're taking the second approach to prognostication, you can simply assume that programmers eventually get a clue and start designing software to do what we want instead of what their marketing managers tell them to do (i.e., create more useless features and ignore longstanding bugs). The resulting story is potentially much more interesting, but the more radical the change you propose, the more likely you are to be embarrassed by what really happens. On the other hand, if you propose something sufficiently outré, it will be a long time before reality catches up with you—or more likely, bypasses you completely.

Moreover, it becomes much more difficult to think through all the implications of the change you've proposed. The technological implications are the easy part, though they're by no means easy. But social changes are a bear: Who would have predicted Twitter and omnipresent cell phones 30 years ago?

Still, half the fun of writing speculative fiction, whether science fiction or fantasy, is playing with these ideas to see where they lead. The fact that we'll be outright wrong in most cases shouldn't stop us from trying. But to do it well, you need to think about which strategy you'll adopt and spend some time thinking through the implications.