Just about finished reading Charles Stross' Neptune's Brood, sequel to Saturn's Children. It's about what you'd expect from recent Stross, which is to say so full of novel ideas, interestingly explored, that you'll be lost inside of the first few pages if you're not already an experienced reader of speculative fiction. In particular, Stross explores one of his recent fascinations, economics, in ways that are thought-provoking and often hilarious. His equation of merchant bankers with pirates is good for a chuckle, and his invocation of Monty Python's "Crimson Permanent Assurance" is simultaneously an Easter Egg for the culturally literate reader, a clue that we're reading an (economics-based) comedy of manners, and a major protagonist in the story.
Uncommonly for a Stross book, there are several major continuity errors and a great many things that seem to be continuity or logical errors (particularly a failure to account for the effects of relativity). I would need to construct a rigorous timeline (not an easy task) to confirm that some of these problems are real, rather than me missing a point or being misled by an unclear description. Most egregiously, I simply don't buy his explanation of the mechanisms of fast, medium, and slow money -- a scheme required to account for the needs of interstellar commerce in a setting where faster-than-light travel is impossible. It's intriguing and brilliant, and it's essential to the story logic, but that logic seemed more deeply flawed the more I thought about its implications. (That being said, if you're willing to accept the premise, it makes for an interesting driving force for the novel.)
But the motivation for this essay is the notion in the story that within a thousand years -- two at most -- we'll have reached the limits of science's ability to improve our understanding of the mechanics of the universe. As a result, there will be no further progress in science, and specifically, no possibility of moving from Physics 2.0 (Einsteinian relativity) to Physics 3.0 (the next radical change in our understanding of the universe). That is, there will be no revolutionary insight akin to how we moved from Physics 1.0 (Newtonian mechanics) to Physics 2.0. In short, the story universe is founded on the assumption that as of the current state of knowledge in the 21st century, we have effectively reached the end of science as we know it: all that remains is to play around with the details via engineering projects of various degrees of sophistication.
This is very different from the so-called "mundane manifesto", an esthetic stance based on the notion that near-future speculative fiction should rely on what we currently know about physics rather than ignoring that knowledge and inventing magical devices whenever reality proves inconvenient for our storytelling needs. It's a legitimate approach, so long as you recognize its goal and accept its constraints, and it's fun to occasionally work within such constraints. But if it's intended as serious speculation in the context of deep time (here, specifically, thousands of years in the future), Stross' decision either represents spectacular and probably unfounded arrogance about the scope of modern science or -- far more likely, given Stross' background and previous novels -- a conscious choice to reduce the number of "what ifs?" that must be dealt with in building the story's context.
The "what if?" problem is serious for an author of speculative fiction. The more details of future science, psychology, and society that you try to change, the more complex the story becomes and the more difficult it becomes to create a logically and fictionally consistent world. The problem arises from trying to juggle multiple simultaneous technological, scientific, and social interactions: every assumption has large consequences both for the story and for all the other assumptions you have made, and there are difficult-to-predict feedback effect as those consequences in turn create changes in the starting assumptions. Creating a consistent extrapolation becomes nearly impossible when you try this, even for experienced and skilled authors. Thus, most authors are content to change one big thing (maybe two) and explore the consequences thoroughly.
The big hole in Neptune's Brood, particularly given Stross' knowledge of computing and his handling thereof in previous works such as the Accelerando sequence, relates to computers. Basically, they don't seem to exist in any state different from what we already have, or if they do, those differences don't appear to affect the story in any significant way. At a minimum, given that human life as we know it is extinct in these stories, and that all the protagonists are genetically and mechanically re-engineered "post-humans" that are far healthier and more robust than we are, I would expect similarly upgraded brains and very sophisticated software assistants by the time of this story. This doesn't happen. Some simplifying assumptions are easy to swallow for the sake of a good story; this one choked me.
It's reasonable to think that if we stick with the squishy and limited brains possessed by Humanity 2.0 (i.e., us), we will at some point hit the limits of our ability to think. We're very close to that limit right now. For example, most of us can't examine the simultaneous interaction of more than about four variables, even using several of the advanced visualization tricks that I describe in my upcoming book on writing for science journals. The human brain simply isn't capable of dealing with that many variables at once. There are clear "hardware" limits, such as "the magical number seven, plus or minus two", a fairly firm ceiling on the average person's short-term memory. Different problems arise when we attempt really gnarly computations such as determining how a protein with a given sequence of amino acids will fold -- crucial to understanding how these molecules function in an organism -- or try to understand the ridiculously complex interactions within the microbiome. Complex economic, ecological, and physical models pose similarly difficult problems, and the more human psychology enters into these models, the worse the predictive problem becomes.
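To put a rough number on that combinatorial wall, here's a back-of-the-envelope sketch (my own illustration, nothing from the novel; the variable counts are arbitrary) of how quickly the space of possible interactions grows as variables are added:

```python
from math import comb

# Count every subset of two or more variables that could, in principle, interact.
# This is the sum of C(n, k) for k = 2..n, which equals 2**n - n - 1.
def interaction_terms(n_variables: int) -> int:
    return sum(comb(n_variables, k) for k in range(2, n_variables + 1))

for n in (4, 7, 10, 20):
    print(f"{n:2d} variables -> {interaction_terms(n):>9,} possible interaction terms")

# Output:
#  4 variables ->        11 possible interaction terms
#  7 variables ->       120 possible interaction terms
# 10 variables ->     1,013 possible interaction terms
# 20 variables -> 1,048,555 possible interaction terms
```

Four variables is already eleven distinct interactions to hold in mind at once; twenty variables is over a million.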
I chose these examples advisedly, since they illustrate a crucial point: even today, we are developing computational approaches that can handle these problems. Thus far, our success is limited by the computational complexity of the problems, and that limitation has two implications: as computers become more powerful, the problems will become more tractable, and as we come to understand the science better, the algorithms used in the calculations will improve and reduce the computational requirements. What I foresee is an evolution in computing from what we have now, Computing 1.0 (largely brute-force application of unsophisticated algorithms driven primarily by human thought processes), to something much more symbiotic, namely Computing 2.0 (far more sophisticated algorithms, probably developed by an artificial intelligence working as an equal partner with the human researcher).
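As a toy illustration of that second implication (again my own sketch, not anything from the book): the same question answered by a brute-force algorithm and by a slightly smarter one, with the work counted. The smarter algorithm gets the identical answer for a tiny fraction of the comparisons, with no faster hardware involved.

```python
import random

def closest_gap_bruteforce(values):
    """Compare every pair of values: about n*(n-1)/2 comparisons."""
    comparisons, best = 0, float("inf")
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            comparisons += 1
            best = min(best, abs(values[i] - values[j]))
    return best, comparisons

def closest_gap_sorted(values):
    """Sort first, then compare only neighbours: n - 1 comparisons after an n*log(n) sort."""
    comparisons, best = 0, float("inf")
    ordered = sorted(values)
    for a, b in zip(ordered, ordered[1:]):
        comparisons += 1
        best = min(best, b - a)
    return best, comparisons

data = [random.random() for _ in range(2000)]
gap_slow, work_slow = closest_gap_bruteforce(data)
gap_fast, work_fast = closest_gap_sorted(data)
assert abs(gap_slow - gap_fast) < 1e-12  # same answer either way
print(f"brute force: {work_slow:,} comparisons; sort-then-scan: {work_fast:,} comparisons")
```

For 2,000 values that's roughly two million comparisons versus two thousand: a better understanding of the problem's structure does the work that raw computing power would otherwise have to do.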
When that happens, all the horizons get pushed back, and our ability to understand is no longer limited by our biological hardware. Just as modern Computing 1.0 statistical analysis software and visualization tools let us see things we could never have seen even 50 years ago, humans working in partnership with truly sophisticated software will lead us to Science 3.0, 4.0, and beyond in the coming years. Lest you think this unfounded speculation, it's worth pondering the remarkable breakthroughs that have been achieved (and will continue to be achieved) by combining the insights of people from different ethnic, philosophical, religious, and cultural backgrounds, as described in the October 2014 issue of Scientific American.
Neptune's Brood is a fun read, but undermined by a single assumption ("the end of science") that I found too hard to swallow. I think we're better than that, and the universe is far more interesting than Physics 2.0 and Computing 1.0 suggest.