Worldcon report: No more soldiers, part I
Nov. 13th, 2009 06:41 pm

"Before a war military science seems a real science, like astronomy; but after a war it seems more like astrology."—Rebecca West
My post on Remembrance Day reminded me it would be an appropriate time to summarize the "No more soldiers" panel from the Montreal Worldcon. War stories are a popular staple of science fiction because they combine three of the genre's key concerns: high technology, human reactions under stress, and how changes in the cultural context (here, transplanting a traditional war story into the future) affect the meaning of and interactions between the first two concerns. Plus, the possibility that protagonists might die—or be forced into decisions with long-term moral consequences—provides ample opportunity for dramatically satisfying situations.
Some of the resulting stories aren't much better than video games translated into words—what I jokingly refer to as "aliens, explosions, and exploding aliens". Robert Heinlein's Starship Troopers is an early work often criticized for tending in this direction, though the critics who attack the novel most viciously usually ignore an inconvenient part of the backstory: Heinlein's belief that attaining full rights within a society should require some form of service to that society. Although Heinlein focused on military service in his novel, that was by no means the only form of service he considered relevant or important. The best science fiction war stories are both entertaining from an action-film perspective and socially insightful; Joe Haldeman's The Forever War is the classic example, and a clear response to Heinlein's novel.
Robots are a staple of military science fiction. It's been argued that roboticizing warfare would be a bad thing because it would remove the human cost of war (i.e., death and injury), and in so doing, would remove the primary reason to avoid wars. But this argument ignores an important lesson of modern history: the politicians who start wars are not the people who fight them, and they rarely care much about a human cost they will never bear themselves. That means a technological war, fought by machines rather than humans, is no more or less likely than a conventional war fought solely by humans.
A more serious objection to creating robot warriors can be seen in the insurgencies and guerrilla warfare being fought around the world today: when an enemy is so technologically superior that you can't confront them directly, you can often achieve your aims much more easily by attacking their civilians or by hiding behind your own. Not only is there less risk to your own soldiers, you're also more likely to mobilize your enemy's public opinion against the war if you harm their voters or persuade those voters that the main casualties of the war will be innocent civilians. This is one reason we so rarely see open-field wars between professional armies nowadays, and why warfare is increasingly fought inside cities and close to civilian populations. It's been said that generals are always fighting the last war instead of the current one, and emphasizing highly destructive robots and related technology supports this notion, because such weapons work best in open combat between two armies—something we may not see much of in the future. (There are promising signs that modern generals have been learning their history, and that this kind of antiquated thinking will become less common. I'll come back to that in Part II of this essay.)
A side discussion led me to ponder the moral aspects of video games and their fictional possibilities. Right now, the characters operated by the computer (what we used to call "non-player characters" in role-playing games such as Dungeons and Dragons) are quite primitive. Killing or maiming them therefore has little moral consequence, beyond whatever consequences arise from what you're thinking while you do the killing. But as artificial intelligence improves, we'll conceivably produce computerized characters capable of passing the Turing test—that is, characters whose behavior is indistinguishable from a human's, at least to amateurs like most of us. At that point, the moral equation changes dramatically, because we're no longer just extinguishing pixels on a screen.
Now invert that concern and consider the kind of artificial intelligence that would be required to operate a combat robot, and problems soon arise: How is ordering an intelligent robot to kill another intelligent robot more ethical than ordering a human to kill another human? More scarily, unless we get much better at computer programming than we have been (which seems unlikely, based on our track record thus far), it will be prohibitively difficult to prevent such robots from accidentally or intentionally killing human noncombatants. How do you define and enforce the rules of engagement for a robot? And if you can, will anyone continue to respect those rules once they're starting to lose the war? The Terminator movies and TV show aren't the most insightful bits of writing, but they do at least illustrate how badly wrong such things can go.
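To make the definition problem concrete, here's a deliberately naive sketch, entirely my own invention and not drawn from any real system, of what a rule of engagement might look like as code. The names, the Track type, and the threshold are all hypothetical. Notice that the rule itself is trivial to write; everything hard hides inside the perception system that produces the probability.

```python
# A deliberately naive, hypothetical rules-of-engagement gate.
# Illustrative only: it shows where the difficulty actually lives.

from dataclasses import dataclass

@dataclass
class Track:
    armed_probability: float   # a classifier's guess, not ground truth
    in_engagement_zone: bool

CONFIDENCE_THRESHOLD = 0.99    # who picks this number, and why?

def may_engage(track: Track) -> bool:
    """The coded 'rule of engagement': trivially easy to state."""
    return (track.in_engagement_zone
            and track.armed_probability >= CONFIDENCE_THRESHOLD)

# The rule is one line. The hard part is the sensor-and-classifier stack
# that computes armed_probability, and the political question of what
# stops a losing side from quietly lowering CONFIDENCE_THRESHOLD.
print(may_engage(Track(armed_probability=0.97, in_engagement_zone=True)))  # False
```

A classifier that is wrong even one time in a hundred, applied at the scale and tempo of a war, kills noncombatants routinely; and the threshold is just a number somebody chose.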
Speaking of video games, you might wonder why intelligent robots would be necessary at all: Why can't we just operate machines from a distance, the same way we'd play a video game? Wouldn't it be safer for civilians to have a human mind behind the controls? The problem with teleoperated devices such as remote-control vehicles is time delay. The speed of light (and thus of radio waves) is astonishing, but it's slow enough that a perceptible lag builds up between when the robot's sensors capture an image, beam it up to a satellite, and relay it down to you, and the same delay repeats when your command travels back along the same pathway to tell the robot how to respond. (This doesn't even include the time it takes you to decide how to respond to the image.) The delay is long enough that your robot could easily be destroyed, or lose track of a target, by the time your command reaches it. The result is an ineffective response, and an increased risk of exposing civilians to incorrectly targeted weapons fire.
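To put a rough number on that lag, here's a back-of-envelope calculation in Python. I'm assuming a geostationary relay satellite (about 35,786 km up) and straight up-and-down paths, which is the best case; real routing, slant paths, and signal processing only add to the total.

```python
# Back-of-envelope round-trip delay for a robot teleoperated through a
# geostationary relay satellite. Illustrative numbers only, not a model
# of any real military link.

C_KM_PER_S = 299_792         # speed of light (and radio waves) in vacuum
GEO_ALTITUDE_KM = 35_786     # geostationary orbit altitude

# One relay hop: robot -> satellite -> operator (straight up and down,
# the best case; slant paths are longer).
one_way_s = 2 * GEO_ALTITUDE_KM / C_KM_PER_S

# The image travels up and down once; the command travels back the same
# way. Two hops minimum, before any human reaction time.
round_trip_s = 2 * one_way_s

print(f"one-way relay delay: {one_way_s:.2f} s")     # ~0.24 s
print(f"round-trip delay:    {round_trip_s:.2f} s")  # ~0.48 s

# Nearly half a second: a vehicle moving at 20 m/s (72 km/h) covers
# about 10 m before your command even arrives.
```

Half a second of mandatory lag, before anyone has even decided anything, is an eternity in a firefight.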
The solution would be to place the controller closer to the combat zone, but this creates two key problems: first, the operators become highly vulnerable to enemy fire (in fact, they become priority targets); second, even if you can protect them, you can't necessarily protect their signal transmissions. Having someone hack into your communications network is probably not a serious risk unless you're fighting an equally technologically advanced foe, but jamming those communication channels would effectively disable your teleoperated robots. Even where jamming isn't possible, so much military communication currently occurs via cell phones (no, really!) and related technology that transmission towers become an important target, both for sabotage and for hackers. So for the most part, the robots have to be at least somewhat autonomous, and that requires some form of artificial intelligence. This already exists in primitive form, in devices such as automated "sentry guns" that you can set up to prevent anything living from entering the area they control. The technology is only likely to get better—and scarier.
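For a sense of how crude that existing autonomy is, here's a minimal simulated sketch of the "anything living in the zone" logic described above. Everything in it is invented for illustration, not taken from any real sentry gun; the crudeness is the point, since nothing in the rule distinguishes a soldier from a farmer or a stray dog.

```python
# A minimal, simulated sketch of "sentry gun" logic: engage anything the
# sensor flags as living inside the perimeter. All names and numbers are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Contact:
    x: float
    y: float
    appears_alive: bool   # a sensor's guess, not ground truth

def inside_perimeter(c: Contact, radius_m: float = 50.0) -> bool:
    return c.x ** 2 + c.y ** 2 <= radius_m ** 2

def should_engage(c: Contact) -> bool:
    # The entire "decision": alive and inside the zone. No identification,
    # no proportionality, no way to surrender.
    return c.appears_alive and inside_perimeter(c)

contacts = [Contact(10, 5, True), Contact(200, 0, True), Contact(30, 30, False)]
print([should_engage(c) for c in contacts])   # [True, False, False]
```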
Part II tomorrow...