Nov. 14th, 2009

Continuing from where I left off in yesterday's post:

Most war stories in science fiction ignore the economic aspects of technological warfare. Even if some miracle worker solves the problem of price inflation in the defence industry (a notion I'd call bad science fiction), techno toys such as robots are likely to remain very expensive. Unfortunately, humans are easier and cheaper to produce, and can be replaced without any expensive technology, so it seems likely that humans will be employed as soldiers for the foreseeable future.

Even more unfortunately, this is why the problem of armies built from child warriors has become so severe, and why troops in places like Afghanistan and Iraq are in such jeopardy: one panelist noted that it costs only about $10 to hire a civilian to empty a machine gun into a convoy, and even less if you hire children. This lets you treat the most impoverished and vulnerable members of a society as, in effect, cheap and disposable weapons. (Professional soldiers are expensive to recruit, train, and supply with food and equipment. "Temps" pose none of these problems to the economically astute sociopath.) Women are arguably even more useful than children when you're fighting a primarily male army (that is, most modern armies) because most men, for reasons good and bad, have been culturally indoctrinated to hesitate before harming women. (Children too, but a child large enough to carry a modern rifle is easily mistaken for an adult soldier.) Soldiers who hesitate to shoot a woman, even an armed one, run the risk of being shot themselves, and if they do shoot a woman or child to save their own lives, the backlash from the local civilians and from their own people back home turns the incident into a victory for the enemy.

Though morally bankrupt, the practice of enlisting civilians makes it easy to inflict an ongoing series of small losses on your enemy without ever endangering your own professional soldiers. This means that although robots and related technologies are likely to play an increasing role in future warfare, they're unlikely to ever completely replace humans. That's particularly true in what's referred to as an "overmatch" situation, in which one side's technology is so overwhelming that the enemy can't possibly engage with them directly. An interesting side effect of this economic argument is that snipers, formerly tasked with shooting people, are now being refocused on targeting enemy technology: the payback from eliminating expensive machines is far greater than the payback from killing people, and there are fewer consequences.

One thing to keep in mind about robots is that they respond programmatically: after all, the goal of creating a robot is to obtain a tool that will follow instructions without question, unlike human soldiers. We already have primitive robots, ranging from land mines to "fire and forget" missiles. Both illustrate the problem with robots: once we've told the weapon what to do, it may be largely out of our control, and if we change our minds, it may be too late. Moreover, politicians tend to ignore inconvenient problems such as the need to go back and remove automated weapons such as land mines after a war. One can easily imagine the problems more sophisticated robots would pose when it comes time to demobilize them: could we just put them to sleep until the next conflict? You'd hope their designers would build in this feature, but 20 years of grappling with software hasn't inspired much faith in the programming profession. If the robot is sentient to some degree, would mothballing it after a war raise significant ethical issues? And what kinds of post-traumatic stress disorder might a truly sentient robot have to cope with?

Isaac Asimov explored some of these issues at length by creating his Three Laws of Robotics, which were designed both to protect us fragile humans from robots and to protect the robots themselves (which are, after all, valuable assets). Ignore the Will Smith movie I, Robot, which bears little or no resemblance to anything Asimov wrote. Being a clever fellow, Asimov explored several situations in which a robot's orders conflicted with its obligation to protect itself. Keith Laumer also wrote several interesting stories about this in his Bolo series.

Another thing that's often forgotten in the "military pornography" style of fiction, in which the focus is on the technology and the violence, is a serious treatment of the human factors involved in warfare: Why are we fighting? How can we find a way to stop? What does it mean to be a soldier, whether professionally or involuntarily? On a particularly chilling note, we can see the phenomenon of humans being turned into automatons (robots by any other name), whether in the form of child warriors who have not matured enough to develop morality and the ability to say no, or in the form of suicide bombers. David Weber has been criticized for straying too far towards military pornography, but in his defence, he also shows a keen concern for his characters. Though his early books often lapsed into clumsy parody, his later books demonstrate a strong ability to find both villains and heroes on each side of a conflict. I wouldn't quite place his stories in the same league as Patrick O'Brian's Aubrey–Maturin novels, to which Weber was clearly writing an homage, but some of the later stories come awfully close.

One slightly scary aspect of military technology is that what works well against enemy soldiers can work even better against your own civilians, whether in the hands of terrorists or of your own government. For example, there's a newish technology, referred to as "brown noise," that is capable of causing disabling pain (and presumably incontinence) without necessarily causing permanent damage. The goal is to let soldiers disable their opponents without killing them, but there are clear implications for riot police and crowd control.

From a less unpleasant perspective, it's interesting to consider how robots and other tech will increasingly be used as mobile sensors, whether or not they carry weapons. Remotely piloted drones already serve this role well, and the kind of frontline information they provide is crucial for penetrating "the fog of war" and allowing commanders to respond quickly to changing conditions as a battle progresses. However, as these devices improve, they raise the risk of information overload, in which too much information arrives at once for any human to process. As the volume of information expands, commanders will increasingly need support from computers (i.e., artificial intelligence) to cope, and possibly even to respond for them when they're overwhelmed. That's a risky business, because at some point, the programmer's choices about how to filter the information (or the choices of an artificial intelligence given this task) raise potentially serious ethical concerns both for the programmer (who cannot hope to anticipate the seemingly infinite complexities of a battlefield situation) and for the commander (who has no choice but to rely on his automated advisors).

It's nice to be able to end this essay on a hopeful note. If you accept my contention in the previous essay that the leaders who declare war usually don't give a damn about the consequences, and are so distant from the conflict that they can plausibly deny any responsibility for what happens in combat, then an enormous moral burden clearly falls on the soldiers and their field commanders. Having human soldiers in combat provides a crucial safety margin that we won't have with any near-future robot: a human can always say no to orders that take them outside their moral comfort zone. Indeed, instilling this kind of moral judgment is now an important part of the training of modern soldiers.

This is also one reason why we're seeing increasingly rigorous training of soldiers in areas such as establishing and maintaining contact with local civilians. One of the heartening things I took away from the panel discussion was the confidence expressed by Paul Chafe (Canadian army) and Mike Rennie (British army) in how well modern soldiers are being trained to engage with civilians. Both speakers reminded us that, contrary to popular opinion, soldiers aren't inevitably uneducated violent morons. (Check out the Web sites of these two people to get an idea of just how highly educated they are.) Most officers and professional soldiers aren't much different from me and thee in this sense, which is to say that you'll find violent, reactionary, stupid people in any profession. I have no hope whatsoever that warfare will disappear from human affairs any time soon, but I do have some hope that we'll increasingly find ways to avoid it and to minimize its human cost. And when we can't do that, I'm reassured that people like Paul and Mike will be there, or will be training those who will be.
