Scientists and technologists have good intentions in spades, but sometimes you wonder if they ever leave the house and mingle with real people. Take, for example, two well-intentioned but doomed ab initio efforts to put some ethics back into a particular branch of technological endeavor, namely the development of artificially intelligent robots.

A brief definitional note before we get going: “intelligence” is a slippery term to define, and in practice, the definition usually comes down to “whatever standard I can invoke that will make me seem more intelligent than you or allow me to treat you as a lesser being”. For artificial intelligence, the standard definition relies on the Turing test, which (in greatly simplified terms) states that something is “intelligent” if one cannot distinguish it from a real human. With the footnote that intelligence is multidimensional, not something that can be gauged with a single evaluation or a single metric, this test remains a broadly useful criterion, and one that I will adopt here. In short, we can summarize it as “a difference that makes no difference is no difference”.

The first problematic initiative aims to eliminate the use of artificially intelligent robots in military contexts. Even if you don’t believe that the Terminator franchise represents the inevitable endpoint of research on this technology, you have to admit that the Future of Life Institute makes a compelling case for why we should not go down this particular dark road. To me, the most compelling reason is that replacing human warriors with technological surrogates appears to eliminate the human cost of warfare, thereby making war seem insufficiently horrible to make prevention a priority.

In practice, this is only true for the aggressor, and then only if they can remain a safe distance from the chaos. We’ve already seen how shortsighted this perspective is in the high “collateral damage” associated with the use of advanced military technologies, most recently in the form of remotely operated drones. This damage should not have been at all surprising given the spectacular failure of previous “this will solve everything” technologies, such as precision bombing, to eliminate or even reduce civilian casualties, but we humans are nothing if not expert at ignoring inconvenient realities (cf. the abovementioned “in practice” definition of intelligence).

In reality, civilian casualties are inevitable in modern warfare, and they have increased greatly over the last few millennia (in absolute numbers, if not proportionally). The problem is that conflicts rarely occur in neatly delineated killing fields, like sports stadiums located far from civilians, and it’s simply not credible to propose that modern warfare will be fought only in such carefully sequestered arenas. Pretending that artificially intelligent robots would solve the problem adds nothing more than a layer of abstraction, intended solely to make the unpalatable palatable by hiding its ugly reality. The terminology itself illustrates the problem: instead of the accurate phrase “death of non-combatants”, or the simpler “murder of innocent civilians”, “collateral damage” serves only the goal of abstracting human tragedy so that we can ignore its ethical consequences.

Eliminating the use of artificially intelligent robots in warfare therefore has much to recommend it. Yet there are two problems. First, and most serious, those who make the decision to declare war on others rarely, if ever, experience the consequences personally. As a result, they have little incentive to avoid declaring war, because someone else will pay the price for them; eliminating robots from the equation does nothing to solve that problem. Second, the history of technology is the history of finding ways to convert even the most seemingly innocuous technology into a means of killing or wounding other people, and the history of warfare is the history of conflicts escaping nice, tidy boundaries.

Warfare is only one specific form of the violence we humans seem to commit instinctively, and it has deep roots in every culture and every historical period. It’s not something we’re going to abandon or confine to killing fields that spare civilians, no matter who or what does the fighting. Hence the sarcastic and deeply pessimistic title of this essay: “good luck with that”.

The second initiative aims to eliminate the use of artificially intelligent robots in sexual contexts, and specifically to eliminate “sexbots” -- robots designed primarily or solely for use as sexual surrogates. This one’s a little harder to understand at first glance: such devices could reduce the spread of sexually transmitted diseases, provide companionship and possibly emotional instruction to people who may not be able to sustain a healthy relationship on their own, and greatly reduce (though probably not eliminate*) sexual slavery and the abuse of adults and children. Yet as in the case of warfare, adding a layer of abstraction to something as fundamentally human as our sexuality lets us avoid dealing with the real problem. In addition, there’s considerable evidence that at least a small proportion of humans will copulate with just about anything that moves, and many things that don’t; this second initiative will have a hard time combating that urge. That leads to the “good luck with that” conclusion for this initiative too.

* Sexual abuse is not always about sex; often it’s about power over the weak, or sadism, or other unpleasant aberrations of human psychology.

Another concern, raised by SF writer Elizabeth Bear in her chilling short story “Dolly” (about the abuse of a sexbot and its consequences), is the intelligence part of artificial intelligence. Whether in matters of warfare or sexuality, it’s hard to imagine that it would really be more ethical to shift abuse from our fellow organic beings to non-organic but otherwise intelligent beings and rationalize that abuse as acceptable. “Intelligence” is relative, not an absolute and binary scale that provides nice distinctions. If you accept that proposition, the possession of intelligence should entitle any intelligent being to the same protections we would grant ourselves, including protection from sexual abuse. Not everyone accepts this proposition as valid; to some, there is a unique spark (let’s call it a “soul”) that makes humans qualitatively different from anything else, no matter how intelligent. Yet even if we accept that distinction as valid, the long and horrible history of torture suggests that “good luck with that” is again the correct response to any suggestion that we ban such behavior.

So should we throw up our hands in despair and ignore these issues? The story of King Canute is often misrepresented as an example of human arrogance. In the incorrect version of the tale, a powerful but arrogant king attempts to turn back the tide and fails. This failure has spawned the idiomatic phrase “attempting to stem (halt) the tide”, with the implicit meaning of a doomed fight*. Yet men and women of good conscience should attempt to stem the tide, even if their struggle seems doomed. Unlike Canute, we have some hope of stemming the future tide of misuse of artificially intelligent robots, at least for most nations and for some time.

* In the original version of this tale, the King’s goal was to demonstrate the importance of humility to his courtiers: some things cannot be stopped by even the most powerful humans, good or bad intentions notwithstanding. That, however, is the wrong message for this essay.

As proof of what is possible, I offer the example of the 1925 Geneva Protocol, an early attempt to limit the use of chemical and bacteriological weapons in warfare. Though the protocol has by no means eliminated the use of such weapons, the contrast with the widespread use of toxic gases during World War I, and with the earlier distribution of smallpox-contaminated blankets in an effort to eradicate tribes of Native Americans, is dramatic: rather than toxic gases and microbes becoming a standard part of the military toolkit, their use remains the exception, one that attracts horror and often reprisals from the international community. People still die, often horribly, during warfare, but such conventions have greatly reduced the frequency of two horrible ways to die. The non-use of nuclear weapons since the end of World War II is another promising sign, though recent events in Iran and North Korea give me cause for hesitation.

As a cynic, I don’t think we’ll suddenly evolve sufficiently ethical behavior on a global scale to win this fight. Thus, I see no plausible way to avoid the creation of warrior robots and sexbots. But our successes in limiting other abuses make the fight no less worth fighting. We may not be able to stop either form of abuse, but we may at least limit its scope. “Good luck with that” is not an acceptable response when so many lives, whether natural or artificial, will be affected.
