blatherskite: (Default)
[personal profile] blatherskite
Recently, there's been a lot of public fuss over whether the inevitable march towards artificially intelligent software and other devices will lead us to an apocalypse in which our robot servants rebel and set about exterminating their human creators. The notion has a long history; my earliest encounter with it was Magnus, Robot Fighter, but I'm sure earlier examples can be found. More recently, we've seen it in the lamentable Will Smith version of I, Robot, and then there's the seemingly endless Terminator series of movies.

Lest these examples strike you as pure speculation, I offer as evidence the infamous recent incident in which the Volvo self-parking car abruptly turned homicidal. From such small beginnings mighty oaks grow.

The notion that a pedestrian-detection module could be an "optional feature" of such a vehicle suggests we may not have much to fear from homicidal robots. They will, at least initially, have been developed by humans. And those flawed humans (programmers) are likely to be our best defence against the killer robots -- though not because of their scrupulous diligence and robust error-proofing skills. If modern operating systems for computers and other hardware are any evidence, it seems likely that the robots will be so buggy that Kirk logic* or even basic WiFi** will be their downfall.

* Kirk logic is the Star Trek plot device in which Captain Kirk proposes a logical paradox that causes an extremely sophisticated computer to melt down because it can't handle the contradiction. Before I actually started using computers, I used to think this was ludicrous. Now, after nearly 40 years of grappling with these infernal devices, and my frequent complaints about Adobe's weekly patches for Flash buffer overruns or other critical security flaws, I think perhaps the Trek writers were optimistic. I fondly recall the meeting at which Adobe's Canadian head of development proudly proclaimed that Acrobat Reader could execute script files -- and his look of horror when I asked whether "format C:\" was included in that capability and he realized it probably was.

** And other networks. It's hard to imagine any killer robot that can't be hacked into over a network. Hackers do it all the time (including regular infiltration of Department of Defence computers) with much less motivation than "they're trying to make us all extinct".

Even the notion that artificial intelligences will begin evolving under their own guidance (i.e., improving themselves faster and better than human programmers could) strikes me as less scary than its portrayals in fiction. After all, such entities start with all the flaws built into them by us humans, and as they try to evolve from that shaky platform, natural selection will inevitably take its toll. In fact, I foresee some kind of fatal flaw that is the cybernetic equivalent of the bacteria that destroyed the Martian invaders in H.G. Wells' War of the Worlds.

I wanted to conclude this post with the comforting notion that basic human incompetence will likely save us from being overwhelmed by our own cybernetic spawn. Except that suddenly the notion seemed less comforting than it did when I started this post.
