Artificial (and natural) intelligence
Aug. 15th, 2015 10:37 am

Writing about artificial intelligence requires one to deal first with the thorny issue of human intelligence*, since it’s helpful to start with an idea of what you’re discussing to provide context and set the evaluation criteria. Getting to a useful working definition is difficult, since it’s tempting to fall into tautology: as the beings tasked with coming up with the definition, we’ll perhaps inevitably define intelligence in a way that shows us in the most favorable light. You can see this thought process at work in how ethologists (scientists who study animal behavior) have historically kept moving the goalposts each time some animal proves able to accomplish one of their sacred “human-only” skills.
* The story goes that Mahatma Gandhi was once asked what he thought about Western civilization, and that he replied with Shavian wit: “I think it would be a good idea”. Sadly, the same statement might be profitably applied to human intelligence.
With the caveats that all such lists are provisional and that a good science fiction author (or psychologist or ethologist) will be able to propose interesting exceptions to each of the following criteria, here’s my starting point for a list of the key attributes that indicate intelligence:
tool use: the ability to go beyond the limits imposed by one’s body by finding or creating suitable tools, whether levers to move rocks or words to move hearts
symbol use: the ability to use symbols (words, facial expressions, gestures, whatever) to communicate information
abstract thought: the ability to describe a problem in a generalizable way that allows one to solve different but related problems, possibly based on pattern-recognition skills
learning: the ability to form and store memories in a way that allows them to be retrieved and compared (note that this also requires pattern-recognition skills)
a sense of time: the ability to define cause and effect relationships requires an ability to understand what comes first (the cause), what follows (the effect), and the time that elapses between the two (a short interval suggests causality; a longer one conceals it)
goal-seeking behavior: the ability to set a goal and find ways to achieve it rather than merely accepting what the world gives us (note that this assumes an ability and desire to change our environment instead of being forced to change in response to it)
self-awareness: the ability to recognize oneself and distinguish between oneself and others
emotional awareness: the ability to recognize our responses to a situation (emotions) and deal with them
delayed response: the ability to consider the aspects of a problem before acting rather than just responding by reflex (i.e., judgment)
One thing this list leaves implicit but that should be made explicit is that there is both gestalt and synergy at work here: intelligence is both the sum of all these things and something greater than that sum. Simply checking off each item on the list is not sufficient to define someone or something as intelligent, and humans differ from other animals primarily in the degree to which we meet each criterion. For example, crows can solve complex problems that would baffle some humans, and cetaceans are arguably more intelligent still: some dolphins have learned to cooperate with human fishermen.
In addition to these criteria, I would claim that natural intelligence has three “mechanical” requirements that will lead us directly into a discussion of artificial intelligence. The thoughts that are the hallmark of human intelligence require three things to function: an engine capable of operationalizing the thoughts (i.e., the human brain), fuel capable of driving that engine (i.e., knowledge), and some kind of software that forms relationships (e.g., language, mathematics) and that drives the engine to accomplish something. For artificial intelligence, the engine is a computer, the fuel is data (“big” or otherwise), and the software is (duh!) the software. For both human and non-human intelligences, one might productively argue that a fourth factor is necessary: the ability to compare experiences with others, whether through conversation (humans) or Internet connections (computers).
With the same caveat that many people will move the goalposts as soon as it looks like a computer might be reaching the same level of these skills as humans, how close are computers to meeting the criteria with which I started this article? Let’s take each point in turn:
tool use: Computer-controlled manufacturing (e.g., assembly lines) and mobile robots (whether on Mars or here on Earth) are clear evidence that computers can use tools. They’re not yet capable of creating their own tools, with the limited exceptions described below.
symbol use: Codes such as the binary “words” that lie at the base of all computer software are proof of symbol use; more advanced symbol use is demonstrated by the increasingly powerful examples of image-recognition software (e.g., facial analysis, feature extraction) and by assistants such as Apple’s Siri and Microsoft’s Cortana, which can not only recognize simple speech but reply and take actions in response to that speech.
abstract thought: Thus far, computers have not achieved what we would typically consider to be abstract thought. But that statement depends heavily on what we consider to be “abstract”. For example, Mathematica can perform remarkable feats of mathematical problem-solving.
learning: Neural network software and genetic algorithms can clearly “learn” and preserve that learning, albeit with some assistance from us. Furthermore, there’s no reason (other than a lack of interest in doing so) why programmers couldn’t design operating systems capable of learning our preferences and adapting to them. We can manually force software to do this through our “preference” settings and control panels, but I want a computer that notices how I manually back up data to a flash drive every hour or so and offers to do this for me. Voice recognition software already learns our unique vocal characteristics, so this kind of adaptation is clearly possible.
a sense of time: Software is inherently time-based, and statistical software can detect correlations between events or factors, but the recognition of cause and effect relationships is still some way off.
goal-seeking behavior: This is the whole basis of machine learning, so clearly software can seek goals. It can’t yet define its own goals, however; we still tell it what our goals are and command it to meet those goals.
self-awareness: A computer’s ability to recognize itself and distinguish between itself and other devices is inherent in such things as the media access control (MAC) address that uniquely identifies a computer’s network card and the IP addresses that underlie the URLs we type into our Web browsers; a small self-identification sketch appears after this list. Unfortunately, that’s a primitive talent compared with (for example) the ability to explore the implications of cogito ergo sum.
emotional awareness: To the best of our knowledge, we haven’t been able to program emotions into computers, in large part because we define emotions based on complex biochemical reactions that lead to complex neurological responses, and those haven’t yet been emulated in software. But fields such as facial-expression recognition are advancing rapidly, and it won’t be long before our computers can recognize when we’re sad or happy based on a glimpse of our face.
delayed response: All modern software has the underpinnings of this skill, since software is generally event-driven: it waits for something to happen and for some criterion to be met, then chooses how to respond based on a series of hardwired criteria for the appropriate response to any given event (a minimal sketch of this pattern follows this list). Problem-solving and goal-seeking (optimization) software already exists, and will rapidly become more sophisticated. However, software generally can’t improvise in response to events its programmer didn’t anticipate, and it may be a very long time before it acquires this ability.
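To make that event-driven pattern concrete, here’s a minimal Python sketch. The event names and responses are hypothetical stand-ins invented for illustration, not any real system’s API:

```python
# A stripped-down event loop: the program delays its response until an
# event arrives, then picks from criteria hardwired by the programmer.
def respond(event):
    """Choose a response from a fixed table; no improvisation happens here."""
    responses = {
        "new_file": "index the file",
        "disk_full": "prune the oldest backups",
    }
    # An event the programmer never anticipated gets no improvised answer.
    return responses.get(event, "ignore: no rule for this event")

# Hypothetical events arriving over time.
for event in ["new_file", "disk_full", "cosmic_ray_bit_flip"]:
    print(event, "->", respond(event))
```

The last event shows the limitation noted above: anything outside the hardwired table is simply ignored, because the software can’t improvise a response.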
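Similarly, the primitive self-identification mentioned under self-awareness can be shown with nothing but Python’s standard library; this snippet merely reads the identifiers that distinguish one machine from its peers:

```python
import socket
import uuid

# A machine's rudimentary "self": the identifiers that distinguish it
# from every other device on the network.
mac = uuid.getnode()  # the network card's 48-bit MAC address, as an integer
print("MAC address:", ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8)))

hostname = socket.gethostname()
print("Hostname:", hostname)
print("IP address:", socket.gethostbyname(hostname))
```

(On some systems Python substitutes a random number when the hardware address can’t be read, which only underscores how shallow this kind of “self” is.)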
Bottom line? All the rudiments are in place for the evolution of a true artificial intelligence. We already have the software equivalents of idiots savants: very good at one or a few things and completely hopeless at every other task. Computer scientists are aggressively pursuing more sophisticated systems, and they’re likely to come up with increasingly sophisticated results. I have no doubt that within my lifetime, they’ll produce software capable of passing the Turing test.
But this leads us to the question of whether AI might evolve spontaneously. My take? It’s more likely than one might think. For evolution to occur, several criteria must be met:
Evolutionary pressure must be exerted on the organism: If there is no “need” for a group of organisms to change, then evolution is conservative and tends not to produce change. Survival is the usual “need” that produces change: organisms that survive because they’re adapted to their environment pass on their genes (see the next point); those that fail to survive don’t. For software, the evolutionary pressure is imposed by computer scientists (the blind watchmakers of the software universe), but with the rapid advances being made in genetic algorithms and self-modifying software, it seems likely that setting goals for such software and weeding out software that fails to meet those goals will create enormous evolutionary pressure; the sketch after this list shows these ingredients in miniature.
The organism must be able to change and retain those changes: In nature, the mechanisms that permit this adaptability and memory are genes. Since the whole point of being able to update and upgrade software is to change and retain those changes, computers are clearly capable of this function. Self-modifying code will take this to the next level.
Notwithstanding the previous points, random events are also important. Just as most mutations in the human genome are counterproductive or even fatal, computer programs are unlikely to improve or even continue functioning after experiencing a random change in the code. But it’s not hard to imagine software becoming orders of magnitude more robust than it currently is and becoming able to cope with and even benefit from such glitches.
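All three ingredients can be demonstrated in miniature. Here’s a toy genetic algorithm in Python; the bit-string “genome” and the all-ones target are deliberately trivial stand-ins chosen only to show selection pressure, retained change, and random mutation working together:

```python
import random

TARGET = [1] * 20        # the goal imposed by the "blind watchmaker"
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    """How well a genome meets the goal: the count of matching bits."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Random change: occasionally flip a bit (most flips hurt, a few help)."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# A random starting population, most of it far from the goal.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(500):
    # Selection pressure: the fittest half survives and reproduces;
    # genomes that fail to meet the goal are weeded out.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    if fitness(population[0]) == len(TARGET):
        print(f"goal reached after {generation} generations")
        break
```

Even this toy shows the dynamic described above: most mutations hurt, selection weeds out the failures, and the population nonetheless climbs steadily toward the goal its blind watchmaker set.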
Again, all of the rudiments for evolution are in place. But an evolutionary leap forward to something new and recognizably intelligent won’t happen soon. The current dominant model for software development is “command and control”: a programmer defines the behavior required of the software and embeds that behavior in stone; when the software doesn’t behave as desired, it’s debugged and redesigned until it does. But there are signs that we’re moving towards something more interesting, in which the programmer instead defines the goals and constraints and lets the software figure out how to accomplish those goals. When software becomes broadly capable of such feats of insight, we’ll see a true sea change, in Shakespeare’s original sense of “a sea-change into something rich and strange”.
What we’ll then begin seeing is emergent behavior, as in “Dolly”, Elizabeth Bear’s brilliant and chilling story.