A recent Boston Globe piece, "Souls of a New Machine", has been getting some attention around the traps. In it Chris Spurgeon describes the interesting phenomenon in which a complex computer system takes advantage of human thinking to produce an intelligent result. For example,
The Google image labeler (images.google.com/imagelabeler) is an addictive online game that takes advantage of the fact that it’s very easy for a human to recognize the subject matter of an image (“That’s a puppy!” “That’s two airplanes and a bird!”) but virtually impossible for a computer. The game teams up pairs of strangers to assign keywords to images as quickly as possible. The more images you can label in 90 seconds, the more points you get. Meanwhile, Google gets hundreds of people to label their images with keywords, something that would be impossible with just computer analysis.
Unfortunately, the article calls this phenomenon “intelligence augmentation”. Which it ain't.
Using the term this way is inconsistent with established usage, and it muddies the waters. These two points are closely related.
The thinker most closely associated with the concept of intelligence augmentation, and who is often credited with having coined the term, is Douglas Engelbart. Engelbart’s obsession was making human beings smart enough to handle the enormous problems we have to deal with. That can mean building tools that help us be more intelligent.
Thus in his writings (see, e.g., his famous tech report Augmenting Human Intellect) what is being augmented is the intelligence of the human. Some kind of external system is used to make a human smarter. The external system itself might be completely stupid. (Analogy: a shovel can help me dig far more effectively, but the shovel itself cannot do any digging.)
In the systems described by Spurgeon, by contrast, it is the external systems which are being made smarter. Human intelligence is being exploited in these systems, but the humans involved are not themselves becoming smarter in any interesting way.
Thus, the systems described by Spurgeon are quite different in the most central way from those which interested Engelbart.
In using the term Intelligence Augmentation to describe these new systems, Spurgeon is effectively lumping together, under one heading, systems which differ in a crucial respect.
Not a good idea.
It is clear in Spurgeon’s article that he’s interested in the contrast between these intelligence-exploiting systems and standard AI systems, in which the external systems themselves are (supposedly) made intelligent.
There is certainly a very important difference between computer systems which are intelligent “on their own” and those which are only intelligent because there are humans inside the box, so to speak.
That important difference should be marked by using a different term.
It's just that it's a mistake to take an existing term which already means something else and misapply it.
When there’s a genuinely new and interesting phenomenon, why not mark that with an appropriate new term?
“Intelligence Exploitation” works for me.
So, we have three distinct phenomena:
- Artificial Intelligence (AI) is making computers smart on their own, i.e., with no human “in the box.”
- Intelligence Augmentation (IA) is using external systems, particularly computers, to make humans smarter.
- Intelligence Exploitation (IE) is making computer systems smart by extracting and redeploying human intelligence, i.e., by putting humans “in the box.”
Why do I care about this? Am I just a verbal pedant? Admittedly, with a background in analytic philosophy, I do have a taste for semantic niceties and their relationship to clear thinking.
But I’m no longer a philosopher, at least in the standard academic sense. These days, Engelbart’s mission is what excites me.
Rationale is all about Intelligence Augmentation – not AI or IE, though I expect that Rationale will eventually incorporate both AI and IE in some way.