Archive for the ‘Intelligence Augmentation’ Category

This is day 1 of the Graphic and Visual Representations of Evidence and Inference in Legal Settings conference in New York. Probably never before have so many argument mapping aficionados been gathered in one place. It is only a small conference – maybe 75 people total – but the concentration of interest is remarkable. I’d only met two of these people before, and then only briefly, but “knew” dozens of them in varying degrees through internet association or familiarity with their work. In addition to the academics there are a number of lawyers and others coming from a more commercial direction, and their presence and interest is an indication of how structured argumentation, argument visualisation, etc., are starting to get traction outside of narrow academic niches. There’s a good chance that in 10-20 years it will turn out that this conference was a pivotal moment in the field of argument mapping – a bit like the 1956 Dartmouth Artificial Intelligence workshop.

My talk in the second session today was “Rationale – A Generic Argument Mapping Tool.” I gave an overview of Rationale, and discussed some of the issues surrounding its design. In the day or two leading up to the presentation, I figured out that there were three main points to be made:

  • In considering what is a good visualisation of evidence, we must attend at least as much to the nature of users and their tasks as we do to the nature of the domain itself.
  • A good visualisation is one which supports interaction as much as comprehension.
  • We should think of ourselves as builders of thinking support systems based on interactive diagrams rather than as argument diagrammers.

I also emphasised the importance, to us, of the market – our customers and clients – as constraints on what we develop. In other words, Rationale is the way it is, in a great many aspects, because of our sense that it has to be that way to be commercially successful.

A key development in today’s talk was that I used Rationale itself as the presentation tool, rather than using (e.g.) PowerPoint. This might be the first time somebody has done that. Over the past few weeks, even in the last day or two before I set off for New York, the technical team was looking at this issue and adding some new features (such as an animated zoom-to-map) which helped make it possible to use Rationale this way. I started out with a view of the entire workspace with various grouping and argument maps ready for the presentation:

[Image: workspace2.jpg – the full Rationale workspace]

and zoomed in and out, and panned around, as required. I also did some “on the fly” argument mapping, dragging and dropping claims from the browser window. Somebody came up afterwards and asked if we’d considered using Rationale to plan a whole book, since all the pieces of it could be on the same infinite workspace.

The talk seemed to go over well and generally people seemed impressed with Rationale. It was a great feeling to be “showing off” such a quality tool. I was (of course) proud of Rationale, and of the Austhink team who’ve been creating it over the past year or more.


Watch Le Grand Content – a short, well-produced and very entertaining video. I’m not sure what it is meant to be – some kind of graphical poetry? But it strikes me as an excellent portrayal of how conscious thought unfolds when one is trying to think about something. Moments of structure mixed with free associations; occasional patches of sense, but the whole thing amounts to an incoherent ramble. Conscious thought is not very good at staying on topic and organising itself into structures which “hang together” in some useful way. This is why we can generally think more effectively when conscious thought is paired with external representations, which do sit relatively still and maintain their structure. It is also why Rodin’s classic Thinker

[Image: thinker21.jpg – Rodin’s The Thinker]

is such a misleading picture of thinking, even though it is almost always the first thing people think of if you ask them to picture thinking in their minds. If you could see inside Rodin’s Thinker’s mind, you’d probably be watching something like Le Grand Content.

Here’s a much better picture of thinking:

[Image: reasoning-with-diagram.jpg]

A recent Boston Globe piece, Souls of a New Machine, has been getting some attention around the traps. In it Chris Spurgeon describes the interesting phenomenon in which a complex computer system takes advantage of human thinking to produce an intelligent result. For example,

The Google image labeler (images.google.com/imagelabeler) is an addictive online game that takes advantage of the fact that it’s very easy for a human to recognize the subject matter of an image (“That’s a puppy!” “That’s two airplanes and a bird!”) but virtually impossible for a computer. The game teams up pairs of strangers to assign keywords to images as quickly as possible. The more images you can label in 90 seconds, the more points you get. Meanwhile, Google gets hundreds of people to label their images with keywords, something that would be impossible with just computer analysis.
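The mechanic described in that quote can be sketched in a few lines of code. This is a toy illustration only, not Google’s actual implementation: it assumes two players independently type labels for the same image, that a label counts as a match when both players produce it, and that each matched image earns a point while the agreed keyword is harvested for the service.

```python
def agreed_label(labels_a, labels_b):
    """Return the first label player A typed that player B also typed, else None."""
    seen_b = set(labels_b)
    for label in labels_a:
        if label in seen_b:
            return label
    return None  # the pair never converged on this image

def play_round(images, labels_by_player):
    """Score one timed round: a point per image on which the pair agreed.

    labels_by_player maps each image to a pair (player A's labels, player B's labels).
    Returns the score plus the harvested keywords -- the part the service keeps.
    """
    score, keywords = 0, {}
    for image in images:
        a, b = labels_by_player[image]
        match = agreed_label(a, b)
        if match is not None:
            score += 1
            keywords[image] = match
    return score, keywords

score, keywords = play_round(
    ["img1.jpg", "img2.jpg"],
    {"img1.jpg": (["dog", "puppy"], ["puppy", "cute"]),
     "img2.jpg": (["plane"], ["bird"])},
)
# score == 1; keywords == {"img1.jpg": "puppy"}
```

The point of the pairing is incentive alignment: since strangers can only score by guessing what the other player would honestly say about the image, the harvested keywords tend to be accurate.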

Unfortunately, the article calls this phenomenon “intelligence augmentation”. Which it ain’t.

Using the term this way is inconsistent with established usage. And it muddies the waters. These points are closely related.

The thinker most closely associated with the concept of intelligence augmentation, and who is often credited with having coined the term, is Douglas Engelbart. Engelbart’s obsession was making human beings smart enough to handle the enormous problems we have to deal with. This might mean building tools that help us be more intelligent.

Thus in his writings (see, e.g., his famous tech report Augmenting Human Intellect) what is being augmented is the intelligence of the human. Some kind of external system is used to make a human smarter. The external system itself might be completely stupid. (Analogy: a shovel can help me dig far more effectively. The shovel itself cannot do any digging.)

In the systems described by Spurgeon, by contrast, it is the external systems which are being made smarter. Human intelligence is being exploited in these systems, but the humans involved are not themselves becoming smarter in any interesting way.

Thus, the systems described by Spurgeon are quite different in the most central way from those which interested Engelbart.

In using the term Intelligence Augmentation to describe these new systems, Spurgeon is effectively lumping together under one heading systems which differ in a crucial respect.

Not a good idea.

It is clear in Spurgeon’s article that he’s interested in the contrast between these intelligence-exploiting systems and standard AI systems, in which the external systems themselves are (supposedly) made intelligent.

There is certainly a very important difference between computer systems which are intelligent “on their own” and those which are only intelligent because there are humans inside the box, so to speak.

That important difference should be marked by using a different term.

It’s just that it’s a mistake to take an existing term which already means something else and misapply it.

When there’s a genuinely new and interesting phenomenon, why not mark that with an appropriate new term?

“Intelligence Exploitation” works for me.

So, we have three distinct phenomena:

  1. Artificial Intelligence (AI) is making computers smart on their own, i.e., with no human “in the box.”
  2. Intelligence Augmentation (IA) is using external systems, particularly computers, to make humans smarter.
  3. Intelligence Exploitation (IE) is making computer systems smart by extracting and redeploying human intelligence, i.e., by including humans “in the box.”

Why do I care about this? Am I just a verbal pedant? Admittedly, with a background in analytic philosophy, I do have a taste for semantic niceties and their relationship to clear thinking.

But I’m no longer a philosopher, at least in the standard academic sense. These days, Engelbart’s mission is what excites me.

Rationale is all about Intelligence Augmentation – not AI or IE, though I expect that Rationale will eventually incorporate both AI and IE in some way.
