While in Northern Virginia last week for a workshop on critical thinking for intelligence analysts, I was also able to give a couple of presentations at The Mitre Corporation, a large US federally funded R&D organisation. The audience included Mitre researchers and other folks from the US intelligence community. Thanks to Steve Rieber and Mark Brown for organising.
Here are the abstracts:
Deliberative Aggregators for Intelligence
It is widely appreciated that under appropriate circumstances “crowds” can be remarkably wise. A practical challenge is always how to extract that wisdom. One approach is to set up some kind of market. Prediction markets are an increasingly familiar example. Prediction markets can be effective but, at least in the classic form, have a number of drawbacks, such as (a) they only work for “verifiable” issues, where the truth of the matter comes to be objectively known at some point (e.g. an event occurs or it doesn’t by a certain date); (b) they encourage participants to conceal critical information from others; and (c) they create no “argument trail” justifying the collective view.
An alternative is a class of systems which can be called “deliberative aggregators.” These are virtual discussion forums, offering standard benefits such as remote & asynchronous participation, and the ability to involve large numbers of participants. Distinctively, however, they have some kind of mechanism for automatically aggregating discrete individual viewpoints into a collective view.
YourView (www.yourview.org.au) is one example of a deliberative aggregator. On YourView, issues are raised; participants can vote on an issue, make comments and replies, and rate comments and replies. Through this activity, participants earn credibility scores, measuring the extent to which they exhibit epistemic virtues such as open-mindedness, cogency, etc. YourView then uses credibility scores to determine the collective wisdom of the participants on the issue.
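To give a rough sense of what such aggregation involves, here is a minimal sketch of credibility-weighted voting. The simple weighted average below is illustrative only; it is not YourView’s actual algorithm.

```python
# Minimal sketch of credibility-weighted vote aggregation (illustrative only;
# not YourView's actual algorithm). Assumes each participant has a credibility
# score in [0, 1] and casts a vote of +1 (agree) or -1 (disagree) on an issue.

def collective_view(votes):
    """votes: list of (credibility, vote) pairs, with vote in {+1, -1}.
    Returns a value in [-1, +1]: the credibility-weighted balance of opinion."""
    total_weight = sum(credibility for credibility, _ in votes)
    if total_weight == 0:
        return 0.0
    return sum(credibility * vote for credibility, vote in votes) / total_weight

# Example: two high-credibility participants agree, three low-credibility ones disagree.
sample = [(0.9, +1), (0.8, +1), (0.2, -1), (0.1, -1), (0.1, -1)]
print(collective_view(sample))  # ~0.62: the collective view leans toward agreement
```

The point of weighting by credibility rather than counting raw votes is that the collective view is dominated by participants who have demonstrated epistemic virtues, not simply by whoever shows up in the greatest numbers.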
Although YourView is relatively new and has not yet been used in an intelligence context, I will discuss a number of issues pertaining to such potential usage, including: how YourView can support deliberation over complex issue clusters (issues, sub-issues, etc.); and how it can handle open-ended questions, such as “Who is the likely successor to X as leader of country Y?”.
Hypothesis Mapping as an Alternative to Analysis of Competing Hypotheses
The Analysis of Competing Hypotheses (ACH) is the leading structured analytic method for hypothesis testing and evaluation. Although ACH has some obvious virtues, these are outweighed by a range of theoretical and practical drawbacks, which may partly explain why it is taught far more often than it is actually used. In the first part of this presentation I will briefly review the most important of these problems.
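For readers unfamiliar with ACH, its core bookkeeping can be sketched roughly as follows; this is a simplified illustration of a Heuer-style consistency matrix, not the full procedure. Evidence items are rated against each hypothesis, and hypotheses are compared by how much evidence is inconsistent with them.

```python
# Bare-bones sketch of an ACH-style consistency matrix (a simplified illustration,
# not the full procedure). Each evidence item is rated against each hypothesis:
#   "C" = consistent, "I" = inconsistent, "N" = neutral / not applicable.
# Classic ACH compares hypotheses by how much evidence is inconsistent with them;
# the hypothesis with the least inconsistency is the least refuted.

ratings = {
    # evidence item: {hypothesis: rating}
    "E1": {"H1": "C", "H2": "I", "H3": "C"},
    "E2": {"H1": "I", "H2": "I", "H3": "C"},
    "E3": {"H1": "C", "H2": "C", "H3": "N"},
}

def inconsistency_scores(ratings):
    """Count, for each hypothesis, the evidence items rated inconsistent with it."""
    scores = {}
    for row in ratings.values():
        for hypothesis, rating in row.items():
            scores[hypothesis] = scores.get(hypothesis, 0) + int(rating == "I")
    return scores

print(inconsistency_scores(ratings))  # {'H1': 1, 'H2': 2, 'H3': 0} -> H3 least refuted
```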
One reason for ACH’s “popularity” has been the lack of any serious alternative. That situation is changing with the emergence of hypothesis mapping (HM). HM is an extension of argument mapping to handle abductive reasoning, i.e. reasoning involving selection of the best explanatory hypothesis with regard to a range of available or potential evidence. Hypothesis mapping is a software-supported method for laying out complex hypothesis sets and marshalling the evidence and arguments in relation to these hypotheses.
The second part of the presentation will provide a quick introduction to hypothesis mapping using an intelligence-type example, and review some of its relative strengths and weaknesses.
Thanks. Most interesting!
I think you are alluding to the problem of demarcation in your comment:
“Prediction markets can be effective but, at least in the classic form, have a number of drawbacks, such as (a) they only work for “verifiable” issues, where the truth of the matter comes to be objectively known at some point (e.g. an event occurs or it doesn’t by a certain date); (b) they encourage participants to conceal critical information from others; and (c) they create no “argument trail” justifying the collective view.”
Karl Popper’s “Realism and the Aim of Science, from The Postscript” (1983) provides a lucid overview of issues that are applicable to this debate. What you call “verifiable” issues alludes to the role of what Popper calls “basic statements” – potential falsifiers of hypotheses or theories. Falsifiability signifies a logical relation between the theory in question and the class of basic statements, or the class of events described by them. We should recall that falsifiability is based on the “modus tollens” deductive argument:
If the theory is true, then the prediction deduced from it is true.
The prediction is not true.
Therefore, the theory is not true.
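In symbols, with T the theory and P a prediction deduced from it, the schema above is simply:

```latex
% Modus tollens as used in falsification: T = theory, P = deduced prediction
\[
(T \rightarrow P), \quad \neg P \;\vdash\; \neg T
\]
```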
Falsifiability is thus a logical property of a proposition that is vulnerable to refutation by a true existential statement, whereas falsification is the practical demonstration that a proposition is in fact false. Unlike the decisive logic of modus tollens, the real-world process of falsification can never be decisive, owing to the Duhem problem, the uncertainty of observations, and sheer avoidance of testing.
From a critical rationalist perspective, failing to be falsified cannot provide a conclusive reason for accepting a conjecture, although it is valid to compare conjectures on factors other than falsifiability (e.g. depth, comprehensiveness, simplicity, unifying power, consistency with background knowledge, relevance to multiple problem situations, and being part of a rigorous research program), without drifting down the slippery slope of treating induction or abduction as yielding more than conjectural conclusions.
Your bCisive software, which I have owned since 2008, is a tool well suited to mapping such arguments.