
Archive for the ‘Collective Wisdom’ Category

[This post is in response to a request from a colleague for “the best thing to introduce someone to YourView”.]

The goal of the YourView project was to develop a platform for identifying collective wisdom on major public issues, with the aim of remedying some of the defects of democracy.  The platform was a deliberative aggregator, i.e. it supported large-scale deliberation and its aggregated outputs reflected that deliberation.  The project was active from around 2011 to 2014; its moment of glory came when the platform was used as part of Fairfax Media’s coverage of the 2013 Federal Election.

The YourView platform is still alive and can be explored, though there has been no real activity on the site since 2014.  There is a collection of pages and links about the YourView project.

The best theoretical overview is Cultivating Deliberation for Democracy, particularly the second half.  The “The Zone” interview by Fairfax’s Michael Short is a good read.

Read Full Post »

The question of who actually wrote the works attributed to “William Shakespeare” is a genuine conundrum.  In fact it may be the greatest “whodunnit” of all time.

Although mainstream scholars tend to dismiss the issue haughtily, there are very serious problems with the hypothesis that the author was William Shakspere of Stratford-upon-Avon.  However, all other candidates also have serious problems.  For example, Edward de Vere died in 1604, but plays kept appearing for another decade or so.  Hence the conundrum.

Recently, however, this conundrum may have been resolved.  A small group of scholars (James, Rubinstein, Casson) has been arguing the case for Henry Neville.  A new book, Sir Henry Neville Was Shakespeare, presents an “avalanche” of evidence supporting Neville.  Nothing comparable is available for any other candidate.

Suppose Rubinstein et al are right.  How can the relevant experts, and interested parties more generally, reach rational consensus on this?  How could the matter be decisively established?  How can the process of collective rational resolution be expedited?

A workshop later this month in Melbourne will address this issue.  The first half will involve traditional presentations and discussion, including Rubinstein making the case for Neville.

The second half will attempt something quite novel.  We will introduce a new kind of website – an “arguwiki” – where the arguments and evidence can be laid out, discussed and evaluated not as a debate in any of the standard formats, but as a collaborative project.  The workshop will be a low-key launch of the Shakespeare Authorship Arguwiki; later, all going well, it will be opened up to the world at large.  Our grand ambition is that the site, or something like it, may prove instrumental in resolving the greatest whodunnit of all time, and more generally serve as a model for the collective rational resolution of difficult issues.

The workshop is open to any interested persons, but there are only a small number of places left.

Register now.  There is no charge for attending.

 

Read Full Post »

Meta-analysis has become an indispensable part of modern science.  By pooling data from many studies, and using special mathematical techniques, meta-analysis answers more questions, with more power and precision, than is possible either with single studies or informal reviews.
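To give a concrete sense of the simplest of those techniques, here is a minimal sketch of fixed-effect, inverse-variance pooling of study effect estimates.  The figures are invented purely for illustration, and real meta-analyses add random-effects models, heterogeneity statistics, bias assessment and much else.

    import math

    # Minimal sketch of fixed-effect, inverse-variance pooling.
    # Each study contributes an effect estimate and its standard error;
    # the numbers below are invented purely for illustration.
    studies = [
        {"effect": 0.30, "se": 0.12},
        {"effect": 0.10, "se": 0.20},
        {"effect": 0.25, "se": 0.15},
    ]

    # Weight each study by the inverse of its variance: precise studies count for more.
    weights = [1.0 / (s["se"] ** 2) for s in studies]

    # The pooled effect is the weighted average of the study effects.
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)

    # Standard error of the pooled effect, and an approximate 95% confidence interval.
    pooled_se = math.sqrt(1.0 / sum(weights))
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

    print(f"pooled effect = {pooled:.3f}, 95% CI = ({low:.3f}, {high:.3f})")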

Currently, however, meta-analysis is a closed activity. It is performed by small, funded teams operating behind closed doors, with the results becoming available only in technical journal articles that are often stuck behind paywalls. This closed approach has serious problems, as we discuss below.

Change is overdue, and indeed on its way.  There are increasing calls to make meta-analysis projects more accessible, transparent, collaborative, and frequently updated (“living”) – or in a word, more open.

What would truly open meta-analysis be like as a social practice?  How would it fit into other practices such as journal publication?  What technological support does it need?  How well could it work?

We are confident that meta-analysis would benefit greatly from being conducted far more openly than it typically is today. We draw inspiration from the way the wiki transformed encyclopedia production, and open-source transformed software production.  Meta-analysis is, to be sure, a technical matter, but so is writing an encyclopedia article about leukemia or producing an operating system.

We see many serious challenges, but no fundamental barrier to having crowds collaborate in posing useful questions, identifying suitable studies, extracting key data, selecting and applying analytical methods, and deriving insights from the results.

We use the term “open collaborative meta-analysis” (OCMA) to indicate that in this open alternative, MA projects welcome all comers not only to view the data and findings, but also to contribute their data, their labor and their insights, whether they are career scientists working on the topic, undergraduates learning the ropes, or interested members of the public.

What’s wrong with how meta-analysis is currently done?

Thousands of meta-analyses appear every year, many of high quality, profoundly contributing to scientific knowledge.  However the current approach has some serious problems, caused in part by its closed nature.

One is painfully apparent to anyone who has actually done a meta-analysis: they take a lot of tedious work. When this work is shouldered solely by a small, funded team, it is slow and expensive. This reduces the number of projects that get undertaken, and narrows their scope. Important issues, such as the role of moderators, are neglected, and information inherent in the vast pool of primary studies is under-exploited.

The current approach can also harm meta-analytic findings, because, first, there is no opportunity, during the analysis, for external critical review of the innumerable judgements the team must make, such as what risk-of-bias rating to give a particular study.  When teams have only themselves as critics, they are more likely to make clerical mistakes or technical errors. Sometimes they may even be tempted by dubious, self-serving choices.  Second, it is hard for small teams, with their limited resources and networks, to identify all the studies which meet their inclusion criteria.  A recent review found biomedical meta-analyses to be “consistently incomplete” in their evidence base. Third, an MA project typically comes to a halt when the small team has drawn their conclusions and drafted their publications, even though new relevant studies continue to appear.  This means that a project’s findings can quickly go out of date.

Consequently, all too often a meta-analysis’s findings are limited, wrong or misleading, notwithstanding the competence and diligence of the small team behind it.

The situation gets worse when we consider sets of meta-analyses in a given area. The closed and competitive nature of the current approach means that teams are often unaware that other teams are addressing the same or very similar questions – or they are aware, but push on regardless.  The result is redundant analyses and even confusion when different analyses present overlapping and conflicting findings. The closed approach, with its lack of transparency in the meta-analytic process, hinders clarification of why these differences exist and how they might be corrected.

Finally, there is a growing movement towards synthesizing meta-analyses into even larger studies. This is difficult to do when meta-analyses themselves are so poorly disclosed.

What is the open alternative?

The essence of OCMA is that meta-analyses are conducted as public collaborations.  Anyone can initiate an MA, and anyone can contribute to an existing project, in a range of ways. Projects are ongoing; they evolve over time as more studies become available, problems are corrected, analytical methods improve, and new questions are asked.

In this way meta-analysis projects benefit from much broader input than is possible in the standard approach, and both the scientific community and the public benefit from projects and their findings being so easily accessible, correctable, and continually updated.

A well-designed online platform will be needed for OCMA to work.  Since an online platform is, in one sense, just software code running on servers, the platform can itself become an open development project. Similarly, OCMA as a scientific practice, with its workflow, norms, roles, and sanctions, can be governed by the community of users, much as the practices of open encyclopedia production are governed by the Wikipedia community.

OCMA is very general, applicable in any area of science.  We envisage a single platform capable of supporting analyses not just in biomedical science but in education and many other fields, though it may be more practical to have a number of specialised OCMA platforms.

OCMA changes the way people come together to share and collaborate, not the theory of meta-analysis. OCMA processes and platforms would support whatever range of statistical methods the scientific community deems appropriate.

Is there anything like this out there already?

OCMA, as we conceive it, does not yet exist.  There have been important developments pushing in broadly similar directions, but they all lack one or more key ingredients of true open, collaborative meta-analysis.  Space does not allow exhaustive comparisons, but we can illustrate with reference to some of the most comparable efforts:

  • openMetaAnalysis is biomedicine-specific, limited in functionality, and not easy to use.
  • Covidence offers the kind of platform interface quality OCMA needs, but was designed for traditional small teams, and supports only the gathering and coding of studies, not the full analytic process.
  • metaBUS makes results relatively easily accessible to the public, but depends on “curation” work by a cadre of technical specialists and, currently at least, is restricted to the field of human resource management.
  • The Systematic Review Data Repository makes data sets and systematic reviews available, but does not support open collaboration in the meta-analysis process.
  • Live cumulative network meta-analysis is as yet only a concept, and focuses on a very technical form of biomedical meta-analysis; it would not be suitable for the vast majority of meta-analysis projects.

More generally, as compared with existing developments, OCMA is, to varying degrees, more crowd-oriented, more collaborative, more widely applicable across scientific fields, and more user-friendly.

But have you thought of…?

OCMA faces formidable challenges. We describe some here, and sketch some possible solutions.  However we recognise that these are difficult problems, and that our prototyping exercise may well throw up many new ones.

Why would anyone bother contributing?

There are many different kinds of motivation for participating in open projects such as open encyclopaedias, open science projects, and open software development.  Different people would contribute to OCMA for their own mix of reasons.

For example, a researcher may want to put her MA-in-progress up on the open platform in order to gain the benefits of crowd involvement, such as contributions of labour, and double-checking of judgements.  Authors of relevant studies will often be motivated to ensure that their studies are included and treated appropriately. Other researchers may want to participate through interest, collegiality, and concern for correctness.

An important challenge is to allow researchers to get recognition for their contributions to open projects.  This problem has already arisen in other contexts of open knowledge production. OCMA would need to include mechanisms for reliably documenting and perhaps even assessing a researcher’s contributions. In parallel, the wider scientific community would need to evolve ways of accepting such documentation in performance evaluations.

Would OCMA analyses get published?  How?

At least initially, OCMA would not affect how meta-analytic results get published.  OCMA might support standard publications as follows. An OCMA platform would enable a researcher to “freeze” an instance of a suitably-developed project.  The researcher can take its findings and present them in an article which is then subject to a journal’s normal review processes and standards. The OCMA community is acknowledged as a kind of contributor, but the researcher takes final responsibility for completeness and correctness.

In the longer term, OCMA may give rise to an alternative to standard publishing for meta-analyses. There is currently much dissatisfaction with scientific publishing, and many people are exploring ways to improve or sidestep the standard journal publication processes.  A critical issue is how research gets authorised or endorsed by the scientific community.  We envisage that OCMA would develop practices, supported by the OCMA platform, for indicating when projects are sufficiently well-developed that they are at least as authoritative as, if not more authoritative than, traditional journal publications.  These practices may be supported by rigorous quality tests comparing OCMA analyses with suitable benchmarks such as Cochrane reviews on the same topics.

Nuisance

An obvious problem is that if anyone can come in and make changes, then malicious users could vandalise projects; users with vested interests may try to manipulate findings; and well-meaning users might just “stuff things up.”

This is a version of a standard early objection to Wikipedia. However Wikipedia has proven that nuisance is manageable. It has developed a range of responses, including the ability to revert changes; page watchlists; blocking vandals; and clean-up bots. Such methods can also be used in OCMA.

Also, an OCMA site will be less likely to attract nuisance users. The OCMA site would be relatively dry and technical in nature, and have a relatively small community of viewers and editors. We expect that a sufficiently large proportion of visitors will be reasonably competent and well-intentioned that nuisance could be managed adequately.

Stability and Customization

A key challenge will be to reconcile two apparently conflicting requirements.  On one hand, the whole idea is that projects continually evolve as users make incremental changes, or do exploratory “what ifs.”  On the other, users will want the project to remain fixed or stable for various purposes such as publishing findings.  One technical solution may be to enable signed-in users to save a configuration for a particular project.  A visitor can then view either the master version, one of their saved configurations, or one which has been shared with them by somebody else.
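As a purely illustrative sketch (this reflects no actual OCMA implementation, and all names and fields are invented), one can imagine the platform storing user-saved configurations as frozen snapshots alongside the evolving master project:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical data model: an evolving "master" project plus stable,
    # user-saved configurations. All names and fields are invented for illustration.

    @dataclass
    class Snapshot:
        name: str                      # e.g. a publication freeze
        owner: str                     # the signed-in user who saved it
        included_studies: list         # study identifiers fixed at save time
        method: str                    # e.g. "random-effects"
        shared_with: list = field(default_factory=list)
        created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class Project:
        title: str
        master_studies: list = field(default_factory=list)   # keeps evolving
        snapshots: list = field(default_factory=list)         # stay fixed

        def save_snapshot(self, name, owner, method):
            snap = Snapshot(name, owner, list(self.master_studies), method)
            self.snapshots.append(snap)
            return snap

        def view(self, snapshot_name=None):
            """Return a frozen snapshot if one is named, otherwise the live master version."""
            for snap in self.snapshots:
                if snap.name == snapshot_name:
                    return snap.included_studies
            return self.master_studies

A visitor would then ask for either the live master version or a named snapshot, exactly as described above.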

 


Thanks to the following for their input into this document:

  • Professor Robert Badgett, Preventive Medicine and Public Health, University of Kansas School of Medicine
  • Professor John Hattie, Director, Melbourne Education Research Institute, University of Melbourne
  • Professor Julian Elliott, Head of Clinical Research, Infectious Diseases, Alfred Hospital and Monash University; Senior Research Fellow at the Australasian Cochrane Centre
  • Dr. Charles Twardy, Senior Data Scientist, NTVI; Affiliate Professor, George Mason University

Read Full Post »

While in Northern Virginia last week for a workshop on critical thinking for intelligence analysts, I was also able to do a couple of presentations at The Mitre Corporation, a large US federally-funded R&D organisation.  The audience included Mitre researchers and other folks from the US intelligence community.   Thanks to Steve Rieber and Mark Brown for organising.

Here are the abstracts:

Deliberative Aggregators for Intelligence

It is widely appreciated that under appropriate circumstances “crowds” can be remarkably wise.  A practical challenge is always how to extract that wisdom.  One approach is to set up some kind of market.  Prediction markets are an increasingly familiar example.  Prediction markets can be effective but, at least in the classic form, have a number of drawbacks, such as (a) they only work for “verifiable” issues, where the truth of the matter comes to be objectively known at some point (e.g. an event occurs or it doesn’t by a certain date); (b) they encourage participants to conceal critical information from others; and (c) they create no “argument trail” justifying the collective view.

An alternative is a class of systems which can be called “deliberative aggregators.” These are virtual discussion forums, offering standard benefits such as remote and asynchronous participation, and the ability to involve large numbers of participants.  Distinctively, however, they have some kind of mechanism for automatically aggregating discrete individual viewpoints into a collective view.

YourView (www.yourview.org.au) is one example of a deliberative aggregator.  On YourView, issues are raised; participants can vote on an issue, make comments and replies, and rate comments and replies.  Through this activity, participants earn credibility scores, which measure the extent to which they exhibit epistemic virtues such as open-mindedness, cogency, etc.  YourView then uses credibility scores to determine the collective wisdom of the participants on the issue.

Although YourView is relatively new and has not yet been used in an intelligence context, I will discuss a number of issues pertaining to such potential usage, including: how YourView can support deliberation over complex issue clusters (issues, sub-issues, etc.); and how it can handle open-ended questions, such as “Who is the likely successor to X as leader of country Y?”.

Hypothesis Mapping as an Alternative to Analysis of Competing Hypotheses

The Analysis of Competing Hypotheses (ACH) is the leading structured analytic method for hypothesis testing and evaluation.  Although ACH has some obvious virtues, these are outweighed by a range of theoretical and practical drawbacks, which may partly explain why it is taught far more often than it is actually used.  In the first part of this presentation I will briefly review the most important of these problems.

One reason for ACH’s “popularity” has been the lack of any serious alternative.  That situation is changing with the emergence of hypothesis mapping (HM).  HM is an extension of argument mapping to handle abductive reasoning, i.e. reasoning involving selection of the best explanatory hypothesis with regard to a range of available or potential evidence.  Hypothesis mapping is a software-supported method for laying out complex hypothesis sets and marshalling the evidence and arguments in relation to these hypotheses.

The second part of the presentation will provide a quick introduction to hypothesis mapping using an intelligence-type example, and review some of its relative strengths and weaknesses.
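Purely as a rough illustration (this is not the actual hypothesis-mapping software, and the hypotheses, evidence and scoring below are invented), a hypothesis map can be thought of as a small graph in which items of evidence support or contradict competing hypotheses:

    # Toy sketch of a hypothesis map: competing hypotheses, items of evidence,
    # and "supports"/"contradicts" links between them. Everything here is invented;
    # real hypothesis maps are far richer (sub-hypotheses, argument structure, weights).

    hypotheses = {
        "H1": "Country Y plans a weapons test this year",
        "H2": "Country Y is bluffing",
    }

    evidence = [
        {"item": "increased activity at the test site", "supports": ["H1"], "contradicts": []},
        {"item": "public denial by senior officials",   "supports": ["H2"], "contradicts": []},
        {"item": "procurement of key materials",        "supports": ["H1"], "contradicts": ["H2"]},
    ]

    def tally(hypotheses, evidence):
        """Crude count of supporting minus contradicting items per hypothesis.
        A real analysis would weigh evidence qualitatively rather than just count it."""
        scores = {h: 0 for h in hypotheses}
        for e in evidence:
            for h in e["supports"]:
                scores[h] += 1
            for h in e["contradicts"]:
                scores[h] -= 1
        return scores

    print(tally(hypotheses, evidence))  # {'H1': 2, 'H2': 0}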

Read Full Post »

Prediction markets can be a remarkably effective way to divine the wisdom of crowds.

Prediction markets of course only work for predictions – or more generally for what I call “verifiable” questions.   A verifiable question is one for which it is possible, at some point, to determine the answer definitively.  For example, predicting the winner of the Oscar for best picture.   This is what allows the prediction market to determine how much each player wins or loses.
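To make the mechanics concrete, here is a minimal sketch of how one common design – binary contracts – settles once the answer is known.  The traders and prices are invented, and real markets differ in pricing, fees and contract design.

    # Minimal sketch of settling binary prediction-market contracts once a
    # verifiable question resolves. Illustrative only; names and prices are invented.

    def settle(positions, event_occurred, payout_per_contract=1.0):
        """Each contract pays `payout_per_contract` if the event occurred, else nothing.
        A trader's profit is the payout on their contracts minus what they paid for them."""
        results = {}
        for trader, p in positions.items():
            payout = p["contracts"] * payout_per_contract if event_occurred else 0.0
            results[trader] = payout - p["cost"]
        return results

    # Two traders bought "Film A wins Best Picture" contracts at different prices.
    positions = {
        "alice": {"contracts": 10, "cost": 6.0},   # paid $0.60 per contract
        "bob":   {"contracts": 5,  "cost": 4.0},   # paid $0.80 per contract
    }

    print(settle(positions, event_occurred=True))   # {'alice': 4.0, 'bob': 1.0}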

The problem is that many issues we want addressed are not verifiable in this sense.

For example, decisions.  Would it be better to continue to negotiate with Iran over its nuclear ambitions, or should a military strike be launched?   We can speculate and debate about this, but we’ll never know the answer for sure, because only one path will be taken, and we’ll never know what would have happened had we taken the other.

Wouldn’t it be good if we had something like a prediction market, but which works for non-verifiable issues?

Amazon.com book ratings are an interesting case.   Whether a book is or is not a good one is certainly a non-verifiable issue.   Yet Amazon has created a mechanism for combining the views of many people into a single collective verdict, e.g. 4.5 stars.   At one level the system is just counting votes; Amazon users vote by choosing a numerical star level, and Amazon averages these.   But note that Amazon’s product pages also allow users to make comments, and reply to comments; and these comment streams can involve quite a lot of debate.   It is plausible that, at least sometimes, a user’s vote is influenced by these comments.   So the overall rating is at least somewhat influenced by collective deliberation over the merits of the book.

Amazon’s mechanism is an instance of a more general class, for which I’ve coined the term “deliberative aggregator”.   A deliberative aggregator has three key features:

  1. It is some kind of virtual forum, thereby allowing large-scale, remote and asynchronous participation.
  2. It supports deliberation, and its outputs in some way depend on or at least are influenced by that deliberation.  (That’s what makes it “deliberative.”)
  3. It aggregates data of some kind (e.g. ratings) to produce a collective viewpoint or judgement.

YourView is another example of a deliberative aggregator.   YourView’s aggregation mechanism (currently) is to compute the “weighted vote,” i.e. the votes of users weighted by their credibility, where a user’s credibility is a score, built up over time, indicating the extent to which, in their participation on YourView, they have exhibited “epistemic virtues,” i.e. the general traits of good thinkers.
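As a hedged sketch of what such an aggregation step might look like (the users and credibility figures are invented, and the real YourView calculation of credibility is considerably more involved), a credibility-weighted vote on a yes/no issue could be computed along these lines:

    # Rough sketch of a credibility-weighted vote on a yes/no issue.
    # The data is invented; in YourView, credibility is built up from
    # participation over time rather than assigned as a fixed number.

    votes = [
        {"user": "a", "vote": "yes", "credibility": 8.0},
        {"user": "b", "vote": "no",  "credibility": 2.0},
        {"user": "c", "vote": "yes", "credibility": 1.0},
        {"user": "d", "vote": "no",  "credibility": 5.0},
    ]

    def weighted_vote(votes):
        """Share of total credibility behind each option, rather than a raw head count."""
        totals = {}
        for v in votes:
            totals[v["vote"]] = totals.get(v["vote"], 0.0) + v["credibility"]
        total_credibility = sum(totals.values())
        return {option: weight / total_credibility for option, weight in totals.items()}

    print(weighted_vote(votes))  # {'yes': 0.5625, 'no': 0.4375}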

Many other kinds of deliberative aggregators would be possible.   An interesting theoretical question is: what is the best design for a deliberative aggregator?  And more generally: what is the best way to discern collective wisdom for non-verifiable questions?

Read Full Post »

I have a short paper appearing next month in the Journal of Public Deliberation.  A preview is available here.  Below is a precis.

In its first half, “Cultivating Deliberation for Democracy” discusses the failure of “deliberation technologies” to substantially improve public deliberation in either quantity or quality.   To be sure, new technologies have made possible massive quantities of deliberation of a very public kind (e.g. in public forums such as comments in the New York Times).  However those technologies are not specifically deliberation technologies.  Nothing about them is specifically tailored to support deliberation as opposed to other forms of public conversation.  Meanwhile, deliberation technologies properly so-called – including my own previous efforts – have notably failed to be adopted by the public at large.  I explain this by pointing out the obvious: people don’t like to be “boxed in” by the kinds of constraints typically provided in deliberation technologies.

The second half gives an overview of the YourView project.  YourView is a deliberation technology, but it takes a rather different approach, aiming to cultivate rather than construct quality public deliberation. YourView provides a forum in which participants can vote and comment on major public issues.  What makes YourView distinctive is that it attempts to determine the “collective wisdom” of the participants.  It does this by calculating, for each participant, a “credibility” score, using data generated through their participation and others’ responses.   In more philosophical terms, YourView attempts to determine the extent to which a participant is exhibiting various “epistemic virtues” such as open-mindedness.  Credibility scores are useful in two ways.  First, they enable YourView to calculate the collective wisdom by weighting contributions by credibility.   Second, they drive more, and more thoughtful, engagement on the site, because high credibility translates to status and (in some ways) power in the YourView forum.

 

Read Full Post »

A new draft of What Do We Think?  Divining the Public Wisdom to Guide Sustainability Decisions is now available.

Download PDF

Read Full Post »
