In an earlier post I suggested that a marketplace for citizen intelligence might take the form of a platform hosting sponsored challenges, similar to the data analytics platform Kaggle.

For convenience, I’ll describe such a marketplace as a CCI platform, with CCI short for “Crowdsourcing Citizen Intelligence.” 

Three big challenges involved in getting a CCI platform up and running are finding customers, building a pool of citizen analysts, and evaluating responses. 

Customers

Potential CCI platform customers are organisations that, for whatever reason, want to receive an open-source intelligence report on a topic of importance to them. These customers might be large companies. To illustrate, when we first floated the concept of crowdsourced citizen intelligence at an AIPIO seminar, we were approached by a senior executive at a large insurance company. He had a large database of public information related to fires at commercial properties, and he was keen to see whether citizen analyst teams could generate insights of strategic value by combining analysis of this database with other open-source information or relevant knowledge. The company had already been doing its own analyses, but the crowd might be able to come up with things they had missed.  

Potential CCI platform customers might also include organisations that are themselves intelligence suppliers, such as government intelligence agencies. This may sound surprising, since these agencies often deal with very sensitive matters and often have access to classified information not available to the citizen crowd. Indeed, most questions a government intelligence agency has to address could not be outsourced to a public platform. 

However, some questions in some circumstances might be suitable. For example, the disappearance of MH370 is a topic of public knowledge and concern, and there is a vast quantity of information already in the public domain. An agency investigating the possibility of terrorist or nation-state involvement might sponsor a suitably framed open-source challenge, where the best citizen responses would constitute alternative or complementary analyses to their own. 

In many intelligence organisations, particularly in smaller countries, analyst teams are small and thinly spread. (A colleague who many years ago worked as head of the China desk for a military intelligence agency confessed that he alone was the China desk.) Sponsoring a challenge on a CCI platform may be an effective way to produce analyses of questions the agency itself cannot cover with its limited resources. 

Citizen Analysts

A CCI platform also needs a large pool of citizens willing and able to do intelligence-type work in response to challenges. It might seem odd that lots of people would be willing to work hard in return for a mere chance at a monetary prize, but many crowdsourcing platforms have demonstrated that it is possible to build large crowds of essentially volunteer workers. Kaggle, for example, has over a million people signed up, and many thousands might work on any given challenge. Meanwhile, in the SWARM Project at the University of Melbourne, we have signed up thousands of citizens to participate in our intelligence-related research activities, and many of them have put in huge efforts, producing very polished work. 

Crowdsourcing sites can attract large numbers of participants because there are important rewards for participants other than winning a prize. These include learning or professional development; satisfaction from taking on an intellectually demanding task; the thrills and camaraderie that arise from being part of a team in a competition; and building an on-platform reputation, which can be both satisfying and career-enhancing.  

Evaluating Responses

A third and very serious challenge in getting a CCI platform up and running is reliably determining how good a response is. This is important for two reasons. First, the platform must be able to send a shortlist of high-quality responses to the customer. Imagine a popular challenge that generates hundreds of reports. Many, perhaps most, of these are likely to be sub-standard. The customer doesn’t want to have to wade through mountains of dross to find rare nuggets of useful intelligence. Second, feedback is important for motivating citizen analyst teams. Even coarse feedback, such as an approximate ranking, would help greatly.  

This is a problem of scale. It is feasible for human experts to score a handful of products. Manually scoring hundreds of products for every challenge would be very costly in both effort and money; it would threaten the viability of the whole exercise. Kaggle manages this problem by automating the scoring process, which they can do because they restrict the challenges they accept to machine-scoreable exercises. Problems on a CCI platform would be much more qualitative and open-ended, and so far less amenable to automated techniques. 
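
For contrast, here is a minimal sketch of the kind of automated scoring a Kaggle-style challenge relies on: each entry reduces to predictions that can be compared against a hidden answer key, so ranking hundreds of submissions is cheap. The data, metric, and team names below are hypothetical and purely illustrative; open-ended intelligence reports offer no such answer key.

    # Hypothetical example of Kaggle-style automated scoring: compare each
    # submission's predictions against a hidden answer key and rank by error.
    import math

    ground_truth = {"case_1": 12.0, "case_2": 7.5, "case_3": 3.2}

    submissions = {
        "team_alpha": {"case_1": 11.0, "case_2": 8.0, "case_3": 3.0},
        "team_beta": {"case_1": 20.0, "case_2": 1.0, "case_3": 9.0},
    }

    def rmse(predictions, truth):
        # Root-mean-square error between a submission and the answer key.
        errors = [(predictions[k] - truth[k]) ** 2 for k in truth]
        return math.sqrt(sum(errors) / len(errors))

    # Score and rank every submission automatically, lowest error first.
    for team in sorted(submissions, key=lambda t: rmse(submissions[t], ground_truth)):
        print(team, round(rmse(submissions[team], ground_truth), 2))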

Implementing a good solution to this problem will be part of the “secret sauce” of a good CCI platform. There are some promising directions to explore, including peer assessment and AI techniques (machine learning, natural language processing); it may be that some combination of these, with human experts doing quality assurance, will do the trick.   
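
To make the idea a little more concrete, here is a hypothetical sketch of how peer assessment and a simple automated text signal might be blended to triage reports, with borderline cases flagged for human quality assurance. The weights, the cutoff, and the crude “evidence density” heuristic are illustrative assumptions only, not a tested scoring method.

    # Hypothetical triage for a CCI challenge: blend the mean peer rating
    # with a crude automated "evidence density" signal, shortlist the best,
    # and flag borderline reports for human quality assurance.

    def evidence_density(report_text):
        # Crude NLP-style proxy: fraction of sentences that cite a source.
        sentences = [s for s in report_text.split(".") if s.strip()]
        cited = [s for s in sentences if "http" in s or "(source:" in s.lower()]
        return len(cited) / len(sentences) if sentences else 0.0

    def triage(report_text, peer_scores, weight_peer=0.7, cutoff=0.6):
        # Peer ratings are assumed to be on a 0-1 scale.
        peer = sum(peer_scores) / len(peer_scores)
        auto = evidence_density(report_text)
        combined = weight_peer * peer + (1 - weight_peer) * auto
        needs_human_review = abs(combined - cutoff) < 0.1
        return combined, combined >= cutoff, needs_human_review

    # Example: one report rated by three peers.
    print(triage(
        "The fleet left port in March (source: harbour records). Losses rose sharply.",
        peer_scores=[0.8, 0.7, 0.9],
    ))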

Dark clouds ahead

Suppose these three challenges were solved, and the CCI platform was up and running at scale, with substantial prizes for winning reports.  What serious problems might arise? I’ll take that up in a subsequent post. 


Interested in a weekly digest of content from this blog? Sign up to the Analytical Snippets list. This blog, and the list, is intended to keep anyone interested in improving intelligence analysis informed about new developments in this area.


Thanks to Peter Christener for the image.