
A few years ago DARPA’s little sibling IARPA (Intelligence Advanced Research Projects Activity) sought to improve the forecasting of future events through crowdsourcing. It established the Aggregative Contingent Estimation Program to “improve accuracy, precision and timeliness of forecasts for a broad range of events”. [Crowdsourcing refers to tapping into the insights of anyone and everyone with an interest in solving a problem, or tapping into their wallets to fund projects, as Siouxsie has blogged about here at Sciblogs.]

Following an earlier trial, this program has developed into the Global Crowd Intelligence website (run by Applied Research Associates Inc., which has the scent of “Universal Exports” that a certain Mr Bond allegedly worked for). Here crowdsourcing is combined with “gamification” (see my earlier blog posting on this). You get to select missions, such as predicting the likelihood of a future conflict, when the iPad mini will be launched, or whether Kim Kardashian’s divorce will be finalised before December.

An article on the BBC’s website advises that “Forecast topics are not related to actual intelligence operations.”

Should you choose to accept them, the more missions you take on, the more experience you accrue, the better your reputation becomes, and the quicker you advance from humble analyst to something perhaps more suave and sophisticated.

The BBC report notes that earlier experiments indicated a 25% improvement in predictions compared with a non-crowdsourced control group. Not spectacular, but progress, which I’m sure IARPA will be seeking to improve upon. I’d be interested in their stunning failures as well as their successes. I’m not sure if the latest trial has a control. It would be good to pit crowdsourcing against data mining and experienced intelligence operatives for some scenarios, to see which works better and under what circumstances. A few sensible and knowledgeable heads may be more prescient than the wishful or ill-informed thinking of a host of others.
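To make that comparison concrete: probability forecasts of this kind are commonly scored with the Brier score (the BBC report doesn’t say what metric IARPA used, so treat that as my assumption). Here’s a minimal Python sketch, with made-up numbers, of aggregating a crowd’s probability estimates and comparing them against a lone baseline forecaster:

```python
import statistics

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome."""
    return (forecast - outcome) ** 2

# Hypothetical probability estimates from five forecasters for one event.
crowd_forecasts = [0.7, 0.6, 0.8, 0.55, 0.75]
control_forecast = 0.5   # a single baseline analyst (illustrative)
outcome = 1              # the event actually happened

# A simple aggregation: take the median of the crowd's estimates.
crowd_estimate = statistics.median(crowd_forecasts)

print(f"Crowd Brier score:   {brier_score(crowd_estimate, outcome):.3f}")
print(f"Control Brier score: {brier_score(control_forecast, outcome):.3f}")
```

Lower scores are better, so a 25% improvement would presumably translate into something like a 25% lower average Brier score across many events.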

Crowdsourcing predictions about defined events or scenarios is becoming common – see NZ’s iPredict. [The just-announced proposal to trial a system to track the most vulnerable children isn’t crowdsourcing, but it has elements of it.] Success varies and, as with fortune telling, is often influenced by how precisely the scenario is worded. One problem with scenarios is that if you fixate on predicting their likelihood you may miss other things going on. I’m sure those smart folk at the CIA, MI6, and our own GCSB & SIS will have that covered though. Don’t you think?

Another issue is the signal-to-noise ratio you get when gathering lots of data. An earlier crowdsourcing challenge run by DARPA – to find a set of red balloons [PDF] scattered across America – illustrated that some strategies work better than others, and that a lot of effort is needed to verify or discount some of the incoming information. The latest project is designed to detect rogue elements attempting to distort the outcomes.
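The source doesn’t say how that detection works, but one crude, purely hypothetical heuristic would be to flag participants whose forecasts consistently sit far from the crowd consensus. A sketch, with invented names and numbers:

```python
import statistics

def flag_outlier_forecasters(forecasts: dict[str, list[float]],
                             threshold: float = 0.3) -> list[str]:
    """Flag forecasters whose estimates consistently deviate from the
    per-event median. An illustrative heuristic, not IARPA's method."""
    n_events = len(next(iter(forecasts.values())))
    # Consensus per event: the median of everyone's estimates.
    medians = [statistics.median(f[i] for f in forecasts.values())
               for i in range(n_events)]
    flagged = []
    for name, probs in forecasts.items():
        mean_dev = statistics.mean(abs(p - m) for p, m in zip(probs, medians))
        if mean_dev > threshold:
            flagged.append(name)
    return flagged

# Three hypothetical forecasters over four events; "mallory" always
# pushes extreme values against the consensus.
forecasts = {
    "alice":   [0.70, 0.20, 0.60, 0.80],
    "bob":     [0.65, 0.30, 0.55, 0.75],
    "mallory": [0.05, 0.95, 0.02, 0.10],
}
print(flag_outlier_forecasters(forecasts))  # ['mallory']
```

The obvious weakness is that a consistently contrarian forecaster can also turn out to be the most accurate one, so any real system would presumably weight deviation against a forecaster’s eventual track record.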

I expect IARPA will learn most about which types of scenarios are more or less amenable to prediction via crowdsourcing, and they’ll get some useful insights into how to analyse information more effectively. Whether we could all be part of the GCSB in the future seems doubtful.