Academic crowdsourcing – feedback loops

This year’s summer reading included Jon Ronson’s So You’ve Been Publicly Shamed, a journalistic investigation into why the internet has become so fond of collaring those who transgress its unwritten rules and tearing them apart. His case studies include Jonah Lehrer, the writer found to have fabricated quotes by Bob Dylan; Lindsey Stone, who was inadvisably photographed flipping the bird in Arlington National Cemetery; and Justine Sacco, the PR exec who, while en route to South Africa, tweeted a “joke” about hoping she didn’t catch AIDS because she was white. All three became transient global hate-figures, with tens of thousands of tweets and comments raining shame down upon them. Ronson’s book is a readable and engaging romp through what are, of course, deadly serious issues for contemporary digital culture.

However, his conclusion interested me: he contends that the modern-day version of the village stocks he describes is down to “feedback loops”. Ronson urges us to disregard the theories of Gustave Le Bon (one of Goebbels’s favourite theoreticians) and Philip Zimbardo, conceiver of the notorious Stanford Prison Experiment, who argue that mass hatred and hysteria are spread from node to node within the crowd through some process of broadly defined “contagion”. Rather, says Ronson, internet users copy what they see happening – a version of the “information cascade” theory of James Surowiecki, which I have blogged about before. So when tens of thousands of Twitter users piled into the wretched Sacco, for example, it was because they had seen others doing so, resulting in a collective assurance that it was “right”. Ronson underscores this with the observation of how remarkably effective radar signs attached to speed limit signs are, the kind that automatically flash motorists their current speed. This instant feedback, devoid of any actual consequence or punishment, dramatically cuts instances of speeding.

Successful academic crowdsourcing projects, as I and others have argued elsewhere, depend on the relationships they create with their volunteers. There is some reason to believe that Ronson’s logic can be applied here too – that is, both non-professional volunteers and professional project instigators are exposed to controlled feedback loops. Lasecki et al., for example, argue that crowds can teach themselves through the correct application of mechanical tasks which are tightly regulated and controlled on platforms such as Mechanical Turk. The feedback loop is the signal that a task has been performed correctly or incorrectly. Other volunteers report learning from one another via discussion forums, absorbing good practice as they go. Others go on to create Wikipedia pages around the content they have worked on – although whether Wikipedia is crowdsourcing or something else, such as community participation, is another matter (a distinction succinctly made in this blog post).
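To make the mechanism concrete, here is a minimal sketch of such a feedback loop, assuming a project that seeds its task queue with gold-standard items whose answers are already known. Every name in it (GoldStandardTask, submit and so on) is hypothetical, not Mechanical Turk’s actual API; the point is only the shape of the loop, where a volunteer’s answer is checked the instant it is submitted and, like the flash of the speed sign, the feedback carries no consequence beyond the information itself.

```python
# Sketch of an instant-feedback loop for a crowdsourced transcription
# task. All names here are hypothetical; real platforms such as
# Mechanical Turk expose their own quality-control mechanisms.

from dataclasses import dataclass


@dataclass
class GoldStandardTask:
    """A task whose correct answer is already known to the project."""
    prompt: str
    expected: str


def normalise(text: str) -> str:
    """Crude normalisation so trivial differences don't count as errors."""
    return " ".join(text.lower().split())


def submit(task: GoldStandardTask, answer: str) -> str:
    """Check a volunteer's answer and return feedback immediately.

    No score, penalty or reward is attached: like the speed sign,
    the feedback is pure information, delivered at once.
    """
    if normalise(answer) == normalise(task.expected):
        return "Correct - thank you!"
    return f"Not quite: the expected reading was '{task.expected}'."


# Example usage
task = GoldStandardTask(prompt="Transcribe: 'ye olde shoppe'",
                        expected="ye olde shoppe")
print(submit(task, "Ye Olde Shoppe"))  # Correct - thank you!
print(submit(task, "ye old shop"))     # Not quite: ...
```

The normalisation step is doing quiet but important work here: instant feedback only builds volunteers’ trust if trivial differences of case or spacing are not reported back to them as errors.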

Those of us who have researched crowdsourcing over the last few years often get hung up on semantics and labels, and I am guilty as charged: I have found myself having far longer conversations than the subject justifies (which is how long, exactly?) over whether crowdsourcing should have a hyphen or not. I think that considering the attributes that make crowdsourcing crowdsourcing, as opposed to something else, is more useful. An effort to characterise what makes “good” or “productive” feedback loops – as opposed to the wild and unconstrained ones which destroyed Lehrer, Stone and Sacco – might be a good place to start.