Lab Report XXVI


Among the best and most interesting examples of successful crowdsourcing is the reCAPTCHA project by Carnegie Mellon University’s CyLab. In it, indecipherable words from old manuscripts are used as part of the CAPTCHA identification strings. The goal here is to help scholars, through crowdsourcing, as they try to decipher these old texts. If enough people identify an indecipherable word as “butterfly” for example, then the researchers know there is a good chance that word is, indeed, “butterfly.”
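The aggregation behind this is, at its core, majority voting over many independent answers. Here is a minimal Python sketch of that idea; it is not reCAPTCHA's actual implementation, and the `min_votes` and `min_agreement` thresholds are illustrative assumptions:

```python
from collections import Counter

def consensus_transcription(answers, min_votes=3, min_agreement=0.5):
    """Return the majority transcription for an unknown word,
    or None if the crowd has not yet converged on an answer."""
    if len(answers) < min_votes:
        return None
    # Normalize answers, then take the most common one.
    word, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return word if count / len(answers) >= min_agreement else None

# Example: five users transcribe the same scanned word.
votes = ["butterfly", "butterfly", "butterflg", "butterfly", "Butterfly"]
print(consensus_transcription(votes))  # -> "butterfly"
```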

At the Berkeley Institute of Design (BID) at the University of California, Berkeley, crowdsourcing has taken another turn. A communitysourcing vending machine called Umati, created by computer scientists and information scientists, harnesses the unique skills of students for specific crowdsourcing projects. Communitysourcing targets higher-order tasks that can be performed only by specific populations with unique skill sets and knowledge. That is exactly what the Umati researchers discovered.

Another important aspect of communitysourcing is harnessing the skills of specific target groups. The designers of the Umati vending machine decided this was best done through a physical outlet: a vending machine placed at a site that would attract the targeted crowd. Their goal was to test whether a physical outlet could tap a potentially large pool of participants with self-identified knowledge or expertise in a given topic. In return, the targeted crowd receives a reward: in this test case, credit for snacks.




Communitysourcing vending machine, image from bid.berkeley.edu

In this test case, the researchers investigated the accuracy of community-graded exams. They then compared the results with grading by a single individual and with crowdsourcing via Amazon's Mechanical Turk. The results were clear: communitysourcing produced 2% higher grading accuracy than individual grading, while, as the researchers state, "Mechanical Turk workers had no success grading the same exams." So much for crowdsourcing work that requires specific expertise.
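A comparison like this reduces to measuring each condition's agreement with an expert answer key. The following Python sketch shows the shape of that calculation; the data is hypothetical, and the study's actual rubric and scoring are not reproduced here:

```python
def grading_accuracy(grades, answer_key):
    """Fraction of exam items where a grader pool's consensus
    grade matches the expert answer key."""
    matches = sum(1 for item, truth in answer_key.items()
                  if grades.get(item) == truth)
    return matches / len(answer_key)

# Hypothetical consensus grades from two conditions vs. an expert key.
answer_key = {"q1": "correct", "q2": "incorrect", "q3": "partial"}
umati      = {"q1": "correct", "q2": "incorrect", "q3": "partial"}
individual = {"q1": "correct", "q2": "incorrect", "q3": "incorrect"}

print(grading_accuracy(umati, answer_key))       # 1.0
print(grading_accuracy(individual, answer_key))  # ~0.67
```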

Of course, there are caveats to this initial study. For one, it was admittedly short: only one week. Another issue is that the participants, besides being self-identified experts in the subject they were grading, may also have been motivated by the novelty of the project. There is no data yet on the psychological impetus of targeted audiences in public settings, or on how they might respond to participating in a communitysourcing project; that will likely depend on the type of project requiring communitysourced expertise, the placement of the kiosk, and the rewards offered.

The potential for harnessing expertise for short-term, anonymous tasks is clear. We've seen, for example, that the input of crowds corrects for the biases of a few individuals. Another application worth pursuing is making statistical research more accurate. Statistical research at universities is typically biased toward a young, educated population because the participants are overwhelmingly undergraduates from certain cultural, racial, and economic demographics. Students of color, as well as graduate students, are less likely to volunteer for university studies, a fact that skews the results and their relevance. Communitysourcing promises to be an important corrective.
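A small worked example makes the skew concrete. The sketch below uses entirely hypothetical numbers and applies post-stratification weighting, a standard correction technique that the article itself does not name, to show how an undergraduate-heavy sample distorts an estimate:

```python
# Hypothetical shares and group means; not data from any actual study.
population_share = {"undergrad": 0.4, "grad": 0.3, "non_student": 0.3}
sample_share     = {"undergrad": 0.8, "grad": 0.15, "non_student": 0.05}
group_mean       = {"undergrad": 6.0, "grad": 4.5, "non_student": 3.0}

# Naive estimate: averages over the skewed sample as collected.
naive = sum(sample_share[g] * group_mean[g] for g in group_mean)

# Weighted estimate: reweights each group to its population share.
weighted = sum(population_share[g] * group_mean[g] for g in group_mean)

print(f"naive estimate:    {naive:.2f}")     # 5.62, pulled toward undergrads
print(f"weighted estimate: {weighted:.2f}")  # 4.65, matches the population mix
```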

Sherin Wing writes on social issues as well as topics in architecture, urbanism, and design. She is a frequent contributor to Archinect, Architect Magazine and other publications. She is also co-author of The Real Architect's Handbook. She received her PhD from UCLA. Follow Sherin on Twitter at @xiaying

