Interfaces for Crowdsourcing Interpretation
July 17, 2013, 15:30 | Centennial Room, Nebraska Union
Our research will detail a number of approaches to crowdsourcing interpretation — especially as these approaches relate to the ongoing development and design of Prism, a tool that facilitates crowdsourced interpretation of texts. We take up the challenge detailed by Ramsay and Rockwell (2012) that the activity of building provides affordances as rich and informed as writing, and that it is important to be aware of the nature and quality of the intervention that happens through building. Drucker (2009) and Ruecker et al. (2011) demonstrate the importance of speculative prototyping as a way to explore humanities questions and make arguments through prototypes. In that spirit, our research will inform the creation of several interfaces that address problems related to individual and crowdsourced interpretation.
Background
In 2011, the Praxis team at the University of Virginia created Prism as a digital realization of the “Patacritical Demon” imagined by Drucker (2009), McGann (2004), and Nowviskie (2012). In its current form, Prism allows multiple users to highlight a text according to certain predetermined categories. The tool then creates an aggregate visualization of the individual responses. In this way Prism diverges from current uses of crowdsourcing; where crowdsourced projects have traditionally asked users to compile data or perform other mechanistic tasks, Prism asks individuals to mark up a text with categories of meaning, in order to discern trends in the way a larger group of users reads the text. Although Prism promises to bring individual experience into the fold, Bethany Nowviskie (2012) notes that Prism “is not a device for rich, individual exegesis”; its usefulness lies in overlapping and visualizing the markings of all contributors, “generating spectra of similarity and difference.”
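The aggregation step described above can be sketched in miniature. The data model below is purely illustrative, not Prism’s actual schema: it assumes each user assigns one of the predetermined categories (or nothing) to each word position, and the aggregate view counts how many users chose each category at each position.

```python
from collections import Counter, defaultdict

# Hypothetical per-user highlights: word index -> chosen category.
# Users and category names are invented for illustration.
highlights = {
    "user1": {0: "rhetoric", 1: "rhetoric", 3: "sound"},
    "user2": {0: "rhetoric", 2: "sound", 3: "sound"},
    "user3": {0: "sound", 3: "sound"},
}

def aggregate(highlights):
    """For each word index, count how many users chose each category."""
    totals = defaultdict(Counter)
    for marks in highlights.values():
        for index, category in marks.items():
            totals[index][category] += 1
    return totals

totals = aggregate(highlights)

# The most frequently chosen category per word could then drive an
# aggregate visualization (e.g. color intensity proportional to count).
consensus = {i: counts.most_common(1)[0][0] for i, counts in totals.items()}
```

The key design point is that the raw per-user markings are collapsed into counts, which is precisely why, as Nowviskie notes, the tool foregrounds spectra of similarity and difference rather than individual exegesis.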
Existing Approaches to Crowdsourcing Interpretation
Owens (2012) distinguishes between two approaches that are commonly lumped under the heading of crowdsourcing: “human computation” and “the wisdom of crowds.” Human computation projects ask participants to solve problems for which computational solutions are comparatively expensive to develop or perform. Such problems include transcription (Transcribing Bentham, Old Weather, reCAPTCHA), protein folding (fold.it), and image metadata tagging (ESP Game). Crowd-wisdom projects, on the other hand, engage participants in open-ended, socially negotiated tasks that may go beyond processing information to create new knowledge. (This is the mode of Wikipedia, or of any website with comment or discussion forum functionality.)
Directions for Research
We have begun to consider other ways to crowdsource interpretation, especially approaches that attend more closely to individual responses. In particular, we have identified two areas for exploration:
- Computation and Crowd Wisdom — Owens (2012) suggests that both existing modes of crowdsourcing are worth designing for and can work in tandem in Digital Humanities projects (Galaxy Zoo). In that spirit, Prism presents the user with a task that is both constrained in such a way that it produces information and somewhat open-ended and socially negotiated through the user’s engagement with a text.
- Wisdom of the Individual — Crowdsourcing interpretation might offer an increased focus on individuals as part of the collaborative process. Preserving the marks of each participant would better respect the human element at the core of crowdsourcing. This feature would also make Prism a more powerful pedagogical tool, as an instructor could identify an individual student’s markings as a prompt for discussion. It would likewise be useful for the social sciences, which are inherently concerned with the individual as a member (and perhaps representative) of a particular group.
Future design and development of Prism must account for the many potential roles of individuals in crowdsourcing interpretation. For example, in order to better serve projects concerned with the wisdom of particular individuals, Prism would need to store user markings and interpretations in a way that makes them extractable and easily viewed in relation to the markings of the crowd. Similarly, in order to better facilitate social science research, Prism might include a way to separate out user interpretations according to demographic information (such as class, gender, or age) or according to specific user responses. The poster will detail how the design and development of Prism has been, and continues to be, influenced by the different roles individuals might play within the crowd.
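The demographic separation proposed above might be sketched as a simple filter over stored per-user markings. Everything here is hypothetical (field names, users, and categories are invented, and this is not Prism’s data model): it assumes markings are kept per user rather than pre-aggregated, which is exactly the storage requirement the paragraph identifies.

```python
# Hypothetical per-user markings (word index -> category) and
# demographic profiles; all names are illustrative only.
highlights = {
    "user1": {0: "rhetoric", 3: "sound"},
    "user2": {0: "rhetoric", 2: "sound"},
    "user3": {0: "sound", 3: "sound"},
}
users = {
    "user1": {"age_group": "18-25", "gender": "f"},
    "user2": {"age_group": "18-25", "gender": "m"},
    "user3": {"age_group": "26-40", "gender": "f"},
}

def filter_highlights(highlights, users, **criteria):
    """Keep only the markings of users matching every demographic criterion."""
    return {
        uid: marks
        for uid, marks in highlights.items()
        if all(users[uid].get(field) == value for field, value in criteria.items())
    }

# A subset of the crowd, viewable against the full aggregate.
young_readers = filter_highlights(highlights, users, age_group="18-25")
```

Keeping individual markings as the unit of storage means the same data can serve both modes: the full crowd view is just the filter with no criteria applied.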