Workshops and Tutorials

15 July 2013

Alloy - Full Day

C. M. Sperberg-McQueen, Information-technology consultant, Black Mesa Technologies, LLC

Description

Taking modeling seriously: A hands-on introduction to Alloy

This one-day tutorial introduces digital humanists to the use of Alloy for modeling. Alloy is a language for describing structures and a tool for visualizing the hierarchies and relationships implied by a data model. This workshop will introduce Alloy’s underlying logic, summarize how its syntax works, and work through individual test cases that exercise Alloy as a modeling tool.
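Alloy has its own relational specification language and analyzer, which the tutorial will cover in detail. As a rough, language-neutral illustration of the underlying idea only (exhaustively searching a bounded scope for structures that satisfy declared constraints), the Python sketch below searches every possible “parent” relation over three atoms for instances that are acyclic and single-parented; the model and constraints are invented for illustration and are not Alloy syntax.

```python
from itertools import product

# A toy version of Alloy-style bounded instance finding (illustration only, not
# Alloy syntax): exhaustively search every "parent" relation over a small scope
# of atoms and keep the ones that satisfy the declared constraints.
ATOMS = ["A", "B", "C"]                      # scope of 3 atoms
PAIRS = [(x, y) for x in ATOMS for y in ATOMS]

def satisfies(parent):
    """Constraints: every atom has at most one parent, and there are no cycles."""
    for a in ATOMS:
        if sum(1 for (x, _) in parent if x == a) > 1:
            return False
    for a in ATOMS:                          # follow parent links; a cycle revisits a
        seen, cur = set(), a
        while True:
            nxt = [y for (x, y) in parent if x == cur]
            if not nxt:
                break
            cur = nxt[0]
            if cur == a or cur in seen:
                return False
            seen.add(cur)
    return True

# Enumerate all 2**9 candidate relations and collect the satisfying instances,
# which is (very roughly) what the Alloy Analyzer does far more cleverly.
models = []
for bits in product([False, True], repeat=len(PAIRS)):
    relation = {pair for pair, keep in zip(PAIRS, bits) if keep}
    if satisfies(relation):
        models.append(relation)

print(f"{len(models)} satisfying instances within scope 3, e.g. {sorted(models[-1])}")
```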

Audience

The target audience consists of digital humanists interested in techniques for formalizing important concepts and in tools for working with such formalizations. The tutorial deals with high-level data modeling concepts. Some prior exposure to symbolic logic and/or programming is desirable; failing that, highly motivated participants may still benefit from the workshop if they have a sufficiently high tolerance for exposure to new material. Participants should bring a laptop computer with a current installation of Java; they may optionally pre-install Alloy 4.2 or install it during the workshop.

Heurist - Half Day Morning

Ian Johnson, Director, Arts eResearch, University of Sydney

Description

Heurist is a database tool that hides the complexity of database design behind a simple web interface. It allows researchers to create complex databases rapidly, without using a programming language. In this workshop participants will learn how to build a Heurist online database to support a small research project, and end up with a fully operational, web-accessible, private or shared database on a hosted server. After the workshop, they will be able to create additional databases and/or download the open source software for installation on an institutional or cloud-based server.

By the end of the workshop participants will be confident to use the web interface to create new databases online, import templates, make changes to database structure, edit and import data, search for and save subsets of the data, map, export and transform data, and publish data feeds within a website.

Heurist has been developed through a series of Australian Research Council grants in the Humanities to support a wide range of research, from the production of large historical encyclopaedias with an editorial team, such as the Dictionary of Sydney (dictionaryofsydney.org), through archaeological survey and excavation databases, to individual text markup projects.

Audience

The workshop will be relevant to a wide range of Humanities researchers, but particularly to scholars who deal with collections of richly interlinked data. These may include historical individuals, organisations and events, inscriptions, manuscripts and archival records, theatrical performances, places, buildings, artefacts, images and bibliographic information. It will be particularly relevant to scholars who do not have good institutional backing for eResearch tools and database development. No programming or technical skills are required to complete the workshop.

Open Annotation - Half Day Morning

Timothy W. Cole, Mathematics and Digital Content Access Librarian and Professor of Library and Information Science at the University of Illinois
Jane Hunter, Professorial Research Fellow & Leader of the eResearch Lab, School of ITEE, The University of Queensland
Robert Sanderson, co-chair of the W3C Open Annotation Community Group and co-editor of the specification
James Smith, Software Architect at the Maryland Institute for Technology in the Humanities (at the University of Maryland)

Description

Annotation is a well-established set of practices that supports digital humanities scholarship. Over the course of this half-day workshop we will examine the current and prospective role of annotation in digital humanities scholarship and investigate the potential utility of the Open Annotation data model recently released by the W3C Open Annotation Community Group.

Participants will consider whether this specification can help encourage the extension of existing tools and the development of new, more robust, interoperable web-based annotation tools and services in ways that better meet the needs of the digital humanities scholarly community. Please visit http://www.openannotation.org/OAatDH_Workshop.html for more information on the Using Open Annotation Workshop.
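As an orientation point only, the core of the Open Annotation data model is an annotation resource that links a body to a target. The Python sketch below builds such an annotation and serializes it as JSON-LD; the property names and context URI follow our reading of the early-2013 community draft and all URIs are examples, so the specification itself should be treated as the authoritative source for serialization details.

```python
import json

# A minimal sketch of an Open Annotation resource in JSON-LD: one annotation
# whose body (an embedded textual comment) is about one target (a page image).
# Property names and the @context URI follow our reading of the 2013 community
# draft; check the specification for the authoritative form. URIs are examples.
annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",
    "@id": "http://example.org/annotations/1",
    "@type": "oa:Annotation",
    "motivatedBy": "oa:commenting",
    "hasBody": {
        "@type": ["cnt:ContentAsText", "dctypes:Text"],
        "chars": "The marginal note here appears to be in a later hand.",
    },
    "hasTarget": "http://example.org/manuscripts/ms-42/page-7.jpg",
}

print(json.dumps(annotation, indent=2))
```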

Audience

We anticipate an audience of digital humanities tool and web service developers and technology managers responsible for digital library services, scholarly discourse services, note-taking software and similar web-based applications. Registration is open (i.e., non-competitive) until full.

Requirements: Prior to the workshop, each participant will be asked to submit a brief (1 to 2 page) summary giving their initial reaction to the Open Annotation data model in the context of a specific annotation application, research requirement or use case drawn from his or her own scholarship. Summaries should be submitted to openannotation@gmail.com no later than 15 June. All briefs submitted will be posted on a workshop website (to be maintained by the Open Annotation Collaboration), and a subset of these will be presented by participants and discussed during the workshop.

In addition, the workshop leaders will present results from several of the concrete annotation demonstration experiments conducted over the last 18 months by the Open Annotation Collaboration. The current status of web-based digital annotation tools, services, practices and communities will be reviewed with the goal of illuminating critical facets of infrastructure beyond the scope of the Open Annotation data model, as well as areas of the data model that require further refinement. Outcomes from the workshop will help identify potential priorities and future directions for the W3C Open Annotation Community Group. Participants will gain a better understanding of the Open Annotation data model, its implementation, and its potential as a resource supporting their future work.

VSim - Half Day Morning

Lisa M. Snyder, senior member of the Urban Simulation Team at UCLA

Description

VSim: A new interface for integrating real-time exploration of three-dimensional content into humanities research and pedagogy

VSim provides a much-needed real-time interface for academics working with 3D content. It offers users three modes of navigation, includes a mechanism for creating linear narratives through the virtual world that can be augmented by text and images (think PowerPoint or Prezi in 3D space), and provides a way for content creators to link to primary and secondary resources from within the modeled environment. Through these two features – the narratives and the embedded resources – VSim gives scholars, educators, and students an opportunity to build knowledge through exploration of the virtual world that was not previously possible. It responds to the needs of in-service educators by supporting both teacher-centered presentations and student-centered exploration, giving students the opportunity to engage actively with the content and build knowledge by creating a personalized virtual learning environment. Most importantly, VSim breaks down the barriers to instructional use of 3D content by providing a simple interface that easily re-purposes the crowd-sourced computer models built for Google Earth and available in the Google/Trimble 3D Warehouse.

Audience

The proposed half-day tutorial will introduce VSim to interested humanities scholars. The target audience includes academics across the humanities disciplines who are working with 3D content, supervising students on historic reconstruction projects, using available 3D content to supplement their ongoing research activities, or interested in integrating computer models into their seminars, classrooms, and conference presentations. Because interest in this tutorial is difficult to predict, it has been organized to work either capped at 20-25 participants, to ensure individualized attention, or opened up to a larger audience. (The only consequence of this flexibility is the size of the volunteer pool for the participant presentations scheduled towards the end of the tutorial.) An advance CFP is not anticipated.

Juxta - Half Day Afternoon

Nick Laiacona, President of Performant Software Solutions LLC
Gregor Middell, Literary Scholar & Software Developer

Description

In this workshop, participants will learn how to develop a customized collation pipeline in Ruby using Juxta WS. Juxta WS is a web service that can collate variant texts and visualize the differences between them. It is free, open-source software comprising a pipeline of “micro-services” that may be valuable to any project that deals with digital texts, especially texts encoded in XML.

In the first half of the workshop, participants will follow along on their laptops as we introduce Juxta WS and the constituent parts of the collation pipeline. In the second half of the workshop, we will begin by showing participants how to obtain texts for experimentation from online sources. We will then have a hacking session where participants will work in small groups to leverage Juxta WS on texts from their own projects or obtained online.
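Juxta WS exposes its collation pipeline as Ruby-driven web services, which the workshop will walk through in detail. For a feel of what collating variant texts involves, here is a minimal, tool-agnostic Python sketch using the standard library’s difflib; it is not Juxta’s API, just an illustration of aligning two witnesses and reporting their differences, with invented sample text.

```python
from difflib import SequenceMatcher

# Two variant witnesses of the same line (illustrative text, not from a real edition).
witness_a = "It was the best of times, it was the worst of times".split()
witness_b = "It was the best of times; it was the very worst of times".split()

# Align the two token sequences and report the differences, which is the core
# task a collation service such as Juxta WS automates at scale.
matcher = SequenceMatcher(a=witness_a, b=witness_b)
for op, a1, a2, b1, b2 in matcher.get_opcodes():
    if op != "equal":
        print(f"{op:8} A[{a1}:{a2}] {witness_a[a1:a2]}  ->  B[{b1}:{b2}] {witness_b[b1:b2]}")
```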

Audience

This workshop is intended for software developers with a basic working knowledge of XML and Ruby. Knowledge of TEI is optional. It will be of interest to developers working on any text-based project, but especially editorial projects, projects handling variant texts, and projects dealing with XML encoded texts, including TEI.

Prerequisites: Workshop participants should bring:

Sustainability Strategies - Half Day Afternoon

Nancy Maron, Program Director, Ithaka S+R

Description

This half-day tutorial will introduce project leaders to the basics of sustainability planning, help them establish ambitious but realistic sustainability goals, define the challenges they face, and sketch out a hypothesis of their ideal funding model. The workshop will include group participation and real-world examples, illustrated by case studies of projects that really worked… or didn’t. The session will also allow participants to review the ‘Funding Model Framework,’ a tool designed by Ithaka S+R to help guide those leading digital resource projects in choosing and testing the funding strategies that will work best for them.

We hope that by introducing some new ideas and practical tools in a supportive and engaging setting, this tutorial will encourage digital humanities project leaders to develop and test new ideas to support their work.

Audience

Participants of this tutorial should be those with interest in and/or responsibility for charting a course for the development of a digital scholarly project or resource. This could include:

  • Academic project leaders who are leading or have created a digital resource.
  • Managers of digital collections and digitization units at cultural organizations, including museums, libraries, archives and other institutions.

Those in early stages of considering sustainability strategies for their projects are encouraged to attend.

Moving Images - Half Day Afternoon

Virginia Kuhn, Associate director of the Institute for Multimedia Literacy in the School of Cinematic Arts at the University of Southern California
Michael Simeone, Associate Director for Research and Interdisciplinary Studies at ICHASS at the University of Illinois at Urbana-Champaign

Description

Keywords to Keyframes: Video Analytics for Archival Research

This workshop will serve scholars at any level of technical expertise who are interested in studying images as part of their work in the digital humanities, using a hybrid method that combines machine analytics (keyframes) and crowd-sourced tagging (keywords). In a discussion and training session for up to twenty participants, we will demo the Large Scale Video Analytics (LSVA) workbench for moving and still image analysis and archiving.

The LSVA is a web portal developed through a collaboration among the National Center for Supercomputing Applications (NCSA), the Extreme Science and Engineering Discovery Environment (XSEDE), the IML (Institute for Multimedia Literacy), and ICHASS (the Institute for Computing in the Humanities, Arts and Social Science). The LSVA has customized the prominent Medici content management system, a multimedia database which has served scholars worldwide. The LSVA requires no software installation, though it does require online access. Further, we will provide access to IM2Learn, a free software package for image analysis developed by the NCSA.
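The workbench itself is accessed online, but the “machine analytics” half of the hybrid method can be illustrated locally in a few lines. The sketch below is our illustration, not LSVA code: it uses the opencv-python package to pull candidate keyframes out of a video by flagging frames that differ sharply from their predecessor; the file name and difference threshold are arbitrary assumptions.

```python
import cv2  # pip install opencv-python

# Illustration of simple keyframe extraction (not LSVA code): flag frames that
# differ sharply from the previous frame and save them as still images.
VIDEO_PATH = "clip.mp4"     # hypothetical input file
THRESHOLD = 30.0            # mean per-pixel difference; tune for your footage

cap = cv2.VideoCapture(VIDEO_PATH)
prev_gray, index, saved = None, 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is None or cv2.absdiff(gray, prev_gray).mean() > THRESHOLD:
        cv2.imwrite(f"keyframe_{index:06d}.png", frame)
        saved += 1
    prev_gray = gray
    index += 1
cap.release()
print(f"scanned {index} frames, saved {saved} candidate keyframes")
```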

Audience

The target audience for this workshop consists of scholars with research interests related to the way that visual media impacts culture.

16 July 2013

Desktop Fabrication - Full Day

Jeremy Boggs, Design Architect for Digital Research and Scholarship in the Scholars’ Lab at the University of Virginia Library
Devon Elliott, PhD candidate in history at Western University
Jentery Sayers, Assistant Professor of English at the University of Victoria

Description

Desktop fabrication is the digitization of analog manufacturing techniques. Comparable to desktop publishing, it affords the output of digital content (e.g., 3D models) in physical form (e.g., plastic). It also personalizes production through accessible software and hardware, with more flexibility and rapidity than its analog predecessors. Additive manufacturing is a process whereby a 3D form is constructed by building successive layers of a melted source material (at the moment, this is most often some type of plastic). The technology driving additive manufacturing in the desktop fabrication field is the 3D printer, a tabletop device that materializes digital 3D models.

In this workshop, we will introduce technologies used for desktop fabrication and additive manufacturing, and offer a possible workflow that bridges the digital and physical worlds for work with three-dimensional forms. We will begin by introducing 3D printers and demonstrate how they operate by printing things throughout the event. The software used to control the printer and to prepare models for printing will be explained. We will use free software so that those in attendance can experiment with the tools as they are introduced.

The main elements of the workshop are:

  • Acquisition of digital 3D models, from online repositories to creating your own with photogrammetry, scanning technologies, and modelling software.
  • Software to clean and reshape digital models in order to make them print-ready and remove artifacts from the scanning process.
  • 3D printers and the software to control and use them.
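The digital 3D models referred to above are commonly exchanged with printing software as STL meshes, which are plain lists of triangular facets. As a rough illustration of how simple that format is (our sketch, not part of the workshop toolchain), the following Python script writes a printable ASCII STL of a small tetrahedron using only the standard library; the dimensions are arbitrary.

```python
import math

# Write a tiny ASCII STL file (the plain triangle-mesh format that slicing
# software accepts) describing a tetrahedron. Illustration only; units are mm.
VERTS = [(0, 0, 0), (20, 0, 0), (10, 17.3, 0), (10, 5.8, 16.3)]
FACES = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (2, 0, 3)]   # outward-facing winding

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

with open("tetrahedron.stl", "w") as stl:
    stl.write("solid tetrahedron\n")
    for i, j, k in FACES:
        a, b, c = VERTS[i], VERTS[j], VERTS[k]
        n = cross(sub(b, a), sub(c, a))                 # facet normal
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        nx, ny, nz = (x / length for x in n)
        stl.write(f"  facet normal {nx:.6f} {ny:.6f} {nz:.6f}\n    outer loop\n")
        for v in (a, b, c):
            stl.write(f"      vertex {v[0]:.6f} {v[1]:.6f} {v[2]:.6f}\n")
        stl.write("    endloop\n  endfacet\n")
    stl.write("endsolid tetrahedron\n")

print("wrote tetrahedron.stl; open it in a mesh viewer or slicer to inspect it")
```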

Audience

This workshop targets scholars interested in learning about the technologies surrounding 3D printing and additive manufacturing, and in accessible ways of implementing those technologies in their own work. Past workshops have drawn faculty, graduate and undergraduate students in the humanities, librarians, archivists, GLAM professionals, and staff of digital humanities centers. This is an introductory workshop, so little prior experience is necessary, only a desire to learn and to engage with the topic.

Those attending are asked to bring, if possible, a laptop computer to install and run the software introduced, and a digital camera or smartphone for experimenting with photogrammetry. Workshop facilitators will bring cameras, a 3D printer, plastics, and related materials for the event. By the end of the conference, each participant will have the opportunity to print an object for their own use.

TXM - Full Day

Serge Heiden, Project manager of the TXM platform development

Description

Introduction to the TXM content analysis platform

The objective of the “Introduction to TXM” tutorial is to introduce participants to the methodology of textometric content analysis by working with the TXM software directly on their own laptop computers. At the end of the tutorial, participants will be able to import their own textual corpora (Unicode-encoded raw texts or XML-tagged texts) into TXM and to analyze them with the panel of content analysis tools available: word-pattern frequency lists, KWIC concordances and text browsing, a rich full-text search syntax (which can express sequences of word forms, parts of speech and lemma combinations constrained by XML structures), statistical analysis of vocabulary specific to a sub-corpus, statistical collocation analysis, and more.

During the tutorial, each participant will install TXM and the TreeTagger lemmatizer on her Windows, Mac or Linux laptop and will leave the tutorial with a ready-to-use environment.
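Several of the analyses listed above have very simple cores. As a hint of what TXM then scales up with lemmatisation, structured queries and statistics, the Python sketch below (our illustration, not TXM code) computes a word-frequency list and a KWIC concordance for a single plain-text file; the file name and keyword are hypothetical.

```python
import re
from collections import Counter

# Illustration only (not TXM): a word-frequency list and a KWIC concordance
# for one raw text file, two of the most basic textometric views of a corpus.
with open("corpus.txt", encoding="utf-8") as f:      # hypothetical input file
    tokens = re.findall(r"\w+", f.read().lower())

# 1. Frequency list: the ten most common word forms.
for word, count in Counter(tokens).most_common(10):
    print(f"{count:6d}  {word}")

# 2. KWIC concordance: every occurrence of a keyword with 5 words of context.
KEYWORD, WINDOW = "liberty", 5                       # hypothetical keyword
for i, tok in enumerate(tokens):
    if tok == KEYWORD:
        left = " ".join(tokens[max(0, i - WINDOW):i])
        right = " ".join(tokens[i + 1:i + 1 + WINDOW])
        print(f"{left:>40} [{tok}] {right}")
```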

Audience

Each participant should come with her own laptop computer.

Voyant - Half Day Morning

Geoffrey Rockwell, Professor of Philosophy and Humanities Computing at the University of Alberta, Canada
Stéfan Sinclair, Associate Professor of Digital Humanities at McGill University

Description

Teaching Text Analysis with Voyant

One of the common skills covered in introductory digital humanities courses at both the undergraduate and graduate levels is computer-assisted text analysis. This workshop will introduce participants to ways of teaching text analysis with the online text analysis environment, Voyant Tools. Unlike previous workshops that have been focused on using Voyant Tools for research, this workshop is aimed at participants who want to introduce text analysis into their teaching. Participants are not expected to know much about text analysis or Voyant; this workshop will include a brief hands-on component to introduce text analysis with Voyant as an example of what can be done.
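One classroom-friendly way to make “asking questions of a text” concrete before opening Voyant Tools is to show how little machinery a basic question needs. The sketch below is ours, not part of Voyant: it counts how often a chosen word appears in each tenth of a text, which is the raw material behind the kind of distribution graph Voyant visualizes; the file name and word are hypothetical.

```python
import re

# Illustration only (not Voyant code): how often does a word occur in each
# tenth of a text? This is the data behind a simple trend graph.
with open("novel.txt", encoding="utf-8") as f:       # hypothetical input file
    tokens = re.findall(r"\w+", f.read().lower())

WORD, SEGMENTS = "whale", 10                         # hypothetical query word
size = max(1, len(tokens) // SEGMENTS)
for s in range(SEGMENTS):
    chunk = tokens[s * size:(s + 1) * size]
    count = chunk.count(WORD)
    print(f"segment {s + 1:2d}: {count:4d} {'#' * count}")
```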

Audience

Text analysis is about asking questions of a text with the help of a computer. As such, it is a research method of interest to anyone who interprets texts.

Intro to Grantwriting - Half Day Morning

Jennifer Guiliano, Assistant Director of the Maryland Institute for Technology in the Humanities (MITH)
Simon Appleford, Associate Director for Humanities, Arts, and Social Sciences at the Clemson CyberInstitute, Adjunct Lecturer in History at Clemson University

Description

Designed for humanities scholars seeking assistance with their first grant, this workshop introduces participants to best practices in writing and submitting a grant proposal. Participants will be provided with a series of online resources, including presentations, examples of successful grants, and podcasts, to help them complete a first draft of a proposal before they arrive in Lincoln. Those drafts will be circulated to other participants prior to the workshop and will serve as the core of our workshop discussions; we anticipate that each participant will receive clear feedback from other attendees to aid in revising their proposal. Drafts will be encouraged to follow the popular National Endowment for the Humanities Digital Humanities Start-Up Grant competition in order to provide the most flexibility for participants in their digital humanities endeavors. This seminar will be limited to 15 participants; additional seminars may be made available should demand necessitate.

Papers should be 5 to 7 pages in length, double-spaced; we encourage participants to produce shorter papers to allow for more thorough commenting and consideration. Additionally, we encourage these papers to emulate the narrative section of the National Endowment for the Humanities Digital Humanities Start-Up Grants solicitation, as this competition focuses on humanities significance and innovation and is a likely funding source for early development projects. For more information on the composition of the draft, as well as materials to assist you in writing, please visit http://devdh.org/workshops/dh2013/.

Audience

The target audience for this workshop consists primarily of early-career digital humanists, including graduate students and junior scholars, who will be submitting their first grant in the coming year.

Crowdsourcing - Half Day Afternoon

Mia Ridge, PhD candidate in Department of History, Open University, United Kingdom

Description

Designing successful digital humanities crowdsourcing projects

Successful crowdsourcing projects help organisations connect with audiences who enjoy engaging with their content and tasks, whether transcribing handwritten documents, correcting OCR errors, identifying animals on the Serengeti or folding proteins. Conversely, poorly designed crowdsourcing projects find it difficult to attract or retain participants.

This workshop will present international case studies of best practice crowdsourcing projects to illustrate the range of tasks that can be crowdsourced, the motivations of participants and the characteristics of well-designed projects. Attendees will learn about the attributes of well-designed humanities crowdsourcing projects and will be able to apply these lessons by designing and critiquing a simple crowdsourced task based on their own materials or projects.
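As a concrete point of reference for the design exercise, a crowdsourced task ultimately reduces to a unit of work, the contributions gathered for it, and a rule for combining them. The Python sketch below is our illustration with invented field names and example data, not workshop material: it shows the skeleton of a transcription task aggregated by simple majority vote, the kind of structure whose design and motivation the workshop will discuss.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional

# Illustration only (invented field names): the minimal data shapes behind a
# crowdsourced transcription task, with naive majority-vote aggregation.
@dataclass
class TranscriptionTask:
    task_id: str
    image_url: str                          # the snippet volunteers transcribe
    contributions: List[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        """Record one volunteer's transcription, lightly normalised."""
        self.contributions.append(" ".join(text.split()))

    def consensus(self, minimum: int = 3) -> Optional[str]:
        """Return the majority reading once enough contributions have arrived."""
        if len(self.contributions) < minimum:
            return None
        reading, votes = Counter(self.contributions).most_common(1)[0]
        return reading if votes > len(self.contributions) / 2 else None

task = TranscriptionTask("ms42-p7-l3", "http://example.org/snippets/ms42-p7-l3.png")
for attempt in ["ye olde  shoppe", "ye olde shoppe", "ye old shoppe"]:
    task.add(attempt)
print(task.consensus())                     # -> "ye olde shoppe"
```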

Audience

No technical or design experience is necessary, but knowledge of potential or existing audiences for any relevant datasets or related tasks will be helpful in the design exercise.

Methods for Data Querying - Half Day Afternoon

Piotr Bański, Assistant Professor of Linguistics at the Institute of English Studies of the University of Warsaw
Nils Diewald (Instructor), Researcher at the Institut für Deutsche Sprache (IDS) in Mannheim
Andreas Witt, Head of the Research Infrastructure Group at the IDS

Description

This tutorial will present state-of-the-art methods for querying data, from textual to multimodal, with a focus on use cases commonly found in the digital humanities or envisioned for the near future of this expanding field. It will be taught by two specialists in markup languages and corpus linguistics who are currently involved in creating a new analysis platform designed to handle large amounts of linguistic data. This is not meant to be a tutorial just for linguists, however: we intend to provide an opportunity to carry over some well-known methods and techniques from linguistic research, where they have been used for years, to the broader area of digital humanities, where queries target not only texts but also non-textual objects such as binary streams, ontologies, prosopographic databases or GIS data (these latter types of objects will be discussed to the extent to which they can be linked from textual resources).
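To make the kind of query the tutorial has in mind more concrete, here is a small Python sketch (ours, not the query language of the platform mentioned above) that runs the query “an adjective immediately followed by any form of the lemma ‘church’” over a toy token stream in which each token carries a word form, a part-of-speech tag and a lemma; the annotations are invented for illustration.

```python
# Illustration only: a sequence query over tokens annotated with word form,
# part-of-speech tag and lemma -- the kind of constraint that corpus query
# languages express far more compactly.
TOKENS = [  # (form, pos, lemma); invented toy data
    ("The", "DET", "the"), ("old", "ADJ", "old"), ("churches", "NOUN", "church"),
    ("were", "VERB", "be"), ("restored", "VERB", "restore"), (".", "PUNCT", "."),
    ("A", "DET", "a"), ("ruined", "ADJ", "ruined"), ("church", "NOUN", "church"),
    ("remains", "VERB", "remain"), (".", "PUNCT", "."),
]

def query(tokens, pattern):
    """Yield spans whose consecutive tokens each satisfy the matching predicate."""
    n = len(pattern)
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n]
        if all(pred(tok) for pred, tok in zip(pattern, window)):
            yield i, window

# "an adjective immediately followed by any form of the lemma 'church'"
pattern = [lambda t: t[1] == "ADJ", lambda t: t[2] == "church"]
for position, hit in query(TOKENS, pattern):
    print(position, " ".join(form for form, _, _ in hit))
```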

Audience

We intend the tutorial for a general DH audience: variety is a virtue in this case, because we want to address actual use cases, some of which will surely come from the participants themselves.