Prior to the digital era, the stories collected in the name of oral history tended to be shared with communities and potential future researchers through public exhibits, documentary film, broadcast radio, print materials, and transcriptions held in special collections or archives. To be sure, all of these forms of oral history publishing remain valid in the digital era.
But the advent of new digital tools— and the algorithms that power them— opens new horizons for sharing oral histories and telling stories with communities. These tools come from a wide variety of niche industries, but they all share one thing: the ability to sync a media file with a text file.
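Every tool discussed below rests on this same primitive: a text file whose lines carry timestamps that point into a media file. A minimal sketch of that pairing, using an invented transcript layout (no particular tool’s actual spec), might look like this:

```python
# Sketch of the core idea behind media-text syncing: parse lines like
# "[HH:MM:SS] spoken words" into (seconds, text) pairs a player can seek to.
# The bracketed-timestamp layout here is illustrative, not any tool's format.

def parse_timestamped_transcript(text):
    """Turn '[00:01:23] words...' lines into (seconds, words) pairs."""
    entries = []
    for line in text.strip().splitlines():
        stamp, words = line.split("] ", 1)
        h, m, s = stamp.lstrip("[").split(":")
        entries.append((int(h) * 3600 + int(m) * 60 + int(s), words))
    return entries

# Placeholder transcript text, not an actual quotation from any interview.
transcript = """\
[00:00:05] Interviewer introduces the project and asks for a name.
[00:01:23] The narrator describes her childhood home.
"""
print(parse_timestamped_transcript(transcript))
```

Once narration exists in this shape, “chapter markers,” indexes, and searchable players are all just different presentations of the same (time, text) pairs.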
Why is this interesting? Consider this video. It’s your average six-minute distillation of a full-length oral history. In order to make this “cooked” version that her family asked for, I made decisions about what content to keep and what content to cut, all while considering a particular audience and purpose for the production. Notably, things Juanita said that would be of value to future researchers or scholars did not make the cut, such as how the Ladies’ Aid Society quilting bees would sometimes take place at her mother’s house. While a produced video short can merge narrative (Juanita’s spoken words) with archival photography (scanned family photos) and photos or b-roll I took in Juanita’s home during my time with her, her life story as shared with me is necessarily filtered through my editorial authority. Things I did not find important were cut, her quotes are clipped out of the context in which she spoke them, and her words are rearranged and reordered. (Consider, if you have time, the difference in how she states her name and birthday at the opening of the full-length interview as indexed in OHMS versus my edited version of her words in the short video. Through my editorial authority, I have literally impacted her voice and manner of speaking.)
While edited excerpts of narrative are ideal for a variety of digital stories, when we work with spoken word in scholarship we need access to the full interview in order to retain the context of what was spoken. Digital tools that allow for increased navigation within narratives that retain their context are therefore a priority concern for scholars working with spoken word.
Though originally made for breakbeat artists and sound mixers, SoundCloud is perhaps the easiest tool to harness for visual engagement and digital publishing of interviews. It provides an easy-to-use embeddable audio player with a featured image and the ability to add tags. Apart from being incredibly easy to use, its best feature for oral histories is the ability to comment and make notes directly on the visual audio player, calling out pithy parts of the interview or introducing simple ‘chapter markers’.
The next tool worth noting for oral historians on the ‘ease of use’ scale is arguably Podigee. Podigee is similar to SoundCloud in that it is audio-only and provides an easily embeddable interface for sharing and publishing oral narratives in a collection, with a featured image and the ability to add tags. What Podigee adds is the ability to segment the interview with time-coded chapter markers, calling out the various turns in the interview or marking the driving questions. This is a core feature of other oral history programs (like OHMS) that are emerging as industry standards, and Podigee provides an easily accessible platform to demo the concepts and get a feel for the function within your own interviews.
If you’re ready to take it up to the next level, you can add images and individual text files to the different segments of the interview, ending up with chapter markers and transcripts similar to OHMS. (More on that later.) In its fullest form, Podigee would be a nice platform for a collection of stories that prompts listeners toward the next in a series. (See http://ohla.info/oh-gee-look-its-podigee/)
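Under the hood, time-coded chapter markers are just a list of (start time, title) pairs. The sketch below formats such a list in the common “HH:MM:SS Title” podcast convention; note that this is a generic convention, not necessarily Podigee’s exact import format, and the chapter titles are invented placeholders.

```python
# Chapter markers reduce to (start_seconds, title) pairs. This formats them
# in the widespread "HH:MM:SS Title" podcast convention -- a generic layout,
# not a claim about Podigee's specific import format.

def format_chapters(chapters):
    lines = []
    for seconds, title in chapters:
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        lines.append(f"{h:02d}:{m:02d}:{s:02d} {title}")
    return "\n".join(lines)

# Hypothetical segments for a single interview.
chapters = [
    (0, "Introductions and early life"),
    (420, "Community organizations"),
    (1130, "Reflections"),
]
print(format_chapters(chapters))
```

Marking the driving questions of an interview this way is a quick, low-cost preview of the fuller indexing workflow OHMS offers.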
Knight Lab Soundcite
What if you could read about the key themes of your oral history project, but actually hear the voices you quote as you read the text? The Knight Lab at Northwestern University has a phenomenal tool that does just that: Soundcite, for inline audio. Try out this tool if you will write any web-hosted long-form narrative and you’d like your reader to hear pithy quotes and soundscapes collected during fieldwork.
Soundcite allows the user to embed audio clips within a story to enhance the experience of the narrative. It’s one thing to read what an interviewee says about living on the island of Tangier, where the seas are rising and the youth are disappearing to the mainland, but it’s another thing to hear them speak in their dialect. Soundcite provides an auditory sensation for the reader that puts them in the middle of the scenario they’re reading about, plays them a relevant soundscape, or even lets them in on a conversation or phone call that an article is referencing.
Check out our tutorial on Soundcite here.
Knight Lab Storymap
Does your oral history project chart a sequence of events or a local chronology that takes place in a regional or hyperlocal place? The Knight Lab at Northwestern University offers a tool that can merge maps, videos, texts, and images into a compelling digital story. This mapping tool is particularly useful for telling a story that happens over time, through a series of events taking place in a regional setting.
StoryMap is one of the more complicated tools that The Knight Lab produces, but it is still just as user-friendly as SoundCite and TimelineJS. StoryMap lets you make what is essentially a PowerPoint but instead of moving from slide to slide, the visual moves from place to place, with little headlines and descriptions for each place you choose.
Check out our overview of StoryMap here.
Knight Lab Timeline
TimelineJS from the Northwestern University Knight Lab is a highly capable tool that can merge media from multiple platforms into a chronological story. Use this tool with projects that have a strong chronological narrative (but not necessarily a place), like an individual’s life story or the story of a trend in intellectual or cultural history that your project explores and documents. TimelineJS lets you create a good looking and easy to use visual timeline with text and pictures along the way.
To use TimelineJS, the only prior knowledge you need is how to use a spreadsheet program and Flickr. Whether that spreadsheet program is Microsoft Excel or Google Sheets, if you can enter data into a box in a spreadsheet you can use TimelineJS. Check out our overview here.
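Because TimelineJS is driven by a spreadsheet, you can also generate the rows programmatically. The sketch below writes a few CSV rows with a handful of common template columns; the exact column names come from the Knight Lab template spreadsheet, so treat the headers here as an assumption and start from the official template before publishing. The events themselves are invented placeholders.

```python
import csv, io

# Sketch: generating TimelineJS-style spreadsheet rows with Python's csv
# module. Column names approximate the Knight Lab template (an assumption
# here -- verify against the official template); events are placeholders.

events = [
    {"Year": 1920, "Headline": "Narrator is born",
     "Text": "Opening of the life story.", "Media": ""},
    {"Year": 1945, "Headline": "Family moves to town",
     "Text": "A turning point in the narrative.", "Media": ""},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Year", "Headline", "Text", "Media"])
writer.writeheader()
writer.writerows(events)
print(buf.getvalue())
```

Paste or import the resulting rows into the template spreadsheet, publish it, and TimelineJS does the rest.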
Story Maps (ArcGIS)
Story Maps is a multi-functional tool for digital projects that are map-driven and place-based, with a variety of templates that allow you to connect images, sound, video, and text to the map in different visual and sequential formats. This means you can learn one tool, but choose different layouts that might be more appropriate for different projects. Story Maps is also integrated with the powerful ArcGIS mapping platform. If your institution has an ArcGIS account (perhaps through your digital technologist, environmental sciences, or geography program), inquire about joining forces with the registered users— you might find an unexpected partner to bring a long-term or data-based aspect to your oral history project or community-engaged learning initiative.
Most of us doing community-based learning with oral history will start with the Story Maps app, which just requires an email to create an account. OHLA recommends starting with one of these templates, but there’s more to choose from!
Check out our overview of Story Maps here.
OHMS: The Oral History Metadata Synchronizer
For many in the professional oral history community, the Oral History Metadata Synchronizer is the emerging standard and tool of choice for those envisioning a more archival environment and workflow for their interview collections. Built by Douglas Boyd at the Louie B. Nunn Center at the University of Kentucky Libraries, OHMS is an open-source tool that allows for syncing media with text in an approachable user interface. The media, audio or video, can live anywhere, so long as it is streamed from a server with a publicly accessible URL. This can be a YouTube-hosted video link, an audio file uploaded to a WordPress media library, or a link to a media file on your institutional server.
While installing OHMS requires a small amount of technical knowledge, if you are willing to learn how to work with an FTP program (like Cyberduck), you’ll be able to manage your own installation and workflows in OHMS. If you are lucky enough to be invited into an OHMS-powered environment (like OHLA’s faculty and student project archives), you’ll simply begin with a link to a media file and the interviewee and interviewer names. The back-end interface of OHMS allows for a video-game-like experience of indexing your interview media. Indexing, in OHMS language, is like chapter marking taken to a scholarly level. You can segment the interview into meaningful chunks, summarize the narration, tag it, and apply keywords from a controlled Library of Congress thesaurus. OHMS documentation is extensive, and describes three levels of indexing with increasing intellectual and pedagogical complexity. Teams of users can work on a series of interviews in a collection, and OHMS provides some internal features for tracking completion and quality-control workflows.
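Conceptually, an index of this kind is a sorted list of time-coded segments over one media file, and “navigating by index” means finding the segment that covers the current playback position. A generic sketch of that data structure (not OHMS’s internal format; the segment summaries are illustrative, with one drawn from Juanita’s interview as described above):

```python
import bisect

# OHMS-style indexing boils down to time-coded segments over one media file.
# Generic sketch only -- not OHMS's actual cache or export format.

segments = [  # (start_seconds, segment summary), sorted by start time
    (0, "Name, birth date, and family background"),
    (310, "Ladies' Aid Society quilting bees"),
    (905, "Church life and community"),
]

def segment_at(seconds):
    """Return the summary of the index segment covering a playback position."""
    starts = [s for s, _ in segments]
    i = bisect.bisect_right(starts, seconds) - 1
    return segments[max(i, 0)][1]

print(segment_at(400))  # falls inside the segment starting at 310
```

This lookup is exactly what lets a listener click a summary entry and jump straight to the relevant stretch of audio or video.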
Perhaps the greatest aspect of OHMS is that you can create a significant number of entry points into a series of interviews without providing a full transcription, replete with an optional controlled thesaurus and prompts for archival metadata. Still, the holy grail of oral history publishing online rests somewhere in ‘natural language mapping’: the magic that happens at the interstice of a full transcription (the actual words spoken) and the thematic coding and summaries applied by a careful researcher with broad access in mind. If you bring a transcription to OHMS (in the form of a properly formatted plain text file), the end result is a fully searchable interview, where the narrator’s sentiment can be accessed in context while the end listener toggles between the summary index and the full transcription. The final product is embedded in webpages through a PHP call in an iframe, after exporting a cache file and uploading it via FTP to the OHMS files on your server. In the land of software technologies, it’s an easy-to-learn process for those who wish to publish scholarly and archival collections of interviews online.
See Juanita’s full interview in OHMS here.
3Play Media: Transcription, Sync, Captions, and (Drumroll…) Clipmaker!
3Play Media is a comprehensive transcription and captioning service built on accessibility standards for compliant media productions featuring spoken word. They guarantee 90% accuracy through machine-generated transcripts with two rounds of human review. Once you’ve paid for their default service (auto transcription and caption alignment) or brought your own transcript (and paid for their caption alignment service, which syncs your text to your media), 3Play offers two tools of great interest to the oral history community. The Interactive Transcripts plugin allows you to embed your transcripts with a video player anywhere on the web in a sleek and easy-to-navigate interface (see it in action on the MIT Infinite History oral history site). Clipmaker provides functionality unlike anything I’ve seen, except once in a workshop with the Interclipper tool in use by Randforce Associates under the lead of Michael Frisch.
We’re currently testing their automatic transcriptions and playing with the Interactive Transcripts plugin and the very exciting Clipmaker. Early tests suggest 3Play Media provides a top-notch service at a top-notch price. I appreciate their accessibility-standards approach.
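Caption files are yet another media-plus-text sync format, and the start/end times they carry are precisely what a clip-making tool needs to cut an excerpt. The sketch below pulls cue timings out of SRT-style caption blocks; it is an illustration of the standard SRT layout, not 3Play’s API, and the caption text is an invented placeholder.

```python
# Parse SRT-style caption blocks into (start, end, text) cues -- the raw
# material a clip-making tool works from. Standard SRT layout; the sample
# captions are placeholders, not real interview quotations.

def srt_to_seconds(stamp):
    """Convert 'HH:MM:SS,mmm' to seconds as a float."""
    hms, ms = stamp.split(",")
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s + int(ms) / 1000

def parse_srt(srt_text):
    cues = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = lines[1].split(" --> ")
        cues.append((srt_to_seconds(start), srt_to_seconds(end),
                     " ".join(lines[2:])))
    return cues

sample = """\
1
00:00:05,000 --> 00:00:09,500
Welcome, and thank you for sitting down with me today.

2
00:00:10,000 --> 00:00:14,000
Could you start by stating your name?"""
print(parse_srt(sample))
```

A clip from cue 2, for example, is just the media between seconds 10.0 and 14.0, with the caption text ready to display alongside it.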
The Answer Rarely Lies in One Tool: Thinking About Workflows & Technology Stacks
My workflow always begins with PopUpArchive, where I drop interviews for (a) redundant storage and (b) automatic transcription. After tweaking the transcript in their line-by-line interface, I download a timestamped copy and a plain-text version, which I port to OHMS and Podigee. In fact, if my project required a fast turnaround, I wouldn’t edit the transcript at all; I would simply download a timestamped transcript and use it as an index in Podigee or OHMS to create my interview segments (aka chapter markers).
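That fast-turnaround shortcut can be sketched in a few lines: take the (seconds, text) pairs of a timestamped transcript and sample one line per interval as a coarse chapter marker. The grouping rule and the transcript entries below are my own illustration, not any tool’s behavior.

```python
# Sketch of the shortcut described above: derive coarse chapter markers
# directly from a timestamped transcript by keeping the first line of each
# interval. Entries and the 5-minute interval are illustrative choices.

def rough_chapters(entries, interval=300):
    """entries: (seconds, text) pairs; emit the first line of each interval."""
    chapters, next_cut = [], 0
    for seconds, text in entries:
        if seconds >= next_cut:
            chapters.append((seconds, text[:60]))  # truncate long lines
            next_cut = (seconds // interval + 1) * interval
    return chapters

entries = [(5, "Name and birth date"), (290, "Growing up"),
           (320, "Quilting bees"), (610, "Church life")]
print(rough_chapters(entries))
```

It’s crude, but it turns an unedited machine transcript into navigable segments in seconds, which is often all a quick-turnaround project needs.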
My workflow always ends with WordPress, where I can embed any player into any post or page as part of a forward-facing digital project.
What’s your workflow?