SoundCite gives voice to WaPo's account of Wendy Davis' filibuster

Knight Lab couldn’t have been more excited to learn that The Washington Post used our newly launched project, SoundCite, to tell the story of the Wendy Davis ‘tweetstorm’ following her filibuster in Texas.

There's just something about launching a project and seeing it used to help tell stories. It's like sending a child off into the world and watching her succeed.

SoundCite co-creator and Knight Lab student fellow Tyler Fisher said it best:



To which the article’s author, Caitlin Dewey, replied:


Music to our ears.

SoundCite received more Twitter love and some great descriptions of what this tool can add to the reading experience:

The Post's story wasn't the first time we'd seen SoundCite used by a publication. It was first adopted in late May by Chicago's own WBEZ for a story on Chance the Rapper, which showcased SoundCite's ability to enrich music journalism: readers could play clips of Chance the Rapper's songs inline as they read the lyrics.

The Post's story, on the other hand, shows how journalists can give voice and personality to the quotes in their stories and use ambient sound to transport the reader to the scene. In the case of Wendy Davis' filibuster, it became a powerful way to convey the emotion of her 13-hour stand, and it gave readers a stronger connection to the story and the cause.

While music reviews were the original inspiration for SoundCite, we had been hoping publishers would use it the way the Post did in telling the Wendy Davis tweetstorm story. We imagine other storytellers using it to give readers inline access to clips of 911 calls, speeches, or ambient sound. The only requirement is that the audio be hosted on SoundCloud. Unfortunately, because of SoundCloud's embed technology, SoundCite doesn't yet work on iPhones or other iOS devices. We've done our best to smooth out that experience: the text still appears normally, so the reader isn't even aware of the missing clip.
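For readers curious what this looks like under the hood, a SoundCite clip is essentially an annotated span of text plus the library's stylesheet and script. The sketch below is illustrative only: the attribute names, CDN URLs, and the SoundCloud track URL are written from memory as assumptions, so check the SoundCite documentation for the exact markup before publishing.

```html
<!-- Illustrative sketch only: attribute names and URLs here are
     assumptions, not copied from the SoundCite docs. -->
<link rel="stylesheet"
      href="https://cdn.knightlab.com/libs/soundcite/latest/css/player.css">
<script src="https://cdn.knightlab.com/libs/soundcite/latest/js/soundcite.min.js"></script>

<p>
  As the gallery erupted, Davis held the floor:
  <!-- data-start/data-end are clip boundaries in milliseconds;
     the quoted text becomes the inline play button. -->
  <span class="soundcite"
        data-url="https://soundcloud.com/example-station/davis-filibuster"
        data-start="0"
        data-end="8000"
        data-plays="1">a hypothetical eight-second clip of the speech</span>
</p>
```

Because the clip is just a styled span, browsers that can't load the SoundCloud embed (such as iOS devices at the moment) still render the text itself, which is how the fallback described above works.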

About the author

Jordan Young

Operations and Project Manager, 2011-2015

