Posts

  • Another Big TimelineJS Update That You Shouldn’t Even Notice: But we wanted to warn you ahead of time

    There’s another big but invisible change coming to TimelineJS. In July, we announced a big update to TimelineJS on the day of the release. With some minor exceptions, we were right that most people didn’t notice. Next week, we’ll be making another big update, which, again, we hope you won’t notice. This time, we thought we should announce it ahead of time, because timelines created before January 2015 will probably stop working unless steps are taken. Timelines...

    Continue Reading

  • Evaluating VR and AR Tools for the Classroom: How we choose tools for Knight Lab Studio

    In the Knight Lab Studio class, every Virtual and Augmented Reality project is a blank slate. All of our developers are learning how to make VR and AR for the first time; many of them don’t even consider themselves developers yet. With the exception of a few projects which have run in successive quarters, we’re never building on something that already exists. We go from “New Project” to our final destination in ten weeks. The fact that...

    Continue Reading

  • A Big TimelineJS Update That You Shouldn’t Even Notice

    Today we’re releasing a new version of TimelineJS, but most of you shouldn’t even notice a difference. We make updates to TimelineJS periodically, and we usually don’t say much about it, partly because people who publish timelines using our embed tool are automatically updated to the new version—there’s nothing they need to change. That includes this new release. However, in this case, we thought it was worthwhile posting for two related reasons. First, this release...

    Continue Reading
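
    A minimal sketch of why embeds keep working after a release like this, assuming a self-hosted page that loads TimelineJS from the Knight Lab CDN: because the stylesheet and script URLs point at the “latest” build, pages like this (and the iframes generated by the embed tool) pick up each new version without any change. The spreadsheet URL and container id below are placeholders.

      // Assumes the page already includes the CDN assets and a container:
      //   <link rel="stylesheet"
      //         href="https://cdn.knightlab.com/libs/timeline3/latest/css/timeline.css">
      //   <script src="https://cdn.knightlab.com/libs/timeline3/latest/js/timeline.js"></script>
      //   <div id="timeline-embed"></div>

      // TimelineJS attaches a global TL object; declare it for TypeScript.
      declare const TL: { Timeline: new (containerId: string, source: string) => unknown };

      // Placeholder Google Sheet published with the TimelineJS template.
      const SOURCE = "https://docs.google.com/spreadsheets/d/YOUR_SHEET_ID/pubhtml";

      window.addEventListener("load", () => {
        new TL.Timeline("timeline-embed", SOURCE); // renders into #timeline-embed
      });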

  • AR Face Filters: How Do They Work and How Are They Changing Us?

    This video explains the mechanisms behind how selfie filters change our digital images and how those changes might affect us psychologically. To make this video, I researched “Snapchat Dysmorphia” and also sought a technical understanding of selfie filters. Before creating this video, I made a selfie filter for the Knight Lab using Spark AR to get a sense of how face deformation would work in Instagram selfie filters. The animation workflow for me looked...

    Continue Reading
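
    As a rough, hypothetical illustration of the face-deformation idea mentioned above (a toy TypeScript sketch, not Spark AR or Instagram code): once a filter has 2D landmark points for a face, it can warp the points near a feature, for example pushing the points around an eye outward so the eye appears larger.

      // Radially scale landmark points near a center point (e.g. an eye center).
      // strength 0.25 pushes points up to 25% farther from the center.
      interface Point { x: number; y: number; }

      function bulge(points: Point[], center: Point, radius: number, strength: number): Point[] {
        return points.map((p) => {
          const dx = p.x - center.x;
          const dy = p.y - center.y;
          const dist = Math.hypot(dx, dy);
          if (dist === 0 || dist >= radius) return p;   // at the center or outside the effect: unchanged
          const falloff = 1 - dist / radius;            // 1 near the center, 0 at the edge
          const scale = 1 + strength * falloff;
          return { x: center.x + dx * scale, y: center.y + dy * scale };
        });
      }

      // e.g. bulge(eyeLandmarks, eyeCenter, 40, 0.25), run on landmarks detected each frame.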

  • 9 Important VR Experiences for Journalists to See

    In an era when social media outpaces print newspapers in the U.S. by a whopping 4% (Pew Research Center, 2018), there is much to be said about the role of interactive media technology in minimizing this gap between technology and storytelling. How can publications, and the journalists within them, use interactive media technology to re-engage with this lost audience? And why might this shift towards digital storytelling, such as VR, be more valuable for journalism than...

    Continue Reading

  • 7 Reflections on SRCCON:Product, the first conference for “product thinkers” in media

    On Feb. 8 in Philadelphia, I was lucky to be one of about 150 participants at SRCCON:Product, the first conference for “product thinkers” in media. A few reflections on what I saw and learned: 1. A quarter of a century into the digital age, we are finally figuring out how to build media products. In 1995, I was named the first online director for The Miami Herald, with responsibility for launching the...

    Continue Reading

  • Building a Community for VR and AR Storytelling

    In 2016 we founded the Device Lab to provide a hub for the exploration of AR/VR storytelling on campus. In addition to providing access to these technologies for Medill and the wider Northwestern community, we’ve also pursued a wide variety of research and experimental content development projects. We’ve built WebVR timelines of feminist history and looked into the inner workings of ambisonic audio. We’ve built virtual coral reefs and prototyped an AR experience setting interviews...

    Continue Reading

  • A Brief Introduction to Newsgames: Can video games be used to tell the news?

    When the Financial Times released The Uber Game in 2017, the game immediately gained widespread popularity with more than 360,000 visits, rising up the ranks as the paper’s most popular interactive piece of the year. David Blood, the game’s lead developer, said that the average time spent on the page was about 20 minutes, substantially longer than most Financial Times interactives tend to receive. The Uber Game was so successful that the Financial...

    Continue Reading

  • With the 25th CAR Conference upon us, let’s recall the first one: When the Web was young, data journalism pioneers gathered in Raleigh

    For a few days in October 1993, if you were interested in journalism and technology, Raleigh, North Carolina was the place you had to be. The first Computer-Assisted Reporting Conference offered by Investigative Reporters & Editors brought more than 400 journalists to Raleigh for 3½ days of panels, demos and hands-on lessons in how to use computers to find stories in data. Nerds like us couldn't resist bad...

    Continue Reading

  • Prototyping Augmented Reality

    Something that really frustrates me is that, while I’m excited about the potential AR has for storytelling, I don’t feel like I have really great AR experiences that I can point people to. We know that AR is great for taking a selfie with a Pikachu and it’s pretty good at measuring spaces (as long as your room is really well lit and your phone is fully charged) but beyond that, we’re really still figuring...

    Continue Reading

  • Capturing the Soundfield: Recording Ambisonics for VR

    When building experiences in virtual reality, we’re confronted with the challenge of mimicking how sounds hit us in the real world from all directions. One useful tool for attempting this mimicry is called a soundfield microphone. We tested one of these microphones to explore how audio plays into building immersive experiences for virtual reality. Approaching ambisonics with the soundfield microphone has become popular in VR development, particularly for 360 videos. With it,...

    Continue Reading
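
    To give a flavor of what a soundfield recording makes possible, here is a simplified TypeScript sketch, assuming first-order B-format channels (W, X, Y, Z) in the FuMa convention, which is an assumption rather than anything specified in the post: from those four channels you can derive a “virtual microphone” aimed in any direction after the fact.

      // Derive a virtual microphone from first-order B-format (FuMa) channels.
      // directivity: 1 = omni, 0.5 = cardioid, 0 = figure-8. Angles in radians.
      function virtualMic(
        w: Float32Array, x: Float32Array, y: Float32Array, z: Float32Array,
        azimuth: number, elevation: number, directivity: number
      ): Float32Array {
        const out = new Float32Array(w.length);
        const gx = Math.cos(azimuth) * Math.cos(elevation);
        const gy = Math.sin(azimuth) * Math.cos(elevation);
        const gz = Math.sin(elevation);
        for (let i = 0; i < w.length; i++) {
          out[i] =
            directivity * Math.SQRT2 * w[i] +                           // omni component (FuMa W carries a 1/sqrt(2) gain)
            (1 - directivity) * (gx * x[i] + gy * y[i] + gz * z[i]);    // directional component
        }
        return out;
      }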

  • Prototyping Spatial Audio for Movement Art

    One of Oscillations’ technical goals for this quarter’s Knight Lab Studio class was an exploration of spatial audio. Spatial audio is sound that exists in three dimensions. It is a perfect complement to 360 video, because sound sources can be localized to certain parts of the video. Oscillations is especially interested in using spatial audio to enhance the neuroscientific principles of audiovisual synchrony that it aims to emphasize in its productions. Existing work in spatial...

    Continue Reading
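
    For readers curious what “localizing a sound source” can look like in code, here is a minimal browser sketch using the Web Audio API’s PannerNode. It illustrates the general idea only, not Oscillations’ production setup; the file URL and coordinates are placeholders.

      // Play an audio file so it appears to come from a 3D position relative to the listener.
      const ctx = new AudioContext();

      async function playAt(url: string, x: number, y: number, z: number): Promise<void> {
        const buffer = await ctx.decodeAudioData(await (await fetch(url)).arrayBuffer());
        const source = ctx.createBufferSource();
        source.buffer = buffer;

        const panner = ctx.createPanner();
        panner.panningModel = "HRTF";   // binaural rendering, best experienced on headphones
        panner.positionX.value = x;     // +x is to the listener's right
        panner.positionY.value = y;     // +y is up
        panner.positionZ.value = z;     // -z is in front of the listener

        source.connect(panner).connect(ctx.destination);
        source.start();
      }

      // e.g. playAt("narration.mp3", 2, 0, -1) places the sound ahead and to the right.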

  • Oscillations Audience Engagement Research Findings

    During the Winter 2018 quarter, the Oscillations Knight Lab team was tasked with exploring the question: what constitutes an engaging live movement arts performance for audiences? Oscillations’ Chief Technology Officer, Ilya Fomin, told the team at quarter’s start that the startup aims to create performing arts experiences that are “better than reality.” In response, our team spent the quarter seeking to understand that reality through qualitative research. Three members of the team interviewed more...

    Continue Reading

  • How to translate live-spoken human words into computer “truth”

    Our Knight Lab team spent three months in Winter 2018 exploring how to combine various technologies to capture, interpret, and fact-check live broadcasts from television news stations, using Amazon’s Alexa personal assistant device as a low-friction way to initiate the process. The ultimate goal was to build an Alexa skill that could be its own form of live, automated fact-checking: cross-referencing a statement from a politician or otherwise newsworthy figure against previously fact-checked statements...

    Continue Reading
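
    As a purely hypothetical sketch of the cross-referencing step described above (not the team’s implementation), the snippet below compares a transcribed statement against a local list of previously fact-checked claims using simple word-overlap similarity; a real skill would query a fact-check database and use far more robust matching.

      // Find the previously fact-checked claim most similar to a new statement.
      interface FactCheck { claim: string; verdict: string; }

      const tokenize = (s: string): Set<string> =>
        new Set(s.toLowerCase().match(/[a-z0-9']+/g) ?? []);

      // Jaccard similarity between two word sets: |A ∩ B| / |A ∪ B|.
      function jaccard(a: Set<string>, b: Set<string>): number {
        const overlap = [...a].filter((t) => b.has(t)).length;
        return overlap / (a.size + b.size - overlap || 1);
      }

      function closestFactCheck(
        statement: string, known: FactCheck[]
      ): { match: FactCheck; score: number } | null {
        const words = tokenize(statement);
        let best: { match: FactCheck; score: number } | null = null;
        for (const fc of known) {
          const score = jaccard(words, tokenize(fc.claim));
          if (!best || score > best.score) best = { match: fc, score };
        }
        return best;  // the caller decides whether the score is high enough to report a verdict
      }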

  • Comparing Motion Capture Techniques for Movement Art

    With Oscillations’ connection to the movement arts, it made sense to experiment with existing motion capture technology to find accurate, consistent, and scalable ways to obtain three-dimensional motion data for purposes such as animation or machine learning to augment performances in virtual reality. An additional motivation to learn more about motion capture was connected to our early experiments with spatial audio (read more about them in our spatial audio blog post). Apart from using ambisonic...

    Continue Reading
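
    For concreteness, here is one hypothetical TypeScript shape for the “three-dimensional motion data” mentioned above; the field names are illustrative, not a format used by any particular motion capture system.

      // Per-frame 3D positions for a set of named joints, the kind of data that
      // animation or machine-learning pipelines typically consume.
      interface JointPosition { x: number; y: number; z: number; }

      interface MocapFrame {
        timestamp: number;                      // seconds from the start of the take
        joints: Record<string, JointPosition>;  // e.g. { leftWrist: { x: 0.1, y: 1.2, z: 0.3 }, ... }
      }

      type MocapTake = MocapFrame[];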

Page 1 of 24