MozFest 2015: Why data visualization for mobile shouldn't hurt

As data journalists, we tend to focus on visualizing our numbers beautifully for desktops. We pore over D3.js line charts and spend hours getting the tooltips on our maps just right. And right before our deadlines, we’ll throw in some CSS media queries for mobile screens and call it a day. I know I’ve been a culprit more than once.
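That desktop-first habit shows up in code, too: we pick a fixed desktop width and then shrink it after the fact. A mobile-first version flips the order, starting from small-screen defaults and enhancing upward. This is a minimal sketch in plain JavaScript; the helper name, breakpoint, and margins are my own illustration, not anything from the session:

```javascript
// Mobile-first chart sizing: start from small-screen defaults and
// enhance upward, instead of shrinking a desktop layout at the end.
// The 768px breakpoint and margin values here are illustrative.
function chartDimensions(containerWidth) {
  // Small-screen defaults come first: tight margins, taller aspect ratio.
  const dims = {
    width: containerWidth,
    height: Math.round(containerWidth * 0.75),
    margin: { top: 8, right: 8, bottom: 24, left: 32 },
  };
  // Then widen for larger viewports -- the inverse of a desktop-first override.
  if (containerWidth >= 768) {
    dims.height = Math.round(containerWidth * 0.5);
    dims.margin = { top: 16, right: 24, bottom: 32, left: 48 };
  }
  return dims;
}
```

The same dimensions object can then feed an SVG’s `width`/`height` attributes and a D3 scale’s range, so the phone layout is the baseline rather than an afterthought.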

One of my favorite sessions was Aaron Williams’ “Crafting new visualization techniques for mobile web,” where he emphasized a mobile-first, desktop-second approach.

https://twitter.com/aboutaaron/status/663006106701180929

This seems like an obvious concept, especially in a time when desktops have shrunk to the size of your palm and phone screens have grown to the size of your face. But I realized that I always think of mobile design as an obstacle to what I’ve already created, even though many readers use mobile devices as their primary way of consuming content.

So we started off the session by making a few short lists:

Some types of data visualizations:

  • Maps (choropleth, heat)
  • Bar graphs
  • Timelines
  • Small multiples
  • Tables
  • Line charts


Advantages of mobile phones:

  • Multi-touch functionality + force touch on the iPhone 6S
  • GPS system
  • Portability
  • Rear- and front-facing cameras
  • Vibration
  • Screen rotation


Disadvantages of mobile phones:

  • Thumb size affecting touch
  • Small screen size
  • Battery life
  • Bandwidth and weak internet connections
  • No hover functionality
  • No mouse
  • Shorter attention spans from audiences


The task was to solve a disadvantage or to play up an advantage of data viz on mobile web. My group chose the phone’s GPS system to work with a combination of maps and timelines. A few members of the group had been working on a project called Histropedia, a tool to help users understand the historical timeline about their current location. These are the sketches we came up with.

The results of our brainstorming sessions

A few other groups tackled timelines as well, presenting options like showing the timeline as a whole but in truncated sections, in a vertical orientation, or even in a curving “snakes and ladders” layout. We stuck to displaying one timeline card at a time. Each card would theoretically be accompanied by a small line graph of the whole timeline to convey the varying lengths of time between events, or clusters of many events within a short period.

Aside from the fun paper phones and great brainstorming, the session helped restructure my perception of data viz and made me think about how my previous projects might have been different with a mobile-first view. Stripped of some qualities exclusive to desktops, we have the opportunity to simplify and detangle our information graphics, sharpening both the reading experience and our storytelling. By thinking mobile first, we can also explore forms of data viz ideally suited to a mobile interface: more sensitive to a user’s location, capitalizing on multi-touch, or including vibration cues that may serve visually impaired audiences.

About the author

Ashley Wu

Undergraduate Fellow

Designing, developing and studying journalism at Northwestern. Also constantly scouting the campus for free food.
