NICAR 2015: Machine learning lessons for journalists

Machine learning is certainly not a new concept in journalism, but it seemed to enjoy plenty of prominence at NICAR this year — fantastic news for newbies to the field like me. I attended several sessions on it, both theoretical and technical, and a few key concepts came up repeatedly. Whether this year’s conference was your first exposure to machine learning, or you’re a seasoned pro, here are four takeaways worth reviewing:

Machine learning is the starting point for a story, not the end.

This critical reminder came from Sarah Cohen, during the panel “Machine Learning in the Wild” (with Steven Rich and Janet Roberts). Machine learning is an incredibly powerful tool — it can help you clean up data, sort through thousands of documents, and make telling predictions.

But as Cohen cautioned, the results of your model are not your story. Instead, they provide helpful insight and serve as a useful launch pad for additional reporting. It’s important to recognize the limitations of your data. Otherwise, as Cohen pointed out, journalists fall squarely into Drew Conway’s danger zone: we know just enough about handling data to be dangerous, but not enough to handle it responsibly.

“Nothing that comes out of these algorithms is the ‘truth,’” machine learning researcher Hanna Wallach said in her session “Lessons from Computational Social Sciences.” Your results have to be contextualized to be handled responsibly. And that takes time and plenty of due diligence because...

Your first model will suck.

Sadly, that’s just the way things are. Roberts presented a real-life scenario: in 2014, she and her team at Reuters investigated a story about how a handful of lawyers with close connections to Supreme Court justices were routinely succeeding in getting their clients’ cases heard in the highest court. They used machine learning (specifically, latent Dirichlet allocation) to identify the topics in 14,400 petitions for Supreme Court hearings and to categorize briefs, which would provide data about, among other things, whom these lawyers were representing (largely corporate interests). But the first model Roberts’ team tried returned results with a dismal 36 percent accuracy.
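For readers new to the technique, here is a minimal sketch of what latent Dirichlet allocation looks like in practice, using scikit-learn. The toy corpus, topic count, and vocabulary below are invented for illustration — they are not the Reuters team’s actual data or settings.

```python
# Latent Dirichlet allocation on a toy corpus: turn documents into word
# counts, then fit an LDA model that assigns each document a mixture of
# topics. The four "petitions" here are made up for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "patent infringement damages software patent claim",
    "antitrust merger competition market monopoly",
    "patent license software technology claim",
    "merger antitrust acquisition market competition",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each row is one document's distribution over the 2 topics; rows sum to 1.
print(doc_topics.shape)  # → (4, 2)
```

At real scale — 14,400 petitions rather than four sentences — the hard work is in preprocessing the text and choosing the number of topics, which is exactly where the first model tends to stumble.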

Roberts put up pictures of the nine Supreme Court justices — some of the most powerful people in the U.S. — and discussed the elite lawyers’ connections to them. She was not, she said, going to publish a story about these people with that kind of accuracy.

But for a first model, this sort of failure is normal. In fact, machine learning is “a lot of failure,” Rich said. “I’ve never had a win without a fail (or five) first.”

There are ways to fix it. Make adjustments. Evaluate. Repeat.

Model adjustment is a feedback loop, said Chase Davis during his session “Hands-on with Machine Learning.” We constantly have to ask, “How can we get this score to go up? How can we do better?” Nick Diakopoulos covered some text processing methods in his session “Text Analysis and Visualization,” including:

  • converting all text to lowercase
  • consolidating various plurals and tenses of a word into its root form (called "stemming")
  • taking "meaningless" words, like prepositions, out of consideration (called "stop word removal")


Roberts and her team employed some of these methods and experimented with the number of topics they were extracting (too few, and they would lose precision — too many, and topics would be hyperspecific). Eventually, their model returned the top topic with 93 percent accuracy.
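The topic-count experiment Roberts described can be sketched as a simple loop: fit the model at several candidate counts and compare a fit score. Here the score is scikit-learn’s perplexity (lower is better); the corpus and candidate counts are illustrative only, and in practice teams also judge topics by reading them.

```python
# Fit LDA at a few candidate topic counts and compare perplexity scores.
# Too few topics blurs distinct subjects together; too many fragments
# them into hyperspecific slivers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "patent software claim", "merger antitrust market",
    "patent license claim", "antitrust merger competition",
] * 5  # repeat the toy docs so the model has something to fit

counts = CountVectorizer().fit_transform(docs)

scores = {}
for k in (2, 4, 8):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(counts)
    scores[k] = lda.perplexity(counts)

for k, score in sorted(scores.items()):
    print(f"{k} topics: perplexity {score:.1f}")
```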

Knowing exactly how poorly (or wonderfully) your model did is not as valuable as understanding why.

“You’re going to live or die by your ability to evaluate your model,” Davis said, and it’s especially true when you’re working with your first few iterations. But as Wallach pointed out, digging past your accuracy percentage will provide crucial insight. Understand the math of your model if you can, and think carefully about what it got right and what it got wrong.
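Digging past the headline number can be as simple as lining up the model’s labels against hand-checked ones and reading the misses. This is a hedged sketch of that idea — the labels below are invented for illustration:

```python
# Compare model output against hand-verified labels. The single accuracy
# figure hides *which* documents failed; listing the errors shows it.
gold      = ["patent", "antitrust", "patent", "tax", "antitrust"]
predicted = ["patent", "patent",    "patent", "tax", "antitrust"]

correct = sum(g == p for g, p in zip(gold, predicted))
accuracy = correct / len(gold)
print(f"accuracy: {accuracy:.0%}")  # → accuracy: 80%

# Pair-level errors reveal what the score alone can't: here, an
# antitrust case the model mistook for a patent case.
errors = [(i, g, p) for i, (g, p) in enumerate(zip(gold, predicted)) if g != p]
for i, g, p in errors:
    print(f"doc {i}: labeled {g!r}, model said {p!r}")
```

Reading the error list — rather than just the score — is what lets you explain to an editor which 30 percent is wrong, and why that doesn’t invalidate the 70 percent that’s right.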

Wallach calls this an “end user concept.” For computer scientists working in computational social sciences, being able to present information about what caused your algorithm to return a certain result helps social scientists understand how much they should trust the data. And, Wallach says, for journalists trying to make editors understand that the data will never be 100 percent accurate, understanding what’s behind an accuracy result can help them explain that “the fact that we’re getting it 30 percent wrong doesn’t invalidate the fact that we’re getting it 70 percent right.”

For more on NICAR’s Machine Learning sessions, check out Stephen Suen’s summary of Hanna Wallach’s computational social sciences talk, Rich’s slides on machine learning wins and fails, Diakopoulos’s slides on text processing, and Davis’s materials for his hands-on demo.

About the author

Anushka Patil

Undergraduate Fellow
