Introducing Neighborhood Buzz

[Screenshot: Neighborhood Buzz showing science tweets in Chicago's O'Hare neighborhood]

As social media has become a regular part of daily life, people have wondered what they can learn about themselves and their communities from the millions of messages posted online—especially on Twitter, because it is so public and so conversational. Many projects in this space begin by selecting tweets for analysis based on who tweeted them or on specific terms they contain. Students in our Fall 2012 Innovation in Journalism and Technology class wanted instead to explore what could be learned by grouping tweets based on their geolocation. Building on that first prototype, the Knight Lab staff has developed the idea further, leading to today's release of Neighborhood Buzz.

In short, Neighborhood Buzz is an experiment in summarizing the topics of conversation on Twitter, neighborhood by neighborhood, in 20 cities across the country. Neighborhood Buzz continually collects tweets, assigns each one to a neighborhood, and categorizes it topically. When you visit a city page, Buzz shows you the number of geolocated tweets in that city in each category over the last week. If you click on a neighborhood on the map, the numbers adjust to reflect tweets geolocated in that neighborhood. Or you can click on a topical category in the list, which displays a heatmap overlaid on the city map showing how much each neighborhood tweets about that topic. (More on that below.) You can also click on the arrow next to each category to see a sampling of tweets in that category for the selected city or neighborhood.

The original student team focused on Chicago and used a simple mathematical algorithm (rounding to grid coordinates) to assign tweets to neighborhoods. While we were developing this new version of Neighborhood Buzz, the Code for America Louisville fellows released a fun project called Click that 'Hood, and as part of it, they collected a trove of neighborhood maps for cities around the world. Their maps, together with our decision to use PostGIS for our database, made it easy to add many more cities to the project. (Unfortunately for our friends outside the United States, for technical reasons involving Twitter's streaming API, we had to use a geographic filter that processes only tweets geocoded somewhere in the U.S.)
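
Because the neighborhood boundaries live in PostGIS, assigning a tweet to a neighborhood becomes a single point-in-polygon query rather than grid arithmetic. Here's a minimal sketch of that lookup; the schema (a `neighborhoods` table with `name`, `city`, and a `geom` polygon column loaded from the Click that 'Hood GeoJSON files) is an assumption for illustration, not Buzz's actual database layout:

```python
# Sketch only: point-in-polygon lookup against an assumed PostGIS schema.
import psycopg2

def assign_neighborhood(conn, lon, lat, city):
    """Return the neighborhood whose polygon contains (lon, lat), or None."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT name FROM neighborhoods
            WHERE city = %s
              AND ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
            LIMIT 1
            """,
            (city, lon, lat),
        )
        row = cur.fetchone()
        return row[0] if row else None

# e.g., for a tweet geocoded near the airport:
# conn = psycopg2.connect("dbname=buzz")
# assign_neighborhood(conn, -87.9073, 41.9786, "chicago")  # -> "O'Hare"
```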

We started with all of the neighborhood maps collected by Click that 'Hood. After a little experimentation, we found that many cities just didn't have enough geocoded tweets per neighborhood for meaningful statistical analysis. We looked at the totals and saw a fairly natural break after the top twenty cities, so we limited the project to those. (Technically, we have 15 cities, four of the five boroughs of New York City, and Los Angeles County, including the neighborhoods of the city of Los Angeles.)

Once we sort tweets into neighborhoods, we use a topical classifier to assign them to one of nine categories. The classifier provides a score for each category reflecting its "confidence" that the tweet belongs in that category. We assign the tweet to whichever category has the highest score.
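
That "highest score wins" step is a simple argmax over the classifier's output. A minimal sketch, assuming the classifier returns a dict of per-category scores (the category names here are illustrative, not Buzz's actual list of nine):

```python
# Sketch of the category-assignment step. `scores` is assumed to map
# categories to confidence values; an empty dict models a tweet the
# classifier couldn't score at all.
def assign_category(scores):
    if not scores:
        return None  # unscored tweets are dropped from Buzz
    return max(scores, key=scores.get)

assign_category({"politics": 0.42, "sports": 0.13, "art": 0.08})  # -> "politics"
assign_category({})  # -> None
```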

In developing Neighborhood Buzz to this point, we have found two continuing challenges. The primary problem is that tweets are simply hard to classify using traditional text analysis methods. They are chatty, full of abbreviations and slang, and, of course, just short. About one-third of the tweets we attempt to classify don't receive any scores at all, so they aren't shown in Buzz. Additionally, classifiers must be trained on labeled texts, and the texts we had available for this purpose came from a different genre altogether (news stories). For these reasons, you'll sometimes see quirks, such as the word 'party' often landing a tweet in the 'politics' category.
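
To make the domain mismatch concrete, here's the general shape of a bag-of-words classifier trained on news-style text and then applied to a tweet. This is a toy sketch using scikit-learn, not the actual model behind Buzz, and the training examples are invented:

```python
# Illustrative only: a news-trained classifier applied to a short, slangy
# tweet. Word overlap ("party") pulls the tweet toward 'politics'.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

news_texts = [
    "The mayor's party won a majority on the city council",  # politics
    "The quarterback threw for three touchdowns on Sunday",  # sports
]
news_labels = ["politics", "sports"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(news_texts, news_labels)

print(model.predict(["big party tonight!!"]))  # -> ['politics']
```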

Finally, at present only a very small number of tweets are geocoded. A couple of random samples suggest that about 1.5% of all tweets have a location associated with them. And many of those are geocoded because they were sent by a third-party service, such as Foursquare or Instagram, so it's harder to say that we know what people are "talking about" in a given neighborhood. (Also, Instagram's default text, "Just posted a photo," results in a disproportionate number of tweets being labeled "Art.")
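
For reference, this is roughly what those checks look like against the classic (v1.1) tweet payload. The `coordinates` and `source` fields are real parts of that object, but the heuristics here are our own illustration:

```python
# Sketch: inspecting a v1.1 tweet dict from the streaming API.
def is_geotagged(tweet):
    # 'coordinates' holds a GeoJSON (lon, lat) point when the tweet
    # carries an exact location; it is None for most tweets.
    return tweet.get("coordinates") is not None

def via_third_party(tweet):
    # 'source' is an HTML anchor naming the posting client,
    # e.g. '<a href="http://instagram.com">Instagram</a>'
    source = tweet.get("source", "").lower()
    return "instagram" in source or "foursquare" in source
```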

And, getting back to the heat map: any time you use a heat map to summarize data points, it's easy to simply reproduce a population density map. Computing per-neighborhood population was outside the scope of our project, and Twitter users are not evenly distributed among the general public, so we looked for a different way to normalize the data. In our current implementation, we compute the percentage of all tweets in a given category that came from a given neighborhood. In practice, it's rare for any one neighborhood to be the source of more than 20% of a city's tweets in a category, so the maps are often a fairly uniform color.
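
Concretely, the value that drives a neighborhood's color is its share of the whole city's tweets in the selected category. A minimal sketch of that normalization (the counts structure is an assumption for illustration):

```python
# Sketch of the heat map normalization described above.
# `counts` maps (neighborhood, category) -> tweet count for one city.
def category_share(counts, neighborhood, category):
    city_total = sum(v for (n, c), v in counts.items() if c == category)
    if city_total == 0:
        return 0.0
    return counts.get((neighborhood, category), 0) / city_total

counts = {("Loop", "sports"): 30, ("O'Hare", "sports"): 10}
category_share(counts, "Loop", "sports")  # -> 0.75
```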

We're releasing Neighborhood Buzz now so people can use it and tell us what they think; our plans for future development will depend on what we hear. We've had some conversations internally about other approaches to aggregating and summarizing the tweets in a neighborhood, and I'm sure we'll keep working on the general challenge of categorizing tweets in other projects, so some of those advances may make their way back into Buzz. In the meantime, try it and tell us what you think!

About the author

Joe Germuska

Chief Nerd

Joe leads Knight Lab's technology efforts, professional staff, and student fellows. Before joining us, Joe was on the Chicago Tribune News Apps team. He also hosts a weekly radio show on WNUR-FM, Conference of the Birds.
