As a student fellow at the Knight Lab, I get the opportunity to work on a variety of different projects. Recently, I’ve been working with Larry Birnbaum, a Knight Lab co-founder, and Shawn O’Banion, a computer science Ph.D. student, to build an application that takes a user’s Twitter handle, analyzes their activity and returns a list of celebrities that they tweet most like.
It’s not an earth-shattering project, but it is a fun way for Twitter users to see who they tweet like and perhaps discover a few interesting things about themselves in the process. It also gave me a great excuse to experiment with the tools available in the open source community for web scraping and mining Twitter data, which you can read about below.
The tools listed here are primarily for Python, but equivalent versions of these libraries exist in other languages — just search around!
Who’s a celebrity, exactly?
The first step in building this project was to gather a list of celebrities to compare users against. To do this, I searched the web for sites that had celebrity information. IMDb was the perfect solution, as it had an extensive list of celebrities (actors, movie directors, singers, sports figures, etc.) and provided the information in a structured format that was straightforward to collect with a web scraping tool.
Tools used:
- Beautiful Soup — A useful Python library for scraping web pages that has extensive documentation and community support. Choosing elements to save from a page is as simple as writing a CSS selector.
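To give a sense of how little code that takes, here is a minimal sketch of pulling names out of a listing page. The URL and the CSS selector are placeholders; inspect the actual page you want to scrape to find the right elements.

```python
# Minimal sketch: scrape names from a listing page with Beautiful Soup.
# The URL and CSS selector below are placeholders for illustration only.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://www.example.com/celebrity-list")  # placeholder URL
soup = BeautifulSoup(response.text, "html.parser")

# select() takes a CSS selector and returns every matching element
names = [element.get_text(strip=True) for element in soup.select(".name a")]

for name in names:
    print(name)
```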
Collecting tweets
After gathering a list of celebrities, I needed to find them on Twitter and save their handles. Twitter’s API provides a straightforward way to query for users and returns results in JSON format, which makes them easy to parse in a Python script. One wrinkle when dealing with celebrities is that fake accounts often use similar or identical names and can be difficult to detect. Luckily, Twitter includes a handy data field in each user object that indicates whether the account is verified, which I checked before saving the handle.
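Here is a rough sketch of that lookup. It assumes the python-twitter wrapper (the Python library listed under “Tools used” below); method names such as GetUsersSearch vary between wrappers and versions, and the OAuth keys come from your own Twitter App.

```python
# Sketch: look up a celebrity's handle and keep only verified accounts.
# Assumes the python-twitter wrapper; fill in your own OAuth credentials.
import twitter

api = twitter.Api(consumer_key="...", consumer_secret="...",
                  access_token_key="...", access_token_secret="...")

def find_verified_handle(celebrity_name):
    """Return the screen name of the first verified account matching the name."""
    for user in api.GetUsersSearch(term=celebrity_name):
        if user.verified:
            return user.screen_name
    return None

print(find_verified_handle("Stephen Colbert"))
```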
Once the celebrity name was associated with a Twitter handle, the next step was to again use Twitter’s API to download the user’s tweets and save them into a database.
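A minimal sketch of that step, reusing the `api` object from the previous snippet and the PyMongo driver described below; the database and collection names are made up for illustration.

```python
# Sketch: download a user's recent tweets and store them in MongoDB.
# Assumes the `api` object from the previous snippet; database and
# collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
tweets = client.tweet_compare.tweets  # database "tweet_compare", collection "tweets"

def save_tweets(screen_name, count=200):
    statuses = api.GetUserTimeline(screen_name=screen_name, count=count)
    docs = [status.AsDict() for status in statuses]  # convert to plain dicts for Mongo
    if docs:
        tweets.insert_many(docs)

save_tweets("StephenAtHome")
```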
When gathering data you will often encounter the “rate limit exceeded” error message. This is because Twitter imposes a limit on the number of API calls a single app can make in a set “window” of time (currently 15 minutes). To get around this problem, you can either create multiple Twitter Apps and request additional OAuth credentials, or set up a cronjob task to run every 15 minutes. Doing so allows your script to run at scheduled times or intervals in the background, leaving you free to perform other tasks.
A few tips for writing cronjob tasks that I found extremely helpful when collecting data:
- Construct your scripts in a way that cycles through your API keys to stay within the rate limit (see the sketch after this list).
- Be sure to catch exceptions that may occur when accessing Twitter’s API and write them to an error file for later review. This will allow your scripts to run unattended without a single error crashing the entire program.
- Run your scripts on a remote computer (unless you want to keep your computer on the entire time the scripts are running!).
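Here is a rough sketch of the first two tips combined: rotating through several sets of OAuth credentials and logging errors instead of crashing. The credential values are placeholders, and `save_tweets_with` is a hypothetical helper standing in for whatever storage logic your script uses.

```python
# Sketch: rotate through several sets of OAuth credentials and log any
# API errors to a file so an unattended script keeps running.
import itertools
import logging
import twitter

logging.basicConfig(filename="collector_errors.log", level=logging.ERROR)

CREDENTIALS = [
    {"consumer_key": "...", "consumer_secret": "...",
     "access_token_key": "...", "access_token_secret": "..."},
    # ...one dict per registered Twitter App
]
key_cycle = itertools.cycle(CREDENTIALS)

def collect(handles):
    api = twitter.Api(**next(key_cycle))
    for handle in handles:
        try:
            save_tweets_with(api, handle)  # hypothetical helper that stores tweets
        except twitter.TwitterError as err:
            logging.error("Failed to collect %s: %s", handle, err)
            api = twitter.Api(**next(key_cycle))  # switch keys and keep going
```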
Tools used:
- Twitter API — A Python wrapper for performing API requests such as searching for users and downloading tweets. This library handles all of the OAuth and API queries for you and exposes them through a simple Python interface. Be sure to create a Twitter App and get your OAuth keys — you will need them to get access to Twitter’s API.
- MongoDB — An open source document storage database and the go-to “NoSQL” option. It makes working with a database feel like working with JavaScript.
- PyMongo — A Python wrapper for interfacing with a MongoDB instance. This library lets you connect your Python scripts with your database and read/insert records.
- Cronjobs — A time-based job scheduler that lets you run scripts at designated times or intervals (e.g. always at 12:01 a.m. or every 15 minutes).
Once the tweets have been successfully stored in your database, you can manipulate the data to fit the needs of your project. For my project, I removed common words and created an index on the text of the collected tweets to perform the similarity comparisons.
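As a sketch of that cleanup step, assuming the tweet documents saved earlier (with the tweet body in a `text` field) and a deliberately abbreviated stopword list:

```python
# Sketch: strip common English words from each stored tweet and build a
# MongoDB text index on the cleaned text for later similarity queries.
# Field names assume the tweet documents saved earlier; the stopword list
# here is abbreviated for illustration.
import pymongo
from pymongo import MongoClient

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it"}  # abbreviated

client = MongoClient("mongodb://localhost:27017")
tweets = client.tweet_compare.tweets

for doc in tweets.find():
    words = [w for w in doc.get("text", "").split() if w.lower() not in STOPWORDS]
    tweets.update_one({"_id": doc["_id"]}, {"$set": {"clean_text": " ".join(words)}})

# A text index lets you run $text queries against the cleaned tweets
tweets.create_index([("clean_text", pymongo.TEXT)])
```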
Accessing the Firehose
If you’re ready to go beyond the data limits that Twitter imposes for free access, you can upgrade to Twitter’s Firehose API, which gives you nearly unlimited access to Twitter’s data stream via one of the data providers that Twitter partners with, including Dataminr (CNN recently partnered with Dataminr to build an application that alerts journalists in newsrooms to breaking news and emerging trends), DataSift, Gnip, Lithium, and Topsy.
What now?
While the number of projects you could build using Twitter data is close to infinite, there are a few cool and fun civic-minded projects already out there. NoHomophobes.com gives you a glimpse of how prevalent homophobic speech is on Twitter. Closer to home, Knight Lab has developed a number of different projects using the tools above: twXplorer, BookRx, and NeighborhoodBuzz, to name a few. While the scope of these projects ranges from text aggregation to recommendation engines to sentiment analysis, they all leverage various open source tools to access Twitter data and build applications on top of it.
About the author