A beginner's guide to collecting Twitter data (and a bit of web scraping)

As a student fellow at the Knight Lab, I get the opportunity to work on a variety of different projects. Recently, I’ve been working with Larry Birnbaum, a Knight Lab co-founder, and Shawn O’Banion, a computer science Ph.D. student, to build an application that takes a user’s Twitter handle, analyzes their activity and returns a list of celebrities that they tweet most like.

It’s not an earth-shattering project, but it is a fun way for Twitter users to see who they tweet like and perhaps discover a few interesting things about themselves in the process. It also gave me a great excuse to experiment with the tools available in the open source community for web scraping and mining Twitter data, which you can read about below.

The tools listed here are primarily for Python, but equivalent versions of these libraries exist in other languages — just search around!

Who’s a celebrity, exactly?

The first step in building this project was to gather a list of celebrities to compare users against. To do this, I searched the web for sites that had celebrity information. IMDb was the perfect solution, as it had an extensive list of celebrities (actors, movie directors, singers, sports figures, etc.) and provided the information in a structured format that was straightforward to collect using a web scraping tool.

Tools used:

  • Beautiful Soup — A useful Python library for scraping web pages, with extensive documentation and community support. Choosing elements to save from a page is as simple as writing a CSS selector, as in the sketch below.
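For illustration, here is a minimal sketch of that kind of scraper. The URL and the CSS selector are placeholders, since IMDb's markup changes over time; adapt both to the page you actually scrape.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical IMDb list page; substitute the page you want to scrape.
URL = "https://www.imdb.com/list/ls000000000/"

html = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}).text
soup = BeautifulSoup(html, "html.parser")

# A single CSS selector pulls out every celebrity name on the page.
names = [a.get_text(strip=True) for a in soup.select("h3.lister-item-header a")]
print(names)
```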

Collecting tweets

After gathering a list of celebrities, I needed to find them on Twitter and save their handles. Twitter’s API provides a straightforward way to query for users and returns results in JSON, which is easy to parse in a Python script. One wrinkle when dealing with celebrities is that fake accounts often use similar or identical names and can be difficult to detect. Luckily, Twitter includes a handy data field in each user object that indicates whether the account is verified, which I checked before saving the handle.
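Here is a rough sketch of that lookup using the python-twitter wrapper listed under “Tools used” below. The OAuth values are placeholders for your own keys.

```python
import twitter  # the python-twitter package

api = twitter.Api(
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token_key="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

def find_verified_handle(name):
    """Search for a user by name and return the first verified handle."""
    for user in api.GetUsersSearch(term=name):
        if user.verified:
            return user.screen_name
    return None  # no verified account found
```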

Once the celebrity name was associated with a Twitter handle, the next step was to again use Twitter’s API to download the user’s tweets and save them into a database.
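A sketch of this step, assuming the `api` object from the previous snippet and a local MongoDB instance; the database and collection names here are just illustrative.

```python
from pymongo import MongoClient

db = MongoClient()["celebrity_tweets"]  # assumes MongoDB running locally

def save_tweets(handle):
    """Download a user's recent tweets and store them as documents."""
    statuses = api.GetUserTimeline(screen_name=handle, count=200)
    if statuses:
        db.tweets.insert_many([s.AsDict() for s in statuses])
```

Each Status object converts to a plain dictionary via AsDict(), which MongoDB can store as-is with no schema definition up front.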

When gathering data you will often encounter the “Rate limit exceeded” error message. Twitter imposes a limit on the number of API calls a single app can make within a set “window” of time (currently 15 minutes). To work around this, you can either create multiple Twitter Apps and request additional OAuth credentials or set up a cronjob to run your script every 15 minutes. Either way, your collection can run at scheduled times or intervals in the background, leaving you free to perform other tasks.

A few tips for writing cronjob tasks that I found extremely helpful when collecting data:

  • Construct your scripts so they cycle through your API keys to stay within the rate limit (see the sketch after this list).
  • Be sure to catch any exceptions that occur when accessing Twitter’s API and write them to an error log for later review. This lets your scripts run unattended without a single error crashing the entire program.
  • Run your scripts on a remote computer (unless you want to keep your computer on the entire time the scripts are running!).
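To make the first two tips concrete, here is a minimal sketch. The credentials list is a placeholder, and `twitter.TwitterError` is the exception class raised by the python-twitter wrapper.

```python
import itertools
import logging

import twitter

logging.basicConfig(filename="errors.log", level=logging.ERROR)

# One credentials dict per registered Twitter App (values are placeholders).
CREDENTIALS = [
    {"consumer_key": "...", "consumer_secret": "...",
     "access_token_key": "...", "access_token_secret": "..."},
]
apis = itertools.cycle([twitter.Api(**c) for c in CREDENTIALS])

def fetch_timeline(handle):
    """Rotate to the next set of keys; log failures instead of crashing."""
    try:
        return next(apis).GetUserTimeline(screen_name=handle, count=200)
    except twitter.TwitterError as err:
        logging.error("Could not fetch tweets for %s: %s", handle, err)
        return []
```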


Tools used:

  • Twitter API — A Python wrapper for performing API requests such as searching for users and downloading tweets. This library handles all of the OAuth work and API queries for you and exposes the results through a simple Python interface. Be sure to create a Twitter App and get your OAuth keys — you will need them to access Twitter’s API.
  • MongoDB — An open source document database and the go-to “NoSQL” option. It makes working with a database feel like working with JavaScript.
  • PyMongo — A Python wrapper for interfacing with a MongoDB instance. This library lets you connect your Python scripts with your database and read/insert records.
  • Cronjobs — A time-based job scheduler that lets you run scripts at designated times or intervals (e.g., always at 12:01 a.m. or every 15 minutes). An example entry is shown below.
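For example, a crontab entry along these lines (the script path is hypothetical) runs a collection script every 15 minutes:

```
*/15 * * * * /usr/bin/python /home/you/collect_tweets.py
```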


Once the tweets have been successfully stored in your database, you can manipulate the data to fit the needs of your project. For my project, I removed common words and created an index on the text of the collected tweets to perform the similarity comparisons.
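Here is a rough sketch of that cleanup and indexing step with PyMongo. The stop-word list is truncated for illustration, and the `text` and `clean_text` field names are assumptions carried over from the earlier snippets.

```python
import pymongo
from pymongo import MongoClient

db = MongoClient()["celebrity_tweets"]

# A tiny stop-word list for illustration; a real one would be much longer.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it"}

def clean(text):
    """Lower-case a tweet and strip common words before indexing."""
    return " ".join(w for w in text.lower().split() if w not in STOPWORDS)

# Store a cleaned copy of each tweet's text alongside the original.
for doc in db.tweets.find():
    db.tweets.update_one(
        {"_id": doc["_id"]},
        {"$set": {"clean_text": clean(doc.get("text", ""))}},
    )

# A full-text index over the cleaned text supports the similarity lookups.
db.tweets.create_index([("clean_text", pymongo.TEXT)])
```

Note that MongoDB allows only one text index per collection, so any field you want searchable has to be part of that single index.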

Accessing the Firehose

If you’re ready to go beyond the data limits that Twitter imposes for free access, you can upgrade to Twitter’s Firehose, which offers nearly unlimited access to Twitter’s data stream through one of the data providers Twitter partners with: Dataminr (CNN recently partnered with Dataminr to build an application that alerts journalists in newsrooms to breaking news and emerging trends), DataSift, Gnip, Lithium, and Topsy.

What now?

While the number of projects you could build using Twitter data is close to infinite, there are a few cool and fun civic-minded projects already out there. NoHomophobes.com gives you a glimpse of how prevalent homophobic speech is on Twitter. Closer to home, Knight Lab has developed a number of different projects using the tools above: twXplorer, BookRx, and NeighborhoodBuzz, to name a few. While the scope of these projects ranges from text aggregation to recommendation engines to sentiment analysis, they all leverage open source tools to access Twitter data and build applications on top of it.

About the author

Allen Zeng

Undergraduate Fellow
