Paying dinosaurs: Lessons learned from many hacky deployments with Heroku

As a hobby developer and computer science student, I find myself using Heroku to release many of my projects. Heroku is a cloud Platform as a Service (PaaS) business that provides hosting infrastructure for developers, and I have recently been taking advantage of their Free and Hobby plans.

While Heroku offers a simple, cheap solution for developers, it’s not perfect. The documentation isn’t always clear, and many small hurdles come up in the deployment process that can be difficult to debug. I have been working through these types of issues for the past few months, so I’ll share some of the tactics that have helped me optimize my Heroku deployments.

This is by no means a comprehensive list. Rather, these are just a few things that I’ve found to be helpful to me as a developing Heroku user.

Working with apps and dynos

Heroku allows developers to have multiple applications under one Heroku account. Heroku enforces a limit of five applications per account, unless you verify your account by adding your payment information. Giving Heroku your payment information doesn’t mean they will begin to charge you immediately: the need to pay only arises when your application begins to see frequent usage.

So how is usage measured? Heroku uses dynos (which can be defined in the Procfile) to run user-specified processes (for example, the command node app.js to start your Node.js application or python app.py to start your Flask application). When your application hasn’t been accessed for a while, Heroku deactivates the dyno and restarts it when it’s needed again. Therefore, the usage of your application can be measured by the amount of time that your dyno is active.
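For example, a minimal Procfile for the Node.js app above would declare a single web dyno running that command:

```
web: node app.js
```

The Flask equivalent would be web: python app.py. Each line maps a process type (here, web) to the command the dyno should run.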

Here’s the mental metaphor that I use to remember it all: A dyno is actually a dinosaur, and as a developer you can enlist these dinosaurs to help you out by specifying processes for them to follow. If you don’t pay the dinosaurs, then they’ll work for you at most eighteen hours in a given day, and they’ll take naps after 30 minutes of being idle. If they’re working hard for all eighteen hours, then they’ll need six full hours of sleep. Luckily, if you pay the dinosaurs then they’ll be willing to stay up around the clock.

I’ve found that it’s been especially important to understand the details of dyno sleeping when using APIs that are deployed with Heroku’s free plan. Even though the hobby APIs I have worked on do not experience much traffic, requests that I send to them might timeout if the application is asleep when the request is sent. I’ve tried using code to ping the servers in order to wake up dynos before sending requests, but I have found that in those situations it may be better to use a paid Heroku plan or a different PaaS.
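Here’s a minimal sketch of the wake-up-ping idea, using only the standard library. The URL, retry counts, and delays are placeholders, and the fetch function is injectable so the retry logic can be exercised without a live dyno:

```python
import time
import urllib.request
import urllib.error

def wake_and_fetch(url, fetch=None, attempts=3, timeout=10, backoff=5):
    """Request `url`, retrying if a sleeping dyno hasn't booted yet.

    A free dyno that has gone to sleep can take several seconds to wake,
    so the first request may time out. `fetch` is injectable for testing;
    by default it performs a real HTTP GET and returns the body as bytes.
    """
    if fetch is None:
        def fetch(u):
            with urllib.request.urlopen(u, timeout=timeout) as resp:
                return resp.read()
    last_error = None
    for _ in range(attempts):
        try:
            return fetch(url)
        except (urllib.error.URLError, TimeoutError, OSError) as err:
            last_error = err
            time.sleep(backoff)  # give the dyno a moment to finish booting
    raise last_error
```

Even with a helper like this, the first visitor after an idle period still eats the cold-start delay, which is why a paid plan can be the better fix.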

Setting up your application for deployment

Each time you push a commit to Heroku, it attempts to rebuild your application. This means that each project should include a listing of its dependencies so that they can be installed during your Heroku build process. For example, a Node.js project should include a package.json file and a Flask application should include a requirements.txt file. In Node.js, you can use npm install --save to automatically add a module to the dependency listing in your package.json file. With Python-based projects, one easy way to create your requirements.txt file is a command-line tool called pipreqs.
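The dependency workflow for both stacks looks roughly like this (the package name and project path are placeholders, and pipreqs itself is installed with pip):

```
# Node.js: install a module and record it in package.json
npm install --save express

# Python: generate requirements.txt from the imports in your project
pip install pipreqs
pipreqs /path/to/project
```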

It is also helpful to explicitly set the buildpack for your application to ensure that the correct framework is being used to build and run your application. Choose the correct buildpack and then run heroku buildpacks:set followed by the buildpack name (for example, heroku/nodejs). In some cases, it may also be helpful to change the timezone for your application, especially when you’re working with time-sensitive code such as job schedulers. You can select the right timezone and then run heroku config:set TZ="Continent/City".

Two cool things to note: You can get a shell on your Heroku server by running heroku run bash, which starts a one-off dyno. Also, you don't have to commit a new change to make your server rebuild. Heroku will rebuild your application when you push an empty commit as well, so you can run git commit --allow-empty -m "empty commit" and then push it to Heroku with git push heroku master.

Checking the logs

The first step to debugging any Heroku issue is checking the Heroku logs. After installing the Heroku Command Line Interface, you can open the directory of your Heroku project in the Terminal and type heroku logs. This command displays the log statements printed by your Heroku server. These logs will show the printed output coming from your application, so while you’re debugging it may be helpful to add some thoughtfully placed print statements to your code to ensure that things are working as expected. Heroku will also print the details of any errors that arise when it tries to run your application, which are helpful to review.
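A few variations of the command I reach for while debugging (run from the project directory):

```
# Show recent log lines from your app
heroku logs

# Stream logs continuously while you reproduce the problem
heroku logs --tail

# Show a larger window of recent lines
heroku logs -n 200
```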

So far, that’s what I’ve learned. Heroku is a really extensive platform and it can be used for many things. Please share your thoughts and advice in the comments!

About the author

Bomani McClendon

Student Fellow
