MozFest 2013: If it ain't broke, break it — how and why to test your news site

Moments after the Boston Marathon bombings, the Boston Globe's website went down under the crush of traffic. And it stayed down. For hours. Suddenly, the state's most prominent news provider was no longer an information resource for arguably its most newsworthy event in years.

As Dan Sinker, head of the Knight-Mozilla OpenNews project and a speaker at this year's MozFest, so eloquently put it: this is a really stupid problem to have.

Sinker teamed up with Dylan Richard, former director of engineering for President Barack Obama's Obama for America campaign, to discuss how and why a news website's servers go down (otherwise known as failure), what we can learn when they do, and potential solutions for dealing with the issue.

To start, the best way to avoid failure is to prepare for it. That means deliberately building the ability to turn things off into an application, because seeing how things break, and what failure actually looks like (think: the dreaded 404 error message), lets the site's backend team amass a playbook for handling unexpected issues. Richard calls these exercises "game days."

It's collective muscle memory on how to fix things.

Richard explained that he and his team ran these game days by shutting off databases at random, one after another, for 12 hours straight. The exercise tests the system's resiliency, which, while time consuming, is necessary to guard against problems outside the team's control, such as a hacking attack or a natural disaster.
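
To make the idea concrete, here is a minimal sketch of what that kind of exercise could look like in code. Everything in it (the dependency names, the intervals, the in-memory toggles) is my own illustration, not Richard's actual setup:

```python
import random
import time

# Hypothetical dependencies a game day might target. A real exercise would
# toggle actual database replicas or services, not an in-memory dict.
dependencies = {"primary_db": True, "replica_db": True, "search_index": True}

def run_game_day(rounds=3, outage_seconds=5):
    """Knock out one dependency at random per round, then restore it."""
    for _ in range(rounds):
        victim = random.choice(list(dependencies))
        dependencies[victim] = False
        print(f"GAME DAY: {victim} is down -- watch how the site copes")
        time.sleep(outage_seconds)  # Richard's team ran outages for hours, not seconds
        dependencies[victim] = True
        print(f"GAME DAY: {victim} restored")

if __name__ == "__main__":
    run_game_day()
```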

The two then posed three questions to the room: What are the specific ways a site can simulate failure? What does that failure look like? And what can be learned from such incidents?

First, how do you simulate failure? My initial thought was cutting off email and phone communication, but suggestions from the group included intentionally slowing down or shutting off the site's database, turning off its cache and simulating a surge of traffic.
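
Simulating traffic is the piece a newsroom can try most safely on its own. Here is a rough sketch of a burst test against a staging copy of a site; the URL and the request counts are placeholders of mine, not figures from the session:

```python
import concurrent.futures
import urllib.request

# Placeholder: point this at a staging copy of your site, never at
# production you can't afford to break.
TEST_URL = "http://localhost:8000/"

def hit(url):
    """Make one request and report the status code or the failure."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except Exception as exc:
        return f"failed: {exc}"

def simulate_traffic(url=TEST_URL, requests=200, workers=20):
    """Fire a burst of concurrent requests and tally the outcomes."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(hit, [url] * requests))
    for outcome in sorted(set(results), key=str):
        print(outcome, results.count(outcome))

if __name__ == "__main__":
    simulate_traffic()
```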

Second, what does failure look like? In many cases, missing widgets or other integral portions of the page, or the aforementioned 404. While a larger publication like the Globe isn't likely to lose readers over a broken server here or there, I imagine the consequences for a smaller, less established site could be far more significant.

One person in the workshop recommended rectifying the situation with a "helpful" error message that explicitly explains to users why they can't view the content, or, better yet, placing the actual article in plain text on the error page itself.
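
As an illustration of that suggestion, here is a toy sketch of a page handler that falls back to a cached plain-text copy of the article when the database is unreachable. The cache, the route and the error wording are all invented for the example:

```python
from wsgiref.simple_server import make_server

# Toy stand-in for a cached plain-text copy of each article; a real site
# might read these from disk, a CDN or a pre-rendered static snapshot.
PLAINTEXT_CACHE = {
    "/marathon-live": "Plain-text copy of the live coverage goes here...",
}

def fetch_from_database(path):
    # Simulate the bad day: the database is unreachable.
    raise ConnectionError("database unreachable")

def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    try:
        body, status = fetch_from_database(path), "200 OK"
    except ConnectionError:
        cached = PLAINTEXT_CACHE.get(path)
        if cached is not None:
            body, status = cached, "200 OK"  # degraded, but still readable
        else:
            body = ("We're having server trouble and can't load this page. "
                    "Follow our social accounts for updates.")
            status = "503 Service Unavailable"
    start_response(status, [("Content-Type", "text/plain; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("localhost", 8000, application).serve_forever()
```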

The benefits of running game days are plentiful. A company can define how it communicates with users in worst-case scenarios (one solution: point them to another resource it controls, such as a Tumblr blog), gauge the quality of its documentation, learn who its core users are (i.e. who's complaining the loudest) and, perhaps most important of all, practice failing gracefully.

As someone who may be (re)entering a newsroom environment in the near future, I feel these are the types of discussions that can help improve communication between companies' often segregated tech and reporting teams.

With some knowledge of how user traffic and failure scenarios work, journalists can stay informed, which beats the more typical alternative: an exasperated plea to "fix it" without understanding the process behind the problem.
