
Automated Testing at Postmark

With Postmark, a day doesn’t go by that I don’t think about how simple it is to use and how quickly you can start sending emails.

Simple, easy, fast - three words that sum up our top priorities. Achieving all three, especially sending emails fast, puts pressure on a fourth: reliability. From day one, our goal has been to deliver your email reliably. Losing emails, or emails that never reach the inbox, has never been an option.

We have been working very hard to maintain this. My main goal today is to share with you what I do, as a tester, to make sure Postmark is doing its job.

The tools we use #

Before I go into further detail, let me share with you which tools we use to test Postmark:

  • Selenium (Selenium Grid, Selenium WebDriver + RSpec)
  • Jenkins
  • Statsd/Librato
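
To give a sense of how these pieces fit together, here is a minimal Gemfile sketch for a suite like ours. This is a sketch under assumptions: these are the standard gem names, and statsd-ruby is just one of several statsd client gems you could pick.

    # Gemfile: minimal dependencies for a Selenium + RSpec suite
    source 'https://rubygems.org'

    gem 'rspec'               # the test framework the specs are written in
    gem 'selenium-webdriver'  # drives real browsers, locally or through Selenium Grid
    gem 'statsd-ruby'         # pushes timing metrics to statsd (and on to Librato)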

Writing the first tests #

I have been writing automated tests for Postmark for a while now. The first tests I wrote are the ones which we run most frequently.

When creating the test suite, I decided to work first on the most important test cases, the ones from which the team and I would benefit the most.

For Postmark, this was a no-brainer. The most important thing to test is sending. First I wrote the tests covering email sent via SMTP and via the API. These tests send emails and check whether they reach the inbox (or not).
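
To make that concrete, here is a stripped-down sketch of what such a test can look like. The SMTP endpoint is Postmark's documented one, but the addresses, the credentials, and the wait_for_inbox polling helper are hypothetical placeholders:

    require 'net/smtp'
    require 'rspec'

    RSpec.describe 'Sending email via SMTP' do
      it 'delivers a test message to the inbox' do
        subject_line = "smoke-test-#{Time.now.to_i}"
        message = <<~MAIL
          From: sender@example.com
          To: inbox@example.com
          Subject: #{subject_line}

          Automated sending test.
        MAIL

        # smtp.postmarkapp.com is Postmark's SMTP endpoint; the credentials are placeholders
        Net::SMTP.start('smtp.postmarkapp.com', 587, 'localhost',
                        ENV['SMTP_USER'], ENV['SMTP_PASS'], :plain) do |smtp|
          smtp.send_message(message, 'sender@example.com', 'inbox@example.com')
        end

        # wait_for_inbox is a hypothetical helper that polls the test inbox
        expect(wait_for_inbox(subject_line, timeout: 120)).to be true
      end
    end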

Postmark's automated testing started with a simple suite.

We track how long it takes to send and receive each email. If there is any slowdown, the tests notify us right away. These sending tests run every 5 minutes.
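
Measuring this is straightforward: take a timestamp before sending, poll the inbox, and record the difference. A rough sketch, reusing the hypothetical wait_for_inbox helper from above (send_test_email and the 60-second threshold are made up for illustration):

    # Measure how long a message takes from send to inbox
    started = Time.now
    send_test_email(subject_line)               # hypothetical wrapper around the SMTP/API send
    wait_for_inbox(subject_line, timeout: 300)  # hypothetical polling helper
    elapsed_ms = ((Time.now - started) * 1000).round

    # Fail loudly on a slowdown so Jenkins flags the run right away
    raise "Delivery took #{elapsed_ms} ms" if elapsed_ms > 60_000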

Extending the test suite #

Once I was sure we had covered all the important scenarios for basic email sending, I started extending the suite with tests for inbound processing, the Bounce API, and the UI.
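
The Bounce API tests, for example, boil down to calling Postmark's REST endpoints and asserting on the response. A minimal sketch follows; /bounces is Postmark's documented endpoint, while the server token is a placeholder:

    require 'net/http'
    require 'json'

    # Ask the Bounce API for recent bounces and check that it answers correctly
    uri = URI('https://api.postmarkapp.com/bounces?count=10&offset=0')
    request = Net::HTTP::Get.new(uri)
    request['Accept'] = 'application/json'
    request['X-Postmark-Server-Token'] = ENV['POSTMARK_SERVER_TOKEN'] # placeholder token

    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end

    raise "Bounce API returned #{response.code}" unless response.code == '200'
    puts "Total bounces on record: #{JSON.parse(response.body)['TotalCount']}"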

Our simple suite grew into an extended suite of automated tests.

Running the tests #

As soon as I wrote the first tests, I wanted a way to run them as frequently as possible. One of the options we considered was SauceLabs, a great service for running your tests in the cloud, with rich reports and the option to run them on most of the browsers and platforms you can think of.

The only problem was that SauceLabs would not be cost-effective for us, since parts of the suite have to run frequently on one of our own machines. It made the most sense to have a dedicated machine for testing.

Russ, our sysadmin, set up a dedicated Linux machine on which we installed Jenkins to run the Selenium tests. Jenkins is a great CI tool, widely used by software testers these days. We decided to use Jenkins on this machine solely for my tests.
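
Jenkins just needs a command to execute, so each job drives the suite through a task. A minimal Rakefile sketch, assuming the specs are grouped into directories by area (the task names and paths are made up for illustration):

    require 'rspec/core/rake_task'

    # One Rake task per Jenkins job, each pointing at its own group of specs
    RSpec::Core::RakeTask.new(:sending) do |t|
      t.pattern = 'spec/sending/**/*_spec.rb'   # assumed directory layout
    end

    RSpec::Core::RakeTask.new(:ui) do |t|
      t.pattern = 'spec/ui/**/*_spec.rb'
    end

A Jenkins job then simply runs rake sending (or rake ui) on its own schedule.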

Execute tests as frequently as possible #

The main idea was to run the tests as frequently as possible, so we run the Selenium tests throughout the day. This gives us much more feedback about Postmark than running the tests only when changes are introduced.

Why do we run the tests all the time? Our developers have built a lot of tools that monitor Postmark’s health around the clock, but it’s always better to have two sets of eyes looking at our web application, and Selenium tests act like real users more than any other part of our test suite.

We can sleep soundly knowing that emails are being sent, activity pages are working, and our customers can use all of the features in the UI. As a bonus, the Selenium tests generate useful data in the test account we are using, which can help in manual testing.

To speed up our tests, and to keep the top-priority tests running continuously, we use Selenium Grid and Jenkins nodes to run tests in parallel. The sending tests run on separate Jenkins nodes, and the UI tests run on separate Selenium ports.
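
Pointing a test at the Grid rather than a local browser is a small change in the WebDriver setup. A sketch, assuming a hub listening on the default port 4444 (newer versions of the selenium-webdriver gem take capabilities: instead of desired_capabilities:):

    require 'selenium-webdriver'

    # Connect to the Selenium Grid hub instead of launching a local browser;
    # the hub hands the session to whichever node is free
    driver = Selenium::WebDriver.for(
      :remote,
      url: 'http://localhost:4444/wd/hub',
      desired_capabilities: :firefox
    )

    driver.navigate.to 'https://postmarkapp.com'
    puts driver.title
    driver.quit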

Postmark's automated build queue.

Statistics #

Since we run the tests all the time, we can track much more information about them than simply whether they pass or fail.

Postmark's test statistics are collected and displayed in Librato for the entire team to monitor.

My tests check sending, search, activity, the Bounce API, and inbound processing. So why not collect statistics for all of these tests?

Thanks to Chris, I found out about Librato, and we decided to integrate the Selenium tests with it. This gives us a complete performance history of search, Bounce API calls, and inbound processing.
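
The integration itself is lightweight: each test pushes its timing to a local statsd daemon, which relays the metrics to Librato. A sketch using the statsd-ruby gem; the metric name and the run_search_test step are made up for illustration:

    require 'statsd'  # provided by the statsd-ruby gem

    statsd = Statsd.new('localhost', 8125)  # local statsd daemon, relaying to Librato

    # Time one operation and report it under an example metric name
    started = Time.now
    run_search_test                          # hypothetical test step
    elapsed_ms = ((Time.now - started) * 1000).round
    statsd.timing('postmark.tests.search', elapsed_ms)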

Summary #

The regression tests work hard every day on our testing machine. They are a third eye, watching over Postmark constantly. They have helped us numerous times to prevent issues, and even to foresee them.

Of course, there is always room for improvement, and I am eager to hear what you think: what process or workflow do you use to test your software?

Igor Balos

QA Boss, big fan of smart, simple, beautiful design