First and foremost, Postmark is an infrastructure product. It’s our job to send your emails and get them to the inbox, with minimal or no downtime, so you can rely on us. Everything else is secondary. This past Saturday, we launched new SMTP servers across the globe, in addition to our Chicago data center, including:
- Sydney, Australia
- Dublin, Ireland
- San Jose, California
- Ashburn, Virginia
Focused on lower latency and more redundancy
When your app talks to Postmark, latency can cause problems for both background processes and your customer experience. For instance, imagine that your app is hosted in Australia (many of our customers’ apps are). When your server connects to our SMTP servers in the US, the exchange can take around three seconds due to the long round trip and the chattiness of the SMTP protocol. Three seconds is a lot, especially if email is not being sent in the background.
The only way to solve this is to connect to a server that is closest to your application server. In the example above, if the app was connecting to a server in Australia, the latency would be less than 400ms, including all of the auth and chattiness of SMTP.
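To see why proximity matters so much for SMTP, it helps to model the session as a series of round trips. This is a rough back-of-the-envelope sketch, not Postmark's measured numbers — the round-trip count is an illustrative assumption:

```python
# Rough model: SMTP submission time ≈ RTT × number of protocol round-trips.
# The count below (banner, EHLO, STARTTLS, TLS handshake, AUTH, MAIL,
# RCPT, DATA, etc.) is an illustrative estimate, not a measured figure.
ROUND_TRIPS = 10

def smtp_time_ms(rtt_ms, round_trips=ROUND_TRIPS):
    """Estimated wall-clock time to submit a message over SMTP."""
    return rtt_ms * round_trips

# Sydney -> US: a ~300 ms round trip makes the chatty handshake add up.
print(smtp_time_ms(300))  # → 3000 (about 3 seconds)
# Sydney -> Sydney: a ~40 ms round trip keeps the whole exchange fast.
print(smtp_time_ms(40))   # → 400 (under half a second)
```

The point of the model: latency multiplies by every exchange in the protocol, so shaving the round trip is worth far more than tuning any single step.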
As of this weekend, when your application connects to our SMTP servers, it automatically hits the server closest to you. If your servers are on the west coast, such as in California, you will hit our SMTP servers in San Jose. If they are in Europe, they will hit our SMTP servers in Ireland. The same goes for the midwest US, east coast US, and Asia-Pacific.
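Conceptually, the "closest server wins" decision looks like the sketch below. The region names and mapping are hypothetical and purely for illustration — in production this choice is made by DNS before your app ever opens a connection:

```python
# Hypothetical sketch of closest-server routing. The real decision is made
# by geo/latency-aware DNS, not by application code like this.
NEAREST_POP = {
    "us-west": "San Jose, California",
    "us-east": "Ashburn, Virginia",
    "us-central": "Chicago, Illinois",
    "europe": "Dublin, Ireland",
    "apac": "Sydney, Australia",
}

def nearest_smtp_pop(client_region):
    # Unknown regions fall back to the original US data center.
    return NEAREST_POP.get(client_region, "Chicago, Illinois")

print(nearest_smtp_pop("europe"))   # → Dublin, Ireland
print(nearest_smtp_pop("unknown"))  # → Chicago, Illinois
```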
Even better, having so many servers distributed globally gives us more redundancy. For instance, if our servers in Australia go down, traffic automatically fails over to our US servers. By having servers in every region, we can balance the load and see which regions hit us hardest. Not surprisingly, our heaviest traffic comes from apps hosted in AWS East. That’s actually great, since the new SMTP servers we host are also in AWS, meaning even lower latency.
How did we do it?
It’s pure DNS magic. Our DNS will detect the source of your app servers and automatically route it to the closest region. In the event of a failure in a region, it will failover to the US to continue accepting emails. We were able to get all of this done in about 6 days of work by using AWS and Chef to quickly spin up new instances.
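The failover behavior can be sketched as walking a preference-ordered list of endpoints and taking the first healthy one. The endpoint names below are invented for illustration — in reality DNS health checks make this decision server-side, not your app:

```python
def first_healthy(preferred, healthy):
    """Return the first healthy endpoint, in preference order."""
    for endpoint in preferred:
        if endpoint in healthy:
            return endpoint
    raise RuntimeError("no healthy endpoints available")

# An Australian app prefers its regional server, then falls back to the US.
prefs = ["sydney", "us-chicago"]
print(first_healthy(prefs, {"sydney", "us-chicago"}))  # → sydney
# If the Sydney region goes down, mail keeps flowing through the US.
print(first_healthy(prefs, {"us-chicago"}))            # → us-chicago
```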
What about some API love?
Global support for the API is right around the corner as well. SMTP was simply easier to roll out first, but since more of our customers use the API than SMTP, we want to launch global support for the API very soon.
We hope you like it! I’ve been wanting to do this for a long time, and it’s nice that it all came together so fast. Big thanks to Russ and Igor on the Wildbit team for getting all of the servers ready and tested.
This post was originally published Apr 02, 2013