Our technical teams often get asked how they work and what sorts of technologies they are using and integrating – both for Pond and the Managed Network. Here is the first blog from our Dynamic Services team, who are building Pond.

I’m Barry Dahlberg and I head up a team of three that looks after the back end of Pond. Our main job is to develop and maintain all of the data and heavy processing behind Pond so that the front-end team can focus on presentation, usability and making the site work well on all of your devices. We have big ambitions for Pond, so a key part of our role is thinking about performance, scalability and reliability for the future.

As an example of a process we follow, I’ll focus on what happens when a user adds a new item to the Pond catalogue.

First of all, the system runs the necessary checks to make sure that what the user is trying to do is allowed and valid. We check that the user is logged in, has the appropriate permission to add to the catalogue and that the item being submitted has all the required pieces like a title, educational suitability and so on. If everything looks OK we’ll save the item into our database and tell the user that all is well. There’s loads more we have to do before the item can be shown in the catalogue, but we don’t want to hold the user up any longer than we have to, so after saving the item to the database we add the remaining pieces of work to a queue and let the user continue using Pond. This all happens inside our API (Application Programming Interface) layer web server, which is built on Microsoft’s ASP.NET Web API.
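
To give a feel for the shape of that code, here’s a heavily simplified sketch of what a Web API action along those lines might look like. The class and member names (ItemsController, ICatalogueRepository, IWorkQueue and so on) are invented for the example rather than lifted from Pond itself.

```csharp
using System.ComponentModel.DataAnnotations;
using System.Net;
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical supporting types, just so the sketch hangs together.
public class CatalogueItem { public int Id { get; set; } public string Title { get; set; } }
public class NewItemRequest
{
    [Required] public string Title { get; set; }  // required fields drive ModelState validation
    public CatalogueItem ToItem() { return new CatalogueItem { Title = Title }; }
}
public class ItemAddedMessage { public int ItemId { get; set; } }
public interface ICatalogueRepository { Task<CatalogueItem> SaveAsync(CatalogueItem item); }
public interface IWorkQueue { Task EnqueueAsync(ItemAddedMessage message); }

public class ItemsController : ApiController
{
    private readonly ICatalogueRepository _catalogue; // wraps our database
    private readonly IWorkQueue _workQueue;           // wraps the work queue (Amazon SQS)

    public ItemsController(ICatalogueRepository catalogue, IWorkQueue workQueue)
    {
        _catalogue = catalogue;
        _workQueue = workQueue;
    }

    [Authorize] // the user must be logged in
    public async Task<IHttpActionResult> Post(NewItemRequest request)
    {
        // Permission and validation checks (title, educational suitability and so on)
        if (!User.IsInRole("CatalogueContributor"))
            return StatusCode(HttpStatusCode.Forbidden);
        if (!ModelState.IsValid)
            return BadRequest(ModelState);

        // Save the item straight away so the user isn't kept waiting
        var item = await _catalogue.SaveAsync(request.ToItem());

        // Everything else (images, emails, search indexing) happens later via the queue
        await _workQueue.EnqueueAsync(new ItemAddedMessage { ItemId = item.Id });

        return Ok(item);
    }
}
```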

Queueing is one of the ways we keep Pond working smoothly and able to scale to a very, very large number of users. We run a number of copies of a queue-watching application, also built in .NET, each constantly monitoring the queues for work to do. If the queues start filling up we can simply (and automatically!) start more copies of the application to handle the load. Amazon’s queuing service, SQS (Simple Queue Service), has proven to be a great service for managing our queues and it gives us lots of options for balancing scale, performance and hosting costs as Pond grows.
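
If you haven’t come across SQS before, the heart of a queue watcher is a surprisingly small loop. Here’s a rough sketch using the AWS SDK for .NET; the queue URL is a placeholder and the handler just writes to the console, so treat it as an illustration of the pattern rather than our actual worker.

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

class QueueWatcher
{
    static async Task Main()
    {
        var sqs = new AmazonSQSClient(); // region and credentials come from the environment
        var queueUrl = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/pond-work"; // placeholder

        while (true)
        {
            // Long-poll for up to 20 seconds so we don't hammer the queue while it's empty
            var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = queueUrl,
                MaxNumberOfMessages = 10,
                WaitTimeSeconds = 20
            });

            foreach (var message in response.Messages)
            {
                await HandleAsync(message.Body); // do the actual piece of work

                // Only delete the message once the work has succeeded
                await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
            }
        }
    }

    static Task HandleAsync(string body)
    {
        Console.WriteLine("Processing: " + body);
        return Task.CompletedTask;
    }
}
```

Because every copy of the watcher runs the same loop independently, scaling out really is just a matter of starting more of them.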

Some examples of the work that gets queued when a user adds an item include importing any image attachments into our image service, sending email notifications and recalculating background information such as the number of comments and the average review score. Perhaps the most important task, though, is adding the item to our search index.
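
One way to picture the messages on that queue is as a little envelope saying which kind of task to run and for which item; the worker then hands each one to the right piece of code. The task names and types below are invented for the example, not Pond’s real message format.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical message envelope - the real task names and fields will differ.
public class WorkMessage
{
    public string TaskType { get; set; } // e.g. "ImportAttachments", "SendNotifications", ...
    public int ItemId { get; set; }
}

public static class WorkDispatcher
{
    public static Task DispatchAsync(WorkMessage message)
    {
        switch (message.TaskType)
        {
            case "ImportAttachments": return ImportAttachmentsAsync(message.ItemId);
            case "SendNotifications": return SendNotificationsAsync(message.ItemId);
            case "RecalculateStats":  return RecalculateStatsAsync(message.ItemId); // comment counts, review averages
            case "IndexForSearch":    return IndexForSearchAsync(message.ItemId);
            default: throw new InvalidOperationException("Unknown task type: " + message.TaskType);
        }
    }

    // Stubs standing in for the real handlers
    static Task ImportAttachmentsAsync(int itemId) { return Task.CompletedTask; }
    static Task SendNotificationsAsync(int itemId) { return Task.CompletedTask; }
    static Task RecalculateStatsAsync(int itemId)  { return Task.CompletedTask; }
    static Task IndexForSearchAsync(int itemId)    { return Task.CompletedTask; }
}
```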

When you browse the Pond catalogue you are actually looking through our search index, which is stored in Amazon’s CloudSearch service. By taking information about the item and reformatting it appropriately for CloudSearch, Pond benefits not only from powerful search capabilities but also from keeping search queries away from our core database, which keeps everything nice and fast! We expect search to be one of the more challenging workloads for Pond in future, so we like having it isolated somewhere that we know can scale easily and doesn’t require any hands-on maintenance. We can update or rebuild the entire search index from the data in our database, and we quite often do this to adjust indexing for new search functionality, all while users are actively using Pond!
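
For the curious, “reformatting it appropriately” boils down to turning the item into a small JSON batch of add operations and posting it to the CloudSearch domain’s document endpoint. The sketch below shows that idea; the endpoint, field names and use of a plain HttpClient are illustrative assumptions (it also assumes the domain’s access policy lets the caller upload documents), not a copy of Pond’s indexing code.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

class SearchIndexer
{
    // Placeholder - every CloudSearch domain gets its own "doc" endpoint.
    const string DocEndpoint =
        "https://doc-example-domain.ap-southeast-2.cloudsearch.amazonaws.com/2013-01-01/documents/batch";

    public static async Task IndexItemAsync(int itemId, string title, int commentCount, double averageReview)
    {
        // CloudSearch accepts a JSON array of "add" (and "delete") operations.
        // The field names here are examples, not our real search schema.
        var batch = JsonConvert.SerializeObject(new[]
        {
            new
            {
                type = "add",
                id = "item-" + itemId,
                fields = new { title, comment_count = commentCount, average_review = averageReview }
            }
        });

        using (var http = new HttpClient())
        {
            var response = await http.PostAsync(
                DocEndpoint, new StringContent(batch, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }
}
```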

One of the advantages of having the work broken into small tasks like this is resilience against failure. A part of our system, such as attachment processing, can fail and the rest will carry on working just fine. Users can still save items, and the image processing tasks will simply remain in their queue to be retried and completed once the problem is resolved. It’s this kind of design that takes a little while to think through at first but pays off hugely in the end.
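
In SQS terms that retry behaviour comes almost for free: a message only disappears from the queue when a worker explicitly deletes it, so if processing fails we simply don’t delete it and SQS makes it visible again after the visibility timeout. Here’s a sketch of that pattern, reusing the kind of receive loop shown earlier (the names are again just for illustration):

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

static class ResilientWorker
{
    // Called from the receive loop for each message pulled off the queue.
    public static async Task ProcessAsync(IAmazonSQS sqs, string queueUrl, Message message,
                                          Func<string, Task> handler)
    {
        try
        {
            await handler(message.Body);

            // Success: delete the message so it isn't processed again.
            await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
        }
        catch (Exception ex)
        {
            // Failure: leave the message where it is. SQS will make it visible again
            // after the visibility timeout and another worker will pick it up and retry.
            Console.WriteLine("Task failed, will be retried: " + ex.Message);
        }
    }
}
```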

This might seem like a lot of complicated work just to save a simple item to the catalogue, but the part the user actually waits for takes only a few milliseconds and the rest happens quietly in the background without anyone noticing. The mark of a great piece of software is that it’s invisible!