View from Section – Behind The Scenes

I’m a tennis fan, and going to the US Open has become an annual tradition for me. So I was more than excited when the US Open was among the first few events to get Ticketmaster’s View from Section feature. Our aim, as always, is to provide a great live event experience to fans – letting them check out the view from their seat before they buy tickets is just another part of that mission.

[Image: View from Section screenshot]

For the views themselves, we have excellent partners in io-media, who, with their Virtual Venue™ technology, provide us with 3D renderings of the venue views. What we needed, then, was a glue layer between our front ends and io-media’s views. In this post, I’d like to talk about what went into building that glue layer – a RESTful web service we call the Venue Views Service (VVS).

What is it?

The Venue Views Service has two distinct roles:

  • Ingest views for venue configurations from the provider
  • Lookup and return views for a venue configuration, given a seat location

Both functions have their own performance and traffic characteristics. Ingestion can be (relatively) slow and occurs periodically; view lookups get a lot of traffic and have to be fast – on the order of milliseconds. Front-end clients request views for a venue based on section, row, and seat. However, we may not always have a view for every seat, in which case we fall back to the next best row-level view, and when row-level views are unavailable, we return section-level views. The service also needs to support bulk requests so that clients can retrieve views for multiple locations in a single request.
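
To make the fallback behavior concrete, here is a minimal sketch of the lookup logic, written against a hypothetical ViewRepository abstraction over the data store. The names are illustrative, not the actual VVS code.

```java
import java.util.Optional;

// Hypothetical repository interface standing in for the real VVS data-access layer.
interface ViewRepository {
    Optional<String> findSeatView(String venueConfig, String section, String row, String seat);
    Optional<String> findRowView(String venueConfig, String section, String row);
    Optional<String> findSectionView(String venueConfig, String section);
}

public class ViewLookup {

    private final ViewRepository repository;

    public ViewLookup(ViewRepository repository) {
        this.repository = repository;
    }

    // Return the most granular view available: seat-level if we have one,
    // otherwise fall back to the row-level view, then the section-level view.
    public Optional<String> findBestView(String venueConfig, String section, String row, String seat) {
        Optional<String> view = repository.findSeatView(venueConfig, section, row, seat);
        if (!view.isPresent()) {
            view = repository.findRowView(venueConfig, section, row);
        }
        if (!view.isPresent()) {
            view = repository.findSectionView(venueConfig, section);
        }
        return view;
    }
}
```

A bulk request simply applies the same fallback per requested location and returns one entry per location in the response.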

The Building Blocks

The first decision for us was the choice of database. For each location, the service has to maintain URLs to the views in multiple size profiles and multiple content types. At their most granular, views may be available at the seat level, or as coarse-grained as the section level.

While a traditional RDBMS would work, the nested nature of the data model lends itself more naturally to a NoSQL solution. The absence of transactional requirements made the decision easier in favor of NoSQL – MongoDB in our case. While we evaluated other NoSQL options, our organizational experience with MongoDB weighed significantly in its favor. We chose a read preference of NEAREST, since minimizing read latency was critical, at the expense of potentially reading some stale data. Having the luxury of time with our writes, we chose a write concern of “Acknowledged”.
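
As a rough illustration (the document shape, field names, and connection details below are assumptions for this post, not our production schema), a venue-configuration document nests views by section, row, and seat, and the MongoDB Java driver lets us set the read preference and write concern described above:

```java
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;
import com.mongodb.WriteConcern;

public class MongoSetup {

    // A venue-configuration document might nest views roughly like this
    // (illustrative shape only):
    // { "venueConfigId": "...",
    //   "sections": [ { "name": "101",
    //       "views": { "large": "url", "thumb": "url" },
    //       "rows": [ { "name": "A",
    //           "views": { ... },
    //           "seats": [ { "number": "1", "views": { ... } } ] } ] } ] }

    public static MongoClient client() {
        MongoClientOptions options = MongoClientOptions.builder()
                // Read from the nearest replica set member to minimize latency,
                // accepting that a secondary may occasionally serve stale data.
                .readPreference(ReadPreference.nearest())
                // Writes only need a single acknowledgment; ingestion is not latency-sensitive.
                .writeConcern(WriteConcern.ACKNOWLEDGED)
                .build();
        return new MongoClient(new ServerAddress("localhost"), options);
    }
}
```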

While MongoDB is very fast for reads with the appropriate indexes, our load tests indicated that we needed a distributed cache to further improve performance, particularly for bulk requests. We opted for Hazelcast – our go-to solution when we need a distributed cache. Its near-cache capability lets us avoid even the round trip to the cache server, translating to very fast response times.
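
For illustration only – the map name and tuning values here are assumptions, not our production settings – enabling a near cache on a Hazelcast map looks roughly like this:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class ViewCache {

    public static IMap<String, String> viewsMap() {
        // Near cache keeps recently read entries in the local JVM,
        // so hot lookups skip the network hop to the cache cluster.
        NearCacheConfig nearCache = new NearCacheConfig();
        nearCache.setInMemoryFormat(InMemoryFormat.OBJECT);
        nearCache.setTimeToLiveSeconds(300); // assumed TTL, not a production value

        MapConfig mapConfig = new MapConfig("views");
        mapConfig.setNearCacheConfig(nearCache);

        Config config = new Config();
        config.addMapConfig(mapConfig);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        return hz.getMap("views"); // keyed by location, values are view URLs in this sketch
    }
}
```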

Ingestion jobs can be long-running, particularly for larger venues, and we need to be able to reliably pause, resume, and cancel jobs. Once again with the luxury of prior experience, we chose Spring Batch, which provides us with an easy-to-use framework for batch processing. To prevent ingestion traffic from affecting the performance of the lookup service, we dedicated one class of nodes to ingestion and another to lookups.
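
A skeleton of an ingestion job using Spring Batch’s chunk-oriented processing might look like the following. The job and step names, chunk size, and reader/writer stubs are assumptions for this sketch, not the production configuration.

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class IngestionJobConfig {

    // Placeholder for a record in the provider's view feed.
    public static class ProviderView { }

    @Bean
    public ItemReader<ProviderView> providerViewReader() {
        // Placeholder: the real reader would page through the provider's feed.
        return () -> null; // returning null tells Spring Batch there is no more input
    }

    @Bean
    public ItemWriter<ProviderView> viewWriter() {
        // Placeholder: the real writer would upsert view documents into MongoDB.
        return items -> { };
    }

    @Bean
    public Step ingestStep(StepBuilderFactory steps,
                           ItemReader<ProviderView> reader,
                           ItemWriter<ProviderView> writer) {
        // Chunk-oriented step: read and write views in batches of 100 (assumed size).
        return steps.get("ingestStep")
                .<ProviderView, ProviderView>chunk(100)
                .reader(reader)
                .writer(writer)
                .build();
    }

    @Bean
    public Job ingestViewsJob(JobBuilderFactory jobs, Step ingestStep) {
        // Jobs are restartable by default, so a stopped or failed run can be
        // resumed from the last committed chunk.
        return jobs.get("ingestViewsJob")
                .start(ingestStep)
                .build();
    }
}
```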

[Image: VVS component diagram]

Getting It Done

One of our objectives with this project was to rigorously practice Continuous Delivery and get code into production fast. We invested time up front to make sure we had a robust build pipeline in place. We use Jenkins and Rundeck as our Continuous Integration and Deployment tools.

We use Cucumber BDD for our functional and integration tests and Gatling for our stress tests. Continuously exercising the stress tests as part of the build pipeline helps us identify when code changes have a negative impact on performance. The investment in the build pipeline allows our development team to focus on writing code and tests without having to spend time deploying code to multiple environments. As a result, we are able to deploy our code to production multiple times per sprint, and sometimes as frequently as once per story.
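
To give a flavor of the BDD-style tests, here is a hypothetical set of Cucumber step definitions. The scenario wording is invented for this post, and the steps exercise a simple in-memory stand-in for the fallback behavior rather than the deployed service.

```java
import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import java.util.HashMap;
import java.util.Map;

// Backs a hypothetical scenario such as:
//   Given a section-level view "http://example/101.jpg" exists for section "101"
//   When I request the view for section "101", row "A", seat "1"
//   Then I receive the view "http://example/101.jpg"
public class ViewLookupSteps {

    private final Map<String, String> viewsByLocation = new HashMap<>();
    private String returnedView;

    @Given("a section-level view {string} exists for section {string}")
    public void sectionLevelViewExists(String viewUrl, String section) {
        viewsByLocation.put(section, viewUrl);
    }

    @When("I request the view for section {string}, row {string}, seat {string}")
    public void requestView(String section, String row, String seat) {
        // Fall back from seat to row to section, mirroring the service's behavior.
        returnedView = viewsByLocation.getOrDefault(section + "/" + row + "/" + seat,
                viewsByLocation.getOrDefault(section + "/" + row,
                        viewsByLocation.get(section)));
    }

    @Then("I receive the view {string}")
    public void iReceiveTheView(String expectedUrl) {
        assertEquals(expectedUrl, returnedView);
    }
}
```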

[Image: build pipeline]

What’s Next?

We’re continuing to roll out this feature to other parts of the site, and we’re very excited to launch View from Section on our mobile apps soon. The service itself continues to improve based on what we learn in production. I hope this behind-the-scenes look at the Venue Views Service sheds some light on how services are stood up at Ticketmaster.
