The Tao of Ticketing

Our ticketing Inventory Service here at Ticketmaster searches for and reserves seats in an event. It exists exclusively for inventory management, unlike the classic Ticketmaster “Host”, which handles reporting, user management, event programming, and more in addition to seat selection. The Inventory Service is arguably Ticketmaster’s core: it is the base layer underneath the entire Ticketmaster architecture, and the singular part of our software stack that gives the stack its unique Ticketmaster identity.

It is also a seeming anachronism in an SOA architecture. The Inventory Core (the lowest-level component) is written in C++ with assembly bits for hyper-critical sections. It communicates via Google Protocol Buffers, doesn’t scale out, and can’t be distributed. Why?

Ticketing is a complex and unique computer science challenge, one that requires the equally unusual solution we’ve developed. How do we allocate multiple high-demand, spatially aware, contiguous seats in a fair but ultimately performance-oriented manner?

The Challenge

Seats are physical entities in a row inside a section of a venue (like an arena or an amphitheater). There are a finite number of them in any grouping, and each has a spatial relationship with the seats around it. Ticket buyers nearly always buy seats in multiples, so any search algorithm needs to find a specific number of contiguous seats, not just a series of seats from a generic bucket.

But even then, it’s not that simple. Since most requests are for multiples, a design goal is to eliminate leftover “single” seats whenever possible. That means the search algorithm needs to be aware of all the seats ahead of and behind the current selection candidate to ensure there are no gaps. It must also be aware of the row size, to ensure that no single empty seat is left at the beginning or end of the row.
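To make this concrete, here is a minimal C++ sketch of the no-stranded-singles rule. It is not the actual Core code (which is proprietary and far more compact); the row model and function name are invented purely for illustration.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical row model: true = open, false = sold.
using Row = std::vector<bool>;

// Returns the starting index of the first block of `want` contiguous open
// seats whose placement strands no single seat, or -1 if none exists.
// A run of exactly want+1 open seats is skipped: taking `want` of them
// would always leave one lone seat against the row edge or a sold block.
int findContiguous(const Row& row, std::size_t want) {
    const std::size_t n = row.size();
    std::size_t runStart = 0;
    while (runStart < n) {
        // Skip sold seats to the start of the next open run.
        while (runStart < n && !row[runStart]) ++runStart;
        std::size_t runEnd = runStart;
        while (runEnd < n && row[runEnd]) ++runEnd;  // one past the run
        const std::size_t runLen = runEnd - runStart;
        // Acceptable leftovers beside the block: exactly 0, or 2 or more.
        if (runLen == want || runLen >= want + 2) {
            return static_cast<int>(runStart);
        }
        runStart = runEnd;  // runLen < want or == want+1: keep scanning
    }
    return -1;
}

int main() {
    // Seats: [open open open SOLD open open]
    Row row = {true, true, true, false, true, true};
    std::cout << findContiguous(row, 2) << '\n';  // 4: taking 2 of the
                                                  // first 3 strands seat 2
    std::cout << findContiguous(row, 3) << '\n';  // 0: exact fit, no leftover
}
```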

This is merely the base of the challenge. On top of the seats we add multiple layers that dictate zones (prices), special areas, production holds, or availability only through specific deals… and more. Each attribute applied to a seat raises the complexity of the search by an order of magnitude.
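One plausible way to picture those layers, purely as an illustration: encode each attribute as a flag so a seat can be accepted or rejected with a single mask test. The flag names and encoding below are invented; the Core’s real representation is unpublished.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical per-seat attribute flags (one bit per layer).
enum SeatFlags : std::uint32_t {
    kOpen           = 1u << 0,
    kProductionHold = 1u << 1,  // held back by the production
    kSpecialArea    = 1u << 2,  // e.g. accessible seating
    kPromoOnly      = 1u << 3,  // sellable only through a specific deal
};

struct Seat {
    std::uint32_t flags;
    std::uint8_t  priceZone;  // zone index maps to a price level
};

// A seat is sellable for a query if it is open, sits in the requested
// zone, and carries none of the flags the query must avoid.
bool sellable(const Seat& s, std::uint8_t zone, std::uint32_t avoidMask) {
    return (s.flags & kOpen) && s.priceZone == zone && !(s.flags & avoidMask);
}

int main() {
    std::vector<Seat> row = {
        {kOpen, 1},
        {kOpen | kProductionHold, 1},
        {kOpen | kPromoOnly, 2},
    };
    // A plain public on-sale in zone 1 must avoid holds and promo-only seats.
    const std::uint32_t avoid = kProductionHold | kPromoOnly;
    for (std::size_t i = 0; i < row.size(); ++i)
        std::cout << "seat " << i << ": "
                  << (sellable(row[i], 1, avoid) ? "sellable" : "blocked")
                  << '\n';
}
```

The per-seat test is cheap; the cost explosion comes from how each new attribute multiplies the combinations the contiguity scan must satisfy at once.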

Now combine all this and a bit more, and extend it to work over an entire series. That’s the problem for season ticket holders: it multiplies the complexity of the algorithm over every event in the season rather than just one. One fan may search for seats for an entire 81-game baseball season while another searches for a single game within the same series, making the season-ticket search all the more complicated.
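As a rough sketch of why the series case compounds the cost: a season block must be open in every event at once. Assuming, hypothetically, a bitset of open seats per event for a given row, the cross-event constraint becomes a bitwise AND before the usual contiguity and no-singles scan runs on the result.

```cpp
#include <bitset>
#include <iostream>
#include <vector>

// Invented representation: one availability bitset per event for a row.
constexpr std::size_t kRowSize = 16;
using RowAvail = std::bitset<kRowSize>;  // bit set = seat open

// Seats available for a season ticket are those open in *every* event.
RowAvail seasonAvailability(const std::vector<RowAvail>& events) {
    RowAvail open;
    open.set();  // start fully open, intersect each event
    for (const RowAvail& e : events) open &= e;
    return open;
}

int main() {
    // Three events in a series; seat 5 is sold for the second event only.
    std::vector<RowAvail> events(3);
    events[0].set(); events[1].set(); events[2].set();
    events[1].reset(5);
    std::cout << seasonAvailability(events) << '\n';  // bit 5 cleared
}
```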

Handle all of these requirements and you have a selection algorithm that is fair and makes efficient use of the house. A naive implementation, however, would be incredibly slow.

It’s tempting to think that performance could be improved by scaling out the processing, as is typically done in an SOA architecture. This isn’t feasible, though, because the search algorithm needs to run over the entire event. It is not possible (in general) to restrict searching to a specific segment within the event, since the various physical (section, row, etc.) and conceptual (price levels, etc.) constructs often overlap and span the event. For all practical purposes, this means an event-level lock would be needed for every distributed search, and that erases any efficiencies gained from sharding the event. But read on for why it’s even worse than that.

The Challenge, Amplified

Ticketmaster has another unique challenge: demand for tickets doesn’t flow linearly. Events go on sale at a specific time, and this “on-sale” creates a dramatic spike in demand. Think of it as the difference between a regular retail day and “Black Friday”… only Ticketmaster sees the equivalent of a “Black Friday” during our most popular event on-sales.

From a search perspective, the biggest challenge is that everybody wants the same seats, simultaneously. Ticket buyers not only want multiple seats, they want the best ones as well. That means any “Best Available” search algorithm will be searching the exact same set of seats (section/row) for all concurrent requests. So even if it were possible to shard an event for distributed processing, in practical terms every distributed search would be locking and searching the exact same section.

Effectively distributing processing of an event would require both that the event could be sharded and that requests would be spread evenly throughout it. Neither is true. The ramification is that the search algorithm must run in a single process.

On-sales also require both queuing and prioritizing. The queuing is necessary because our Ticketmaster commerce layer can send far more requests to the Inventory Service than there are available physical ports, a consequence of the Inventory Core’s requirement of being a single process. The prioritization is necessary to prevent large on-sales from overwhelming all of the other requests being sent to the Cores.
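A minimal sketch of the queue-and-prioritize idea, with an entirely invented priority scheme; the real dispatch logic in front of the Cores is certainly more involved.

```cpp
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// Requests are buffered (the Core has far fewer ports than the commerce
// layer has callers) and drained in priority order so one huge on-sale
// cannot starve every other event.
struct Request {
    int priority;        // higher drains first; the scheme here is invented
    long sequence;       // FIFO tiebreaker within a priority level
    std::string eventId;
};

struct ByPriority {
    bool operator()(const Request& a, const Request& b) const {
        if (a.priority != b.priority) return a.priority < b.priority;
        return a.sequence > b.sequence;  // earlier arrivals drain first
    }
};

int main() {
    std::priority_queue<Request, std::vector<Request>, ByPriority> q;
    q.push({1, 0, "on-sale-mega"});
    q.push({5, 1, "small-club-show"});  // boosted so it is not drowned out
    q.push({1, 2, "on-sale-mega"});
    while (!q.empty()) {
        std::cout << "dispatch " << q.top().eventId << '\n';
        q.pop();
    }
}
```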

Finally, the fact that so many requests can arrive at one time further amplifies the need for a very fast search implementation.

Not a Solution

It’s common to compare Ticketmaster with other massive e-commerce sites like Amazon and eBay. After all, all three sell millions of items a year. Is allocating ticketing inventory similar to selling books on Amazon or MacBooks on eBay?

No. Not even close. Selling non-fungible, unique, high-demand products makes ticketing the most complex task in e-commerce. The similarities between Ticketmaster and the others end at the upper-level service layers; the core Ticketmaster problem is truly unique.

In the case of selling retail items (à la Amazon), there need only be a counter on a global bucket. This is similar to how ticketing would work if the only way to buy tickets were General Admission (GA), a much easier proposition. Amazon can also get away with loosely allocating items without actually reserving them: a “Your item is no longer available” message at checkout is acceptable at Amazon but not at all acceptable for Ticketmaster. Seats aren’t fungible the way books are. If a particular seat doesn’t sell, it never will: a book can be reprinted, but once the event (a concert, say) is over, those seats are permanently gone.
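The contrast is easy to show in code. This sketch (with invented types) reduces a GA/retail sale to a counter decrement, while a reserved-seat sale must claim one specific, non-restockable seat.

```cpp
#include <atomic>
#include <iostream>
#include <string>
#include <unordered_set>

// A GA/retail sale: the entire inventory is one number.
std::atomic<int> gaRemaining{50000};

bool sellGA() { return gaRemaining.fetch_sub(1) > 0; }

// A reserved-seat sale: each seat is a unique, non-fungible entity.
struct SeatMap {
    std::unordered_set<std::string> sold;  // "section/row/seat" keys
    // Returns false if this exact seat is already taken; there is no
    // equivalent of restocking, so a failure here is permanent.
    bool reserve(const std::string& seatId) {
        return sold.insert(seatId).second;
    }
};

int main() {
    std::cout << (sellGA() ? "GA sold" : "GA sold out") << '\n';
    SeatMap m;
    std::cout << (m.reserve("101/A/7") ? "reserved 101/A/7" : "taken") << '\n';
    std::cout << (m.reserve("101/A/7") ? "reserved 101/A/7" : "taken") << '\n';
}
```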

In the case of eBay, it sells individual items that have no spatial or conceptual relationship with the items around them. The closest analogy to ticketing would be if each item were a single seat in its own “event”. Again, this is a far simpler problem to solve.

Airline seat selection is likely the most similar external case. Even so, it’s a much simpler one: there are far fewer seats, fewer rows, fewer sections, and fewer restrictions. The lack of any truly similar use cases means that the Inventory Service Team has had to create a home-grown solution as unique as our own complex challenge.

The Solution

Put all that together and it’s clear that the solution needs to be as unique as the challenge. The demands of the algorithm (it must be serialized, cannot be scaled out, and faces very high peak loads) require it to be exceptionally fast, hence the C++/assembly core. The authors of the Core are long-time Ticketmaster Host assembly developers, who are used to maximizing tiny bits of memory and CPU.

Each event is assigned exclusively to one Core, which loads it into memory and performs the search algorithm from there. The result of all this is low-level search times measured in microseconds.

There is a service layer on top of the Core that is more typical of an SOA architecture: a scalable, distributable web-service layer (built on Apache Camel) that adapts to handle incoming synchronous requests as fast as possible and routes them to the proper Core.
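The routing consequence of the one-event-one-Core rule can be sketched simply. This hypothetical router (not the actual Camel routes) needs only a table from event ID to the owning Core’s endpoint.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Because each event lives in exactly one Core, routing reduces to a
// lookup; endpoint strings and names here are invented for illustration.
class CoreRouter {
public:
    void assign(const std::string& eventId, const std::string& coreEndpoint) {
        routes_[eventId] = coreEndpoint;
    }
    // Every request for an event must land on its one owning Core;
    // throws if the event has not been assigned.
    const std::string& route(const std::string& eventId) const {
        return routes_.at(eventId);
    }
private:
    std::unordered_map<std::string, std::string> routes_;
};

int main() {
    CoreRouter router;
    router.assign("event-123", "core-07:9000");
    router.assign("event-456", "core-02:9000");
    std::cout << "event-123 -> " << router.route("event-123") << '\n';
}
```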
