Our journey to shorter release cycles

The team responsible for developing the ticketing websites of Ticketmaster International has been around for many years.

Where we came from

When we started, we had a rather undefined, project-based process: we released a new version of the site whenever a project was ready. Releases were generally many months apart, sometimes more than a year. We were using a waterfall approach.

The business made project requests and handed their requirements over to the product team. They then analysed the project and wrote a large specification that they handed over to the visual design team. When the designs were done, the engineering team analysed the specification and produced a technical design. After that, we broke the project down into smaller work items and worked until we were done. Then the testers got to work and reported bugs for anything that didn't follow the specification. After a few rounds of bug fixing and retesting, the new version was tested once more for regression issues.

Then the new version was handed over to the operations team, which deployed it to a staging environment where the business could see the result. It was common that the end result did not meet the expectations of the stakeholders: all the handovers in the process meant that the original intent was lost.


Where we came from (Image courtesy of iquestgroup.com)

One step at a time

Our first step toward improving this was to bring the testers into the same team as the developers, which meant they could test right after something was done. The next step was to regularly demo what we had done to the product team and the business, so we got feedback earlier if we were not delivering what was asked for.

We started doing sprints, initially three weeks long, with a demo after each sprint. Then we started to bring product managers into our daily meetings to be closer to the development. Then we made the engineering team more involved in defining the work: we broke the project down into user stories that would bring value to the end user, with the development team actively helping the product managers do that breakdown.

We shortened our sprints to two weeks. We also started doing test-driven development, a practice that helps drive good technical design by writing automated tests before the functionality itself is developed. We also wrote higher-level tests that exercised the system from the outside. We decided that a new release would happen every eight weeks.
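
To make the test-driven rhythm concrete, here is a minimal sketch in Python. The basket rule, names, and limit are hypothetical examples for illustration, not our actual ticketing code; in TDD the test is written first and fails until the function exists:

```python
import unittest

# Hypothetical domain rule, for illustration only: a basket may
# hold at most eight tickets per event.
MAX_TICKETS_PER_EVENT = 8

def can_add_tickets(current_count, requested):
    """Return True if the requested tickets still fit in the basket."""
    return current_count + requested <= MAX_TICKETS_PER_EVENT

class CanAddTicketsTest(unittest.TestCase):
    # Written before can_add_tickets existed: it fails ("red") until
    # the function above is implemented ("green").
    def test_allows_purchase_within_limit(self):
        self.assertTrue(can_add_tickets(current_count=2, requested=4))

    def test_rejects_purchase_over_limit(self):
        self.assertFalse(can_add_tickets(current_count=6, requested=3))

if __name__ == "__main__":
    unittest.main()
```

The higher-level tests followed the same spirit, but drove the system from the outside through its external interfaces rather than calling functions directly.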

After doing that for a while, we moved down to a five-week cycle. We still had low coverage of automated tests, so we had to test the product manually for regressions at the end of each release. We got stuck in this state for quite some time but were happy with the way things worked. After all, we had come a long way from how we used to work.

We took further steps to work better together: we moved people to the same office, built tools to improve our deployment efficiency, and generally automated as many manual tasks as possible. Automated test coverage grew over time.

Our most recent step

We wanted to take a further step to shorten our cycle, this time with the goal of releasing to production after every sprint. This had a more profound impact than the changes we had made before: all stakeholders would be affected, since everyone would need to do their part more often. We decided to ask all stakeholders what such a change would mean for them and what changes were needed to enable a two-week release cycle, and then started tackling the issues they identified.

Of course, our main priority in this change is to maintain the high-quality standards we have previously set for ourselves. We measure the rate at which high-priority bugs are reported from production, and how often and for how long business disruptions occur.

One leap of faith we had to take when releasing after every sprint was removing the regression testing step that we used to have at the end of each cycle. That's not to say we don't test for regressions: every story includes manual exploratory testing, and our automated tests also catch regression issues, so they can be fixed within minutes of being introduced. We also need to deploy the new version of the code without disrupting sales. And the local staff in each country who use our product must be able to translate any new text into the local languages before each production deployment.
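
As a rough sketch of what deploying without disrupting sales can look like, here is a blue-green-style switch in Python. The environments, health endpoint, and traffic-switching hook are hypothetical stand-ins, not our actual tooling; the key idea is that live traffic only moves after the new version passes a health check:

```python
import urllib.request

# Hypothetical environments and health endpoint, for illustration only.
ENVIRONMENTS = {"blue": "http://blue.internal:8080",
                "green": "http://green.internal:8080"}

def is_healthy(base_url):
    """Consider an environment healthy if /health returns HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy(idle_env, switch_traffic):
    """Deploy to the idle environment, then flip traffic only if healthy.

    `switch_traffic` is a callable (e.g. a load-balancer API call, assumed
    here) that points live traffic at the given environment.
    """
    # ... install the new version on idle_env here ...
    if not is_healthy(ENVIRONMENTS[idle_env]):
        # Sales are unaffected: live traffic never touched the bad build.
        raise RuntimeError("New version failed health check")
    switch_traffic(idle_env)
```

Because the previous environment stays untouched until the switch, rolling back is just pointing traffic at it again.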

What we learned

We now have a few two-week releases under our belt, and things have gone well. The shorter cycle has uncovered lots of inefficiencies that were hidden before and made everyone rethink their ways of working. Doing something that feels wasteful can be tolerated when you only do it every fifth week, but doing it every other week gets you thinking much harder about ways to improve efficiency. It has also made us take the incremental approach much more seriously.

I’m very proud of what our team has accomplished so far. It has taken hard work and dedication to reach the point where we are now. The next step is to get to the point where we release code as soon as something is done, no matter how small the change.