Taking a Stand Against Overdesign

This post was co-authored by Jean-Philippe Grenier and Sylvain Gilbert. 

Have you ever spent countless hours arguing about how things should be done? Have you ever been stuck in analysis paralysis, going through so many use cases that you couldn’t figure out the architecture? Have you ever seen an architecture built around a particular case that you may eventually want to support?


We have already been told multiple times to “Keep it simple, stupid”, that “We don’t know what the future holds”, that “You ain’t gonna need it” and that “We’ll cross that bridge when we get there”. So how is it possible that we have such a hard time sticking to these principles?

When it comes to software architecture, we tend to forget these principles and want to build software the way we build houses and bridges. We want to build the target architecture from the ground up, right away.

What happens is that people anticipate needs they don’t currently have, needs they may not be ready for, yet they go ahead and build this architecture as a one-off project. The result is an over-designed architecture that exceeds the actual needs, is excessively complex, and in turn leads to lengthy release cycles for no added value.
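As a hypothetical illustration of building for anticipated needs, suppose the only requirement today is exporting reports as CSV. The "future-proof" version below builds an abstract base class and a plugin registry for formats nobody has asked for; the simple version does exactly what is needed and is trivial to refactor later if a second format actually arrives. All names here are made up for the sketch.

```python
import csv
import io


# Over-designed: an abstraction and a registry for exporters
# that do not exist yet.
class Exporter:
    registry = {}

    @classmethod
    def register(cls, fmt):
        def wrap(exporter_cls):
            cls.registry[fmt] = exporter_cls
            return exporter_cls
        return wrap

    def export(self, rows):
        raise NotImplementedError


@Exporter.register("csv")
class CsvExporter(Exporter):
    def export(self, rows):
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        return buf.getvalue()


# Simple version: satisfies today's requirement in five lines.
def export_csv(rows):
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()
```

Both versions produce the same output today; only one of them has to be designed, reviewed, tested, and explained to new hires.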

Overdesign is expensive

Don’t underestimate the cost of overdesigning and future-proofing an architecture. Its impact can be felt in each and every phase of software development, and it can easily cripple your team’s velocity.

Design: Discussions and debates have costs, and the lack of firm requirements will skyrocket those costs. It is almost impossible to agree on a solution without agreeing on the problem.

Implementation: Future-proof code is expensive. Coding, testing, and maintaining future use cases is never free and, when the time comes, chances are your future-proof code will need refactoring anyway.

Testing: Good luck explaining to your quality engineer that the bug they found isn’t important because that particular feature isn’t used yet.

Evolution: The more complex your architecture, the higher the upkeep, and the less time you will have for enhancements.

Onboarding: The learning curve of a complex architecture, complex object models, and complex code has a cost. If your new hire isn’t fully up to speed within a few days, it may not be because they are a slow learner.

Requirements: Pushing your architecture beyond your needs can only lead to misalignment and expose unnecessary complexity to your end users. You want your architecture to reflect your current requirements.

If thinking ahead too much gets us in trouble, what other option do we have left? Have you ever heard of Red, Green, Refactor?
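For readers who haven’t encountered it, Red, Green, Refactor is the TDD cycle: write a failing test (red), write the simplest code that makes it pass (green), then clean the code up with the test as a safety net (refactor). A minimal sketch, with hypothetical names:

```python
# Red: write a failing test first, before any implementation exists.
def test_total_price():
    assert total_price([10, 20], tax_rate=0.1) == 33.0


# Green: the simplest implementation that makes the test pass.
# No speculative parameters, no plugin system for future tax schemes.
def total_price(items, tax_rate):
    return sum(items) * (1 + tax_rate)


# Refactor: with the test green, rename, extract, and simplify
# freely, knowing the behavior is pinned down by the test.
```

The discipline matters more than the example: the test defines the current requirement, and the implementation never runs ahead of it.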

Evolving culture


Image by Henrik Kniberg via Chris Pegg

Our software development culture is largely based on the principles behind TDD: we favor clean code and refactoring, stick to concrete requirements, focus on the task at hand, and don’t implement the end goal right away.

Going for the simple solution is like putting money in the bank. You will save a tremendous amount of time by not taking the overdesign route, you will be delivering value sooner and, as you get meaningful feedback, you will be spending your money more responsibly.

The biggest challenge is developing an agile and iterative mindset. You have to resist the temptation of thinking too far ahead and instead focus on your immediate needs. You also have to break the preconception that it is wrong to throw away what you just did and start over. You have to allow yourself to make changes and break walls if necessary.

Avoiding refactoring is expensive

“Refactoring makes it easier to understand the code – which makes subsequent changes quicker and cheaper. Preparatory refactoring can pay for itself when adding the feature you’re preparing for.” – Martin Fowler

People typically think of refactoring as an admission of failure: they hadn’t thought of all the possible cases ahead of time, they made a mistake. It is quite the opposite, actually, and refactoring should be celebrated; it removes poor code, improves maintainability, and increases quality. Refactoring is the natural evolution of code.

When should you refactor poor code? Always. At every chance you get. As soon as you see poor code, you have to refactor it. If you don’t, you will create technical debt.
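To make "refactoring" concrete, here is a hypothetical before/after of a small, behavior-preserving refactor: duplicated validation logic is extracted into one helper, and the existing tests confirm that nothing observable changed.

```python
# Before: the same validation is duplicated in two places.
def create_user_before(name):
    if name is None or name.strip() == "":
        raise ValueError("name required")
    return {"name": name.strip()}


def rename_user_before(user, name):
    if name is None or name.strip() == "":
        raise ValueError("name required")
    user["name"] = name.strip()
    return user


# After: the duplication is extracted into one helper.
# External behavior is identical; only the structure improved.
def _clean_name(name):
    if name is None or name.strip() == "":
        raise ValueError("name required")
    return name.strip()


def create_user(name):
    return {"name": _clean_name(name)}


def rename_user(user, name):
    user["name"] = _clean_name(name)
    return user
```

Left alone, the duplication is exactly the kind of debt that compounds: the next validation rule has to be added in two places, then three.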

When your debt starts to grow, you will start planning refactoring ahead of time, and this is the first sign that your team hasn’t done enough of it. Eventually your debt will get out of control: you will have to build an even more complex architecture just to replace the old code with new code, or, even worse, rebuild everything from scratch.

The key

“You can’t stay agile without clean code. You can’t have clean code without refactoring. You can’t refactor without good automated tests” – Matt Wynne, founder at Cucumber Ltd

Refactoring wouldn’t be possible without automated tests, and we have a large number of them that keep us true to our functional offering. All of our acceptance criteria are automated with Cucumber, including error-handling and resilience tests. We also have a huge number of unit tests that we built while developing with TDD.

We trust that we have a good safety net in place and we’re confident that refactoring won’t introduce regressions. We feel comfortable refactoring.

Good development practices need to be put in place, and the first rule is that our code must be easily testable. We couldn’t achieve good maintainability without it, and following TDD and the SOLID principles has made that possible.
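One common way to make code "easily testable" in the SOLID sense is dependency injection: the code depends on an abstraction that a test can substitute, instead of reaching for real infrastructure directly. A sketch with hypothetical names, where a time source is injected so tests don’t depend on the actual clock:

```python
import datetime


class SystemClock:
    """Real time source used in production code."""
    def now(self):
        return datetime.datetime.now()


def greeting(clock):
    # The time source is injected (dependency inversion), so a test
    # can substitute a fake clock instead of the real system time.
    return "Good morning" if clock.now().hour < 12 else "Good afternoon"


class FixedClock:
    """Test double that always reports the same hour."""
    def __init__(self, hour):
        self._hour = hour

    def now(self):
        return datetime.datetime(2024, 1, 1, self._hour, 0)
```

Production code calls `greeting(SystemClock())`; a test calls `greeting(FixedClock(9))` and gets a deterministic answer.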

Give power to the people

We see the role of enterprise architecture a bit differently now. Architects get the bigger picture and have a good sense of what the target architecture should be, not what it will be. Development teams use them as a guide when implementing new features to ensure that, as a company, we are all moving in the same direction.

We follow the agile manifesto as much as we can and we favor:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Trust and collaboration are essential values in realizing a successful agile team, so trust your developers, they hold the key to a successful architecture!


One thing we know for certain is that things will change: requirements, priorities, technology, design patterns, people have changed and will continue to change.

Focusing on your current needs will allow you to easily evolve and deliver value much sooner, with the added benefit that you will get continuous feedback to support future decisions. You have to be smart about how you spend your time and we know that thinking too far ahead and taking the overdesign route can be disastrous.

Simpler is better!

6 thoughts on “Taking a Stand Against Overdesign”

  1. I’d like to add a caveat here:

    When should you refactor poor code? Always. At every chance you get. As soon as you see poor code, you have to refactor it. If you don’t, you will create technical debt.

    While great testing leads to great refactoring confidence, even the best tests with 100% coverage can miss the “business knowledge” driving the functionality, especially in old code.

    Refactoring should be done in areas where you are making a required change. If, for instance, I have a program with two separate areas, “WalkTheDog” and “PetTheCat”, and the user wants to add “StopAtFireHydrant” to “WalkTheDog”, I should not go and refactor “PetTheCat”, even if the smell coming from the cat is horrid. Doing so creates a hidden work item which suddenly adds downstream QA and UAT work and compounds risk. Better to create a new work item for refactoring “PetTheCat” and socialize it with your team. Best to achieve user buy-in that the additional work is necessary before starting on the refactoring mission. Be transparent; don’t do work that no one is asking for!
