The Why of TDD

Many people have false starts with test-driven development (TDD). When I discuss it with people, many reveal misunderstandings about the method, in particular the motivations behind it.

I believe if developers have a better understanding of why to use it, they’ll have a better chance of practicing TDD more effectively and get more value out of it.

This post explores the reasons for using TDD and offers some guidance on how to practice it more effectively.

I was invited to speak about this topic at GoSG, a meet-up for Go programming enthusiasts in Singapore. You can find the recording on YouTube.

This post quotes Growing Object-Oriented Software Guided by Tests (GOOS) (written by Steve Freeman and Nat Pryce) extensively as it aligns very closely with how I was taught TDD. I strongly suggest grabbing a copy and giving it a read. Ignore that the examples are in Java, the main lessons from the book are broadly language-agnostic.


If you’re new to software development, there are several knowledge hurdles you need to leap over. The one you’ll be judged on the most is your ability to use the tools in front of you to get stuff working and get it to the customers. How well you do this, and how delightful the result is, can be referred to as external quality.

Once you get proficient at this, the main challenges facing mid-level developers and above are to do with internal quality. This focuses on your ability to work on and design systems so that the cost of change is reasonable. This is important because if a system is useful, it will have to change over its lifetime.

When talking to non-technical folk about this, you can use an analogy of the quality of a factory, and its equipment. If parts of your machinery are unreliable or hard to deal with, the factory will begin to grind to a halt. This is also true of software. If internal quality declines, the ability to affect the external quality of the software will decline too.

There are a lot of software developers out there who can bash out enough code to make useful software. A problem in our industry is that we’re often incentivised and praised for our ability to work through “tickets” quickly and hit (often arbitrary) deadlines. Frequently though, the internal quality gets neglected, and we hear these stories over and over again:

Why is the estimate for changing the system 3 weeks? It should be 3 days!

We need to spend a month re-writing the Foobar module.

I can’t stand to work on this legacy system again. Please let me move to a different project, or I’ll find somewhere else to work.

There are fewer developers who can write software that can live for many years without painful re-writes and staggering maintenance costs. The reason is simple: it’s really hard. It requires effort and study. I’ve been writing software for over 15 years and I still find it hard and feel the need to keep studying and reflecting.

There are developers and stakeholders who have advanced skills in justifying reasons for not caring about internal quality. They’ll say things like:

The only thing that matters is working software

What about time to market?

All worthwhile systems have to be changed (see Lehman’s laws of software evolution). “All that matters is working software” can’t be used as an excuse for poor internal quality today when you will have to change the software’s external behaviour tomorrow. People tend to under-estimate the costs of poor internal quality and over-estimate the cost of keeping internal quality high.

You need to be thinking about the design of your system because the problem you are solving is different to everyone else’s. Domain knowledge (or lack-of) has a giant impact on what designs are suitable, so no single blog post can tell you how to design your system. It’s your responsibility as an engineer.

Many developers work in environments with a real fear around speed of delivery, and you can get tempted into short-term promises of productivity. A new programming language! A shiny new framework! Even folder structures in the Go community seem to be endlessly discussed as some kind of silver bullet for good software. Chasing these things is a distraction and does not equip you to strengthen your skills at writing software with high internal quality; you’re just learning another thing that will be unfashionable in a few years.

This is why I think TDD is such a valuable method. It gives you a structured way to confront and think about the internal quality of your system. TDD is a skill which can work in any context, even on Mars. It is an excellent tool for training yourself and your team to design better software.

TDD can be seen as a developer frivolity that slows teams down. This is true when you are first learning it (like any skill) but with practice and training, you’ll find yourself working at a sustainable pace and consistently delivering value. High internal quality will facilitate improvements to external quality.

Adam Luzsi points out:

… most people who start to learn TDD, are also in the process of learning software design implicitly through TDD. And this causes the feeling of "this slows me down".

High internal quality is not exclusive to practitioners of TDD but if you’re on the fence about TDD I hope this post encourages you to take another look. Maybe at least make you reflect on your own methods for writing great software.

What is TDD?

Test-driven development is conceptually simple to follow:

  1. Write a test for a small, desired behaviour
  2. See it fail
  3. Write just enough code to make it pass
  4. Refactor
  5. Go to 1 for next increment of behaviour
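
As a minimal sketch of one loop of this cycle in Go (the Add function and its expected values are hypothetical examples, not from the post):

```go
package main

import "fmt"

// Step 1: the assertion in main below was written first. Before Add
// existed, the program wouldn't compile - that is the failing test
// (step 2, "red").
// Step 3: write just enough code to make it pass ("green").
func Add(a, b int) int {
	return a + b
}

func main() {
	if got := Add(2, 3); got != 5 {
		panic(fmt.Sprintf("want 5, got %d", got))
	}
	// Step 4: refactor with the safety net of a passing test.
	// Step 5: write the next failing test for the next behaviour.
	fmt.Println("green")
}
```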

Yet many developers appear to diverge considerably from what was described by Kent Beck and others.

This inconsistency makes discussing the merits of TDD difficult, because what someone believes to be a typical TDD process is often a TDD-like process with key parts of it wrong.

Why does this happen? I think tech in general suffers a problem of being extremely faddy and people not really understanding why they are adopting a particular framework or technique. If you don’t understand what TDD is for, you’re unlikely to practice it effectively.

So, why test-driven development?

Here are the typical answers you’ll get:

  • Prevent defects
  • Good test coverage
  • Confidence to refactor

Rightly some will argue:

These benefits are not unique to writing tests first

Why write tests first?

It’s worth exploring the why of TDD in detail as I believe it helps clarify how to use the method effectively. Many people have false starts with TDD.

GOOS picks up on this:

“We’ve seen teams that write tests and code at about the same time where the code is a mess and the tests just raise the cost of maintenance. They’d made a start but hadn’t yet learned that the trick … to let the tests guide development.”

You have to know what behaviour you’re trying to build to write a test

Developers go too fast, diving into code and letting their imaginations run wild.

Jo Crossick on Twitter

When I write tests first, it's because I know that I'm itching to write over-complicated, over-ambitious code that is destined to become a hot mess, and tests is the only thing that can hold me back from creating instant technical debt

A lack of method to the design process can lead to software designed against loose, ill-defined requirements. This design can quickly become a drag on the team’s productivity. It can confuse developers, baking-in to your system assumptions that are untrue and behaviours that aren’t actually needed.

Writing a test first demands you precisely define, and focus on what you’re trying to achieve; this is a key part of the design process

Writing a test first means you have to:

  • remove any ambiguity, meaning you have to really understand the behaviour you need
  • cut down the scope to a small piece of useful functionality, liberating you from having to design the whole system

Rather than having to be somehow clever enough to know all the requirements the system will ever need and committing to a design, you instead design code for behaviour you have precisely defined. You design for one thing at a time, and you do it properly.
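
For example, a first test might pin down one small, unambiguous behaviour before any wider design exists. In this hypothetical sketch, the rule being defined is “orders of $50 or more ship free”; the function name and thresholds are invented for illustration:

```go
package main

import "fmt"

// Rather than designing a whole pricing engine, the first test forced
// us to define one precise behaviour: orders of $50 or more ship free,
// everything else costs a flat 4.99. This is all the design we need
// right now.
func ShippingCost(orderTotal float64) float64 {
	if orderTotal >= 50 {
		return 0
	}
	return 4.99
}

func main() {
	if got := ShippingCost(50); got != 0 {
		panic(fmt.Sprintf("want free shipping at $50, got %.2f", got))
	}
	if got := ShippingCost(49.99); got != 4.99 {
		panic(fmt.Sprintf("want 4.99, got %.2f", got))
	}
	fmt.Println("behaviour precisely defined and passing")
}
```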

It offers a fast and continuous feedback loop on your design

Writing a test gives you a sense of the design of a particular unit at the beginning. Many developers will dive in, creating types and abstractions for hours on end and only when they plug it all together and run their application do they begin to see the flaws in their design. Until code is executed and the behaviours observed, the design has had no real validation.

With TDD you get feedback at the start and as you iterate through more behaviour you will keep getting guidance from your tests. There is no faster way of getting feedback on your design than by declaring what success looks like at the start.

You will have a sense of progress as you take small, validated steps toward your goal. If practiced well, you won’t end up in situations of having to throw hours or days worth of code away.

Whilst not unique to test-first, tests reflect your design back at you. If a test is difficult to read or write, it is telling you something. A key skill for practicing TDD well is to listen to your tests.

It gives you momentum and, by focusing on internal quality, helps you continue to change your software cheaply.

Many developers “spin many plates”, darting from one part of the code to the other, changing different bits of code and in general working in a fairly chaotic way. This way of working makes life difficult and more prone to error and re-work.

Writing a test first acts as a stabiliser. This sets you a clear, short-term objective and lets you focus on doing one thing well, rather than worrying about competing concerns.

The refactoring step improves your chances of maintaining good internal quality, which is essential for a system to be malleable. We’ve all worked on systems where the internal quality is terrible, and we know how frustrating and difficult it can be to make a seemingly simple change.

The iterative nature of TDD means you keep working on achieving one small improvement at a time.

It does not slow you down

GeePaw has an excellent video which dispels the idea that TDD slows you down. In fact, it’s the opposite once you forget the silly idea that software development’s bottleneck is simply typing code. Thinking is the bottleneck; we work in a knowledge-based field, and TDD offers us a way to think with clarity.

It’s true that when you first practice TDD you will go slower than before. In cases where maybe your design skills aren’t as strong as you think, TDD may feel frustrating, but you can use it as a tool to improve your ability to write modular code.

It’s a skill that has to be learned and practiced, but with time you’ll find yourself working more quickly, confidently and with more focus.

The qualities required to make code unit-testable are good qualities in themselves


“…the effort of writing a test first also gives us rapid feedback about the quality of our design ideas—that making code accessible for testing often drives it towards being cleaner and more modular.”

Modular code is:

  • Easy to repurpose and re-use within different contexts
  • Focused
  • Simple to test
  • Small, with clear responsibility
  • Easy to understand

Many developers have a healthy skepticism of premature abstraction, but it can go too far.

It is not good to have the what and the how of code mixed up within a function featuring hundreds of lines of code, with complected concerns. Rich Hickey covers the topic of easiness vs simplicity in his incredible talk Simple Made Easy. Having all your code in one method is “easy”, but it is not simple.

Systems built up of modular, cohesive units are simpler to understand and easier to change.

It’s easier to write good tests beforehand

Writing tests after the code is written is challenging to get right. You can end up with tests that don’t fail when you want them to, have cryptic error messages, or simply don’t have good coverage. You’ll also fall into the trap of testing your design, rather than the behaviour (more on this later).

Writing tests first and seeing them fail helps you validate their usefulness, and helps you focus on testing from the consumer’s point of view rather than that of the person who wrote the code.

It requires you to constantly refactor

A key step, often skipped by novices, is refactoring. It is far easier and more sustainable to do frequent, small refactors around the code you’re actually working on throughout the day, rather than leaving it for “a refactoring/technical-debt sprint”.

Ill-factored code makes the system harder to re-design. Often the act of refactoring reveals designs. As you consolidate code and DRY things up you start to “see” abstractions that can simplify matters.

Digression: Refactoring sprints are an absurd idea and point to some real team dysfunction

  • You want the benefits of increased internal quality now, not in 3 weeks’ time.
  • Everyone knows they are hard to do and frequently fail to give the pay-off you’d hope for.
  • It’s your job to maintain internal quality because that’s what helps you deliver external quality at a reasonable cost. Don’t ask for permission to do your job.

How to use TDD

Work iteratively, in small positive steps

  • Working on a single, small, well-defined thing is simple, and you’re more likely to succeed. You’ll get fast feedback on whether you’re going in the right direction.
  • Working on many, large, loosely defined things at once is difficult, and you’re more likely to make mistakes. The feedback loop is slow.

When practicing TDD you should always be thinking about how to reach the vague end-state with specific, achievable steps. If you’re writing a unit test that you know will take hours of effort to make it pass, you need to rethink your approach.

It can be challenging to break work down into small steps, and it will also challenge your design skills but working this way means you are more likely to succeed, and you’ll reduce waste.

When TDD feels right

  • Constant, small progress throughout the day. Never feels slow, feels deliberate.
  • Frequent commits to source control per hour.
  • Work feels “safe”, less chaotic than maybe you’re used to.
  • Mistakes are rarely large. Happy to revert changes if you realise you’ve gone down a wrong path.

Writing the test first

There’s a difference between knowing to write a test first and doing it in a way that helps you design your software.


“We want each test to be as clear as possible an expression of the behaviour to be performed by the system or object. While writing the test, we ignore the fact that the test won’t run, or even compile, and just concentrate on its text”

Rather than worrying about how the code will work at this point, concentrate on the behaviour, particularly from the user’s point of view. It’s essential you block out implementation detail from your head at this point in the process because you’re just trying to nail down exactly what you want to achieve.

The what and the how are two separate design activities which are too often mixed up, which makes coherent decision-making more difficult than it needs to be.

Focus on being precise and limited in scope; this will help you uncover assumptions and stay on track.

TDD is an iterative approach to development. When you are writing your tests you should be mindful of how to keep the work moving with fast feedback loops and small positive steps.

Start from the top down for new features

TDD is focused on letting you design for the behaviour you precisely need, iteratively. When starting on a new area you need to identify a key, important behaviour and then aggressively cut scope.

From there you want to take a “top down” approach, starting with an acceptance test (AT) that exercises the behaviour from the outside. This will act as a north-star for your efforts. All you should be focused on is making that test pass. This test will likely be failing for a while whilst you develop enough code to make it pass.

Once you have your AT set up you can then break into the TDD process to drive out enough units to make the AT pass. The trick is to not worry too much about design at this point, just get enough code to make the AT pass because at this point you’re still learning and exploring the problem.

Taking this first step is often bigger than you think (setting up web servers, routing, configuration, etc.), which is why keeping the scope of the work small is important. We want to make that first positive step on our blank canvas and have it backed by a passing AT, so that we can then continue to iterate quickly and safely.

As you develop, listen to your tests; they should give you signals to help you push your design in a better direction, anchored to the behaviour rather than your imagination.

Typically, your first “unit” that does the hard work to make the AT pass will grow too big to be comfortable, even for this small amount of behaviour. This is when you can start thinking about how to break the problem down and introduce new collaborators.

This is where test-doubles are very useful because most of the complexity that lives internally within software doesn’t usually reside in implementation detail, but “between” the units and how they interact with each other.

The design “outside the braces” is what often has the biggest impact on internal quality:

func NewStockCheckHandler(
    stockChecker StockChecker,
    productStore ProductStore,
) http.Handler

The implementation detail inside the braces is usually low thrills and simple to test.

We can use test-doubles to explore the interactions between our collaborators with a much tighter feedback loop than if we tried to design and develop units in isolation and then integrate them later.

For further iterations we may not need to write a new AT, we can simply add new behaviours to our existing units at a faster pace than the first one. We’ve made our first step, and we’re confident that we’re not developing them blindly because the units are integrated with the rest of the system.

On “TDD-ing a design”

Too often people assert that TDD is only compatible with a “bottom-up” approach where you imagine a bunch of abstractions up-front and then you “TDD them”.

This is called “TDD-ing a design”. You’re not using TDD as a design method, merely a way of writing tests around some code. This often results in classes or functions being overly developed in isolation for extended periods and then only when you try to integrate them into the system and use them you realise the design is wrong.

This is the point where people declare TDD to be a poor design tool when in practice, they already had a design in their head and just added a very long feedback loop to it, exhaustively testing code and various edge-cases before using it in their actual system.

Remember that TDD demands we start with a test for a desired behaviour, not a design. This is very much “top down” development. When done correctly you stop “over-designing” and you write and design the code you actually need.

By TDD-ing a design you can fall into the trap of tests being tightly coupled to implementation rather than behaviour. This becomes extremely problematic when you want to change your implementation detail and your tests will become a burden. I talk about this in detail in “The Tests Talk”.
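
To illustrate the difference (the TopSellers function and its data are hypothetical): a behavioural test asserts on observable output for a given input, so it survives any rewrite of the internals. An implementation-coupled test would instead assert that particular internal helpers were called, and would break with every redesign:

```go
package main

import (
	"fmt"
	"sort"
)

// TopSellers returns the n product names with the highest sales,
// best first. The test in main is coupled only to this observable
// behaviour; we are free to swap the sorting strategy, cache results,
// or restructure entirely without touching the test.
func TopSellers(sales map[string]int, n int) []string {
	names := make([]string, 0, len(sales))
	for name := range sales {
		names = append(names, name)
	}
	sort.Slice(names, func(i, j int) bool {
		return sales[names[i]] > sales[names[j]]
	})
	if n > len(names) {
		n = len(names)
	}
	return names[:n]
}

func main() {
	got := TopSellers(map[string]int{"apples": 3, "pears": 10}, 1)
	if len(got) != 1 || got[0] != "pears" {
		panic("behavioural test failed")
	}
	fmt.Println("test survives any internal rewrite of TopSellers")
}
```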

Listen to your tests

If you follow the TDD process well you should end up with modular, loosely coupled, cohesive units within your system. This journey is rarely smooth. As requirements and knowledge changes your design will have to evolve with it.

Tests can offer you a focused lens into areas of your system where you can view the impact of your design in isolation, which makes it far easier to appraise. Tests reflect your design back at you.

Here are some common test smells to look out for. Commit them to memory, and you’ll be able to raise some interesting questions about your design to answer.

High number of test-doubles (mocks), complicated setup

This is telling you that the unit under test has to collaborate with many things; this pain in your test world will be multiplied in all the places you want to use this unit.

This pain can sometimes feel hidden in the “real” usages, but becomes apparent when you start to change things and feel the pain of the tight and inappropriate coupling. You should be asking yourself why this unit has to collaborate with so many other things.

It points to leaky abstractions, which could be remedied:

  • Look at the test doubles. Are you having to mock out lots of methods? Perhaps you can consolidate this interaction into another, more cohesive unit.
  • Maybe the unit itself has too many unrelated behaviours. See if you can come up with a more cohesive design.
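
A sketch of the consolidation idea, with all names invented for illustration: instead of a unit that needs several separate notification collaborators mocked, the interaction is gathered into one cohesive Notifier, so the test needs only one simple double:

```go
package main

import "fmt"

// Before: OrderService needed an EmailSender, an SMSSender and a
// PushSender, each mocked separately in every test.
// After: one cohesive collaborator expressing what the service
// actually needs - "notify this user".
type Notifier interface {
	Notify(userID, message string) error
}

// spyNotifier records who was notified, so tests can assert on the
// interaction with a single, simple double.
type spyNotifier struct{ notified []string }

func (s *spyNotifier) Notify(userID, message string) error {
	s.notified = append(s.notified, userID)
	return nil
}

type OrderService struct{ notifier Notifier }

func (o *OrderService) Complete(userID string) {
	o.notifier.Notify(userID, "your order is complete")
}

func main() {
	spy := &spyNotifier{}
	svc := &OrderService{notifier: spy}
	svc.Complete("user-1")
	if len(spy.notified) != 1 || spy.notified[0] != "user-1" {
		panic("expected the user to be notified")
	}
	fmt.Println("one cohesive collaborator, one simple test double")
}
```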

Unclear intent

Does your test really explain the code’s behaviour and why?


A test called testBidAccepted() tells us what it does, but not what it’s for.

If the test is unclear, it could mean the responsibilities of the unit being tested are unclear and warrant a redesign.
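
To make this concrete in Go (the Auction type is a hypothetical sketch, loosely inspired by GOOS’s auction example): assertions and their failure messages can state the rule the test exists to check, not just the mechanism:

```go
package main

import "fmt"

// A toy auction: a bid is only accepted if it beats the current
// highest bid. The assertions below are phrased as the business rule,
// so a failure explains *why* the test exists.
type Auction struct{ highestBid int }

func (a *Auction) Bid(amount int) bool {
	if amount <= a.highestBid {
		return false
	}
	a.highestBid = amount
	return true
}

func main() {
	a := &Auction{}
	// "bid accepted" only says what happened...
	if !a.Bid(10) {
		panic("the first bid should always be accepted")
	}
	// ...whereas this states what the test is for.
	if a.Bid(5) {
		panic("a bid below the current highest bid must be rejected")
	}
	fmt.Println("tests named after the rule, not the mechanism")
}
```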

Wide scope

This points to a lack of focus. Perhaps you need to break the problem down into smaller parts.

Value the quality of your tests

Start being more attentive about the quality of your test code. This will result in you writing tests that are cheaper to maintain, and will challenge you to write more focused, modular code, resulting in a better-designed system.

Writing high-quality tests is cheaper than maintaining poorly written ones

A high-performing team must respond to client needs quickly and cheaply; good tests enable a productive experience. The opposite is also true.

Write the tests you want to see

The easiest, most generic feedback I give to my colleagues is asking them to explain their tests out loud. Often it’s harder than they imagine, even when the tests appear terse; and then an interesting conversation about the design almost always follows.

Reflect on the tests you have now. Do they express the intent of the behaviour well? Do they describe the why clearly? If not, what would it take? Is it a simple refactoring of your tests or does it require some redesign?

Be attentive when you watch the test fail

Be sure it fails the way you expect and that it fails with a clear message. If you skip this step you may be glossing over design decisions, or perhaps you don’t understand the requirements clearly enough.

Usually, making the test fail “nicely” is a multi-step process, but it’s important to go through the steps until you get a satisfactory failure message, caused by your code not yet exhibiting the behaviour you need.

View being in “the red” as something to get out of as soon as possible

You should be running your unit tests extremely frequently, which is why it’s important they run quickly and don’t depend on slow things like the network or file-system.

If at any point your tests are failing (so you’re in the red), you should be thinking about the shortest path to get back to “green”.

Often the safest way to do this is to revert your changes. By working in small increments this should never feel like a big deal.

It is forbidden to refactor or redesign code in the red state because your feedback loop as to whether the changes are valid is broken.

Commit your work frequently

To be able to use your tests to help guide your design they need to be green, but it’s inevitable that sometimes you’ll go down the wrong path and want to get back to safety. It’s therefore critical that you follow good source control practices and commit your work frequently.

I usually commit my work once I’ve made the test pass and that means if I get into a bad state during refactoring I can easily get back to working software again.

So much of applying TDD is working in small, safe steps, so your mind is freer to think about the design of your system rather than juggling multiple concerns at once.

Refactor aggressively when in the green

Most of the time your system should be in a state where you can refactor freely.

Ill-factored code is problematic because it becomes hard to understand, which makes it harder to re-design. The process of refactoring in general will reveal improvements in the design for you.

Read Martin Fowler’s refactoring book and get into the habit of applying common refactors. A lot of code I observe is not well-factored, but the good news is that refactoring is a relatively easy skill to learn, and once you’re proficient it takes a matter of seconds or minutes to perform, particularly with help from your IDE.

Wrapping up

If you don’t practice TDD, what is your method for designing software? You do you! Can you describe your method to someone else, so they can follow a similar approach? It’s healthy to retrospect on how you work, to see if there are ways to improve.

No matter your design method, internal quality is essential for the productivity of your team. So many systems suffer from poor productivity and high costs because developers and teams neglect internal quality. It’s a false economy: ignore internal quality, and the pace at which you can affect the external quality of the system will quickly decrease.

It is of course possible to have software with high internal quality without practicing TDD, but I am not especially clever and want an easy life.

When I design up front:

  • it’s often wrong
  • I have to think about too many things at once
  • I design for things I don’t need

TDD offers a methodical, incremental way of designing code which feels like a simpler, more evidence-based way of creating useful, malleable software.

Stick to the simple TDD process, listen to your tests and practice. This should result in you designing software verified against concrete desired behaviours, rather than your imagination.