Testable Systems and Continuous Deployment
People talk a lot about how to write testable code, but testable systems are what you need if you are going to practice continuous deployment.
Writing testable systems takes practice and can be quite domain specific, but there are some general rules that you can apply.
You may encounter resistance to making this extra effort, so along with some guidance I will discuss how I justify the approach.
Some positive traits of a mature system
- Being able to improve the system without worrying about breaking it.
- Releasing features as soon as they're ready, not tied down to release schedules.
- Fast feedback loops, knowing when there are problems and being able to fix them quickly.
What stops a mature system from being fun to work with
- Lack of confidence in the system, so lots of manual testing.
- This leads to fewer releases.
- Which leads to slower feedback loops, making it harder to fix problems and slowing the delivery of value.
- Lack of confidence results in fear-driven development, where people become incredibly risk averse, taking decisions which are well intentioned but can end up complicating matters.
So what do you need?
- Releasing as often as possible; preferably every commit to master goes to live.
- Absolutely no manual tests, so you have "acceptance tests" running against each deployed component, including live, and some end-to-end tests for key scenarios.
- These tests run periodically, irrespective of releases, so you know your system is working in live (see the sketch below).
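To make that last point concrete, here is a minimal sketch of such a check written as a Go test. The /health route and the BASE_URL environment variable are my own assumptions, so point them at whatever your deployed service actually exposes; the idea is that your pipeline runs this on every deploy and a scheduler runs it again every few minutes against live.

```go
package acceptance

import (
	"net/http"
	"os"
	"testing"
)

// TestServiceIsHealthy is a smoke test that can run both on deploy and on
// a schedule against live. BASE_URL and /health are assumed names.
func TestServiceIsHealthy(t *testing.T) {
	baseURL := os.Getenv("BASE_URL")
	if baseURL == "" {
		t.Skip("BASE_URL not set; skipping live smoke test")
	}

	res, err := http.Get(baseURL + "/health")
	if err != nil {
		t.Fatalf("could not reach service: %v", err)
	}
	defer res.Body.Close()

	if res.StatusCode != http.StatusOK {
		t.Errorf("expected 200 from /health, got %d", res.StatusCode)
	}
}
```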
Writing testable code can take more effort but it's worth it, and the same is true for systems. With some effort you can make sure the project you're working on is pleasant to work with, and you can actually start looking forward to releases rather than worrying about whether things are going to work.
"Just write a ticket and we'll let the product owner prioritise it later"
In a common agile/scrumban development environment, where everything is focused on "business value" and "story points", you must assert that the development team are stakeholders too.
If your system is useful you will continue to work on and maintain it, so it's important to do work and add features which might not directly benefit the customer but which help the development team.
Every job spec in the world talks about developers being empowered and responsible for the systems they create, so make sure you call your company out on it. Fight for this if needs be, otherwise you're going to have a miserable time.
Some rules to live by
When you are working on a new feature or system, ask yourself:
How can I be confident this will work, without having to manually test it in live?
Then figure out the answers. They may not always be as straightforward as you hope, but by putting yourself in this mindset you will write your system in a way that is testable from the start, rather than trying to retrofit testability later.
- Involve your colleagues, in particular QAs, in working out how you can be sure that when a commit is merged it will go to live and reliably deliver high-quality value.
Just as you don't worry about breaking the internals of your system because you have unit tests, you won't worry about deploying new versions of your system because everything is built with testing in mind.
As with the tests within your system, you should strive for your system-level tests to be reliable and fast to run.
- If your tests are proving problematic or flaky, stop working on features until they are sorted.
Over recent years I have been working with this mindset, and I am going to describe a number of techniques to help make your system testable.
Write good, testable, well-separated code
An obvious one, but if you maintain good separation of concerns then it's much easier to "open up" your systems for tests.
Avoid the browser
For a web app you will need some tests in the browser, but keep them to a minimum. Follow Fowler's test pyramid; otherwise you will have a slow, flaky test suite which becomes a big maintenance overhead.
Good code design (separation of concerns especially) will allow you to test a lot of functionality without the need to run a web browser.
A trick I have learned for avoiding web browsers while still testing the content being delivered is to add some functionality to the web app so that if you make a request with an Accept header of application/json, it returns the view model representation as JSON rather than the rendered HTML. This allows you to test with simpler HTTP tools rather than firing up a browser, which is much faster and more reliable.
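Here is a minimal sketch of that trick using Go's standard library; the ProfileViewModel, the template and the /profile route are all invented for illustration.

```go
package main

import (
	"encoding/json"
	"html/template"
	"log"
	"net/http"
	"strings"
)

// ProfileViewModel is a hypothetical view model; yours will differ.
type ProfileViewModel struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

var profileTemplate = template.Must(template.New("profile").Parse(
	`<h1>{{.Name}}</h1><p>{{.Email}}</p>`))

// profileHandler renders HTML by default but returns the raw view model
// as JSON when the Accept header asks for it, so tests can assert on the
// data without driving a browser.
func profileHandler(w http.ResponseWriter, r *http.Request) {
	vm := ProfileViewModel{Name: "Ada", Email: "ada@example.com"} // fetched from your domain layer in reality

	if strings.Contains(r.Header.Get("Accept"), "application/json") {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(vm)
		return
	}

	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	profileTemplate.Execute(w, vm)
}

func main() {
	http.HandleFunc("/profile", profileHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A test can then hit the endpoint with curl -H 'Accept: application/json' http://localhost:8080/profile, or any HTTP client, and assert on the decoded JSON.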
Idempotence
When writing systems, if you see an opportunity for an operation to be idempotent then go for it. It makes writing tests a lot simpler, as you don't have to worry about the previous state of the system, which is often a cause of complication when writing system tests.
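As a toy Go sketch of the difference (the inventory example is invented, not from a real system):

```go
package inventory

// Store is a toy in-memory store used to illustrate the point.
type Store struct {
	stock map[string]int
}

func NewStore() *Store {
	return &Store{stock: map[string]int{}}
}

// SetStock is idempotent: calling it once or ten times with the same
// arguments leaves the system in the same state, so a test never has to
// reset anything or reason about what ran before it.
func (s *Store) SetStock(sku string, quantity int) {
	s.stock[sku] = quantity
}

// AddStock is not idempotent: the result depends on prior calls, so any
// test touching it has to control the system's starting state.
func (s *Store) AddStock(sku string, quantity int) {
	s.stock[sku] += quantity
}
```

Favouring the SetStock style at your system's boundaries means a test can simply state the world it wants, run the scenario and assert, with no clean-up step.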
Open up your system
Your deployed system may get its input from sources that are not as easily accessible as you'd hope, for instance a message queue or a file system on a secured server.
This is harder to test than, say, a system that is interacted with over HTTP, for which there are tons of simple, quick tools.
One option is to create an internal endpoint with an API that allows you to work with the system as if you were the file system, message queue or whatever.
Not only does this make writing tests very easy, but in my experience QAs have found these endpoints invaluable for demos and exploratory testing.
Good separation of concerns should make this relatively simple to accomplish, but you may wish to employ some kind of 'isTest' flag to let you separate this data from normal traffic if you need to.
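Sketching the idea in Go, assuming a hypothetical Process function holding the real business logic and an internal-only port that is not exposed publicly; the details will vary with your infrastructure.

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// Process is the real business logic; both the queue consumer and the
// internal test endpoint feed messages into it.
func Process(payload []byte) error {
	// ... domain logic here
	return nil
}

// internalInjectHandler lets a test (or a QA running a demo) push a
// message into the system over plain HTTP, as if it had arrived on the
// queue. An 'isTest' marker could be added to the payload here if you
// need to separate this data from real traffic downstream.
func internalInjectHandler(w http.ResponseWriter, r *http.Request) {
	payload, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if err := Process(payload); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	// Serve the internal endpoint on a port that only the team and the
	// test infrastructure can reach.
	http.HandleFunc("/internal/inject", internalInjectHandler)
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```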
A more concrete example
Recently I worked on a service that took its input from a message queue in the form of some JSON, performed some business logic on it and then put the result onto another exchange for other parts of the system to consume.
The JSON is a payload which is to be rendered as an HTML email, and the service calls some other services to help it create this payload.
Taking a very pedantic approach to developing this would have resulted in a system that was tricky to test in isolation. Obviously we had some unit tests with fake services wired up, but we still didn't feel especially confident.
Thankfully the domain logic was really nicely separated out, so we were able to expose it over HTTP: you POST a payload to the service and get the rendered HTML back. Coupling this with PhantomJS, we could take screengrabs of the rendered HTML and use image diffs to test for regressions.
The other benefit was that the BA and QA could use this endpoint to try it out with other data points very easily, so they could demo it; and of course they found some bugs.
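A stripped-down sketch of what that endpoint might look like in Go; the EmailPayload fields and the template are invented for illustration, and the real service's rendering logic was of course richer.

```go
package main

import (
	"encoding/json"
	"html/template"
	"log"
	"net/http"
)

// EmailPayload stands in for the JSON that normally arrives on the queue.
type EmailPayload struct {
	Recipient string `json:"recipient"`
	Offer     string `json:"offer"`
}

var emailTemplate = template.Must(template.New("email").Parse(
	`<html><body><p>Dear {{.Recipient}},</p><p>{{.Offer}}</p></body></html>`))

// renderHandler exposes the rendering logic over HTTP: POST the payload,
// get the rendered HTML back. Tests (and PhantomJS, for screengrabs) can
// hit this directly instead of publishing to the queue.
func renderHandler(w http.ResponseWriter, r *http.Request) {
	var payload EmailPayload
	if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	if err := emailTemplate.Execute(w, payload); err != nil {
		log.Println("render failed:", err)
	}
}

func main() {
	http.HandleFunc("/render", renderHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```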
Code just for tests? Wtf
Yes, I've read The Lean Startup too.
Again, developers are stakeholders. No one bats an eye when you write logging or metrics code, probably because it is not "seen" and is perceived as low effort. Likewise, no one complains when you write unit tests (I hope), even though they don't directly aid the customer.
Writing testable systems is often just an extension of this logic: everyone in the business should value the maintainability, extensibility, releasability and reliability of your system, and these kinds of measures facilitate those traits.
Straight to live? But what about bugs?
No process in the world will truly stop all bugs, short of very expensive measures, and 99% of developers don't need to produce bug-free systems (despite what their product owners might like to think!).
Perfect is the enemy of good
However, when you adopt these methods you are more likely to find bugs quickly and, most importantly, to deploy a fix within minutes rather than on some weekly release schedule.
Benefits for the QAs
By automating away the manual testing process, our QA now has more time for exploratory testing, and the testing endpoints we add to our systems help facilitate this.
When it comes to starting a new piece of work, the QA can come armed with a number of suggestions to help ensure the quality of the system, rather than spending their time re-testing software.
I have also found that QAs spend more time thinking about the quality of the test suites, the deployment process and so on, making sure we as a team have quality across the whole system rather than just the end product.
It's 2017: Do continuous deployment (for new projects at least)
When you have your tests in such a state that you are confident to release with every commit, a tremendous burden is lifted from your team, and it also pays off years down the line. How many times have you had to work with a "legacy" system where it's tough to even deploy the code in the first place, and you have to spend hours doing manual tests?
With CD, not only can you ship new changes quickly when they are ready, but a few years down the line, when some poor sap has to work with your no-longer-flavour-of-the-month programming language, they can at least be confident that they can make the necessary changes and get them shipped with little fuss.
If you do encounter resistance then you are not working in an agile organisation. You are supposed to be in a position where you are empowered to improve processes and software to make delivering value as quick and as painless as possible. There is a difference between continuous delivery and continuous deployment; the latter should not even be up for discussion. Deployments are the team's concern; showing features to users is usually the product owner's concern.
But still, if the company you work for throws terms like lean and agile around, it should be in their interest to get features, even imperfect ones, in front of users as quickly as possible.