On Unit Testing for Fast Feedback
In Agile and DevOps we learn the value of feedback, and that feedback must be timely and provide high-quality information. Unit testing is more than developers testing their own code; it is one of those essential feedback loops that teams must set up and leverage.
So how do we create an effective feedback loop from unit testing? It starts with selecting the right technologies and setting up a good CI pipeline, and it also requires the right technical practices and engineering culture.
Technology Selection
It does indeed start with selecting the right technology for the solution you are trying to build. Older programming languages, even some actively in use today, do not fully support automated unit testing. The same is true of modern no-code and low-code solutions. On the other hand, languages like TypeScript, Java, C#, and Python have strong support for automated unit testing.
Lack of support for unit testing is not always a bad thing; there are many cases where such technologies may be the right fit. For example, if we run a restaurant and want to put our menu online for take-out orders, we want a simple solution that makes it easy to get our product out. A simple visual review will be sufficient; nothing needs to be scripted or automated to validate that product.
For anything more complex we must plan for automating our testing. While complex solutions can be implemented using older technology or no-code/low-code solutions, the inability to get quick feedback on changes will lead to more problems later.
So when considering technologies for your solution, make sure the options under consideration provide the level of unit testing support you need.
Continuous Integration
We have selected a technology that supports automated unit testing, so now we need to know when something has broken. We need to establish a continuous integration (CI) pipeline. A well-defined pipeline will automatically build our system as changes are made, and will include execution of those automated unit tests. The CI pipeline must notify the team of any failure, whether in the build or in the tests.
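As one illustration of that feedback, here is a minimal sketch of the test step such a pipeline might run, assuming a Node.js project whose `npm test` script is wired to the unit test runner (the script name and project setup are assumptions, not specifics from this article). A real CI server runs an equivalent command and treats a non-zero exit code as a failed build, which is what drives the failure notification to the team.

```typescript
// Minimal sketch of a CI "run unit tests" step, assuming a Node.js project
// whose `npm test` script invokes the unit test runner (an assumption for
// illustration only).
import { execSync } from "node:child_process";

try {
  // Run the unit test suite; inherit stdio so failures show up in the build log.
  execSync("npm test", { stdio: "inherit" });
  console.log("Unit tests passed - the build can proceed.");
} catch {
  // A non-zero exit code from the test runner marks the build as failed,
  // which is what triggers the pipeline's failure notification to the team.
  console.error("Unit tests failed - failing the build.");
  process.exit(1);
}
```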
This combination of running the pipeline on every change, along with notification of success and failure, provides the basis for a timely feedback loop. Being notified of issues in the build or in the unit tests as they happen keeps the time to discover issues low. But this is only a checkpoint along the road. Better feedback loops, and the key to closing this one, come with the right technical practices and engineering culture.
Technical Practices
A solid continuous integration pipeline is important, but it has to work from a foundation of solid technical practices.
TDD, or test-driven development, drives us to ensure the code is testable and that we have tests verifying the code works properly. These tests are what developers and the CI pipeline execute, and the TDD cycle keeps developers running them frequently.
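To make the cycle concrete, here is a minimal TDD sketch in TypeScript using Jest as the test runner; `priceWithTax` is a hypothetical example function, not something from a specific project. In a TDD flow the test is written first, fails (red), and the implementation is then added to make it pass (green).

```typescript
// Minimal TDD sketch with Jest. The test below is written before the
// implementation and fails until the function is filled in.
// `priceWithTax` is a hypothetical example function.

export function priceWithTax(net: number, taxRate: number): number {
  return net * (1 + taxRate);
}

describe("priceWithTax", () => {
  it("adds the tax rate to the net price", () => {
    // This expectation existed before the implementation did.
    expect(priceWithTax(100, 0.2)).toBeCloseTo(120);
  });
});
```

The same test then runs on every change, both on a developer's machine and in the CI pipeline.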
BDD, or behavior-driven development, is an evolution of TDD focused on the behaviors of the system. This practice introduces the common Given/When/Then construct for our tests and gives the team and product owner a plain-language way of discussing them. This enables the team to produce higher-quality tests and avoid some of the pitfalls of TDD.
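A BDD-flavored test keeps the same mechanics but names the behavior in plain language. Here is a minimal sketch, again with Jest; `ShoppingCart` is a hypothetical class used only to illustrate the Given/When/Then wording, and teams often use tools such as Cucumber to keep these scenarios readable by the whole team.

```typescript
// Minimal BDD-style sketch expressing Given/When/Then inside a Jest test.
// `ShoppingCart` is a hypothetical class for illustration only.

class ShoppingCart {
  private prices: number[] = [];

  add(price: number): void {
    this.prices.push(price);
  }

  total(): number {
    return this.prices.reduce((sum, price) => sum + price, 0);
  }
}

describe("Checking out a shopping cart", () => {
  it("totals the prices of all items in the cart", () => {
    // Given a cart with two items
    const cart = new ShoppingCart();
    cart.add(10);
    cart.add(15);

    // When the total is calculated
    const total = cart.total();

    // Then it is the sum of the item prices
    expect(total).toBe(25);
  });
});
```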
Clean Code, as outlined quite well by Bob Martin, helps to ensure that our code is maintainable, so that when we do find issues or need to make changes, they are easier to implement and our time to fix stays low.
Livable Code, as articulated by Sarah Mei, helps us understand how clean is clean enough, which allows us to maintain a focus on value delivery without getting bogged down in making code pristine.
Good Code is Livable Code that also meets many more of the non-functional requirements (NFRs): it is well designed, easy to read, performant, testable, and, of course, it works reliably.
These practices help to ensure that the feedback loop provides high-quality information and that the code can be corrected quickly when issues are discovered, keeping the time to resolve low.
Software Engineering Culture
A culture founded on software engineering principles is the critical catalyst for an organization to create value from this feedback loop. Tools and practices deployed without the backing of an engineering culture will certainly result in either no benefit or wasted effort.
There are many aspects to an engineering culture, and many approaches to building one. The key aspect here is Collective Ownership - collective ownership of the system, the design, the code, the quality, and even the processes that drive and deliver all of that. This ownership drives the team to ensure quality, to implement and seek improvements in technical practices, and to respond quickly when issues arise or the system breaks. Collective ownership also brings the team together, rallying around each other to build the system and resolve issues.
Meaningful Metrics
We cannot leave this topic without discussing the unavoidable - metrics. Always take a cautious approach with metrics, focusing on those that are actionable for your team. Here are a few noteworthy metrics and thoughts to consider for each.
- Code Coverage - this one is tricky because focusing on it without the right culture and practices can lead to poor and even negative results. A team should strive for a high level of code coverage, but must remember that coverage itself does not measure the outcome you seek. Treat it as a diagnostic metric rather than an actionable one: when your other metrics surface issues, coverage information can help you find areas to improve.
- Mean Time To Discover (MTTD) - in this context referring to the time it takes to discover an issue in the system. Issues cost money in lost business, lost productivity, and potentially more. You want to discover issues quickly, ideally while the team is writing the code. Unit testing feedback loops enable quicker and more efficient discovery. When issues are discovered late, such as in your production system, that suggests gaps in your testing strategy and opportunities to improve automated unit tests.
- Mean Time Between Failures (MTBF) - in this context referring to two types of failures: failures of the system in production and failures of the build to complete successfully. In both cases you want issues to be infrequent; the former directly impacts your users, and the latter slows your ability to deliver new value to them. When issues occur frequently, that suggests opportunities to improve your technical practices or evolve your organization's culture.
- Mean Time to Recover (MTTR) - in this context referring to recovery from the same two types of failures: failures of the system in production and failures of the build to complete successfully. Quick recovery matters in both cases: recovery from the former gets your users back to what they were doing quickly, and recovery from the latter keeps the team's work flowing. When issues take a long time to resolve, that suggests opportunities in all of the topics covered here (a small sketch of the MTBF and MTTR arithmetic follows this list).
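For illustration, here is a minimal sketch of the arithmetic behind MTTR and MTBF; the incident records, field names, and the 30-day observation window are all hypothetical, and the MTBF calculation uses the simple "observed hours divided by number of failures" form.

```typescript
// Minimal sketch of calculating MTTR and MTBF from incident records.
// All data and names here are hypothetical, for illustration only.

interface Incident {
  failedAt: Date;
  recoveredAt: Date;
}

const MS_PER_HOUR = 1000 * 60 * 60;

// MTTR: average downtime per incident, in hours.
function meanTimeToRecover(incidents: Incident[]): number {
  const totalDowntimeMs = incidents.reduce(
    (sum, i) => sum + (i.recoveredAt.getTime() - i.failedAt.getTime()),
    0,
  );
  return totalDowntimeMs / incidents.length / MS_PER_HOUR;
}

// MTBF: hours in the observed period divided by the number of failures
// (a simplification that ignores downtime within the period).
function meanTimeBetweenFailures(incidents: Incident[], periodHours: number): number {
  return periodHours / incidents.length;
}

// Usage: two production incidents over a 30-day (720 hour) window.
const incidents: Incident[] = [
  { failedAt: new Date("2024-03-01T10:00:00Z"), recoveredAt: new Date("2024-03-01T11:30:00Z") },
  { failedAt: new Date("2024-03-15T08:00:00Z"), recoveredAt: new Date("2024-03-15T08:45:00Z") },
];

console.log(meanTimeToRecover(incidents));            // 1.125 hours
console.log(meanTimeBetweenFailures(incidents, 720)); // 360 hours
```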
Additional Reading
Manifesto for Software Craftsmanship | Unit Testing Best Practices | A Pragmatic Quick Reference | Ownership and Responsibility in Software Development Teams | Showing Ownership as a Developer | Extreme Programming - Collective Ownership | Clean Code by Robert C. Martin | Livable Code by Sarah Mei