The first two parts of this series focused on people over technology and continuous improvement. Even with these two foundational items, horrible solutions can still be built. How do we defend against that possibility?
As the lifecycle of a solution takes shape, there are several points along the way that need validation. But which layers matter, and how do we validate them? The best approach is to validate both the business and technical aspects of a solution, using different mechanisms for each:
On the business side of things, we need to take our grand ideas and really tease out the MVP (Minimum Viable Product). An MVP is not the smallest possible thing, nor is it all the things. It means spending the least amount of time and work needed to validate the hypothesis that the new idea/feature/solution is worthy of a further, more robust implementation. Where to draw that line can be an art form unto itself.
Once the MVP is done, a decision can be made to either trash it as a failed experiment, leave it be and let it run as is, or iterate on the idea until it becomes a more functional solution.
While this is a short section, it is a very important one. A solution that isn't vetted for business value is doomed from the start.
On the technical side, we need to validate that what we are building functions the way the solution needs it to. Testing can save so much time, especially as applications grow and change. The tests should describe how to use the thing being tested. But what do you test, and why? That is where the testing pyramid comes into play. I try to follow a few rules when working on tests:
Many of the testing pyramid articles just assume everything will be automated. While that is possible, there are situations where it is costly in time and money. The place I find this hardest is when many disparate systems that the development team does not directly control have to communicate, yet interactions with those systems are required. Yes, you can technically create tests that span multiple systems like that, but they would be super gnarly. For that kind of end-to-end coverage, we use manual testing, then automate tests for the APIs and expected data inputs.
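One way to automate your side of the contract without spanning the external system is to test against a stand-in for it. A minimal sketch of that idea, where `StubGateway` and `checkout` are hypothetical names invented for illustration:

```python
class StubGateway:
    """Stands in for an external payment system we don't control."""

    def __init__(self, responses):
        self.responses = responses  # canned replies, in order
        self.calls = []             # record of inputs we sent

    def charge(self, amount_cents):
        self.calls.append(amount_cents)
        return self.responses.pop(0)


def checkout(cart_total_cents, gateway):
    """Our code under test: charge the gateway and report the outcome."""
    result = gateway.charge(cart_total_cents)
    return "confirmed" if result == "ok" else "payment-failed"


def test_checkout_confirms_on_success():
    gateway = StubGateway(["ok"])
    assert checkout(1999, gateway) == "confirmed"
    assert gateway.calls == [1999]  # we sent the expected input


def test_checkout_surfaces_gateway_failure():
    gateway = StubGateway(["declined"])
    assert checkout(1999, gateway) == "payment-failed"
```

The real cross-system interaction still gets checked manually; the automated tests pin down the data we send and how we handle each response.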
The gist here is that there should be more unit tests than UI tests. The reasoning is that UI tests are expensive in both time and money: they take longer to write and run, are brittle, and are easy to lose faith in. Unit tests are fast and cheap to maintain, and they build a foundation for everything else to rest upon.
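Here is a small sketch of what "fast and cheap, and describes how to use the thing" looks like in practice. `slugify` is a hypothetical function made up for this example; note how the test names and bodies double as usage documentation:

```python
def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title)
    return "-".join(cleaned.lower().split())


def test_slugify_joins_words_with_hyphens():
    assert slugify("People Over Technology") == "people-over-technology"


def test_slugify_strips_punctuation():
    assert slugify("MVP: What, and Why?") == "mvp-what-and-why"
```

Tests like these run in milliseconds, so they can be kept by the hundreds without slowing anyone down.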
When you start by writing a UI/feature test at the top layer, you can then logically flow through all the pieces needed until you have a fully functional solution. However, testing every single feature from the top down will leave you with a cube, or worse, an upside-down pyramid.
So how do you keep a pyramid a pyramid instead of a cube? You have to chip away at whatever makes it top-heavy. The UI tests that stick around should be only a collection of happy-path tests. They shouldn't cover all the edge cases, weird data, and strange states users can get the application into; that is what the lower-lying tests should be covering.
As you write your code to satisfy that top-level test, you should be writing all the other layers of tests as well. Then remove anything at the top layer that isn't a happy-path test. You may be tempted to leave that top layer in, but it will only cause more headaches in the future. It did its job; now let it go.
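The shape this leaves behind can be sketched as follows. All names here (`validate_username`, `register_user`) are hypothetical: one coarse happy-path test exercises the whole flow, while the odd inputs are covered by cheap unit tests on the validator alone.

```python
def validate_username(name: str) -> bool:
    """Lower-layer rule: 3-20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()


def register_user(name: str, users: dict) -> bool:
    """The full flow: validate, reject duplicates, store the user."""
    if not validate_username(name) or name in users:
        return False
    users[name] = {"active": True}
    return True


def test_registration_happy_path():
    # The one top-level test worth keeping: the whole flow, ideal input.
    users = {}
    assert register_user("newuser", users)
    assert users["newuser"]["active"]


def test_username_edge_cases():
    # Edge cases live down here, against the validator alone.
    for bad in ["", "ab", "x" * 21, "has space", "semi;colon"]:
        assert not validate_username(bad)
```

Every edge case could have been a full registration test, but pushing them down to the validator keeps the top of the pyramid thin.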
To avoid building things that don't solve the actual problem, verify that the original idea is sound with quick, iterative solutions. To make sure the solution doesn't falter technically over time, test it well, and maintain those tests as diligently as you maintain the code they cover.
In part four of the series we will explore how our solutions can stand the test of time.
Are you ready to build something brilliant? We're ready to help.