Honey, where are my tests?

Posted 2 years ago

Having identified a number of different types of tests, you can already imagine that each test has its own place in the process.

The process here is key: because software development is complex and consists of several steps, it just makes sense to validate each of these steps. We'll outline a linear process, but of course in practice there's more iteration between steps. That's fine, and is in fact exactly where tests offer the confidence to move back and forth!

Quality and Contract

Tests serve as a quality mark for what the output of a step in the process should look like. That makes them an ideal tool for handing over work between the different individuals who all contribute to a single software solution.

They also serve as a sort of contract on what you can expect in terms of interaction. You needn't be concerned with the inner, line-by-line workings of a codebase, as long as stable tests cover all of its interfaces.
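
To make that concrete, here is a minimal sketch of a unit test acting as such a contract, written with Vitest (any test runner will do). The pricing module and its calculateTotal function are made-up names for illustration:

```
import { describe, expect, it } from "vitest";
// Hypothetical module: the exact names are made up for illustration.
import { calculateTotal } from "./pricing";

// These assertions pin down the contract of the public interface:
// callers can rely on this behaviour, regardless of how the
// internals of ./pricing are implemented or refactored.
describe("calculateTotal", () => {
  it("sums line items and applies the discount", () => {
    const total = calculateTotal([{ price: 10 }, { price: 20 }], { discount: 0.1 });
    expect(total).toBe(27);
  });

  it("returns 0 for an empty order", () => {
    expect(calculateTotal([], { discount: 0 })).toBe(0);
  });
});
```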

Any good IDEs?

The integrated development environment (IDE) is the tool where a programmer works on the coding solution. It could be a simple text editor, a drag-and-drop interface, or anything in between. In most cases it resembles a combination of a text editor and a file explorer in one, but on steroids. IDEs are customisable to the needs of the programmer, either through configuration or by supporting third-party plugins.

That customisation offers tools for syntax highlighting, linting and formatting. Syntax highlighting and linting usually happen in real time; formatting usually happens at the point of saving a file (either just before or right after the save).
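
As a sketch of how that configuration travels with the code: ESLint's flat config, for example, is just another module in the repository, so the IDE and (later on) the pipeline can apply exactly the same rules. The rule picks below are arbitrary examples, not recommendations:

```
// eslint.config.mjs — a minimal sketch; the rule choices are examples only.
export default [
  {
    files: ["src/**/*.js", "src/**/*.ts"],
    rules: {
      // flag variables that are declared but never used
      "no-unused-vars": "error",
      // require strict (===) equality checks
      eqeqeq: ["error", "always"],
    },
  },
];
```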

Since this is also the environment where unit tests and end-to-end tests are typically written, any decent IDE offers support for those as well. Unit tests can usually run in the background, validating that the assertions still hold after an edited file is saved.

Unit tests (and component tests) are developed in the IDE and are usually triggered manually as a final step in the development of a component, to validate that the code works. Committing the test files is like committing to the contract on how the software parts work together.

Tests in this stage serve as a validation of the software that will be added to the landscape. When the addition takes place, via a merge or pull request, we will want assurances that the existing features and code will not break!

Testing Commitment

The role of tests in this step is to safeguard the quality of the existing code base. The existing code is stable and trusted, so we want to make sure we can keep trusting that the existing features stay up and running.

Assuming you have a version control system as part of your software landscape (there's no reason not to), you will eventually end up in the realms of GitHub, GitLab, Azure DevOps and the like. These are solutions that handle your version control and have built-in hooks and actions as an added bonus. This is where you can do more advanced testing, because you can configure access to (parts of) your digital landscape.

Step by step

You can usually configure certain rules, actions or steps that will always be run against the codebase. When one or more steps fail, you will be notified so you can take action accordingly.

It is common to add the linting, formatting and unit testing rules here as well, since you have no control over the IDE that produced the commit and cannot fully trust its quality. You do have access to the configuration that determines the linting, formatting and unit tests, because it is part of the code base that is committed.
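
Because the pipeline syntax differs per platform, here is a platform-agnostic sketch of what such a step could do: a small Node script that replays the same lint, format and unit-test commands the IDE already ran. The tool names (eslint, prettier, vitest) are examples; swap in whatever your code base actually uses:

```
// ci-checks.ts — a sketch of the checks a pipeline could run on every push
// or pull request; a non-zero exit code blocks the merge on the platform side.
import { execSync } from "node:child_process";

const steps = [
  "npx eslint .",              // linting
  "npx prettier --check .",    // formatting
  "npx vitest run --coverage", // unit and component tests
];

for (const step of steps) {
  console.log(`Running: ${step}`);
  try {
    execSync(step, { stdio: "inherit" });
  } catch {
    console.error(`Step failed: ${step}`);
    process.exit(1);
  }
}
```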

Being snappy

Do you remember the snapshots that come from unit tests? The assertions will run against the snapshots that are part of the code base.
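
As a reminder of what that looks like, a snapshot assertion can be as small as this (Vitest syntax; buildInvoice is a made-up function). The snapshot file it produces lives next to the test and is committed, so the pipeline compares against exactly the same reference as your local run:

```
import { expect, it } from "vitest";
// Hypothetical module: buildInvoice is made up for illustration.
import { buildInvoice } from "./invoice";

it("produces a stable invoice structure", () => {
  // The first run writes a snapshot file next to this test; later runs
  // (locally and in the pipeline) fail when the output changes unexpectedly.
  expect(
    buildInvoice({ customer: "ACME", items: [{ name: "Book", price: 10 }] })
  ).toMatchSnapshot();
});
```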

Components and units

Executing the component and unit tests makes sure that the pieces of the software are still working as expected. A high percentage of coverage translates into a high degree of trust in the quality of the code. Since these types of tests run in very strict and controlled environments, you can trigger them regardless of the target deployment.
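
That coverage expectation can be encoded in the test configuration itself, so the pipeline fails when coverage drops below the agreed level. A sketch assuming Vitest; the threshold numbers are illustrative, not a recommendation:

```
// vitest.config.ts — a sketch; the threshold numbers are illustrative only.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "node",
    coverage: {
      provider: "v8",
      // Fail the run when coverage drops below these percentages.
      thresholds: {
        lines: 80,
        branches: 80,
      },
    },
  },
});
```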

Preview environments

Before the code is added to the codebase, you could opt for a regression or end-to-end test. Setting this up requires you to be able to scaffold the minimum environment of what you want to test, which also means any (micro)services that the features depend on. You still need control over the input and output to make the tests reliable; any variation deteriorates the quality of, and trust in, the outcome of your tests. This step usually takes longer to complete, so the process should be designed with that in mind. Maybe it only happens when targeting acceptance and beyond. Maybe it's a manual step, or maybe it is simply an integral part of every pull request.
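
A sketch of what such an end-to-end check against a preview environment could look like, using Playwright. The PREVIEW_URL variable and the /api/products route are assumptions, and stubbing the service call is one way to keep input and output under control:

```
import { test, expect } from "@playwright/test";

test("the product overview still renders", async ({ page }) => {
  // Stub the (micro)service so the test controls its own input and output.
  await page.route("**/api/products", (route) =>
    route.fulfill({ json: [{ id: 1, name: "Book", price: 10 }] })
  );

  await page.goto(process.env.PREVIEW_URL ?? "http://localhost:3000");
  await expect(page.getByRole("heading", { name: "Products" })).toBeVisible();
  await expect(page.getByText("Book")).toBeVisible();
});
```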

Scaffolding and deploying a complete test environment can also be used to run the synthetic tests, provided the preview environment is publicly available and you aren't relying on its performance metrics.

The important bit here is, again, the reassurance that no existing user features break once new software has been added. You cast a wider net of scenarios that should keep working as the codebase grows.

Qualities to look for

This stage is also ideal for analysing the quality of the code and checking for code smells, security vulnerabilities or duplicates. The tools that offer these types of analysis usually integrate well with the platform and offer extra insight. This is not necessarily testing, but more in the realm of Quality Assurance (which, I think, is not the same thing at all).

Post merge monitoring

With the above steps, you've done all you possibly could to introduce a new feature that consists of high-quality code and doesn't introduce bugs in existing features.

These should be all the reassurances you need to deploy to production. After that's done, you can follow up with some more automated checks, but now in a production environment. The focus is now not on whether the code works (that should already be clear), but on how well the application performs. A slight difference.

So this is more about monitoring. Here the aforementioned synthetic tests are more valuable, since they mimic usage in the real world. Triggering a suite of these tests should report back whether the software performs within the set benchmarks.
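
A minimal sketch of such a check, assuming an ES module running on Node 18+ (for top-level await and built-in fetch); the URL and the 500 ms budget are illustrative, and a real setup would run this continuously and feed the results into alerting:

```
// synthetic-check.ts — a sketch of a post-deployment synthetic check.
const TARGET = process.env.PROD_URL ?? "https://example.com/";
const BUDGET_MS = 500;

const start = performance.now();
const response = await fetch(TARGET);
const elapsed = performance.now() - start;

if (!response.ok) {
  throw new Error(`Unexpected status ${response.status} for ${TARGET}`);
}
if (elapsed > BUDGET_MS) {
  throw new Error(`Response took ${Math.round(elapsed)} ms, budget is ${BUDGET_MS} ms`);
}
console.log(`OK: ${TARGET} responded in ${Math.round(elapsed)} ms`);
```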

Bear in mind that with gradual rollouts of software, you want to make sure to target both scenarios: the users still on the old version and the users already on the new one!

You can run some performance monitoring tools at this point, but these should be part of a continuous monitoring process rather than tied just to deployments.

Finally

With all of these steps (and the quality of their execution), you have a reliable process to deliver software in a robust way. Of course this is not necessarily the best way to deliver quality software. What I've described is a fairly common process though, and most of these steps are incorporated one way or another.

This whole process can seem daunting, but the main goal is to facilitate the automation of repetitive tasks. Future you will thank you down the road. And as with all things: implementation depends. Sometimes it is not possible to provide enough fixtures for a fully self-contained end-to-end test. The important bit is to apply what is necessary for you to release with confidence, because nobody likes nasty surprises after a Friday afternoon deployment.
