Hey, fellow Leader 🚀,
I am Artur, and welcome to my weekly newsletter. I focus on topics like Project Management, Innovation, Leadership, and a bit of Entrepreneurship. I am always open to suggestions for new topics. Feel free to reach out to me on Substack, and share my newsletter if it helps you in any way.
Automated testing is a fundamental part of any DevOps strategy: it aims to guarantee a level of quality by replacing tests typically done manually with an automated form. The brochure for automated testing also came with the promise of improving application stability while freeing up the team for other, more important tasks, like development.
The reality paints a completely different picture. Testing adds overhead to development, and critical errors can still slip through hundreds of automated tests (looking at you, CrowdStrike). By tracking the amount of time a team spends on these tests, we can quickly see it is not an insignificant investment compared with the development effort spent on new features.
The goal of this first article is to provide practical tips for managing an automated testing strategy in IT development teams, with a special focus on budget and cost management. The second article, scheduled for next week, will cover other areas related to budget and the team’s mindset.
Training
Any challenging strategy needs to start with training. Developers tend to be very enthusiastic about the idea of implementing automated testing in their projects. The delight of having TDD (Test Driven Development) with a fully automated DevOps cycle on their CVs is too tempting to pass up. However, putting a good strategy in motion requires testing knowledge that many developers lack. That’s why projects might need to invest in training before starting an automated testing campaign.
How many times have we tested a feature that broke after only two clicks? Once is already too many. There are two reasons for this:
The developer is rushed to deliver a feature and meet a deadline, so the tests are basic and biased, missing many possible test cases;
Or the developer is too lazy to test a feature properly, knowing it will be tested by someone else down the road;
Or even a third option: a combination of both.

Every unscheduled iteration of the manual testing cycle heavily impacts the overall project budget. Ideally, we would like the software to be in pristine condition when it enters Acceptance Testing. Automated tests help by identifying bugs earlier, improving the overall quality.
Developers aren’t the best QA testers out there, simply because their mindset and skills are not trained for this particular job. As a consequence, the automated tests produced by developers can be very unsatisfying and, in extreme cases, only verify that a text box rejects a number in the wrong format. Some developers struggle with Dependency Injection, and mocking a component might not be done correctly. Whatever the case, developers need training on the testing technology and on ways to improve their testing practices. This might increase costs for the project. The level of investment is directly linked to the team’s structure and how much training the developers require. A few Pluralsight or Udemy courses might cover the need.
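To make the Dependency Injection point concrete, here is a minimal sketch of what injecting a dependency and mocking it can look like, using Python’s standard `unittest.mock`. The `InvoiceService` and `gateway` names are illustrative, not from any real project:

```python
from unittest.mock import Mock

class InvoiceService:
    """The payment gateway is injected through the constructor,
    so tests can replace it with a mock instead of hitting a real service."""
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.submit(amount)

# In a test, the real gateway is swapped for a mock:
mock_gateway = Mock()
mock_gateway.submit.return_value = "accepted"
service = InvoiceService(mock_gateway)

assert service.charge(100) == "accepted"
mock_gateway.submit.assert_called_once_with(100)
```

Without the injected dependency, the only way to test `charge` would be against a live gateway, which is exactly the kind of design that makes developers mock things incorrectly.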
Once developers get a grasp of what these technologies allow them to do, and how to design good tests, they will happily implement good testing strategies. However, if you feel the team needs a tailored approach to guarantee success, I recommend the article below and arranging a tailored training session.
Test Strategy
Once we get the hard skills out of the way, it’s time to produce a quality test strategy. For software lacking automated tests, it is advisable to start small. Select a few crucial features and implement the first tests to cover their functionality. Some features will prove easier or harder to automate than others. For the more complex features, it is preferable to wait until the team is mature in automated testing. Otherwise, the tests won’t be designed properly and their maintenance cost will go through the roof.
Before implementing any kind of test, it is advisable to go through the features and their test cases:
What steps are required to test each feature, and how feasible is it to automate them?
Are the steps connected in any way?
Are there steps in existing test cases that provide no long-term value?
Are some of the steps duplicated inside the test suite?
How modular can the tests be designed?
Which parts of the code aren’t testable at all?
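The questions about duplicated steps and modularity often resolve into the same design move: pull shared steps into helpers that many tests reuse. A small sketch, with made-up session and cart helpers purely for illustration:

```python
import unittest

def make_logged_in_session(role="user"):
    """Shared step: many test cases start by logging in, so the
    duplicated steps live in one helper instead of in every test."""
    return {"user": "test-account", "role": role, "cart": []}

def add_to_cart(session, item):
    """Another reusable step, kept small so tests stay modular."""
    session["cart"].append(item)
    return session

class CheckoutTests(unittest.TestCase):
    def test_item_lands_in_cart(self):
        session = make_logged_in_session()
        add_to_cart(session, "book")
        self.assertEqual(session["cart"], ["book"])

    def test_admin_session_carries_role(self):
        session = make_logged_in_session(role="admin")
        self.assertEqual(session["role"], "admin")
```

If the login flow changes later, only `make_logged_in_session` needs updating, not every test that depends on it; that is the maintenance saving the modularity question is probing for.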
Depending on how the software was produced, some legacy code can be completely untestable, mainly due to a lack of good coding practices. A refactoring might be needed to redesign the code in a way that makes it unit testable. Refactoring parts of the code can be expensive and requires not only automated tests but comprehensive Acceptance Tests as well. Estimating the refactoring effort is critical to tackling the problem head-on.
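The refactoring needed is often small: extract the hard-wired dependency into a parameter so a test can control it. A hypothetical before/after, using the system clock as the untestable dependency:

```python
from datetime import date

# Before: untestable — "today" is buried inside the function,
# so the test result depends on when the suite happens to run.
def is_invoice_overdue_legacy(due):
    return date.today() > due

# After: the date becomes a parameter with a default, creating a
# test seam without changing how production code calls it.
def is_invoice_overdue(due, today=None):
    today = today or date.today()
    return today > due

# Tests can now pin the clock and get deterministic results:
assert is_invoice_overdue(date(2024, 1, 1), today=date(2024, 2, 1)) is True
assert is_invoice_overdue(date(2024, 2, 1), today=date(2024, 1, 1)) is False
```

This is the cheap end of the refactoring spectrum; code tangled with global state or static calls throughout may need the more expensive redesign the paragraph above warns about.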
The testing strategy should start with an objective, and the automated tests should be aligned with that objective in mind. For example, if we are trying to test a part of the software with many business rules that are difficult to exercise through the UI, the objective would be to build a comprehensive suite covering all the different rules. In that case, if a change impacts this set of rules, the tests should either catch the regression or reveal that the new business rules are not compliant with the logic already in place.
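A rule suite like that is usually table-driven: one test function, many cases, so adding a rule means adding a row. A sketch with invented shipping rules, using `unittest`’s `subTest` so each case reports independently:

```python
import unittest

def shipping_cost(order_total, country):
    """Hypothetical business rules: free domestic shipping from 50 up,
    a 5 flat fee below that, and a 15 flat fee abroad."""
    if country == "US":
        return 0 if order_total >= 50 else 5
    return 15

class ShippingRulesTest(unittest.TestCase):
    def test_rules(self):
        cases = [
            (50, "US", 0),     # boundary: exactly at the free threshold
            (49.99, "US", 5),  # just below the threshold
            (200, "FR", 15),   # any non-domestic order
        ]
        for total, country, expected in cases:
            with self.subTest(total=total, country=country):
                self.assertEqual(shipping_cost(total, country), expected)
```

When someone later changes the threshold or adds a country, the failing rows name exactly which rule the change broke, which is the regression signal the objective asks for.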
If starting a new testing strategy, I wouldn’t be very concerned about targeting a percentage of code coverage. Aiming too hard for coverage will only produce tests with little or no value. Having good coverage is important; however, I would focus on the quality invested in the tests and on whether, if a change is made tomorrow in the same section of code, they can detect a regression. The main reason behind this rationale is that every test has a hidden maintenance cost. For every change that happens, test cases might need to be updated. Having to update automated tests that provide little value to the overall software will only increase the cost of a change over time. We should aim to keep both the code base and the test base lean.
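The gap between coverage and value is easy to show: two tests can cover the same line while only one of them can ever fail. A contrived example (the discount function is invented for illustration):

```python
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# Executes the code, so it counts toward coverage, but asserts
# nothing — it can never catch a regression:
def test_discount_runs():
    apply_discount(100, 10)

# Pins down the actual behavior, so a future change that breaks
# the formula or the rounding fails immediately:
def test_discount_value():
    assert apply_discount(100, 10) == 90.0
    assert apply_discount(80, 25) == 60.0
```

Both tests produce identical coverage numbers; only the second one earns its maintenance cost.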
Once all the required features and tests have been identified, the team should work on an effort estimation. The refactoring effort should be included in that estimation as well.
That’s it. If you found this post useful, please share it with friends or colleagues who might be interested in this topic. If you would like to see a different angle, suggest it in the comments or send me a message on Substack.
Cheers,
Artur