If software deliverables from external (or internal) sources were well managed, why do we see so many regression-related quality risks manifest towards the end of a project? To understand regression within your project, there is a strong temptation to rerun most, if not all, of your test cases.
No matter what methodology you adopt, you will find yourself accruing many test cases over the lifetime of a project. Is it really feasible to run all of these on a regular basis?
Why do issues with regression exist? Poor development practices? Poor build practices? Bad configuration control practices? All of these can be fixed to some extent.
Everyone, regardless of whether you are running Waterfall, V-Model, Agile, etc., should have a regression strategy (especially if your tests are predominantly manual).
The challenge: as the software develops, there’s more to test and, often, there aren’t enough test resources to cover everything you want to test. Therefore, tests need to be prioritised.
Looking at the regression ratio by feature, ‘Leaf’ and ‘Branch’ look neck and neck; ‘Trunk’ seems a little better. Of course, we can’t read too much into this yet; what we need is a bigger sample size. Perhaps if you display regression ‘by feature’ on a chart, release by release, we might see a trend, or just a lot of noise. But what if, for example, you spot a trend that gets notably worse, or better? Does it correlate with a particular initiative within the project, a phase, or a milestone, the start of integration, for example? If you have seen these trends before (from previous experience), you may be able to anticipate them and drive suppliers hard before the risk materialises.
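As a minimal sketch of the ‘regression ratio by feature, by release’ idea: the feature names echo the example above, but the release labels and defect counts below are entirely illustrative.

```python
# Sketch: regression ratio (regression defects / total defects) per feature,
# per release, so trends can be eyeballed or charted. All counts are made up.
from collections import defaultdict

# (release, feature, regression_defects, total_defects) - hypothetical data
defects = [
    ("R1", "Leaf", 2, 10), ("R1", "Branch", 3, 12), ("R1", "Trunk", 1, 9),
    ("R2", "Leaf", 4, 11), ("R2", "Branch", 5, 13), ("R2", "Trunk", 1, 10),
]

# Group the ratios by feature, keyed by release
ratios = defaultdict(dict)
for release, feature, reg, total in defects:
    ratios[feature][release] = reg / total

# Print one trend line per feature, e.g. "Leaf: R1=20%, R2=36%"
for feature, by_release in ratios.items():
    trend = ", ".join(f"{r}={v:.0%}" for r, v in sorted(by_release.items()))
    print(f"{feature}: {trend}")
```

Plotting these ratios release by release (rather than printing them) is where a worsening trend, and any correlation with a project milestone, would show up.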
This data is not normalised. We need to introduce weighting factors to ensure data fields have the correct priority. This can differ from company to company. For this example, we’ll use the following weighting:
- Time Since Last Release (TSLR) = 2
- Time Since Last Passed (TSLP) = 1.5
- Time Since Last Fail (TSLF) = 1
Summary – Identifying risk through analysis
This approach to test planning provides a clearer picture of the true project risk towards the end of the project. As tests stagnate, their risk score increases, increasing the likelihood of retest. During regression testing, it helps identify severe risk factors faster.
You gain a new insight into how ‘features’ are maturing
- Some features may have low risk scores; others may have high risk scores
- High scores could indicate that deliverables from a specific supplier are poorly managed (poor practices, no process, a waste of your money!)
- This helps you identify and mitigate supplier risk faster, and your supplier project management models improve in the long term
It’s a great way of selecting candidates for automation!