Francis Miles, Head of Customer Experience, Infuse
Regression is not easily understood, as it seemingly manifests from nowhere. But if you can identify methods to help spot quality failure ‘trends’, you stand a better chance of understanding the root causes. This presentation serves to highlight a number of risk identification and planning techniques that you could add to your arsenal!

If software deliverables from external (or internal) sources were well managed, why do we see so many regression-related quality risks manifest towards the end of the project? To understand regression within your project, there is a strong temptation to rerun most, if not all, test cases.

No matter what methodology you adopt, you will find yourself accruing many test cases over the lifetime of a project. Is it really feasible to run all of these on a regular basis?

Why do issues with regression exist? Poor development practices? Poor build practices? Bad configuration control practices? All of these can be fixed to some extent.

Everyone, regardless of whether you are running Waterfall, V-Model, Agile, etc., should have a regression strategy (especially if your tests are predominantly manual).

The challenge: as the software develops there is more to test, and often there aren’t enough test resources to cover everything you want to test. Therefore, tests need to be prioritised.

This makes for a great interview question! Looking at the test result model, a number of questions arise:

  • Do all features come from the same supplier? There seems to be a very different approach to testing for each.
  • Are the erratic results due to inexperienced testers or test environment configuration, or are they actually real?
  • Why are there gaps in testing? Is test resource limited?
  • Some tests have not been run for a while. Is this acceptable, considering the previous test results?
  • What on earth is going on with Test Case 1?!
  • Considering Test Case 2 is a net pass, and considering the other ‘Leaf’ test cases are so erratic, do I really trust the result for TC2?
  • In Release 7, as the accumulated result is ‘everything passed’, do I take the risk and launch?!
  • …and so on.

Looking at the regression ratio by feature, it looks like ‘Leaf’ and ‘Branch’ are neck and neck, while ‘Trunk’ seems a little better. Of course, we can’t read too much into this yet; what we need is a bigger sample size. Perhaps if you display regression ‘by feature’ on a chart, release by release, you might see a trend, or you might just see noise. But what if, for example, you spot a trend that notably gets worse, or better? Does it correlate with a particular initiative within the project, a phase, a milestone? The start of integration, for example. If you have seen these trends before (from previous experience), you may be able to second-guess this and drive suppliers hard before it happens.
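
As a rough illustration of the calculation behind such a chart, here is a minimal Python sketch; the data layout, per-release results, and outcome labels are invented for the example and are not from the presentation.

```python
# Minimal sketch: regression ratio (fails / tests executed) per feature, per
# release. Feature names and results below are illustrative only.
results = {
    "Leaf":   {"R5": ["pass", "fail", "fail"], "R6": ["pass", "pass", "fail"]},
    "Branch": {"R5": ["fail", "pass", "fail"], "R6": ["pass", "fail", "fail"]},
    "Trunk":  {"R5": ["pass", "pass", "fail"], "R6": ["pass", "pass", "pass"]},
}

def regression_ratio(outcomes):
    """Fails as a fraction of tests actually executed (skips/blocked ignored)."""
    executed = [o for o in outcomes if o in ("pass", "fail")]
    return sum(o == "fail" for o in executed) / len(executed) if executed else 0.0

# One series per feature; plotting these points release-by-release shows the trend.
for feature, by_release in results.items():
    trend = {rel: round(regression_ratio(runs), 2) for rel, runs in by_release.items()}
    print(feature, trend)
```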

Now, let’s try to identify a mechanism to analyse the risk of high test latency (tests that were last run a long time ago and whose results may no longer be valid). If we go back to the test result model and introduce a ‘Time Since Last Run’ (TSLR) calculation (in this case, measured in releases): TSLR = the count of releases in which the test was NOT RUN, counted from right to left (it’s your call whether you include skips, N/A, blocked, etc.).
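
As a rough sketch of that counting (in Python, with invented outcome labels; adjust the NOT_RUN set to suit your own call on skips and blocks):

```python
# Minimal sketch of the TSLR idea: walk the release history from the most
# recent release backwards and count releases until the test was actually run.
NOT_RUN = {"not run", "skipped", "blocked", "n/a"}  # your call what counts as not run

def tslr(history):
    """history: one outcome per release for a single test case, oldest first."""
    count = 0
    for outcome in reversed(history):   # right to left, i.e. latest release first
        if outcome.lower() in NOT_RUN:
            count += 1
        else:
            return count                # stop at the most recent execution
    return count                        # never run in the recorded history

print(tslr(["pass", "fail", "not run", "skipped"]))  # -> 2
```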

This data is not normalised. We need to introduce weighting factors to ensure each data field has the correct priority. These can differ from company to company. For this example, we’ll use the following weighting (a sketch of the resulting risk score follows the list):

  • Time Since Last Run (TSLR) = 2
  • Time Since Last Pass (TSLP) = 1.5
  • Time Since Last Fail (TSLF) = 1
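
The slides don’t spell out exactly how the three counters combine into a single risk score, so the Python sketch below simply takes a weighted sum using the example weights above; the counting helper and test data are illustrative assumptions, not the presenter’s exact formula.

```python
# Minimal sketch: weighted risk score per test case using the example weights.
WEIGHTS = {"tslr": 2.0, "tslp": 1.5, "tslf": 1.0}

def releases_since(history, is_match):
    """Count releases (latest first) until is_match(outcome) is True."""
    count = 0
    for outcome in reversed(history):
        if is_match(outcome):
            return count
        count += 1
    return count  # never matched in the recorded history

def risk_score(history):
    tslr = releases_since(history, lambda o: o in ("pass", "fail"))  # last run
    tslp = releases_since(history, lambda o: o == "pass")            # last pass
    tslf = releases_since(history, lambda o: o == "fail")            # last fail
    return WEIGHTS["tslr"] * tslr + WEIGHTS["tslp"] * tslp + WEIGHTS["tslf"] * tslf

# Higher score = staler, riskier test; the ordering also suggests candidates
# for retest (or for automation).
suite = {
    "TC1": ["fail", "not run", "not run", "not run"],
    "TC2": ["pass", "pass", "not run", "pass"],
}
for name in sorted(suite, key=lambda n: risk_score(suite[n]), reverse=True):
    print(name, risk_score(suite[name]))
```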

Summary – Identifying risk through analysis

This approach to test planning provides a clearer picture of the true project risk towards the end of the project. As tests stagnate, their risk score increases, increasing the likelihood that they need a retest. During regression testing, it helps you identify severe risk factors faster.

You gain new insight into how ‘features’ are maturing:

  • Some features may have low risk scores, others may have high risk scores
  • This could indicate that deliverables from a specific supplier are poorly managed (poor practices, no process, a waste of your money!)
  • It helps you identify and mitigate this supplier risk faster, and your supplier project management models get better in the long term

It’s a great way of selecting candidates for automation!

Complete Presentation Slides