Test strategies should be thorough and agile – but there may be some fallacies that are undermining your performance testing and costing you time, money, and other valuable resources.
Sofía Palamarchuk has explored seven of the most common testing fallacies in an article based on Jerry Weinberg’s ‘Perfect Software and Other Illusions about Testing’. The fallacies she looks at include:
The Planning Fallacy
Put simply, it is easy to assume that performance testing only needs to happen at the end of a development cycle – but if you incorporate testing throughout, you can find and deal with potential issues before they become project-destabilising problems.
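One lightweight way to bake performance testing into every stage of the cycle is a timing assertion that runs with the ordinary test suite on each build. The sketch below is illustrative only: `process_order` is a hypothetical stand-in for a real code path, and the 0.5-second budget is an arbitrary assumption, not a recommendation.

```python
import time

def process_order(n):
    # Hypothetical stand-in for the code path under test; a real suite
    # would exercise an actual endpoint or transaction here.
    return sum(i * i for i in range(n))

def test_processing_stays_fast(budget_seconds=0.5):
    """Fail the build early if the critical path slows down."""
    start = time.perf_counter()
    process_order(100_000)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"too slow: {elapsed:.3f}s"
    return elapsed

test_processing_stays_fast()
```

Because a check like this runs on every commit rather than once at the end, a regression surfaces while the change that caused it is still fresh.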
The ‘Just Add More Hardware’ Fallacy
This fallacy is a true case of treating the symptoms of a problem rather than the problem itself. As Palamarchuk notes, why increase infrastructure costs by adding more hardware when you can take a step back and find a solution to the real problem?
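The "step back" often starts with profiling. As a hedged illustration – the `slow_lookup` and `fast_lookup` functions below are hypothetical – Python's built-in cProfile can show that the bottleneck lives in the code, not in insufficient hardware:

```python
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    # O(n * m) membership checks against a list – a typical hidden
    # bottleneck that more hardware would only mask.
    return [t for t in targets if t in items]

def fast_lookup(items, targets):
    # Same result after fixing the data structure instead of the hardware.
    item_set = set(items)
    return [t for t in targets if t in item_set]

profiler = cProfile.Profile()
profiler.enable()
slow_lookup(list(range(5000)), list(range(5000)))
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # the report points at slow_lookup, not the CPU
```

In a case like this, swapping the list for a set removes the problem outright – no extra servers required.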
The Testing Environment Fallacy
No matter how well a product performs in testing, if the testing environment doesn’t resemble the actual production environment, issues or bugs may only surface on, for example, a particular operating system or device. Replicating the production environment as closely as possible – taking into account the full range of factors involved – can save valuable time in the long run.
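One way to guard against that kind of environment drift is to record a baseline of key production attributes and flag mismatches before a test run begins. The sketch below is a minimal illustration; the baseline values are invented assumptions, not real production data, and a real check would cover far more factors (hardware, network, installed dependencies).

```python
import platform
import sys

# Hypothetical baseline captured from the production environment;
# the values here are illustrative assumptions only.
PRODUCTION_BASELINE = {
    "os": "Linux",
    "python": "3.11",
}

def environment_mismatches(baseline):
    """Return the baseline keys where the current test environment differs."""
    current = {
        "os": platform.system(),
        "python": "{}.{}".format(*sys.version_info[:2]),
    }
    return {key: (baseline[key], current[key])
            for key in baseline if current.get(key) != baseline[key]}

mismatches = environment_mismatches(PRODUCTION_BASELINE)
if mismatches:
    print("Warning: test environment diverges from production:", mismatches)
```

A report like this won't make the environments identical, but it makes every known gap visible before results are trusted.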