It’s amazing how often you hear that a management team is disappointed in test automation results. What’s going on here? Is it the tooling? Is it the testers? Or is it something else?
Sometimes it seems that automated testing is viewed as a fix for all testing woes, yet it is seldom given the resources or backing it requires.
Here are 4 considerations that can make or break test automation in an organisation.
1. Expectations

Many management teams have expectations of automated testing: often that it will test more quickly and cheaply than manual testing, and that it will be able to test everything.
This expectation needs to be managed both as part of the project process but also through the tools available to the test manager, which include:
- Test Policy
- Test Strategy
- Test Plan
By utilising the test policy, the test manager can set out to senior management and stakeholders the high-level approach that will be taken for automation, as well as the level of initial and ongoing support required to make automated testing work in the organisation.
It should be made clear to the senior stakeholders that by signing off this document they are committing to ongoing investment in the automation approach and tool set that will be defined.
Through the test strategy, the test manager can clearly define the approach being taken for automated testing in an organisation, and can further define what automated testing can achieve, what areas will be covered and what investment will be required. It is also the document where areas of non-coverage can be outlined, further setting expectations.
The test plan is the final document where detailed expectations can be set: the test manager can outline which functions will be covered by automated testing, and which will be partly covered or not covered, as part of an individual project.
2. Project Lifecycle

Automated testing needs to be taken into consideration throughout the lifecycle of a project, from inception to closure (and often beyond, in the case of automated regression packs).
A typical example is where an automated test fails and is reported as a defect. The defect is then rejected as expected behaviour, because a change was implemented without the automation team being told it was due. The team is then in catch-up mode, trying to bring the automated test pack up to date.
The earlier the test team (including the automation tester or team) is included, the more likely the automated tests are to succeed. For agile teams, working, proven automated tests should be part of the definition of done for each sprint, so they can be introduced into an automated regression pack without further rework. For DevOps and continuous integration systems, this definition should be extended to include integration of the automated tests into the automated build and deployment scripts.
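That CI extension can be sketched as a simple gating step in the build and deployment scripts. The sketch below is a hypothetical illustration in Python, not any particular CI tool's API; the result format and function name are assumptions made for the example:

```python
# Minimal sketch of a CI gate: the build/deploy script runs the automated
# regression pack, collects per-test results, and only lets the build
# proceed when every test passes. The {test name: passed} result format
# is a hypothetical stand-in for a real test runner's report.

def regression_gate(results: dict[str, bool]) -> bool:
    """Return True only if every automated test in the pack passed."""
    failed = [name for name, passed in results.items() if not passed]
    if failed:
        print(f"Build blocked: {len(failed)} automated test(s) failed: {failed}")
        return False
    print(f"All {len(results)} automated tests passed; build may proceed.")
    return True

# Example: one failing test is enough to block the build.
build_may_proceed = regression_gate(
    {"login_smoke": True, "checkout_flow": False, "search_regression": True}
)
```

In a real pipeline the gate's boolean would be translated into the script's exit code, so a failed regression pack stops deployment automatically.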
3. Resources and Environments

The test engineers who write the automated tests are not magicians; they cannot write tests without background information, working tools or applications under test.
Too often, management think that a team to whom they provide little or no guidance, often in a remote or other location, will produce all the automated tests required. Similarly, the environment the tool resides on and the environment under test both need to be adequately sized and supported.
It needs to be realised that test tools have similar technical debt requirements to the actual systems under test. Operating systems, web browsers and protocols, to mention but a few, change over time and will impact any tool's ability to run.
Like the systems under test, the test tools will also need their upgrades and maintenance planned into any release/roll-out schedule, with adequate time to regression test and fix any issues identified.
Too often, test tools are initially housed on substandard hardware because management teams don't want the initial and ongoing expense of setting up and maintaining the environments until a return on investment (ROI) has been proven.
Automated tests usually execute faster than manual tests, both through screens and against APIs. This is often misunderstood by management, who will frequently only pay for test environments that are half or a quarter the size (or less) of the final production version.
Management are then surprised when the environment under test fails under the load of automated tests executed against it. So expectations need to be managed: for example, if you have a pack of 500 tests that is expected to execute overnight but the environment can only support the execution of 100, the test plan needs to be adjusted to reflect that the task will now take five nights instead of one.
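The arithmetic behind that adjustment is simple enough to sketch. The figures below are the illustrative numbers from the example above, not measurements from a real environment:

```python
import math

def nights_required(total_tests: int, nightly_capacity: int) -> int:
    """Nights needed to run the whole pack when the environment caps throughput."""
    return math.ceil(total_tests / nightly_capacity)

# The example from the text: a 500-test pack on an environment that can
# only execute 100 tests per night takes five nights instead of one.
print(nights_required(500, 100))  # → 5
```

The same calculation, run against a properly sized environment's measured capacity, gives a defensible duration to put in the test plan.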
Similarly, both the tool and test environments need ongoing support to remain effective. This can include operating system releases, Windows upgrades and other housekeeping tasks that keep the environments in sync with production code and ensure that testing is executed on a like-for-like environment.
4. Test Tools
Different expected outcomes can require different test tools to deliver what is required.
The range of tools available stretches from packaged applications to open source to individual bespoke applications, all with various levels of cost and support requirements.
Seldom will one tool deliver all that is expected of automated testing, and time needs to be taken to select the tool sets that will actually help deliver the level of testing required, rather than remain "package-ware".
Building and running a proof of concept (POC) is a useful way to identify the suitability of a tool, and can prove whether a tool is really able to execute the testing and subsequent reporting that is required (we have run these for organisations interested in our test automation tool, useMango). A POC should focus on automating the system under test, particularly the complicated areas that form its core functionality. However, before embarking on a POC, the scope of automation needs to be carefully thought through. It is not always possible or financially viable to automate everything, so a clear definition of what will be automated is important.
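One lightweight way to compare candidate tools after a POC is a weighted scoring matrix. The criteria, weights, tool names and scores below are hypothetical placeholders to show the shape of the comparison, not a recommendation:

```python
# Hypothetical weighted scoring of POC outcomes. Each tool is scored 1-5
# per criterion; weights reflect how much each criterion matters to the
# organisation (here, ability to cover core flows is weighted highest).
WEIGHTS = {
    "covers_core_flows": 0.4,
    "reporting": 0.2,
    "maintenance_cost": 0.2,
    "licence_cost": 0.2,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion POC scores (1-5) into a single weighted figure."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

poc_results = {
    "Tool A": {"covers_core_flows": 4, "reporting": 3, "maintenance_cost": 2, "licence_cost": 5},
    "Tool B": {"covers_core_flows": 5, "reporting": 4, "maintenance_cost": 3, "licence_cost": 2},
}

# Rank the candidates by weighted score, highest first.
for tool, scores in sorted(poc_results.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{tool}: {weighted_score(scores):.1f}")
```

The value of the exercise is less the final number than the forced discussion about which criteria matter and how much, before any licences are bought.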
As previously mentioned, the perceived success of automated testing depends on what expectations have been agreed at all levels across an organisation, and on what is needed on a project-by-project basis.
Once agreement has been reached about what is expected from automated testing, and the required level of investment and support is in place, automated testing in any organisation has a good chance of succeeding in delivering both actual results and value for money, as well as meeting perceived expectations.
To ensure maximum benefit from automated testing, the process cannot be left to operate in a vacuum; it needs to be integrated into an organisation's development approach and lifecycle. Failing to do so can prove costly in both time and expense.
Setting and agreeing expectations up front as well as having the right resource in terms of environments, initial and ongoing support, documentation, tools & team members, can greatly contribute to the success of automation and the value it delivers to an organisation.
To learn about Infuse’s test automation services, visit: http://infuse.it/services/software-testing/test-automation/