Over the last 20 years, the term "Test Automation" has grown from a rather niche activity at the end of a project, used for regression testing, into an industry standard. However, many people seem to think automation can be solved by simply retraining a manual tester in a framework or a few tools.
In theory, this logic has substance: after all, the manual tester knows the testing process and data, as well as the application you are looking to test, and is fairly technically competent. But it's not as simple as telling them to watch a few YouTube videos or read a book, because what you are asking for is something more. The technical skills required of an Automation Tester put them practically in the same realm as a software engineer.
So just make the developers testers then? That seems a logical and reasonable conclusion, and it's a trend that has gathered steam. Developers have been testing their own code for years, and "Embedded Tester" is a term we have heard in many a squad. But in our experience this "tester", and the developers' own testing practices, often lack the structure required or don't follow defined guidelines to ensure quality, leading to a lack of coverage and confidence in the code.
So you have testers without the technical skills to build an automated test, and technical engineers without the testing knowledge to conform to the test strategy. Yep, Test Automation is harder than you think!
Let us elaborate...
Firstly, you need to consider the people: Manual Testers and Software Engineers have different skillsets and mindsets. A Software Engineer is technically proficient, understands code and can build practically anything, but is often single-minded, building within the parameters of the requirement assigned. Manual Testers, on the other hand, are not as technically skilled but have a firm understanding of acceptance criteria and the scope of tests required; they look at the bigger picture, focusing not on the code but on the processes and their impact in production.
Secondly, the process of working as a Software Engineer is different: many don't see quality as their responsibility, and they have a habit of not following proven industry quality standards. They are often supported by equally misguided senior IT leaders who think deployed code is ready to be tested, are then upset when it doesn't work, and ask the testers "why is the test not passing?", apportioning blame.
Then what about your environment control? You can't just take the regression tests built previously and automate them. You need to think about the process post-deployment, including configuration, release management discipline, integrations, and test data. Unfortunately, automated tests are not intelligent like manual testers, so you need to rework your test scripts to achieve the same test objective.
Automated test scripts are, in fact, simple coded instructions that do exactly what you say. Unless very complex code supports it, they can't make decisions on the fly, nor can they break off from test execution to ask a question. The planning and preparation required is quite extensive.
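To illustrate the point, here is a minimal sketch of what an automated check really is. Everything in it is hypothetical (the `apply_discount` function stands in for whatever application logic you are testing), but the shape is typical: fixed inputs, fixed expected outputs, and no judgment in between.

```python
# A minimal, hypothetical sketch: an automated check is just literal
# instructions. It feeds in the data it was given and compares the result
# to a pre-decided expected value -- it cannot ask "does this look right?"

def apply_discount(price: float, code: str) -> float:
    """Stand-in for the application logic under test."""
    discounts = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - discounts.get(code, 0.0)), 2)

# The "test script": fixed inputs and fixed expected outputs.
test_cases = [
    (100.00, "SAVE10", 90.00),
    (100.00, "SAVE25", 75.00),
    (100.00, "BOGUS", 100.00),  # unknown code: no discount expected
]

for price, code, expected in test_cases:
    actual = apply_discount(price, code)
    # A manual tester might pause and query an odd-looking result;
    # the script can only pass or fail against what it was told to expect.
    assert actual == expected, f"{code}: expected {expected}, got {actual}"

print("all checks passed")
```

If the application's behaviour changes legitimately, the script doesn't adapt; someone has to go back and rework the expected values, which is exactly the maintenance overhead described above.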
Lastly, there is a plethora of tools and languages available for automation, with a new one appearing almost every six months. But is your manual tester going to be proficient in a new language in six months? Can they switch to a new coding language and framework in a matter of days? Which one do you use if you are not experienced? The choice is bewildering, and often has to be made before a single line of code has been written.
So, what are you going to do about it?
When it comes to Test Automation you need to ask yourself a number of questions:
1: Do we need it?
2: Can we do it?
3: Who can do it?
4: Can we afford it?
5: What is the ROI?
The first two questions are often built around the project's purpose and drive. The management answer to "do we need it?" is mostly driven by cost or desire. Leaving the expense to one side, there is a trend for managers to insist on automation because they heard it was a good thing to have. Don't get me wrong, it is, but it should never be the automatic choice. The first thing you need to do is bring your tester in and get a proper appraisal of the viability of the application and processes for automation. Not every process will be suitable, whether because it jumps from one medium to another or because it is broken into multiple activities by multiple people with multiple dependencies.
Another factor to consider is whether we have the right tools for the job or need new ones, and this again is where including the automation tester early in the process is essential. They will know the limitations of the test tool and can either give you a quick answer or run a proof of concept, attempting to automate a few simple processes, which answers the "can we do it?" question.
Now, who can do it? That was discussed, sort of, earlier in the blog: you can't just pick up a manual tester or a developer and put them in the automation tester basket. Technology often gets in the way here; manual testers may know the process but lack the technical know-how to use the automation tool, whilst engineers have the latter but not always the former.
So how do we achieve this? Well, you can train the developer, which seems the easiest option but is not always the most efficient and effective. You really need to think about the problem, which is mainly: how do I deliver faster, better, smarter (with less risk, optimised cost, a lower TCO, and so on)? Think about how much of the tech stack you control, and then explore whether your solution will be tech-centric, delivered by developers, or QA-centric, delivered by SDETs, to deliver the maximum benefit. If you can do that, the unit cost of a resource is insignificant compared with the overall advantage gained.
Alternatively, you could select an automation test tool that’s easy to understand.
Currently there is a trend in automation towards script-less testing: a tool that takes on the heavy lifting of the technology and allows the less technically capable to produce automated tests. This type of tool tends to use some form of keyword-driven system; all the tester needs to do is choose the flow of the test and populate it with data. It also opens up an interesting position: if it's not that technically hard to use, why not get the business users to do it? After all, they know the process paths better than anyone. But I would advise you to have at least one dedicated tester or test manager available to check the quality of test creation and results. The business is also unlikely to mitigate against sad paths, alternate paths and their impact on data, so we come back to the tester to help with risk mitigation, with the help of the business.
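The keyword-driven idea underneath these tools can be sketched very simply. In this hypothetical example (all function and keyword names are made up for illustration), the framework maps keywords to code, and the tester supplies only the flow of keywords plus data:

```python
# A minimal sketch of a keyword-driven system (all names hypothetical).
# The technical team implements the keywords once; testers compose flows.

def open_page(ctx, url):
    # In a real tool this would drive a browser; here we just record it.
    ctx["page"] = url

def enter_text(ctx, field, value):
    ctx.setdefault("fields", {})[field] = value

def verify_field(ctx, field, expected):
    actual = ctx.get("fields", {}).get(field)
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

# The keyword library: the "heavy lifting" hidden from the tester.
KEYWORDS = {
    "open page": open_page,
    "enter text": enter_text,
    "verify field": verify_field,
}

# What the (non-technical) tester writes: a flow of keywords and data.
test_flow = [
    ("open page", "https://example.test/login"),
    ("enter text", "username", "jane"),
    ("verify field", "username", "jane"),
]

ctx = {}
for keyword, *args in test_flow:
    KEYWORDS[keyword](ctx, *args)

print("flow executed")
```

Note what the sketch also shows: the flow only covers the happy path, which is exactly why a tester still needs to review coverage of sad and alternate paths.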
Yep, Test Automation is harder than you think!
However, as with most things, the final decision comes down to the definition of your problem, the cost to implement and the return on investment. Is automation worth the time, effort and money required? Again, I can't answer that question, as I don't control your budget and investment appraisal measurements, but you need to consider the impact on resourcing, time and coverage of either sticking to manual testing or investing in test automation. And while the cost of investing in test automation is onerous, the decision not to invest creates an unbounded cost model for the business.
So the question you then really need to ask yourself is that even though Test Automation is harder than you think, "can you afford not to do it?"
This blog was written by Infuse CEO Nalin Parbhu. You can connect with Nalin on LinkedIn here.