question from my manager
Today I got a question from my manager.
If a test calls for 20 steps, how would an automation lead ensure that all 20 steps are actually covered in the script? We have around 250+ scripts.
I said checkpoints/validations in all 20 steps, plus code review, but my manager is not convinced.
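To make the idea concrete, here is a minimal sketch of what I mean by a checkpoint after every step. It's Selenium-flavoured Python purely for illustration; the URL and locators are made up, and a real script would use whatever framework the 250+ scripts are built on.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# Step 1: open the login page
driver.get("https://app.example.com/login")  # hypothetical URL
assert "Login" in driver.title, "Step 1 checkpoint failed: login page not loaded"

# Step 2: enter the user name
driver.find_element(By.ID, "username").send_keys("test_user")  # hypothetical locator
entered = driver.find_element(By.ID, "username").get_attribute("value")
assert entered == "test_user", "Step 2 checkpoint failed: username not entered"

# ...steps 3 to 20 follow the same pattern: one action, then one explicit
# checkpoint, so a code reviewer can count 20 checkpoints against the
# 20 manual steps.

driver.quit()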
What are your opinions?
when things get harder, the harder get going
I think it's a bad idea to collapse automated coverage onto manual coverage.
While it's good to know how the automated tests relate to the manual tests, I think it's a losing battle trying to keep automated coverage in line with manual coverage. I used to think that automation was about reducing the manual regression set, but these days I realize that's a difficult direction to maintain. You're basically still manual testing, just manual testing with code.
My thinking now is that it's already hard enough to maintain feature parity on even basic tests, and to keep them evolving fast enough to keep pace with new feature development and the refactoring developers do. On top of that, you're trying to keep the test run under five minutes so it doesn't hold up the build process. With many shops running 3:1 to 6:1 dev-to-QA ratios, it's very hard to keep automated tests maintained. I'd avoid adding extra hoops to jump through.
Second, the amount of regression isn't solely dependent on how much is automated; it's really a function of risk. You can reduce risk with automated tests, but that's taking a shotgun to a flea. Another way to reduce risk is to be more deliberate about change control.
For example, say you have 5 developers in one sprint. If all 5 worked on different parts of the app, you have to do a full regression. If all 5 worked on only one module or one feature, the area that needs regression is much smaller. Now go one step further and hold a comprehensive review of what will change before and after the code commit, and you have narrowed the areas at risk even further. Oh wait, isn't that what Agile is? A team working on a focused story, clear entry and exit criteria, and iterative releases. And guess what: you can develop or maintain your automated tests inside that workflow.
Because you know what you're changing, you just have to harden or write new automated tests in that area. And because you're working on the areas the developers are working on, you're keeping pace with development instead of playing catch-up regression.
BTW, the answer to your original question:
In the past I have used Test Case Management (TCM) systems and had my automated tests report results back to the TCM, marking each test run as pass/fail. Most TCM systems have an API or hook you can use with automated tests. Depending on the framework, most of the time you can write a listener that intercepts automated test run results, or simply report pass/fail in the teardown step.
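As a rough sketch of that listener idea, assuming a pytest-based suite and a TCM that accepts results over a simple REST endpoint (the URL and payload fields below are hypothetical; a real TCM will have its own API):

# conftest.py
import requests

TCM_URL = "https://tcm.example.com/api/results"  # hypothetical endpoint

def pytest_runtest_logreport(report):
    # Report only the main test phase, not setup/teardown.
    if report.when != "call":
        return
    payload = {
        "test_case_id": report.nodeid,  # map this to the TCM's case ID
        "status": "pass" if report.passed else "fail",
        "duration_seconds": report.duration,
    }
    try:
        requests.post(TCM_URL, json=payload, timeout=5)
    except requests.RequestException:
        # Don't fail the build just because the TCM was unreachable.
        pass

Other frameworks have the same kind of hook (TestNG listeners, JUnit extensions, and so on).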
But my experience with that has been that manual test cases pile up more quickly than automated test cases. What happened was that, to management, I looked bad because the automated coverage percentage kept dropping in the metrics, even though we were shipping a lot more features per sprint and there were fewer escaped defects. At that point it became a debate with the boss over whether the automated tests were effective or the manual testing had simply gotten better.
In my opinion, you should get a new manager. That one is broken.
If he doesn't believe that you use checkpoints to validate things then... why is he having you do automation at all? If he thinks it doesn't do anything, what's the point? Or is that what he's getting at?
Trust, or the lack thereof, will not be established via code or checkpoints. I'd suggest a different conversation with your manager. Best of luck!
I would ask him to be more specific as to how the tests should be changed.
Patience is like bread I say.... I ran out of that yesterday.
To make it clearer to the manager, my suggestion is this:
Get that script with 20 steps.
Call your manager so s/he can watch.
Set your test tool to Debug Mode.
Step through your code line by line and narrate to your manager: "OK, step #1 just executed, now step #2..." and so on, until all 20 steps have been executed.
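If a live debugger walk-through isn't practical for every script, a rough alternative is a small step-logging helper so the manager can watch "Step N of 20" scroll by during a normal run. The helper and step descriptions below are made up for illustration:

import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

TOTAL_STEPS = 20
_current_step = 0

def step(description):
    # Log "Step N of 20: <description>" before the step's actions run.
    global _current_step
    _current_step += 1
    logging.info("Step %d of %d: %s", _current_step, TOTAL_STEPS, description)

# Inside the script:
step("Open the login page")
# ...actions and checkpoint for step 1...
step("Enter the user name")
# ...actions and checkpoint for step 2...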
Some applications provide more than one way to get to a certain point. For example, Option 1: a user clicks Button1 to get to Page2, then clicks Button2 on Page2 to reach the Target Page. Option 2: the same user clicks a link on Page1 and the system navigates straight to the Target Page.
Maybe your manager has seen or heard of a case where the manual test case calling for 20 steps (using Option 1) fails while the automated script (using Option 2, and possibly containing fewer steps) passes.
I cannot go ahead and demo all 250+ scenarios, each with different steps. No one other than an automation engineer can evaluate and check whether all the steps are included.
when things get harder, the harder get going
Surprised no one has mentioned "Peer review" yet?
Chikki did in his original post; that's why I didn't.