Automation as a bottleneck
I have come across this issue for the second time now (first at my previous managerial job, and now again), so I wonder whether it is a common problem and whether there are particular ways to address it.
Basically, both times I was involved in a process-improvement effort and identified that we needed to start automating in order to free up the manual testers' time and to speed up the deployment process. This seems quite natural to me.
Starting automation from scratch is quite interesting, but also time-consuming, work. Taking testers away from manual testing is not always possible, so automation moves forward extremely slowly.
When I talk to my boss, I hear that we can't hire outsourcers to quickly automate the backlog, that we can't hire contractors, and that we need to use our own internal workforce. At the same time, our internal workforce is fully occupied with manual regression testing, precisely because it is not yet automated. I see this as a vicious circle that cannot be broken quickly - and yet I am being asked to break it quickly, handed restrictions and responsibility at the same time.
Since this is my second attempt and the results look similar, I wonder if I am doing something wrong.
What I did this time was:
1) Organize a framework (with help from the developers)
2) Allocate all available team members' time to automation
So, I managed to automate a small part (the first step) of the regression suite in 3 months, which still leaves us with 90% of the work manual. If I keep going at the same pace, we will have to invest a whole year into automating the remaining 90%; meanwhile our backlog will keep growing, so we will still be behind next year.
So I wonder: are there common techniques or pieces of advice for doing this quickly, or for communicating the resource shortage to upper management?
Sorry if I posted this in the wrong section; I searched for similar topics but did not find any, so please move my post if required. Thank you in advance for any ideas, advice, or thoughts on this topic.
In my experience what you're seeing is the norm. Automation - good automation - follows the same patterns as good software. After all, automated regression is software that uses other software in defined ways.
Just like "real" software, automation can be done with any two of good, fast, and cheap - so your first priority with your boss is to demonstrate that your automation _is software_. You also need to reframe the goal of your automation project somewhat: you're trying to reduce the need for time-consuming, _error-prone_ manual regression runs by automating regression and running it on a regular schedule. This doesn't necessarily speed anything up, but it does assure you that the scenarios you've automated haven't broken, and it notifies you _quickly_ when a change does break a feature you've automated - so ultimately fewer regression bugs escape into the wild.
Here's how I'd communicate it to your boss:
- automation _is programming_.
- good automation is at least as complex as the application it tests because it's software that exercises the application it's testing.
- good automation allows you to cover a large number of paths through the application under test in a relatively short time, and does so the same way _every time_.
- good automation catches regressions that impact the areas of the application it tests within a short time of them being integrated into the code base.
- the ROI on automation builds up over time. A regression suite that takes a team of 5 manual testers a week to run (200 hours) may take 1000 hours to automate. Once automated, if you're running it daily you're into positive ROI within a couple of weeks (since there is a cost to running automation, in the form of a machine in use plus analysis time). On the other hand, if it takes 10,000 hours to automate, it will be close to three months before the nominal ROI goes positive - _but_ you will see fewer regression bugs released in that area.
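That break-even arithmetic can be sketched in a few lines of Python. The model below is one simple framing - automation displaces the weekly manual pass but adds a per-run machine-and-analysis cost - and every number in it is an assumption for illustration; the actual break-even point depends heavily on how much value you credit the extra daily runs with.

```python
# Toy break-even model: automating the suite frees the weekly manual pass,
# but each automated run still costs some machine time plus analysis.
# All figures are illustrative assumptions, not measurements.
def payback_weeks(build_hours, manual_pass_hours, runs_per_week, run_cost_hours):
    weekly_saving = manual_pass_hours - runs_per_week * run_cost_hours
    return build_hours / weekly_saving

# A 200-hour weekly manual suite, automated in 1000 hours, then run daily
# with an assumed 4 hours of machine + analysis cost per run:
print(round(payback_weeks(1000, 200, 7, 4), 1))  # -> 5.8 weeks to break even
```

Tweaking the inputs (a 10,000-hour build, a higher per-run cost) pushes the break-even point out by months, which is exactly the conversation to have with management before committing to a number.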
Don't set a goal of automating everything: look to the 80/20 rule - you want solid regression coverage for the 20% of the application that gets 80% of the use. Once you have that, your next targets should be, more or less in order:
- mission-critical areas (things like financial data, data that has legal requirements, and so forth)
- regression-prone areas (these will start to get really obvious as you approach the 80/20 rule)
- problems reported by your biggest customers
- new features that will fall into the 80/20 rule once released
- anything else that's easy to automate
- anything left over
It's an ongoing effort, but once you reach the point where most of your new automated regression work is on new features (somewhere in the 5-10 year range for enterprise-level software unless you get dedicated resources) you'll find that the majority of problems reported by customers will fall into a small number of broad categories: things that can't be automated; things that are specific to one customer (and so not really worth automating because they cause problems so rarely); things the customer shouldn't be doing (which is an argument all by itself); and things with so many external dependencies it's not worth automating them.
To add to the previous answer:
I like to think of automation as reducing technical risk, not as a replacement for manual testing (though as a side effect you'll do less manual regression, since there will be less technical risk).
Another thought: early on, when your smoke and automation coverage is low, you'll want to be very selective about which changes are made to the system. Instead of each engineer working on a different part of the system, have all the engineers and testers work on a narrow vertical slice, so that the impacted functionality is localized to a small area and the manual testing required stays limited.
Then use the freed-up time to implement automated tests in the slice the entire team is working on. When the next iteration comes along, you'll have a decent level of coverage in that one vertical slice and can move to another. Over time you'll build up good smoke-level coverage, and from there you can start getting more aggressive.
Katepaulk, Dlai, thank you so much!
First, I realized that I am not the only one facing this issue, which means I am doing the right thing and what I need to improve is communication with upper management - often the actual problem.
Second, I realized that the 80/20 rule is actually a great point - I had not thought of it, which was a big mistake on my part, probably due to my lack of managerial experience. Thank you for reminding me of it!
Also, to add to the second point, I think vertical coverage is a good place to start - covering 20% of the volume that way should cover 80% of the critical functionality.
Thank you again, this forum is just awesome!
It is difficult to add much quantification beyond what the two Kats above have covered. (Both IDs start with Kat - cool!)
My favorite benefit of automation is that it keeps testers honest. It is easy to say that a manual test passed just by saying so, or by believing that it did; automation makes me prove my hypothesis. Others can run the same script, and the results really have to be correct for the test to hold up. I don't always use automation to speed things up - I also use it as a set of checks and balances, to make sure that I and others are not missing things, or seeing only what we want to see.
This is the usual ask from managers and leadership teams who didn't do automation well, or at all. Many commercial tools' marketing teams demo record-and-playback to management and claim that you can automate any test case in only one or two times the manual execution time.
The other side of management is looking for ways to reduce cost. Here you can try to develop or evaluate a reusable framework - for example, build a keyword-driven framework with a few developers, then use your manual testers to cover the majority of the automatable tests.
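As a rough illustration of the keyword-driven idea, here is a minimal sketch in Python. Every name in it (`KEYWORDS`, `open_page`, `run_case`, and so on) is invented for the example, not taken from any particular tool; the point is only the division of labor - a few developers register keyword implementations, and testers compose cases from keyword rows.

```python
# Minimal keyword-driven layer: developers maintain the keyword
# implementations; testers author cases as (keyword, args...) rows.
KEYWORDS = {}

def keyword(name):
    """Register a function under a keyword name testers can reference."""
    def wrap(fn):
        KEYWORDS[name] = fn
        return fn
    return wrap

@keyword("open_page")
def open_page(url):
    # A real implementation would drive a browser here.
    print(f"opening {url}")

@keyword("check_equal")
def check_equal(expected, actual):
    assert expected == actual, f"expected {expected!r}, got {actual!r}"

def run_case(steps):
    """Execute one tester-authored case: a list of (keyword, *args) rows."""
    for name, *args in steps:
        KEYWORDS[name](*args)

# A case a manual tester could write without programming knowledge:
run_case([
    ("open_page", "https://example.test/login"),
    ("check_equal", "Login", "Login"),
])
```

Real keyword-driven frameworks typically read the rows from tables or spreadsheets, which is what lets non-programming testers contribute cases while the developer-maintained keyword library stays small.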
You could also put the ball in your supervisor's court: "As a process improvement, you said we should look at automation - fine, we can do that." Then ask your boss to set this as your goal for the year. Having done that, quantify the percentage of your current regression suite that needs to be automated by year end (allowing for the regression tests you will keep adding on an ongoing basis), calculate the effort required to reach that percentage, and send an estimate report to your boss every week until he or she approves your resource request.
It is not wise to take manual testers' time and ask them to create automation scripts for free. After all, nothing comes free when you are aiming at the goal of "faster time to market" - and that goal has the prerequisite I described above, which is directly relevant to this scenario.
I don't build software, but I make it work better. Testing is a passion.