Not Sure if Automation is Worth it Based on ROI Analysis - Small Company
Hi and thanks for reading. I work as a QA Lead at a small software company where we have been struggling to incorporate automated testing. I am trying to do some research so that we can come at this with a plan and really add value. I was excited to find an ROI example that clearly shows the value of automation.
However, I recreated the ROI spreadsheet, used our company's numbers, and did not see as great a return. I'm posting here in hopes of feedback or help with anything I've missed. The numbers I've used are based on how much time and money my company spends on regression testing and our experience with automation using TestComplete. The project here has about 200 manual tests written, and our team has two developers and one tester.
The main differences I see between the example and my analysis:
- Fewer test cycles--decreases ROI
- Lower manual effort per test--decreases ROI
- Less time to automate (in our experience)--increases ROI
- Less time to maintain automation--increases ROI
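To make the comparison concrete, here is a minimal sketch of the kind of ROI arithmetic the spreadsheet performs. All of the numbers and the `automation_roi` function are illustrative assumptions, not the company's actual figures or the spreadsheet's exact formula:

```python
# Hypothetical ROI sketch: every number below is made up for illustration.
def automation_roi(num_tests, cycles_per_year,
                   manual_min_per_test, automate_hours_per_test,
                   maintain_hours_per_test_year, hourly_rate):
    """Return (annual manual cost, first-year automation cost, simple ROI)."""
    # What the manual regression cycles cost per year.
    manual_cost = num_tests * cycles_per_year * (manual_min_per_test / 60) * hourly_rate
    # One-time cost to automate, plus yearly maintenance.
    build_cost = num_tests * automate_hours_per_test * hourly_rate
    maintain_cost = num_tests * maintain_hours_per_test_year * hourly_rate
    first_year_cost = build_cost + maintain_cost
    roi = (manual_cost - first_year_cost) / first_year_cost
    return manual_cost, first_year_cost, roi

# 200 tests, 4 regression cycles/year, 10 min per manual test,
# 1 h to automate each, 0.5 h/yr maintenance per test, $50/h.
saved, spent, roi = automation_roi(200, 4, 10, 1.0, 0.5, 50)
```

With these (assumed) inputs the first-year ROI comes out negative, which matches the pattern in the bullets above: with few cycles and low manual effort per test, the build cost dominates until the suite has been reused for a while.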
In my experience, automated testing doesn't start to produce an ROI until you get it into CI. The main benefit of automated testing isn't quality but developer productivity, in that developers save time debugging. Once code is packaged and shipped to QA, automation starts to lose its return.
Once you get it into the CI build pipeline, you'll find your defects/KLOC (defects per thousand lines of code) metric starts to look better. Cross-reference that with your lines-of-code-per-bug-fix metric and you'll find that measured developer productivity has risen considerably.
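For anyone unfamiliar with the two metrics, here is a tiny sketch of how they're computed. The figures are made up for illustration:

```python
# Illustrative metric calculations; all figures are hypothetical.
def defects_per_kloc(defects, total_loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (total_loc / 1000)

def loc_per_bug_fix(lines_changed, bug_fixes):
    """Average lines of code touched per bug fix."""
    return lines_changed / bug_fixes

# Same 60k-line product, before and after tests entered the CI pipeline.
before = defects_per_kloc(45, 60_000)  # 0.75 defects/KLOC
after = defects_per_kloc(30, 60_000)   # 0.50 defects/KLOC
```

The claim in the post is that both numbers trend in the right direction once automated tests run on every build, because defects are caught before they're ever shipped to QA.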
dlai, thank you for your input. We had mainly been considering automation for the purpose of regression testing, but I see the value of this method and of finding bugs before they ship to QA. I think I will bring this up to the team.
What types of tests do you find to be most valuable in the CI pipeline? Unit tests? API tests? GUI? Do you run a full regression suite or just a smoke test? To be honest, we have almost no unit tests on this project and I would also like to analyze whether our focus should be on automated GUI tests or on unit tests.
In theory, if we only wanted to automate our regression suite (which manual testers currently execute), do you agree with my analysis that the ROI is not that high? Thanks again for the input!
Originally Posted by Daboa
Integration tests will give you the most value. Most bugs stem from a misunderstanding of requirements or of the data exchange between different systems or modules. Integration tests are difficult to maintain, but not to the degree of a GUI-level acceptance test. Their coverage is somewhat nebulous: ideally you want use case coverage at this level, but what really constitutes a use case? It takes manual intervention to measure, while unit tests can be measured on pure code coverage percentage. This is why you tend to have fewer of these than unit tests.
Unit tests, of course, take the least maintenance, run the fastest, and are the easiest to measure for coverage, so you'll tend to have more of these. Since they're easy to measure, percent code coverage can be tracked automatically through instrumented builds and coverage tools.
Acceptance-level, or end-to-end, tests are extremely difficult to maintain, even more so than the SUT. You tend to use them to ensure proper deployment. But many shops will do something like write one test per feature story implemented, for peace of mind that they don't break any features in the process of a large refactor. Of course, if you're going into a legacy product with few unit or integration tests, it may become a necessary evil to write more of these.
On a technical level, if your integration test coverage is high enough, the only additional gain from an end-to-end test is verifying that deployment was successful.
In the testing pyramid you'll probably see a lot of different numbers for what percentage of each type should be there. http://cdn.ttgtmedia.com/rms/onlineI...n_strategy.jpg
The guideline I like to follow is 80% unit, 15% integration, and 5% acceptance tests (as a total percentage of lines of test code written).
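To show the difference in shape between the bottom two pyramid layers, here is a minimal sketch. The discount function and the in-memory "repository" are hypothetical stand-ins, not from the thread:

```python
# Hypothetical module under test: a discount calculator plus a small
# storage component it integrates with.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class InMemoryOrders:
    def __init__(self):
        self._orders = {}
    def save(self, oid, total):
        self._orders[oid] = total
    def total(self, oid):
        return self._orders[oid]

# Unit test: one function, no collaborators -- cheap, fast, and its
# coverage is trivially measurable.
def test_apply_discount_unit():
    assert apply_discount(100.0, 15) == 85.0

# Integration test: business logic plus the storage component together --
# it checks the data exchange between modules, not just one function.
def test_discounted_order_integration():
    repo = InMemoryOrders()
    repo.save("A1", apply_discount(200.0, 10))
    assert repo.total("A1") == 180.0

test_apply_discount_unit()
test_discounted_order_integration()
```

The unit test pins down one calculation; the integration test exercises the handoff between two components, which is where the "misunderstanding of data exchange" bugs described above tend to live.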
As for what tests to run in the CI pipeline, it'll depend on what budget I'm given. Ideally I'd parallelize the heck out of the tests and run them all. But in the smaller shops I've worked in over the last few years, I've done a few things differently.
a) In one shop, I've staged the tests. Smoke tests that are headless as part of CI, then full regression out of band (nightly builds).
b) In another shop, I've tagged my tests by features they tested, and developers check their code in against a Jira ticket, which is also tagged by which features they affect, then those similarly tagged tests are prioritized in the CI test run.
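Strategy (b) can be sketched in a few lines. The test names, feature tags, and "ticket" shape below are all made up to show the prioritization idea, not an actual tool's API:

```python
# Minimal sketch of strategy (b): each test carries feature tags, and the
# CI run puts tests whose tags match the commit's Jira ticket first.
TESTS = [
    ("test_login_smoke",    {"auth", "smoke"}),
    ("test_report_export",  {"reports"}),
    ("test_password_reset", {"auth"}),
]

def prioritize(tests, ticket_tags):
    """Run affected-feature tests first, everything else after."""
    hit = [name for name, tags in tests if tags & ticket_tags]
    rest = [name for name, tags in tests if not (tags & ticket_tags)]
    return hit + rest

# A check-in against a ticket tagged "auth" runs the auth tests first.
order = prioritize(TESTS, {"auth"})
```

In a real pipeline the tags would come from test markers (e.g. pytest marks) and the ticket tags from the issue tracker; the point is simply that the fastest feedback goes to the code most likely to have broken.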
Last edited by dlai; 05-06-2015 at 08:18 AM.
dlai, thank you again for the reply. It is helpful to see the strategies of someone with more experience.
For context, our team consists of myself (QA) and 2 devs. Our general approach, which has worked well so far, is for me to write test cases while they code with as much transparency as possible. Then I test, we resolve intermittent issues, and then we wrap up the sprint and I merge my new tests into the regression suite. To be honest, this has served us well and we maintain a good velocity. However, we do not write unit tests or any automation and the last few sprints we have ended up releasing to our customer without a full regression. Again, we're a small team doing one custom solution for one customer and we have had no disasters with quality, but our manager would like to incorporate automated testing and I (also having an interest in it) have been looking into how to do that. I have coding experience and I believe I could tackle Unit and Integration tests, if that's right for our project.
My research into automated testing in Agile affirms what you have suggested: Unit tests and Integration tests are emphasized over GUI testing.
But I'm having trouble envisioning how this will work on our small team. My main questions/concerns are
- It will be hard to motivate the team and management to invest in any automation given that we currently have high velocity with good quality - I think this is partially because we are a small, well-knit team. I'm wondering if automation is more valuable to larger teams and if the ROI for a small team is not very high.
- Assuming we will not hire another QA resource (just assume for a moment), what would happen to manual tests if the only QA resource is focused on automation? Are manual and automated tests supposed to work in tandem or would automation writing replace most of the test case writing we do today? In other words, what is the most efficient focus if there is only one QA resource?
- We have some older but active products that have no automation; it doesn't seem worth it to automate tests when there are so many and we only do a regression cycle once per year. These are old and big products that get a month or two of dev focus per year for updates.
Thank you again for reading this. Part of me is just thinking out loud about what the options are for my team. I have an interest in automation and I want to find what is most efficient for the team, but I'm struggling to see how it would fit, despite how much I want it to. I appreciate any further feedback. Many thanks.
I understand your concerns. It's hard to introduce change when things are "working" and any investment is seen as a detriment to productivity.
I struggle with these problems when proposing new initiatives. We hit the following situation:
* The current velocity doing X is known.
* We have only a rough estimate for implementing Y, and it's pretty huge, in both money and time.
* The final productivity gain is, at best, an estimate, and most returns from this investment are a year or more out. (And in a way, there isn't an accurate, clear metric for measuring the productivity gain without introducing some intrusive controls; that's why you don't see detailed white papers published by other companies who have tried it.)
I would say the lowest-hanging fruit is justifying the time savings on integration issues. If your shop uses multi-branch source control, you can objectively measure how long, on average, it takes code to get from a dev branch into the stable branch. You'll find there are huge time savings from integration and acceptance tests in this area.
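The measurement itself is simple once you have the timestamps. Here is a sketch; the dates are invented, and in practice you'd pull the branch-creation and merge times from your source control history:

```python
# Sketch of the suggested metric: average days from a dev branch's
# creation to its merge into the stable branch. Timestamps are made up.
from datetime import datetime

def avg_merge_latency_days(branches):
    """branches: list of (branched_at, merged_at) datetime pairs."""
    deltas = [(m - b).total_seconds() / 86400 for b, m in branches]
    return sum(deltas) / len(deltas)

history = [
    (datetime(2015, 4, 1), datetime(2015, 4, 8)),   # 7 days to stable
    (datetime(2015, 4, 3), datetime(2015, 4, 6)),   # 3 days to stable
]
avg = avg_merge_latency_days(history)
```

Track this number before and after introducing integration tests in CI; if the argument above holds, the average should drop because integration breakage is caught at check-in rather than during stabilization.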
This base benefit has side benefits as well: developers can work on the latest, cleanest code sooner, higher-quality builds are shipped to QA, and there are fewer deployment issues in production.
Thanks again, dlai. You articulate the challenges of change very well. I am still trying to evaluate what would work best for our team. We do not use multi-branch source control and we only have 2 devs on the project in mind.
Based on my (still young) research, I am considering the following strategies.
- Automated Integration Tests with Manual Exploratory Testing - The goal is to develop an integration testing framework so that tests can be written to exercise the business logic of new and existing features as part of Continuous Integration. In each sprint, instead of writing formal manual test cases, integration tests would be written to target planned changes. I think of these tests as headless versions of what we would normally do in GUI testing and, therefore, faster and more reliable. This would avoid the duplicate effort of writing manual tests and later writing automated tests. We would still use manual exploratory testing to verify the end-user experience of new features. In the end, we would have good coverage per sprint, low overhead, and automated tests that can merge into regression. I am not emphasizing unit tests here because our product is already mature, and I think their development should be treated as technical debt to address as planned work.
- Automated GUI Tests with Manual Exploratory Testing - Very similar to the above, but prioritizing the end user's experience. The philosophy here is that--with a proper modular automation framework--we can more accurately verify the end user's experience. Also, each test could touch a large number of code paths (a one-to-many ratio of automation lines of code to tested lines of code), and we limit our testing to what the user can actually do (not what can theoretically be done in the code). But this has higher overhead and does not follow the agile testing pyramid.
- Unit Tests, Integration Tests, and Exploratory Testing - This is the same as the first, but with higher priority given to unit tests and with the goal of implementing them in the project before attempting integration tests. Of course, we'd still need manual exploratory testing. This feels like more overhead than we can handle right now.
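To make the first strategy's "headless version of GUI testing" idea concrete, here is a sketch of an integration test that drives the business logic a GUI test would otherwise reach through screens. The `InventoryService` and its behavior are hypothetical stand-ins for the product's real modules:

```python
# Sketch of a "headless" integration test: exercise the business logic
# directly instead of clicking through the UI. The service is invented
# for illustration.
class InventoryService:
    def __init__(self):
        self.stock = {"widget": 5}

    def place_order(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[item] -= qty
        return {"item": item, "qty": qty, "status": "confirmed"}

def test_order_reduces_stock():
    svc = InventoryService()
    receipt = svc.place_order("widget", 2)
    assert receipt["status"] == "confirmed"
    assert svc.stock["widget"] == 3

test_order_reduces_stock()
```

The same scenario exercised through the GUI would need element locators and waits; at this layer it runs in milliseconds and breaks only when the business rule changes, which is the speed and reliability advantage the first strategy is counting on.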
I'd love some feedback about this plan. I really want to figure out how best to introduce automation to our small company with positive ROI. Thanks for all your input.
* I think you're on track with the integration testing. Integration tests will give you the most bang for your buck. Most honest devs will have their code working locally, but it fails when integrated with other modules. Here I'd take a TDD approach: outline the main use case scenarios, create stubs for those integration tests, then have the devs fill those in with implementation code as they develop new features. (This can also be applied to major bug fixes.)
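The stub-first workflow described above might look like this in practice. The scenario names are hypothetical; the point is that QA outlines the use cases as skipped stubs and the devs make them real as the feature lands:

```python
# Sketch of stub-first integration testing: QA writes the use-case
# scenarios as skipped stubs up front; devs replace skipTest() with real
# arrange/act/assert code as the feature is implemented.
import unittest

class CheckoutScenarios(unittest.TestCase):
    def test_guest_checkout_creates_order(self):
        self.skipTest("stub: implement alongside the checkout feature")

    def test_payment_failure_releases_inventory(self):
        self.skipTest("stub: implement alongside the payment module")
```

Skipped stubs show up in every CI report, so the outstanding scenarios stay visible to the whole team until each one is filled in.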
* The exploratory tests won't go away. You simply can't anticipate every conceivable issue, especially in a rapidly changing environment. The traditional approach of writing test cases first and then testing only against those cases may be fine if you're doing a large amount of outsourced testing, but in my experience in agile shops, those test cases usually end up used only for reporting and pointing fingers.
* You'll want to get basic unit test coverage in place. You can start off with something simple like "no drop in unit test coverage for any new check-in," then crank the required coverage up or down from there. The overall goal is to gradually increase unit test coverage. Realistically, unit tests are not that useful until you do refactoring. But good unit test coverage allows dev teams to refactor quickly and keep their code base clean and properly architected so that they can develop even faster. As they say in French cooking, mise en place: keep your kitchen clean and organized so you can rapidly cook more complicated dishes.
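The "no drop in coverage" rule is easy to automate as a gate. Here is a minimal sketch; where the percentages come from (e.g. a coverage tool's report) depends on your toolchain, and the numbers below are illustrative:

```python
# Sketch of a coverage ratchet: fail the check-in if the new build's
# unit test coverage fell below the last recorded value.
def coverage_gate(previous_pct, current_pct, tolerance=0.0):
    """Return True if the check-in keeps (or raises) coverage."""
    return current_pct + tolerance >= previous_pct

assert coverage_gate(41.5, 42.3)       # coverage rose: pass
assert not coverage_gate(41.5, 40.0)   # coverage dropped: fail
```

Starting the ratchet at whatever coverage you have today means the rule never blocks existing code; it only asks that new check-ins pull the number upward over time.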
* When you don't have enough time for "proper" automated tests, one thing you can do to reduce reintroduced bugs is to use record-and-playback tests in the short term. Record tests using a record-and-playback tool, then replay them every night on a clean environment. After around three months, retire those tests; they'll be too expensive to maintain long term. The idea here is to get tests up quickly, prevent developers from breaking the same thing twice within a short period, then retire the tests before they become a maintenance burden.