I'm new to this forum... Well, I work as a tester at my company on a remote assistance project. The project is essentially web-based, and almost 50% of the requirements in the ERS are interface requirements. It's an internal project not linked directly to clients, so it's the Marketing department that proposes new functionality.
No pressure, not many deadlines... We don't build layer upon layer, but we do improve constantly... How do I know when I need to automate testing for regression?
If your application is being updated constantly, a lot depends on what kind of updates are being made. If new objects are added, old ones are moved or deleted, and/or functionality keeps changing, then you might not be able to build an automated suite robust and flexible enough to justify the time spent on it. However, if the application doesn't crash a lot and mostly functions as designed, then if you must automate, you might as well jump in and start now. At the very least you can get the framework in place, and many of the mundane report validation scripts can be written. Then, if they (or you) ever decide that the application is as it will be for some time to come, finish it. Good luck.
This is along the same lines as what Rich said. I got this from a webinar last year:
A. When is manual testing a better alternative than automated testing?
1. Subjective evaluation of the application, such as look-and-feel considerations.
2. New application with major changes included in each new build.
3. Strategic development that requires hands-on evaluation of subjective qualities.
4. Complex functionality where the time and cost of automation outweigh the benefits.
5. Exploratory testing where testers search for issues with no guidelines or set use cases.
B. When is automated testing a better alternative than manual testing?
1. Regression testing where development creates several incremental builds with minor changes between builds. The application should have mostly stable code.
2. Quick high-level tests of critical functions, also called a smoke test: an assessment of the quality of a build, a "go / no go" decision.
3. Static or repetitive tests.
4. Data driven tests where the same functionality needs to be validated with a wide range of input values or large data sets.
5. Load and performance tests, where automated testing usually provides better metrics and more accurate measurements.
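To make point B.4 concrete: a data-driven test keeps the test logic in one place and feeds it a table of inputs and expected outputs, so widening coverage means adding a row rather than writing a new test. A minimal sketch in Python (`validate_discount` is a made-up stand-in for whatever function your application exposes, not anything from the thread above):

```python
# Minimal data-driven test sketch: one check run over many input rows.

def validate_discount(order_total: float) -> float:
    """Hypothetical rule under test: 10% off orders of 100 or more."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# A single table of (input, expected) pairs drives every case.
CASES = [
    (50.00, 50.00),    # below threshold: no discount
    (100.00, 90.00),   # at threshold: discount applies
    (250.00, 225.00),  # above threshold
]

def run_cases() -> list[str]:
    """Return a description of every failing case; empty list = all passed."""
    failures = []
    for given, expected in CASES:
        actual = validate_discount(given)
        if actual != expected:
            failures.append(f"{given}: expected {expected}, got {actual}")
    return failures

if __name__ == "__main__":
    print(run_cases())
```

In a real suite you would typically let the test runner do the looping (e.g. pytest's parametrization) and load the table from a file or database, which is what makes this style pay off once the input set grows large.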