Starting ... uphill!
I have recently been given the task of learning and using QARun to script a whole pack of regression tests (Delphi/Oracle) for a pretty large utilities application. I have trawled this Forum, and have picked up some very good advice for which I am already grateful. I have learned the basics of QARun and now need to provide a report/evaluation on how effective QARun will be, and essentially how long the task of scripting will take. Now, this seems a pretty hefty and daunting task. Given that there must be a more systematic approach than having a go at scripting each piece of functionality to get an idea of the tricky and easy bits, would anyone know of a template or checklist of things to go through to make this type of evaluation a little more structured - also bearing in mind the quirks associated with the target apps?
Re: Starting ... uphill!
Ugh... sounds like you are starting at square one....
One thing I would suggest you do is think about the type of automated infrastructure you want to achieve. This is needed before you start coding your scripts and will help you design/think about how things will work best. Ask yourself the following questions:
1. How will testing occur?
2. How will test data be stored in order for my scripts to run the necessary tests?
3. Do I have the necessary hardware to run the tests? (in other words, do I have dedicated hardware assigned to me to run the tests?)
4. What are my limitations (such as time to write the test scripts, time to execute the test scripts, time on hardware used, etc.)?
5. Can I make my scripts generic to reduce the number of scripts needed to run the tests?
6. What platform does the target application I want to test run on?
7. Do I understand the "behavior" of the target application (at an object level)?
8. Is the target application stable enough to automate testing against?
9. Are there any dependencies that may affect how I develop my scripts?
There are probably more questions you will think of as you develop, but these are some I have always asked myself when I was in the early stages of development.
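On question 5, the usual way to keep scripts generic is a data-driven design: one script that reads its inputs and expected outcomes from an external data source, rather than one recorded script per data combination. QARun has its own scripting language, so the following is only a sketch of the idea in Python, with a made-up `check_login` stand-in for the application under test and invented CSV test data:

```python
import csv
import io

# Hypothetical test data (would normally live in an external CSV file,
# which also answers question 2: where is the test data stored?).
TEST_DATA = """username,password,expected
alice,secret123,pass
bob,,fail
,secret123,fail
"""

def check_login(username, password):
    # Stand-in for driving the real application: here a login "passes"
    # only when both fields are supplied.
    return bool(username) and bool(password)

def run_data_driven_tests(csv_text):
    """Run one generic script once per data row; return (user, matched) pairs."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        actual = "pass" if check_login(row["username"], row["password"]) else "fail"
        results.append((row["username"], actual == row["expected"]))
    return results

if __name__ == "__main__":
    for name, ok in run_data_driven_tests(TEST_DATA):
        print(f"{name or '<blank>'}: {'OK' if ok else 'MISMATCH'}")
```

The payoff is that adding a new test case means adding a data row, not recording and maintaining another script.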
Also, development of test automation is a difficult process and requires a lot of planning and time. Keep in mind that automated testing is just like developing an application. A while back I heard that automated testing is based on 10% record/playback and 90% programming around the target application to get it to work correctly. After my past year doing extensive automated testing, I would agree with this figure.
P.S. Ask a lot of questions to get the answers you want!
Re: Starting ... uphill!
Thank you for your ideas and questions for an approach. This has certainly enabled me to get a handle on things. As I move on with the planning - and now coding - I certainly see your point about the importance of a structured, well thought out plan. I read many months back, way before I thought I'd get involved in auto-testing, that using an automated test tool requires the same approach as any software project - needs analysis, design, timescales, breakdown, resource analysis etc. At the time I didn't quite see this - but now ...! It seems that to make the most of such a tool, thought, planning and breakdown are critical. I also agree with your 10/90 ratio. Try explaining it to those who say "But you just have to record and play ..." yeah, right!