Does anybody know of a good guide for application developers writing applications that will have automated tests written for them? It's proving difficult to extract this concept from Google, which insists on showing me guidelines for automated script developers.
What I specifically need is guidelines for writing good code for the AUT - for example, what common mistakes to look out for that make automated scripting difficult (like QTP not being able to consistently recognize objects from session to session, etc.). In my case I'm specifically looking at an HTML/jQuery environment, testing with QTP on Windows, but any general guidance for developers would be useful. Any ideas?
One that I can think of quickly is for them to try not to switch to, or between, 3rd party objects in the middle of the development cycle. If they must, then let the automation team know ahead of time so that they can verify that their tools work with those objects.
Success is the ability to go from one failure to another with no loss of enthusiasm.
~ Winston Churchill ~
I think what you need to identify are which bits of the AUT are difficult to automate and why, and which changes to the AUT will break your automation scripts and why. I'm working in a different environment with different tools, but here are some things I do that may help:
- Changing window control IDs and/or tab ordering is liable to break a script. Ask developers to avoid doing this where it is not necessary, and to make you aware of the change where it is.
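Since your AUT is HTML/jQuery, one way to keep object recognition stable is to derive element IDs from fixed semantic names instead of letting a widget library generate session-varying ids like "ext-gen-1234". A minimal sketch (`makeControlId` is a hypothetical helper, not part of any library):

```javascript
// Hypothetical helper: build a deterministic DOM id from stable semantic
// names, so QTP sees the same object property from session to session.
function makeControlId(section, control) {
  // Lowercase and hyphenate so the id is legal, predictable, and readable.
  return (section + "-" + control).toLowerCase().replace(/[^a-z0-9-]+/g, "-");
}

// jQuery usage would then be along the lines of:
//   $('<button>', { id: makeControlId("Toolbar", "Save File"), text: "Save" });
makeControlId("Toolbar", "Save File"); // "toolbar-save-file"
```

QTP can then identify the button by that one stable `html id` property rather than a brittle combination of index and inner text.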
- Where regularly used controls are not recognised by your testing tool, provide an alternative automation-friendly interface to do the same thing. For example, in my application, certain mouse strokes are used to zoom around a graphic model. For automation purposes, we have a COM interface that does the same thing, and as an aid to recording, the application logs the necessary details of this action to file when done with the mouse.
- As an extension to the above, provide an automation interface to your application (COM based in my case) that corresponds to the GUI. This greatly simplifies hand coding many test cases, as I'm largely avoiding the GUI and the issues previously described.
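The shape of such an interface can be sketched in the HTML/jQuery world too. This is a minimal, hypothetical example (the `automation` object and its `zoomTo` method are invented for illustration): the GUI event handler and the test script both call the same entry point, and every action is logged as an aid to recording.

```javascript
// Hypothetical automation object: test scripts drive the model through
// this instead of replaying fragile mouse strokes against the GUI.
var automation = {
  zoom: 1.0,
  log: [],                                  // recording aid: actions are logged
  zoomTo: function (factor) {
    this.zoom = factor;
    this.log.push("zoomTo(" + factor + ")");
    return this.zoom;
  }
};

// GUI path: the mouse-wheel handler delegates to the same interface, so
// manual use and automated use exercise identical application code.
function onMouseWheel(delta) {
  automation.zoomTo(automation.zoom * (delta > 0 ? 1.25 : 0.8));
}
```

A QTP script can then call `automation.zoomTo(2)` directly (e.g. via `RunScript` on the page) and check the result, with no dependency on screen coordinates.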
- Where you have non-GUI automation interfaces, always add to them rather than modifying them, as this will avoid breaking existing scripts.
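To illustrate "add rather than modify", here is a hypothetical sketch (the `api` object, `openDocument`, and `openDocumentEx` are invented names): a later release needs a read-only mode, so a new entry point is added alongside the old one instead of changing the existing signature that recorded scripts depend on.

```javascript
// v1 of a hypothetical automation interface; existing scripts call this.
var api = {
  openDocument: function (path) {
    return { path: path, readOnly: false };   // v1 behaviour, left untouched
  }
};

// v2: new capability added as a separate method, so scripts written
// against v1 keep working unchanged.
api.openDocumentEx = function (path, options) {
  var doc = api.openDocument(path);
  doc.readOnly = !!(options && options.readOnly);
  return doc;
};
```

The "Ex" suffix convention is borrowed from the Win32 API; any naming scheme works as long as old entry points keep their old behaviour.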
- Add application-specific document or object comparison routines to better support stable checkpoints. Automation tools will typically be able to compare text and PDFs (either of which might contain legitimate changes between runs based on time, date, etc.) and graphics. Graphics are also liable to change with screen resolution, color depth, etc. My experience is that your scripts will be more stable if you compare actual data rather than any rendering of that data.
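A data-level checkpoint of this kind can be as simple as comparing the records behind the document while skipping fields that legitimately differ between runs. A minimal sketch (`compareRecords` and the field names are hypothetical):

```javascript
// Hypothetical checkpoint helper: compare the data behind a document
// rather than a rendered PDF or bitmap, ignoring volatile fields
// (timestamps, run ids) that change legitimately between runs.
function compareRecords(expected, actual, volatileFields) {
  var skip = {};
  (volatileFields || []).forEach(function (f) { skip[f] = true; });
  var keys = Object.keys(expected).filter(function (k) { return !skip[k]; });
  return keys.every(function (k) { return expected[k] === actual[k]; });
}

compareRecords(
  { total: 42, currency: "USD", printedAt: "2011-01-01" },
  { total: 42, currency: "USD", printedAt: "2011-06-30" },
  ["printedAt"]
); // true: only the stable fields are checked
```

The same idea works as a QTP custom checkpoint: export the document's data (not its rendering) and feed both sides to a routine like this.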