I'm interested to know if anyone has developed a good naming standard within a project on TD.
I'm just setting TD up for a new project. It is a biggie, with a new J2EE enterprise core application and interfaces to about 15 legacy systems. There will be about 35 test staff working on it. We have four main test stages: Integration, User Acceptance, Industry Acceptance, and Operational Acceptance (which includes substantial performance testing). These will be the four main headings in Requirements and Test Plan. Requirements will be named after the Requirement ID from the Req Spec. It was more the naming of test scripts within Test Plan and test cycles within Test Lab that I was interested in.
Jimmy, it sounds like you have a reasonable idea already partially formed here.
Good naming conventions for one organisation may not travel well to others, though. So the best way to work out a good convention is first to understand the goals of the testing and the audience for the tool.
If the goal is to ensure requirements are covered, then look at using a form that references back to them for each test. If the audience is business people, then it is logical and reasonable to name each test script after an element of business functionality or a business process.
One of the key factors here is that with 35 people working on the project in testing alone, the convention will need to be kept fairly simple and rigorously adhered to, otherwise there is a risk of the reporting going out of sync.
In the past, when I have been in these kinds of heavily scripted contexts, my approach has been to work within guidelines similar to those given to development: each script has a logical name and a unique identifier. However, scripts can sit in more than one test area; for example, a test of a J2EE method call may be used across all stages, and be mapped into those areas via the test set for execution rather than the test plan.
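To make that concrete, here is a minimal sketch of one possible "logical name plus unique identifier" scheme, validated with a regular expression. The stage codes (INT, UAT, IAT, OAT), the REQnnn token, and the overall pattern are my own hypothetical convention for illustration, not anything TD itself enforces:

```python
import re

# Hypothetical convention: <STAGE>_<REQ-ID>_<SEQ>_<LogicalName>
# e.g. UAT_REQ042_003_VerifyCustomerLogin
# Stage codes are assumed: INT, UAT, IAT, OAT for the four stages above.
NAME_PATTERN = re.compile(
    r"^(INT|UAT|IAT|OAT)"       # test stage
    r"_REQ\d{3}"                # requirement ID from the Req Spec
    r"_\d{3}"                   # unique sequence number within the requirement
    r"_[A-Za-z][A-Za-z0-9]*$"   # logical, human-readable name
)

def is_valid_script_name(name: str) -> bool:
    """Return True if the script name follows the hypothetical convention."""
    return NAME_PATTERN.match(name) is not None

print(is_valid_script_name("UAT_REQ042_003_VerifyCustomerLogin"))  # True
print(is_valid_script_name("verify login"))                        # False
```

A check like this can be run over an exported list of script names so that deviations are caught early, rather than discovered when the reporting stops adding up.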
The aspects that I have found add value are names that:
1. Are meaningful to the purpose of the test.
2. Are traceable to a requirement or test goal.
3. Give information to multiple audiences.
4. Fit within the constraints of the tool.
5. Are unique and unambiguous.
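Points 2 and 5 lend themselves to an automated sanity check. A small sketch, assuming a hypothetical form where requirement references appear as a REQnnn token in the name (again my own convention, not a TD rule):

```python
import re
from collections import Counter

def audit_script_names(names):
    """Flag duplicate names (uniqueness) and names lacking a requirement
    reference (traceability). The REQnnn token is a hypothetical convention."""
    counts = Counter(names)
    duplicates = sorted(n for n, c in counts.items() if c > 1)
    untraceable = sorted(n for n in set(names) if not re.search(r"REQ\d{3}", n))
    return duplicates, untraceable

names = [
    "UAT_REQ042_001_VerifyLogin",
    "UAT_REQ042_001_VerifyLogin",   # duplicate
    "INT_Smoke_001_StartUp",        # no requirement reference
]
dups, untraced = audit_script_names(names)
print(dups)      # ['UAT_REQ042_001_VerifyLogin']
print(untraced)  # ['INT_Smoke_001_StartUp']
```

With 35 testers contributing scripts, a periodic audit like this is cheaper than trying to police the convention by eye.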
These are also affected by any in-house standards already set up within an organisation.
Agile Testers of the World UNIT!