I wanted to check with this forum before I report what I suspect to be a defect in Test Director.
I'm running a medium-sized project (about 1,200 test cases in total) to test a web-based transaction application. We're following a four-run accelerated-V strategy - building all tests for the entire project up front and populating the Test Lab with only the tests needed for the current run (yes, yes - it's gone all iterative).
Anyway, to help with execution reporting, we prefix all the test sets with the project code and QA run number (for instance, "RAN - 1 - <test name>", "RAN - 2 - <test name>"...) and filter on Test Set like so: "RAN - 2*" to draw the execution report. We then apply this filter to a Summary graph and use the data grid to produce our reports.
And all good so far. However, after we finished run 1, we created a new Test Lab folder and populated it with the run 2 tests. I built a new filter ("RAN - 2*") and ran the report.
The execution statuses of the tests were not all 'No Run'. They varied - Passed, Failed, apparently at random. I checked in the Test Lab and all the tests in that run are marked 'No Run', so I can't see where the report is picking up its execution status from.
BTW: the 'Tester' field is <UNKNOWN> for all tests, as expected.