Status, S-Curves, & Projections
Our functional test group is required to provide so-called "s-curves" of test case projections (executed and successful) alongside the actuals. (Note that this is the first tracked "data", since design/code/unit-test is tracked solely by end dates and "are you on track" queries.) FVT must project, months in advance, how many test cases they will create, along with how many will be executed and successful. This also raises the question of what a test case is (vs. a testpoint or iteration or whatever) and at what granularity an FVTer should provide status. Disparities between projections and actuals cause alerts and require explanations. Is this done elsewhere? How are others' projected curves determined? Do you attempt to make test cases "similar" so that you are reporting apples-to-apples?
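To make concrete what we're being asked to do, here's a rough sketch (all numbers purely hypothetical, and this is not our actual tooling) of the comparison the tracking implies: a cumulative weekly projection against actuals, with an alert raised whenever the lag exceeds some threshold:

# Hypothetical weekly cumulative projections vs. actuals for test cases executed.
projected = [10, 30, 60, 100, 140, 170, 190, 200]   # planned s-curve (cumulative)
actual    = [9, 28, 55, 88, 115]                     # actuals so far (cumulative)

ALERT_THRESHOLD = 0.10  # flag weeks where actuals lag projections by >10%

for week, (plan, done) in enumerate(zip(projected, actual), start=1):
    gap = (plan - done) / plan
    status = "ALERT - explanation required" if gap > ALERT_THRESHOLD else "on track"
    print(f"week {week}: projected {plan}, actual {done}, gap {gap:.0%} -> {status}")

The mechanics are trivial; as the replies below point out, the hard part is whether the projected curve means anything in the first place.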
Re: Status, S-Curves, & Projections
I'm afraid this is one of those Manufacturing/Software differences.
If you are asking from a manufacturing perspective - the following does not apply.
S-curves and projections require:
1. Like-for-like statistics from previous projects
2. Totally scientific testing
Both apply very well in the manufacturing world.
In the software world, where do you genuinely find a like-for-like? Change just one developer and the variances begin.
As for scientific testing: static testing can be very scientific, but dynamic test execution, never!
As discussed on similar threads on these forums, how do you clump test cases together? Are UI tests the same value as business engine tests?
If a UI test passes, does it get the same weight as an engine test?
With source code, we can make predictions for the results of code inspections - we are dealing with finite variance. With testing, no such finite variance exists!
Requirements are, in themselves, more like separate projects than aspects of the same beast.
For instance, calculating x from y and z using developer A is a totally different level of complexity than, say, implementing a billing window using developers B, C, D, and E.
The fact that 18/20 test cases passed with one sev 1 bug in the first requirement gives no predictability about how many test cases out of 100 will pass in the second.
All predictions, based on generalities, give (at best) high level general trends - and with them, a false sense of security!
I went down this path in the early 90's, and discarded it within a few years, well before the end of the decade!
Estimate based on current scope and resource, not past results!
(Well, one exception: we can look at average turnaround times for fixes and use these as part of the estimation/tuning process, provided we utilise contingency and common sense.)
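To illustrate that one exception (figures entirely made up, and just a sketch of the arithmetic): average the turnaround times observed in the current project and pad with an explicit contingency factor before feeding the figure into the estimate.

# Hypothetical fix turnaround times (in days) observed in the current project.
turnaround_days = [2.0, 3.5, 1.0, 4.0, 2.5, 6.0]

CONTINGENCY = 1.25  # explicit padding; tune with common sense, not past projects

average = sum(turnaround_days) / len(turnaround_days)
estimate_per_fix = average * CONTINGENCY
print(f"average turnaround: {average:.1f} days; "
      f"estimate with contingency: {estimate_per_fix:.1f} days per fix")

Note the data comes from the project being estimated, not from some supposedly comparable past one.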
Re: Status, S-Curves, & Projections
Hey, this is a very well-articulated response.
However, today there is a lot of pressure to use past projects' data for projections.
For newcomers, past-project info usually provides a starting point, but I have a suggestion: use it only as a starting point. Plan for variance from the plan.
Hence, while we bow down to the pressure to supply metrics, graphs, etc., much of this can't be applied in s/w.
At the end of the day there are too many variables, and our processes are not so robust and noise-free.
One benefit of being aware of the stats is that we start examining our own work. So for testing functionality, good re-use metrics would be test productivity, test paths covered, module-wise defect density, and root-cause figures. These stats can be re-used within the very same project and are good for online project management. (About applying this on other projects....!? That would be telling!)
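As a rough illustration of the module-wise defect density idea (module names, sizes, and defect counts all hypothetical): rank modules by defects per KLOC to decide where to focus further testing within this same project.

# Hypothetical within-project stats: defects found and size per module.
modules = {
    "billing": {"defects": 14, "kloc": 6.2},
    "ui":      {"defects": 5,  "kloc": 9.8},
    "engine":  {"defects": 22, "kloc": 12.4},
}

# Defect density (defects per KLOC) highlights where to focus further testing
# within this project - not a prediction for the next one.
for name, stats in sorted(modules.items(),
                          key=lambda kv: kv[1]["defects"] / kv[1]["kloc"],
                          reverse=True):
    density = stats["defects"] / stats["kloc"]
    print(f"{name:8s}: {density:.2f} defects/KLOC")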