Testing vs Production Defects
Color me a hazy shade of grumpy.
Our organization has, for better or worse, decided to migrate from one defect tracking tool (Test Director) to another (ClearQuest). This is not the part that makes me grumpy. (Note: this refers to one product line migrating...other product lines are already using ClearQuest.) I happen to like the idea...as Rational is the corporate solution.
My issue is with how we use our defect tracking solutions. Currently, we have one "bucket" for production defects...and one "bucket" for the testing efforts associated with each "product". Personally, I think this is less than ideal. What I think should happen is that each "product" gets its own bucket...regardless of implementation phase...so that we develop a single history for each product rather than two separate histories. We can build as many different queries as we need...by product, by severity, by implementation phase, by whatever-we-gosh-darn-please. Usability is not the issue. The concern on the part of our developers is that they think production and testing defects should not be mingled.
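The one-bucket-per-product idea amounts to keeping the implementation phase as just another field on the defect record, so production and testing share one history but remain separable by query. Here is a minimal hypothetical sketch (the class, field names, and sample data are my own illustration, not any particular tool's schema):

```python
# Hypothetical sketch: one shared defect history per product, with the
# implementation phase kept as an ordinary field rather than a separate bucket.
from dataclasses import dataclass

@dataclass
class Defect:
    product: str
    phase: str      # e.g. "testing" or "production"
    severity: int   # 1 = critical ... 4 = cosmetic (illustrative scale)
    summary: str

defects = [
    Defect("Billing", "testing", 2, "Totals off by one cent"),
    Defect("Billing", "production", 1, "Invoice job crashes nightly"),
    Defect("Portal", "testing", 3, "Tooltip text truncated"),
]

def query(defects, **criteria):
    """Filter the shared history by any combination of fields."""
    return [d for d in defects
            if all(getattr(d, k) == v for k, v in criteria.items())]

# One history per product, sliced however we gosh-darn-please:
billing_all = query(defects, product="Billing")                 # full history
billing_prod = query(defects, product="Billing", phase="production")
```

The point of the sketch is that "production vs. testing" becomes a query filter rather than a structural split, so nothing about the developers' view has to change.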
Any words of wisdom? I want to be able to convince our developers (the group that is the thorn in my side) that they are pursuing a less-than-optimal solution. On a personal note, this is something I've been preaching about for over a year now.
(Note: I posted this originally in General Discussion...but realized that it would be better served here, I think.)
"The single biggest problem in communication is the illusion that it has taken place."
-George Bernard Shaw, Irish playwright and Nobel Prize winner, 1856-1950
Re: Testing vs Production Defects
I agree that the "buckets" should be by product instead of by production vs. testing. The way the developers want it can lead to them saying "It works in my environment." It just lets them protect their own hides!!!
As I am sure you are aware by now in your QA career, developers try to avoid anything that they perceive as making their job harder.
My company's defect tracking software (Remedy) is set up so that defects are tracked by product. My fellow QA team member and I mainly test against the testing (QA) site while doing spot checks on the production site.
I feel that QA is not responsible for the other sites, although many developers and business people would disagree. Basically, a development environment should be for developers, a testing (QA) environment should be for QA, and production should be for IT. If QA has control over areas other than the testing site, then QA can be held accountable for them.
Hope that info helps!!!
(Quote from a t-shirt: "Don't give me any attitude..... I have one of my OWN!")
Re: Testing vs Production Defects
*** Vendor Reply from Sesame Technology - makers of ExtraView ***
This sounds like a reasonable tracking goal. It is interesting because there are several ways that we have implemented this in our defect tracking solution, ExtraView (remember, I AM a vendor). I would be interested in which one best meets your needs. Let’s break down a few problem/solution pairs here:
[b]you said...[/b]
1. Currently, we have one "bucket" for production defects...and one "bucket" for the testing efforts associated with each "product".
You are right… this is less than ideal. It should be simple to track the "bucket" for each product issue. But what if one issue touches two buckets? Or three? Or many? With ExtraView, we offer tracking by issue, or by multiple sub-cases for each issue. It took me a few minutes to set up the following screen shot, which assumes many buckets per issue and per product. Would you ever need multiple buckets per issue? Or one bucket per issue?
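The many-buckets-per-issue idea described above is essentially a one-to-many relationship between an issue and its sub-cases. A minimal hypothetical sketch (class and field names are my own illustration, not ExtraView's actual schema):

```python
# Hypothetical sketch: each issue holds a list of sub-cases, one per bucket
# it touches, instead of a single bucket field. Illustrative names only.
from dataclasses import dataclass, field

@dataclass
class SubCase:
    bucket: str          # e.g. "testing", "production"
    status: str = "open"

@dataclass
class Issue:
    product: str
    title: str
    subcases: list = field(default_factory=list)

issue = Issue("Billing", "Rounding error in tax calculation")
issue.subcases.append(SubCase("testing"))
issue.subcases.append(SubCase("production"))   # same issue, two buckets

buckets_touched = {s.bucket for s in issue.subcases}
```

With this shape, the one-bucket-per-issue case is just an issue with a single sub-case, so the model covers both situations.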
2. …where we can develop a history associated with each product rather than having two separate histories.
Implied by the above solution. In addition, ExtraView creates an audit trail for each product, issue, and field change therein. I would hope that any quality tracking tool would implement this.
3. We can build many different queries to suit our needs...by product, by severity, by implementation phase, by whatever-we-gosh-darn-please.
Yes, and a better report might be to create a summary by Product/Bucket/Priority. I did this for kicks, but didn’t take the time to enter a bunch of different cases with bucket data. In the screen shot, I think you can see the column that would fill up according to all the Bucket entries. If you are interested, call me and we can set it up with data.
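A summary by Product/Bucket/Priority of the kind mentioned above is just a count over each (product, bucket, priority) combination. A hypothetical sketch with made-up data and field names:

```python
# Hypothetical sketch: counting issues per (product, bucket, priority)
# triple to produce a summary report from flat defect records.
from collections import Counter

records = [
    {"product": "Billing", "bucket": "production", "priority": "P1"},
    {"product": "Billing", "bucket": "production", "priority": "P1"},
    {"product": "Billing", "bucket": "testing",    "priority": "P2"},
    {"product": "Portal",  "bucket": "testing",    "priority": "P1"},
]

summary = Counter((r["product"], r["bucket"], r["priority"])
                  for r in records)

for (product, bucket, priority), count in sorted(summary.items()):
    print(f"{product:8} {bucket:11} {priority}  {count}")
```

Any decent tracker's reporting layer does this grouping for you; the sketch just shows what the report is computing underneath.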
4. The usability is not the issue. The concern on the part of our developers is that they think production and testing defects should not be mingled…. Any words of wisdom? I want to be able to convince our developers…
Convince developers? Unlikely. We have a possible solution that we have seen used with ExtraView in the past. Use at your own risk: developers never see the "Bucket" field. ExtraView detects the product, group, ID, company, etc., and creates the screen to show only the fields that are important to the user. You can track by "Bucket", and they never know it existed. I tried this by logging on as an engineer in ExtraView; see the resulting screen shot. Also, workflow rules and other limitations/permissions are simple to arrange.
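The "developers never see the Bucket field" approach boils down to per-role screen building. Here is a minimal hypothetical sketch (role names, fields, and the builder function are my own illustration, not ExtraView's configuration):

```python
# Hypothetical sketch: the tracker stores a "bucket" field, but the screen
# builder omits it for the engineer role, so developers never see it.
VISIBLE_FIELDS = {
    "engineer": ["id", "product", "title", "status"],
    "qa":       ["id", "product", "title", "status", "bucket"],
}

def build_screen(record, role):
    """Return only the fields this role is allowed to see."""
    return {f: record[f] for f in VISIBLE_FIELDS[role] if f in record}

record = {"id": 101, "product": "Billing", "title": "Crash on save",
          "status": "open", "bucket": "production"}

engineer_view = build_screen(record, "engineer")   # no "bucket" key
qa_view = build_screen(record, "qa")               # full record
```

The design point: field visibility is decided at screen-build time from the user's role, so the underlying data model stays unified even though different groups see different subsets of it.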
Does this answer your questions? If you have further interest, please visit the ExtraView web site. We would be happy to have you up and running on ExtraView today.
ExtraView is a highly adaptable Web-based tracking engine especially tuned for enterprise-level projects involving engineering, test, quality assurance, and service teams.
[This message has been edited by MichaelStebbins (edited 09-12-2001).]