Trying to get some industry information. Recently came off a project where 18.5% of the total defects failed after being addressed by dev and put back into QA for retesting. Is that high? Low? Normal? What are you guys seeing in your workplace?
I'd say it depends on the project.
Consider the following:
Experience of development staff - are you looking at seasoned programmers or mostly inexperienced junior staff?
Technologies used - any new shiny languages/frameworks/platforms etc can cause an initial increase in defects
Poor development techniques - a high defect ratio and, in particular, a high bounce ratio (defects failing retest and going back to dev) can be indicative of a 'chuck it over the wall' (into test) mentality. If that is occurring, you're looking at a need to work closely with the individual(s) involved.
[ QUOTE ]
Recently came off a project where 18.5% of the total defects failed after being addressed by dev and put back into QA for retesting. Is that high? low? normal? What are you guys seeing with regards to this in your work place?
[/ QUOTE ]
We've had some projects with a higher rate of failed fixes than that. Our root cause analysis showed that a few developers weren't doing any real unit testing of their fixes. Even worse, they were having a hard time understanding and interpreting the bug reports.
But, usually we don't have anywhere near that rate.
What does your analysis tell you is going on at your shop?
Thanks for the posts. The total was 329 defects over a 12-week period. The developers (3) were a mix of junior and senior staff. The technology was new to us, which brought additional challenges in both dev and QA. From a planning perspective I took all of that into consideration and beefed up the QA team to mitigate the risks of junior people, a time crunch, and new technology.

However, I saw a lot of development shortcuts being taken, and good practices were not followed. This includes a lack of unit testing - not one dev team member could show me a single unit test result. I had to implement code handoffs to ensure some level of quality was coming into QA.

My thoughts are that because practices were not followed and we took the "throw it over the wall" approach, 61 of the 329 total defects (18.5%) failed retest. As I read through the defect descriptions, it becomes clear that the majority of these would have been caught during unit testing. Anyway, just wanted to see what other folks might be seeing out there. Thanks again for the feedback.
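For anyone tracking the same metric, here is a minimal sketch of the arithmetic used above. The `bounce_rate` helper is purely illustrative (not from any tool mentioned in the thread); the numbers are the ones from my project.

```python
# Hypothetical helper for the "bounce" (failed-retest) rate discussed above.
def bounce_rate(failed_retest: int, total_defects: int) -> float:
    """Percentage of defects that failed retest and went back to dev."""
    return 100.0 * failed_retest / total_defects

# Figures from the project in this thread: 61 of 329 defects failed retest.
rate = bounce_rate(failed_retest=61, total_defects=329)
print(f"{rate:.1f}%")  # prints 18.5%
```

Tracking this per developer (rather than only project-wide) is what surfaced the lack of unit testing in our case.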