I'm testing a server product (both server and GUI). The project was late from the onset; in the last release cycle it was deemed too risky to release. The project cycle is one year, and the functional spec wasn't delivered to QA until two months before testing was due to be completed.
One bug I reported led to changes in most of the server's core source code files, and dev took a month to fix it. Now there are only a couple of weeks of testing left, and I don't feel I have had time to test completely.
What risks do I expose myself to if I say these features and code should be rolled back or disabled for this release and introduced in a later one?
Disabling these features will take a week, according to dev.
What information do you think Management will ask of me (QA) before disabling the features?
Thanks a million for your advice,
ISEB Foundation in Software Testing Certified
There are a lot of unanswered questions to resolve before you can position yourself. Perhaps these are the questions you are referring to when you ask what management will ask.
1. How critical was the bug you found that caused the rework? Can we live without the fix?
2. How critical is it to release on the current release date, or is the overriding factor quality?
3. How much testing CAN you get accomplished in the allotted time that is left?
4. How long will it take you to complete full testing?
5. How confident are you that, if dev is told to disable the features, they will not break something else in the process?
6. Instead of disabling the features, why not do a build on the previous version of source code? (they ARE using version control, right?)
Originally posted by qa4ever: I don't feel I have had time to test completely.
When did anyone ever test 'completely'? It is not enough to have a 'feeling'.
You need to quantify the risks involved. You don't give enough information about your software to know what customers/users you are aiming at, so I'll try to keep my points general.
If the software goes out with the new functionality, what percentage of users will actually utilise it? How much does the functionality affect the way the rest of the software (or any part of it) works? If the new bits go wrong, will there be knock-on effects?
If the new functionality is disabled, who are you disappointing: customers waiting for new features, or developers wanting to see their new widgets in action?
Work out test coverage for the new functionality. Have you covered the riskiest parts? What is left to be tested? Are the untested parts likely to be in use? And how long do you think you need to test this satisfactorily?

Can you ask for extra resources for a quick burst of testing activity? Developers can work as testers if you twist their arms hard enough! Can you ask for a little more time?

As a last resort, if you have to release the software, it might need to go out with a Known Issues document to warn users (you might need a degree in Creative Writing to write this without upsetting or alarming people, though!).