Test Plan and Test Cases
If a project does not have enough time or budget to create a Test Plan and Test Cases, how will the testing proceed?
Testing includes a Test Plan and Test Cases. We cannot perform testing if we don't have requirements, test cases, and test data. We have to follow a process to verify and validate the product.
Testing should be performed by testers (resources); it needs a testing environment and should define entry and exit criteria, what to test, how to test it, who will perform which testing, the testing methodology, and the timeline.
Testing is a critical part of the project. We cannot neglect or shorten this phase of the SDLC because of budget or time.
If you can provide this information, that is your draft Test Plan. How are you going to validate your testing effort if you don't have this plan?
This is my viewpoint!
There are many, many ways to reduce your testing effort to fit into the time and budget that you have. There's an old saying, "Time, Cost, Quality, pick any two". If you want something done quickly and cheaply then you're prepared to sacrifice quality. What that saying doesn't identify, though, is that there is a fourth dimension called "Risk".
Part of our job as testers is to identify that risk as much as possible by using whatever tools are at our disposal to complete our test mission. No time for a formal test plan? Then how about a whiteboard with a few bullet points, brainstorming in a time-limited session with testers, developers, and business analysts. No time for scripted test cases? Then how about session-based test missions, maybe even pairing with a developer or BA to move as quickly as possible through the functional areas, concentrating on high-risk functionality. No time to record test results? Are they needed? Then how about screen recording software to record as you go. Don't have formal requirements? Then use experience and heuristics on how you think the software should feel, act, function.
Your key role will be to provide information to stakeholders on what you have tested, how you tested it, and what you have not tested (and therefore where any hidden risk might lie). This may be as simple as something on Notepad while you test, and a face-to-face with the test manager / product owner during and following the test execution phase.
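As a rough illustration, here is a minimal sketch of what those lightweight session notes might look like, assuming Python; the field names and example values are invented for illustration, not part of any formal template.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionNote:
    charter: str                 # the test mission for this session
    areas_covered: List[str]     # what was tested
    approach: str                # how it was tested
    not_covered: List[str]       # where hidden risk might still lie
    issues_found: List[str] = field(default_factory=list)

note = SessionNote(
    charter="Explore the checkout flow, concentrating on high-risk payment paths",
    areas_covered=["card payment", "order confirmation"],
    approach="90-minute exploratory session, paired with a developer",
    not_covered=["refunds", "gift vouchers"],
    issues_found=["Total not recalculated after removing an item"],
)
print(note)

Even something this small answers the three stakeholder questions: what was tested, how it was tested, and what wasn't.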
Your testing should be as formal or as informal as the situation demands.
Yeah, from time to time we get fixes which we must put into production. Sometimes we may get higher-level approvals (mostly verbal) to go ahead with an ad-hoc test approach, so we may do so.
But at the end of the day we are the ones who provide sign-offs. If we are lucky, everything may go well. But if we get a production defect, everyone might look at us or point a finger at us. At the end of the day it's a production defect, which will be a bad mark for QA.
It's moving at a tangent from the OP, but I have a few questions on that:
Why do you provide sign-offs? I can understand that you would sign off that your testing is complete (and maybe provide a test completion report, even if verbally), but the implication in the way that you said it was that you might be signing something into production. This is a decision of project management and product owners, not the testers. Sure, you might have input into that decision and in some cases you might have some very real clout as to whether something gets moved into production, but ultimately it should never be a test decision to release to prod.
Why would you need to be lucky for everything to go well? Ad-hoc doesn't mean it needs to be unplanned. If you're relying on luck then the odds are that something will slip through. If you're clear on what you have tested and what you haven't, and the product owner is aware of that and makes the decision to ship anyway, then you've done the best you can under the circumstances. If an issue does happen to get through to prod then run a root cause analysis on why it wasn't picked up in your testing and adjust the way that you prep or test for next time.
Why would anyone point a finger at QA, or leave a bad mark? You didn't design it (analysts did), you didn't code it (developers did), you didn't release it (project managers did), so there are plenty of reasons for bugs to get into the software in the first place. In any reasonably complex piece of software it's physically impossible to find every bug, so you're left with risk assessment; assuming you've done that as well as possible and made everyone aware of what you have tested and what you have not, then you've done the best you can. If bugs make it to prod then by all means reassess your risk methodology or testing, but I wouldn't beat myself up about it.
Being in a small company I faced such situations many times. The approach I used to follow was to maintain a test summary, i.e. a simple Word file with three columns: [Test scenario][Actual Result][Pass/Fail]. As and when we tested a scenario, we put a one-liner statement in the Test scenario and Actual Result columns. I also used to take screenshots or videos as a reference for what testing I had done. At the end of testing I would send the same test summary to the project managers too.
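As a sketch only, the same three-column summary could be kept in something as simple as a CSV file; this assumes Python and uses invented scenarios purely as examples, with the CSV standing in for the Word document.

import csv

# Column names follow the post above: [Test scenario][Actual Result][Pass/Fail].
rows = [
    ("Login with valid credentials", "User lands on the dashboard", "Pass"),
    ("Login with an expired password", "No prompt to reset the password", "Fail"),
]

with open("test_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Test scenario", "Actual Result", "Pass/Fail"])
    writer.writerows(rows)

The point is not the tooling but having a record you can hand to the project manager at the end of testing.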
I totally agree with meridian_05, as it's totally up to the project managers whether they really want to release the project in such situations or not.
I think the test plan is absolutely critical. Usually this is done as part of project planning, so the project shouldn't even be considered spec'ed out without the test plan.
Originally Posted by Pushkar Joshi
Let me ask you this: if all you did was sign off and say "OK" without them knowing what you did, would they trust that the product is tested well enough? Also, how will they audit your effort? A well-run company has the ability to do things repeatably, and the control to know the risks and trade-offs at any one stage.
Obviously there should be some risk analysis, and a test plan for risk mitigation.
As for test cases, test cases ensure repeatability. Depending on your staff's knowledge of the SUT, you could opt for higher- or lower-level test cases. An idea I'm exploring more of is writing test cases in the form of high-level walkthroughs, with a checklist matrix of areas of emphasis. So for example, a test case might say "Verify you can log in", and the checklist matrix will have: 1. Functional, 2. Secure, 3. Responsive, 4. Robust, 5. Reliable, 6. Stylish.
This approach, I feel, cuts a lot of the time spent writing exhaustive test cases, moving towards the ad-hoc/exploratory, while having detailed enough touch points for repeatability and still having the flexibility to stay up to date with today's needs. For example, 'Secure' could mean XSS and SQL Injection today, but tomorrow it could include cache poisoning, impersonation, etc.
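For what it's worth, here is a minimal sketch of how that walkthrough-plus-checklist idea might be captured, assuming Python; the walkthrough names, checklist, and notes are illustrative only, not a fixed standard.

# The checklist matrix of areas of emphasis from the post above.
CHECKLIST = ["Functional", "Secure", "Responsive", "Robust", "Reliable", "Stylish"]

# High-level walkthrough cases, with optional notes per checklist area.
walkthroughs = {
    "Verify you can log in": {"Secure": "today this means XSS and SQL injection"},
    "Verify you can reset a password": {},
}

# Expand each walkthrough into its checklist touch points.
for case, notes in walkthroughs.items():
    for area in CHECKLIST:
        hint = notes.get(area, "")
        print(f"{case} [{area}] {hint}".rstrip())

Because the meaning of each checklist area lives in one place, updating 'Secure' to include cache poisoning or impersonation later doesn't require rewriting the individual cases.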
In line with the situation described, I would like to know how test cycles would be scheduled. I'm quite familiar with three test cycles for SIT before UAT; is this a common practice, or would it depend on the project manager?
Originally Posted by meridian_05
Do you happen to have any risk analysis or risk assessment exercises, or anything I can read, that is comparable to the real world or has examples? Thanks
In terms of common practice, formal SIT and UAT cycles are more Waterfall than Agile; they will depend on your Test Manager (working with the development lead and project manager) and should be written into your Test Strategy. "Three" isn't a magic number that everyone uses; the number of cycles in any test phase will depend on what your test strategy is.
Originally Posted by sqatesting808
Example 1: For SIT we've planned three cycles. The first is for internal integration between modules and components; the second is for external integration using interfaces; and the third is a repeat of core scenarios from the first two cycles but using the correct user roles. The intention behind these cycles is that they are a bell-curve in technical complexity - we get our own system working right first, then integrate with external systems, then run a combined security/defect retest/regression/late CR cycle at the end.
Example 2: For SIT we've planned two cycles. The first covers core and critical functionality and high-risk areas (technically high-risk of failure, or functionally high-risk to the business processes); the second combines testing of lower-risk areas along with regression and defect testing of the previous cycle.
Example 3: For SIT we've planned six cycles. The development release plan is iterative waterfall, so as each module is released we run a combined functional/integration/regression cycle on the new and existing modules.
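As a rough sketch only, a cycle plan like Example 1 could be recorded as simple structured data; this assumes Python, and the cycle scopes just mirror the example above rather than any standard schedule.

# Illustrative SIT cycle plan, following Example 1 above.
sit_cycles = [
    {"cycle": 1, "scope": "internal integration between modules and components"},
    {"cycle": 2, "scope": "external integration using interfaces"},
    {"cycle": 3, "scope": "repeat of core scenarios with correct user roles, plus "
                          "security, defect retest, regression and late CRs"},
]

for c in sit_cycles:
    print(f"SIT cycle {c['cycle']}: {c['scope']}")

The number of entries is whatever your test strategy says it should be; nothing about "three" is special.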
There's various bits and pieces on the web, try the following:
risk based testing » RBCS blog | RBCS
Risk-Based Testing | Gerrard Consulting
First of all, testing is a process. The prime necessity of the test plan is to identify the in-scope and out-of-scope test items, along with the plan with which the QA team will move forward. It is like an overall battle strategy (not to be confused with the test strategy). It is always advisable to have the test plan in place before QA activities (test design, test execution, etc.) start. We cannot expect a quality delivery if we do not have any plan.
We cannot proceed if we do not have any test cases. Test cases (positive or negative) are the foundation bricks with which we can validate the quality of a product. Test cases help to record the test scenarios that a QA can think of (logically or theoretically). Plus, the availability of the designed test cases will help in the later stages of the project (e.g. multiple test cycles, UAT, pre-production testing).
From my point of view, it is advisable to have a test plan (even if it is a draft version) and a proper set of test cases for better quality delivery.