writing a test plan...
I am new to writing test plans and need your input. I have a test plan template, but looking at it I don't feel that filling it in will make much difference. I really don't see any great advantage in writing a test plan when only functional testing is involved. I am a lead and I need to come up with a good plan. How do I motivate myself, and how should I come up with a realistic test plan? I need your guidance.
I think test plans are the most important thing to do in any major release.
There are two sections I'd say you should pay close attention to and do the legwork on.
The first is "what is / what is not tested". I would even propose presenting various options for management to choose from: for example, identify which testing activities could be reduced or eliminated, and the consequences of doing so. I feel this is very important because it allows development and operations to identify gaps in the test plan.
The second section is allocation of resources. Again, try to present different options, especially possible improvements. If the project could benefit from additional manpower, identify that. Identify whether additional resources (in-house staff, outsourcing, software, equipment) could drastically improve the situation, and what the impact of having resources taken away would be. You don't have to do an in-depth analysis or be super accurate, but you want to give management some levers they can pull to help your team out.
I guess the most basic test plan would include:
- Types of testing (what testing we intend to do)
- inclusions (the bits we will test)
- exclusions (the bits we won't test)
- resources (how many people and what kit we need)
- bug tracking (what we'll do when we find something wrong and how we'll get that fixed/deferred)
- reporting (how we'll tell you about all the above)
But it's all subjective. I've recently seen a 76 page test plan that told me the square root of nothing about the project or application under test :-)
Thanks dlai. Test plans are developed in the initial phase of a project, probably during requirement analysis. In this phase, how do we identify what is tested and what is not tested when the software hasn't been developed yet? Do we need to imagine it, or should the tester be involved in all project discussions?
Originally Posted by dlai
A test plan is a very important document for performing testing on any software/application. You should use an organized approach to create a good test plan, and you should know the purpose of the testing before creating it. The following are a few factors which should be kept in mind before creating a test plan:
- Scope of Testing
- Test objectives
- Budget limitations
- Project deadlines
- Test execution schedule
- Project/product risks
Based on the above points, you should select a test strategy and decide how to split testing tasks into different levels. A good test plan should specifically mention who will do what, when, and how. You should also clearly state what the test deliverables will be, and how precisely testers should write the test cases, test design, etc. The test plan should also contain entry criteria and exit criteria, i.e. when you can enter a test level or phase and when you can exit it.
Another thing to remember while writing a good test plan is to always plan to coordinate your testing work with the rest of the project activities: the test dependencies, such as hardware and software availability, and the test deliverables required after testing is complete.
Below are generic parameters which can be used to create a test plan:
- Test Objectives
- Testing Goals
- Testing cycle
- Testing Methodology
- Entrance Criteria
- Exit Criteria
- Test Execution
- Types of Testing
- Test Case Development
- Load Testing
- Browsers to be tested
- Resources Required
- Defect Reporting Tool
- Roles and Responsibilities
The Test Plan can be developed at any point before actual testing begins, although the earlier planning begins, the easier it gets to influence decisions based on test requirements: you can review and point out unclear, open-ended or untestable requirements, you can build up the RTM as each phase moves along and point out requirements that are not being covered correctly, etc.
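One lightweight way to picture the RTM idea above is as a mapping from test cases to the requirements they cover, which makes uncovered requirements easy to spot as each phase moves along. This is only a minimal sketch; the requirement IDs and test-case names are invented for illustration, not taken from any real project.

```python
# Minimal sketch of a requirements traceability matrix (RTM) check.
# All IDs and names below are hypothetical examples.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Each test case lists the requirements it covers.
test_cases = {
    "TC-login": {"REQ-1"},
    "TC-search": {"REQ-2"},
}

covered = set().union(*test_cases.values())
uncovered = requirements - covered

print(sorted(uncovered))  # requirements with no test coverage yet: ['REQ-3']
```

In practice the same check would run against whatever tool tracks your requirements and test cases, but the principle is the same: every requirement should trace to at least one test, and the gaps are what you report.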
Originally Posted by dhabu
Even without the software to test, however, you should have access to the lists of requirements; use these to assess what needs to be tested based on requirement priorities and risk assessment, point out which requirements could do with less testing if time runs short, figure out how you'll do the tests, and try to estimate how much time will be necessary to perform them.
Once the application is available, you should begin testing; try to use as much time as possible before that moment to plan what you'll do so you don't end up wasting that precious time because you don't know where to start...
Absolutely. A test lead should be involved in the planning from the beginning. Some risks are easily mitigated during planning.
Originally Posted by dhabu
For example, say you have modules A, B, C, and D, where the dependencies are B->A, C->A, and D->C.
Say you have 3 features in the backlog which require changes to the corresponding components:
Story 1 -> D,
Story 2 -> A
Story 3 -> A, B
A test lead would push for the change in Story 2 to be implemented in a separate cycle from Story 3, to reduce risk. Story 2 touches a lower-level component with wider reach, which generally either all works or all breaks, and thus needs a wide regression across many features but not a deep one in any single feature. Story 3 requires a deep regression in the affected features, and if it's done together with Story 2, it will be hard to isolate where problems come from.
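The "wider reach" reasoning in this example can be sketched in a few lines of code: given the dependency graph above (B->A, C->A, D->C, where X->Y means X depends on Y), a change's blast radius is the changed modules plus everything that transitively depends on them. A minimal sketch using the module names from the example:

```python
# Dependency graph from the example: module -> set of modules it depends on.
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"C"}}

def impacted(changed, deps):
    """Return the changed modules plus everything that transitively depends on them."""
    result = set(changed)
    grew = True
    while grew:  # keep expanding until no new dependents are found
        grew = False
        for mod, uses in deps.items():
            if mod not in result and uses & result:
                result.add(mod)
                grew = True
    return result

print(sorted(impacted({"A"}, deps)))  # Story 2 touches A -> ['A', 'B', 'C', 'D']
print(sorted(impacted({"D"}, deps)))  # Story 1 touches D -> ['D']
```

This is why Story 2 needs the wide regression: a change to A potentially affects everything, while a change to D affects only D.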
As to what needs to be tested and what doesn't: you have to talk to product and engineering, and know the system and the customers. A less critical area that is far away from the changes being made probably doesn't need to be tested, while something important might be worth testing even if it hasn't changed, depending on what level of change control your engineering teams have in place. It's really an art, but you have to take multiple considerations into account, weigh the risks, and work with the resources you have available.
Here are some questions to consider:
1) According to your engineers, what modules, hardware, and configurations are changed, and what is affected? This determines what you'll want to test. Also, how much change happened: a text update, or a refactor?
2) Out of the affected features, which ones are critical to the customers? This will help you prioritize your testing.
3) What is your level of change control? Is what your developers say, and what's written in the change tickets, consistent with what has traditionally happened? This will help you eliminate lower-priority features from testing; if change control is bad, you may need to add in unaffected high-priority features.
4) What are your resources? Given your current resources, what do you think you can comfortably tackle? Outline the risks of the items not tackled. You may want to present options, like how many additional staff/resources you'd need to cover the remaining items. Also outline which features you think can be deprioritized and the cost of deprioritizing them, so management has some choice in speeding up the timeline if needed. (I know we all want more time to do a better job, but we can make clear to the business what the risks are of not testing certain items.)
5) What is the release process? Depending on the release process, you may need to opt for more or less regression. For example, a process prone to human error should probably have some slack allocated for sanity checks on multiple environments.
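Questions 1-3 above can be combined into a rough risk score for deciding what to regress first. This is only a sketch of one possible weighting; the feature names and numeric weights are made up, and any real scoring would come from the conversations with product and engineering described above.

```python
# Hypothetical sketch of risk-based test prioritization: score is roughly
# amount-of-change times customer impact, with a bump for high-impact
# "unchanged" features when change control is weak (question 3).
features = [
    # (name, change_size 0-3, customer_impact 0-3)
    ("checkout", 3, 3),
    ("search",   1, 2),
    ("reports",  0, 3),  # "unchanged" on paper, but critical to customers
]

CHANGE_CONTROL_GOOD = False  # do the change tickets match what really happens?

def risk(name, change, impact):
    score = change * impact
    # With weak change control, keep high-impact "unchanged" features in scope.
    if not CHANGE_CONTROL_GOOD and change == 0 and impact >= 3:
        score = impact
    return score

ranked = sorted(features, key=lambda f: -risk(*f))
print([name for name, _, _ in ranked])  # highest-risk features first
```

The point isn't the exact numbers; it's that writing the weighting down, even roughly, gives management the levers mentioned earlier: they can see what gets cut first and at what risk.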
Good information, and helpful to a lot of testers.