I've been in QA for two years now, and earlier this year I was thrown into the QA Test Lead position, which is now turning more into a QA Manager role. Senior management has basically given me free rein over the QA department to implement testing standards and any software required to make us more efficient.
Right now our testing is very informal, and I want to resolve that by putting in some test management software and also automating some of our testing. What I'm having a difficult time with is figuring out how to apply test cases and plans to our software.
Our software is very large (165 applications) and can be customized in many different ways by our customers: customer A may do business one way, and customer B another. We also develop in two branches, one for the current release and one for the next release. After the current release is out, QA focuses on testing the next release every week. Each day, development works in the next-release branch (which is what QA is testing) and checks in new code that requires patches and new executables.
I want the team to test scenarios in their assigned areas, but I also don't need them running the same tests week after week, since some areas don't get touched that often. How many times do they need to check spelling and functionality in areas that haven't changed? This is where I'd guess automation would come in.
The biggest thing I want to focus on right now is test plan management. I need a piece of software where I can manage one major release, but also manage the testing of all the weekly releases we put out, so that when the final release comes out, it's tested! Right now we mostly fall back on risk-based testing because the software is so huge and we have no management for the releases. The software I've looked at doesn't really seem to fit our model of multiple weekly releases rolling up into one large release.
Has anyone run into a similar situation or have any guidance they can provide? I'd like to be able to show management some tangible results from our testing. Right now all I can say is, "We tested it!"
We have an in-house bug tracking system that's linked through our intranet, so I can link to most of the software solutions out there.
We're also licensed for TestComplete version 3 at the moment, but it was never used because the software kept changing so much; we ended up spending a lot of time maintaining the scripts. Plus, half of my QA department aren't developers, and writing test scripts manually is a steep learning curve for them.
You should look at getting some version-tracking functionality into your bug tracking software. Then you could prioritize test cases/plans based on bug history. Some reporting features would also give you more material to present to management to reflect the "testing effort".
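To make the idea concrete, here's a minimal sketch of prioritizing areas by bug history. It assumes a hypothetical export from an in-house tracker where each bug record carries an application area and the release it was found in; the area names and data shape are made up for illustration.

```python
from collections import Counter

# Hypothetical bug records exported from an in-house tracker:
# each entry is (application_area, release_found_in).
bug_history = [
    ("invoicing", "4.2"), ("invoicing", "4.2"), ("invoicing", "4.3"),
    ("reporting", "4.3"),
    ("user-admin", "4.1"),
]

def rank_areas_by_bug_count(bugs):
    """Return application areas ordered from most to least buggy,
    as a rough priority list for regression testing."""
    counts = Counter(area for area, _release in bugs)
    return [area for area, _count in counts.most_common()]

priorities = rank_areas_by_bug_count(bug_history)
print(priorities)  # most bug-prone areas come first
```

In practice you'd weight recent releases more heavily than old ones, but even a plain count gives you a defensible answer to "why did we test these areas this week?"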
We tend to use Bugzilla on the requirements level (very minimal; usually just a line or two explaining the new feature), but it comes in handy when you are verifying new features.
Software Testing, Second Edition: "Intelligently weighing the risks and reducing the infinite possibilities to a manageable effective set is where the magic is."