An alternative to test cases
We are struggling with test cases on our team. We write them, we maintain them, and our automation engineers write tests with the help of them. Nevertheless, when we have a release and are testing our web application, we never actually use our test cases. Why? I'm not really sure; maybe lack of time, or maybe lack of attitude?
So is there any option or alternative to test cases? Some simple guidelines, and some tools for maintaining them? Or should we just rewrite our TCs to be simpler, more specific, and quicker to go through?
Thank you for your ideas.
An idea I'd like to try in the future (it's hard convincing companies to throw away their old test cases) is moving away from test cases and towards a test matrix.
Here are the problems I see with traditional test cases:
1) Most of them are written to the specifications of the work ticket / story / project specs. They serve as a useful checklist at the moment the project is being implemented, but their lifetime value drops dramatically as they become hard to maintain and don't catch many bugs.
Here's an example: "Verify the user cannot submit a password less than 6 characters long." When you run through this test case, say a month later, the chances of that actually breaking are very low, yet you still have to maintain it. Next year the requirement becomes 8 characters with special symbols, and you have to find all your test cases dealing with passwords and update them to the new spec.
2) The level of detail tends to be so fine-grained that people running them lose peripheral focus. Continuing the example above, they get so focused on whether the password is 6 characters long that they might fail to notice the password masking isn't working; and because no test case or requirements spec spells that out, they don't notice it. Sounds silly now, but say you ran through 250 test cases as part of a manual regression run earlier that day and have another 300 left that are due tomorrow morning: most people's default is to get through the testing and not look at anything outside the scope of what is spelled out in test cases written so specifically to specifications.
3) To better maintain these long sets of test cases, people started creating traceability matrices and living requirements docs linking test cases to requirements, to aid test case management. This has the downside of creating about five times more work: for every new story, you have to trace back all the past requirements it alters, then trace those to their related test cases, and then update those test cases to match the new reality. This gets exacerbated by long sets of negative test cases.
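To make the password example concrete, here is a minimal Python sketch (all names invented for illustration) of the maintenance problem and the usual partial fix: keep the spec value in one place, so a requirement change touches one line instead of every test case that mentions it.

```python
# Minimal sketch (hypothetical names) of keeping a spec rule in one place.

MIN_PASSWORD_LENGTH = 6  # when the spec becomes 8, only this line changes


def is_valid_password(password: str) -> bool:
    # Stand-in for the application's rule under test.
    return len(password) >= MIN_PASSWORD_LENGTH


# The checks reference the rule rather than hard-coding "6", so a spec
# change doesn't mean hunting down every password test case.
assert not is_valid_password("a" * (MIN_PASSWORD_LENGTH - 1))
assert is_valid_password("a" * MIN_PASSWORD_LENGTH)
```

This only softens the update cost, though; it doesn't fix the peripheral-focus or traceability problems described above.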
Here's what I think will work better. This is heavily influenced by James Whittaker's talks on testing tours over test cases, https://msdn.microsoft.com/en-us/lib...v=vs.120).aspx, and the ACC (Attribute Component Capability) model, Google Testing Blog: Google Test Analytics - Now in Open Source.
The testing matrix is composed of risk factors, which Whittaker calls "Attributes". They are the things you care about: "User friendly", "Mobile friendly", "Secure", "Robust", etc. Then break the features affected by any new project into Components (major module sets or groups) and Capabilities (specific things those modules and components do). For components I like to use the idea of pages or sections of pages, and endpoints for an API. For each of these, describe what it does in the positive.
What you get is a test matrix that requires less maintenance. The downside is that even your junior QA staff will need to be highly trained, and it's harder to use temporary workers, as they can't just follow a step-by-step test case.
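As a rough illustration, the matrix described above can be kept as data rather than scripted steps. This is only an assumed shape, loosely modeled on the ACC idea; the attribute, component, and capability names below are invented examples, not anyone's real matrix.

```python
# Hypothetical ACC-style test matrix: component -> list of
# (attribute, capability) pairs, each capability stated in the positive.

matrix = {
    "Login page": [
        ("Secure", "Rejects passwords shorter than the minimum length"),
        ("Secure", "Masks the password as it is typed"),
        ("User friendly", "Shows a clear error message on bad credentials"),
    ],
    "Accounts API endpoint": [
        ("Robust", "Returns a well-formed error for malformed requests"),
    ],
}


def charter(attribute: str) -> list[tuple[str, str]]:
    # A session charter is just the capabilities relevant to the attribute
    # under test, not a step-by-step script.
    return [
        (component, capability)
        for component, pairs in matrix.items()
        for attr, capability in pairs
        if attr == attribute
    ]


for component, capability in charter("Secure"):
    print(f"{component}: {capability}")
```

When the password requirement changes, only the wording of one capability line changes; the tester running the "Secure" charter still exercises the whole area, masking included.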