Has anyone implemented anything like this in their testing? Do you think it would help?
I really like the idea but I'm struggling to see how to incorporate the concept into my testing.
My tests are currently very black and white: things pass or fail, and there is no grey area where heuristics would help decide whether something was a pass or a fail.
I guess it would help make decisions in situations where the data was volatile. For example, I have a script that checks for a specific value, "67.45". If the underlying data were to change, my test would fail, and a person would then investigate the cause of the failure.
With weighting and heuristics I could have the script check for any number greater than 50 and less than 100, displayed to 2 decimal places. If it isn't the number I expect but does meet these conditions, flag the test case as "possibly passed".
When my test analyst comes to review the failed test scripts, this would help them prioritise which ones to look at first: the outright failures, then the "possibly passed" ones.
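Something like this is roughly what I have in mind (a rough Python sketch; the expected value, the range, and the test-case IDs are just placeholders I'm making up):

```python
from decimal import Decimal

EXPECTED = Decimal("67.45")   # the exact value the script checks for today

def classify_result(displayed: str) -> str:
    """Return 'passed', 'possibly passed', or 'failed' for a displayed value."""
    try:
        value = Decimal(displayed)
    except ArithmeticError:
        return "failed"                      # not a number at all
    if value == EXPECTED:
        return "passed"
    # Heuristic: right ballpark (between 50 and 100) and shown to 2 decimal places.
    if 50 < value < 100 and "." in displayed and len(displayed.split(".")[-1]) == 2:
        return "possibly passed"             # flag for human review
    return "failed"

# The analyst reviews outright failures first, then the "possibly passed" ones.
results = {
    "TC-101": classify_result("67.45"),      # exact match       -> passed
    "TC-102": classify_result("72.10"),      # heuristic match   -> possibly passed
    "TC-103": classify_result("123.4"),      # outside the range -> failed
}
priority = ("failed", "possibly passed", "passed")
review_order = sorted(results, key=lambda tc: priority.index(results[tc]))
print(review_order)   # ['TC-103', 'TC-102', 'TC-101']
```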
I've played around in the past with such a concept.
I was attempting to execute automated tests on live, constantly changing data.
After a number of attempts to come up with a system that would score the results and a "good enough" cutoff point, I stopped. I found that too often the results would fall into a gray area, and I was spending too much time investigating whether they were really "good enough" to ignore.
I decided that it would be better to break my automated tests into two parts.
One part used unchanging test data. For this part, a test either passed or failed. I could control the oracle that made this determination, since I controlled the data.
The other part still used live data, but didn't attempt any weighting or scoring. Instead, it focused on non-data-sensitive checks that could still be judged as simply passed or failed.
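If it helps to picture it, here's a simplified sketch of the split (Python/pytest-style; the functions and numbers are made up for illustration, not my actual suite):

```python
from decimal import Decimal

def calculate_total(orders):
    # Hypothetical function under test; stands in for whatever the app computes.
    return sum(orders, Decimal("0"))

# Part 1: unchanging test data that I control, so the oracle is exact.
FIXED_ORDERS = [Decimal("25.00"), Decimal("42.45")]

def test_total_with_fixed_data():
    assert calculate_total(FIXED_ORDERS) == Decimal("67.45")   # plain pass/fail

# Part 2: live, constantly changing data. No weighting or scoring; only assert
# properties that should hold no matter what today's values happen to be.
def fetch_live_orders():
    # Stand-in for a call to the live system.
    return [Decimal("10.00"), Decimal("53.37")]

def test_total_properties_on_live_data():
    total = calculate_total(fetch_live_orders())
    assert total >= 0                                   # a total can never be negative
    assert total == total.quantize(Decimal("0.01"))     # always shown to 2 decimal places
```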