Automated Results Compare
When I benchmark one app run against another, I typically compare the result sets: I'll look at CPU, paging, etc., and mentally note the differences. The LR graphs are nice and a pleasure to work with (for the most part), but I still have to do all of this manually.
Are any of you aware of a way I can say, "Look at all of the results and if the results vary by 'X' amount, report it to me"?
Kind of fanciful thinking, I know, but I figured I'd ask in case there's something I missed.
Re: Automated Results Compare
My best guess is you would have to use a tool like SAS or SPSS and go after the core data for each test: compute the area under the curve for each of the data points, and if the curve areas deviate by more than <insert your percentage difference here>, generate an exception report to be looked at.
Is it doable? Yeah, I would think so. But it's going to take someone conversant with SAS or SPSS to get it done.
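For what it's worth, you may not need SAS or SPSS for the area-under-the-curve comparison itself. Below is a minimal Python sketch of the idea, assuming you can export each run's raw data points to CSV (the file names and the elapsed_sec/metric/value column layout are hypothetical, and the 10% threshold is just a placeholder for your own tolerance):

import csv

THRESHOLD_PCT = 10.0  # flag metrics whose AUC differs by more than this percentage


def load_series(path):
    """Read a CSV export (hypothetical layout: elapsed_sec, metric, value)
    into {metric: [(time, value), ...]} sorted by time."""
    series = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            series.setdefault(row["metric"], []).append(
                (float(row["elapsed_sec"]), float(row["value"]))
            )
    for points in series.values():
        points.sort()
    return series


def auc(points):
    """Trapezoidal area under the curve for a list of (time, value) points."""
    return sum(
        (t1 - t0) * (v0 + v1) / 2.0
        for (t0, v0), (t1, v1) in zip(points, points[1:])
    )


def compare(baseline_csv, candidate_csv):
    """Print an exception line for every metric present in both runs
    whose AUC deviates by more than THRESHOLD_PCT from the baseline."""
    base = load_series(baseline_csv)
    cand = load_series(candidate_csv)
    for metric in sorted(base.keys() & cand.keys()):
        a, b = auc(base[metric]), auc(cand[metric])
        if a == 0:
            continue  # skip flat/empty baselines to avoid divide-by-zero
        delta_pct = abs(b - a) / abs(a) * 100.0
        if delta_pct > THRESHOLD_PCT:
            print(f"EXCEPTION: {metric} AUC changed {delta_pct:.1f}% "
                  f"(baseline {a:.1f}, current {b:.1f})")


if __name__ == "__main__":
    compare("run1_export.csv", "run2_export.csv")

Run it against two exports and you get an exception report only for the metrics that moved past your tolerance, which is essentially the "look at everything and tell me what changed by X" the original poster asked for. The hard part in practice is the export step and deciding a sensible threshold per metric.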
Replace ineffective offshore contracts, LoadRunnerByTheHour. Starting @ $19.95/hr USD.
Put us to the test, skilled expertise is less expensive than you might imagine.
Twitter: @LoadRunnerBTH @PerfBytes