Comparing test results of different executions
Hi all, I'm starting to work on performance testing and I'm having trouble reconciling a couple of thoughts; I hope we can have a good discussion about it.
One key element of executing a performance test is being able to compare the results against a baseline and/or other executions.
Now let's say that I have a script that runs my application's top 5 flows. Tomorrow I want to start testing a 6th flow, so I add it to my original suite.
Now, the results are going to be different than last time, right? There's going to be increased load on my system due to that new flow, so I shouldn't expect a result that I can safely compare with the baseline or a past execution (or at least that's what I assume).
Should I instead run each of these flows separately? But if that's the case, the total execution time grows considerably, so something like continuous performance testing becomes less feasible.
Sorry for the rambling; I hope what I mean makes sense, though.
Thanks in advance for your help clarifying this.
If the number of users for business processes 1-5 stays the same, then the comparison is useful. There may or may not be an impact from business process 6; that is what you are measuring.
The test for 5 business processes was the baseline; the new test for 6 business processes becomes the new baseline.
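To make the comparison concrete, here is a minimal sketch of comparing a run against a baseline per flow. It assumes JMeter's default CSV result format (a `.jtl` file with `label` and `elapsed` columns) and a hypothetical 10% regression threshold; the file names and threshold are assumptions, not anything from the thread.

```python
# Sketch: compare per-label average response times between two JMeter
# CSV result files (.jtl). Assumes the default CSV columns 'label' and
# 'elapsed'; the 10% threshold is an arbitrary example value.
import csv
from collections import defaultdict

def avg_elapsed_by_label(jtl_path):
    """Average 'elapsed' time (ms) per sampler label from a CSV .jtl file."""
    totals = defaultdict(lambda: [0, 0])  # label -> [sum_ms, sample_count]
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["label"]]
            t[0] += int(row["elapsed"])
            t[1] += 1
    return {label: s / n for label, (s, n) in totals.items()}

def compare_to_baseline(baseline, current, threshold=0.10):
    """Return labels present in both runs whose average slowed by more
    than `threshold` (fractional), mapped to (baseline_avg, current_avg)."""
    regressions = {}
    for label, base_avg in baseline.items():
        cur_avg = current.get(label)
        if cur_avg is not None and cur_avg > base_avg * (1 + threshold):
            regressions[label] = (base_avg, cur_avg)
    return regressions
```

Note that a newly added flow (the 6th business process) has no entry in the baseline, so it is simply skipped here until the 6-flow run is promoted to be the new baseline, matching the approach described above.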
We run a lot of this type of test and have to add new functionality from time to time.
Makes a lot of sense, Jim, thank you. I have since set up our JMeter tests to be executed from Jenkins on a regular basis, and that's exactly how I've been approaching the results.