When I am doing performance/load testing with a large set of data points, there is a chance that not all of the data will lead to "functionally" complete scenarios.
The system behaves differently for slightly different types of data. For example, if I enter a valid user ID and password, the system takes longer to respond with the necessary details. With an invalid user ID, however, the error message comes back relatively quickly.
Is there a way to handle such situations in a performance testing tool?
Is it a normal practice to ensure that the test data provided will ensure functional completeness?
One of the alternatives I have in mind is to run an automated functional test on all the data provided for the given function and then use the same data for performance testing.
Logistically, this might not work out with huge volumes of data.
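The pre-validation idea above could be sketched as a cheap filtering pass over the data set before the load test runs. This is only a sketch: `check_login` is a hypothetical stand-in for whatever quick functional check your system supports, and the pass/fail rule here is invented for illustration.

```python
# Hypothetical pre-validation pass: run a cheap functional check on each
# record and keep only those that complete the scenario end to end.
# `check_login` is a placeholder; in practice it would hit the system
# under test (here, any record with a non-empty ID and password "passes").

def check_login(user_id, password):
    return bool(user_id) and bool(password)

def prevalidate(records):
    """Partition test data into usable and rejected records."""
    usable, rejected = [], []
    for user_id, password in records:
        target = usable if check_login(user_id, password) else rejected
        target.append((user_id, password))
    return usable, rejected

data = [("alice", "s3cret"), ("bob", ""), ("", "pw")]
usable, rejected = prevalidate(data)
```

As the post notes, running this over huge volumes may be impractical; a sampled subset of each data category is a common compromise.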
It is not invalid to use invalid user credentials. It depends on how thorough one wants to be in coming close to a real-world simulation. I think it best to involve your customer in this decision. The impact to the customer is that your engineering time** (see bottom) increases for handling the attempt as well as detecting the exception.
Regarding functional completeness and validation, I stop short of that by communicating to the customer the expectation that the app and/or system has already been functionally tested and that any issues requiring a work-around are communicated to me. I encourage the customer to provide manual scripts that are system/business use-case equivalents, not functional test scripts. On the other hand, I feel it is extremely important to communicate any functional or performance issues discovered during script development.
** Engineering time and how to handle...
It seems that, using the scenario you painted:
1) You would have a list of valid user IDs/passwords and some invalid ones - if attempting real-world modeling.
2) You could/would attempt the invalid ones, then discard those credentials (increment a parameter pointer).
3) Have code to detect the error or exception. When it appears, retry with valid credentials.
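The three steps above could be sketched roughly as follows. This is a minimal illustration, not any particular tool's API: `send_login` stands in for the tool's request step, and the "error" detection rule is a made-up placeholder.

```python
# Sketch of steps 1-3: iterate a mixed credential list, detect the error
# response for invalid credentials, discard that pair, and retry with a
# known-good fallback. `send_login` is a hypothetical stand-in for the
# performance tool's request step.

VALID_FALLBACK = ("fallback_user", "fallback_pw")

def send_login(user_id, password):
    # Placeholder: pretend any user ID containing "bad" is invalid.
    return {"status": "error"} if "bad" in user_id else {"status": "ok"}

def run_iteration(credentials):
    results = []
    for user_id, password in credentials:
        resp = send_login(user_id, password)
        if resp["status"] == "error":
            # Invalid pair detected: discard it and retry with valid creds.
            resp = send_login(*VALID_FALLBACK)
        results.append(resp["status"])
    return results
```

In a real script, "discard" would mean advancing the data-file pointer so the invalid pair is not reused in later iterations.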
Apologies for a late addition to this thread - haven't been on the forum for a while.
> Is it a normal practice to ensure that the test data provided will ensure functional completeness?
This is a bit of a pet subject of mine. I always advocate "appropriate" functional verification during performance tests, so I can't resist chipping in with my 2c worth.
A very common cause of misleading results from performance tests is failure to detect errors (which can dramatically under- or over-state performance). But there is another aspect to this. A carefully designed performance test using realistic data (ideally a full copy of production data) often offers the chance to achieve far greater test coverage of obscure data cases than is possible by enumerating the permutations and testing each one with traditional functional testing techniques.
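The error-detection point can be sketched as per-response functional verification inside the load test: check every response and count failures separately, so fast error pages don't silently inflate throughput or deflate latency figures. This is a hedged illustration; the `"account-summary"` marker and the bookkeeping scheme are assumptions, not any tool's built-in mechanism.

```python
# Minimal sketch of per-response verification during a load test: each
# response is functionally checked, failures are counted separately, and
# latency is only aggregated for functionally complete responses.

from collections import Counter

def verify(response_body):
    # Hypothetical check: a complete response contains the expected
    # marker; an error page would not.
    return "account-summary" in response_body

def record(results, response_body, latency_ms):
    bucket = "pass" if verify(response_body) else "fail"
    results[bucket] += 1
    if bucket == "pass":
        # Excluding failed responses keeps fast error pages from
        # under-stating the reported response times.
        results["latency_total_ms"] += latency_ms

results = Counter()
record(results, "<html>account-summary ...</html>", 420)
record(results, "<html>login error</html>", 35)
```

Most load-testing tools offer an equivalent of `verify` as a response assertion; the key practice is applying one to every transaction, not just spot checks.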
Rather than going on at length about it - here's a link to a talk I gave to a local Test Professionals group on the topic - in case anyone's interested.