Testing multiple applications
I performance test in-house applications for my company using the ODBC protocol, and there are currently about 20 applications that I test. (We only test the applications that have changed for a release, so we do not test all 20 every time.) Since I have worked here, the approach to performance testing the in-house applications has been to test each system individually, running a separate scenario for each, and then, once all the systems have been tested individually, to build another scenario that runs a mix of scripts with a small number of users for each system to test system compatibility.
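To make the two approaches concrete, here is a minimal sketch of the scenario shapes being compared. The application names and vuser counts are invented for illustration; they stand in for whatever scripts and load levels your releases actually use.

```python
# Hypothetical illustration of the two scheduling approaches:
# several full-load individual scenarios vs. one low-load mix.
apps_changed = ["orders", "billing", "inventory"]

# Individual scenarios: full load on one application at a time.
individual_scenarios = [{"app": app, "vusers": 50} for app in apps_changed]

# Compatibility mix: every changed application at a small vuser count,
# all running in the same scenario.
mix_scenario = [{"app": app, "vusers": 5} for app in apps_changed]

print(len(individual_scenarios))                       # separate runs needed
print(sum(s["vusers"] for s in mix_scenario))          # total vusers in the mix
```

The manager's proposal effectively replaces the `individual_scenarios` list with a single entry carrying all scripts at high vuser counts.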
My test manager approached me yesterday and said he was thinking of abandoning the individual application tests and instead creating one scenario that runs everything at once in a single big test. This test would use a large number of users and pretty much all the scripts we run. We would therefore run only one performance test for each release, incorporating everything.
He asked what we thought, and I didn't really know what to say. The performance test team also does web testing for the company's external portal service using a single large scenario, and it sounds like my manager wants to adopt the same approach for the in-house app testing.
I was under the impression that best practice is to test each application in isolation, but if that is true I want to be able to put a good argument to my boss.
Has anyone got any advice? Am I correct in thinking that it is better to test each application individually?
Re: Testing multiple applications
I have run tests both ways. If you run with the larger set of tests, you need to assign these items a role and understand what such a background load brings to the table.
I tend to use such background transactions as a control set. The performance of these items should be well known and constant under the load levels of the application under test. If the performance of the control set begins to deviate, that tells you something about how the application under test is consuming resources common to all of the test groups, and that it is not playing "nice" with those resources. If you do find such behavior, you will probably come back to a solitary test to examine more carefully what the individual application is using.
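The control-set idea above can be sketched as a simple post-run check: record baseline response times for the background transactions from earlier solo runs, then flag any transaction whose mean in the combined run drifts well above its baseline. This is a minimal illustration, not LoadRunner analysis tooling; the transaction names, samples, and three-sigma threshold are all invented assumptions.

```python
# Hypothetical drift check for "control set" background transactions
# whose response times should stay constant under load.
from statistics import mean, stdev

# Baseline response times (seconds) captured from earlier solo runs.
# These figures are made up for illustration.
baseline = {
    "ctrl_login":  [0.41, 0.39, 0.43, 0.40, 0.42],
    "ctrl_lookup": [0.12, 0.11, 0.13, 0.12, 0.12],
}

def drifted(name, current_samples, sigmas=3.0):
    """True if the current mean response time sits more than
    `sigmas` standard deviations above the baseline mean."""
    base = baseline[name]
    threshold = mean(base) + sigmas * stdev(base)
    return mean(current_samples) > threshold

# Samples from the combined run: ctrl_login has slowed noticeably.
print(drifted("ctrl_login",  [0.55, 0.58, 0.61]))   # -> True
print(drifted("ctrl_lookup", [0.12, 0.13, 0.12]))   # -> False
```

A `True` result on a control transaction is the signal the reply describes: the application under test is likely contending for shared resources, and an isolated run is warranted to pin down which ones.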
Also, go back to your manager. It will be interesting to find out whether your manager is a true test manager, with a full understanding of the testing process and performance testing in particular, or a manager of testers with very little testing knowledge. I would put the question back to the manager: "Hey, that sounds interesting, let's explore it. What are your thoughts on how this will improve our test integrity or delivery quality?" If your manager is like many, they are likely seeing the time savings and haven't taken on the challenge of balancing that against test integrity and the quality of your test deliverable. If you can improve all three, or even two out of three, that is laudable. But if you need to sacrifice key test-quality metrics to meet a time goal, then that should be noted as well and delivered with every single test result as something known about your results.
Replace ineffective offshore contracts, LoadRunnerByTheHour. Starting @ $19.95/hr USD.
Put us to the test, skilled expertise is less expensive than you might imagine.
Twitter: @LoadRunnerBTH @PerfBytes