Load testing and Remote Desktop
I have been tasked with bringing some automation to a QA group. The point of the automation is twofold. First, regression testing takes a lot of time, and the team would like to automate a suite of tests that would serve as the regression tests. The second part of the automation is to perform load/performance testing on the applications.
There are two applications. One is a public-facing web-based app, so that is fairly straightforward. The second is an internal desktop application, written in VB6, which relies heavily on third-party controls for its functionality.
I am having success with the regression testing and am evaluating various tools to automate it. Where I am left scratching my head is how to performance/load test.
The application runs as follows.
In production there are a number of terminal servers that the internal application sits on. Application users open a remote desktop session to one of these terminal servers, fire up the app and use it. Each terminal server can have a large number of users accessing the app this way.
I am trying to figure out how to replicate this for load testing. I want to monitor various things on the terminal server and DB server as the number of application users increase to make sure the application is not going to cause issues.
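One low-cost way to capture those server-side numbers during a test run is Windows' built-in typeperf utility with a counter file. A minimal counter file might look like the sketch below; the Terminal Services and SQL Server counters are assumptions (they only exist if those roles/products are installed, and your DB server may not be SQL Server), so adjust to your actual environment:

```
; counters.txt -- sample perfmon counters for a load test run (illustrative)
\Processor(_Total)\% Processor Time
\Memory\Available MBytes
\Paging File(_Total)\% Usage
\PhysicalDisk(_Total)\Avg. Disk Queue Length
\Terminal Services\Active Sessions
\SQLServer:General Statistics\User Connections

; collect every 5 seconds to a CSV while the test runs, e.g.:
;   typeperf -cf counters.txt -si 5 -o perf.csv
```

Sampling to CSV lets you line the counter timeline up against your response-time measurements afterwards.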
This is where I am. I would greatly appreciate any ideas / thoughts / insights
Re: Load testing and Remote Desktop
There are really two questions here, and they are often commingled in Remote Desktop and Citrix environments.
Question 1: Does the application scale and meet expected response times?
Question 2: Does my terminal server scale to n concurrent users without introducing additional response-time overhead?
It's difficult to solve both questions at once because you have two unknowns which result in a given response time. I generally recommend decoupling such tests. In the first set, solve for your application scalability and response time question. Once that is a known item, then look at how many people you are able to place on a particular terminal server without degrading the response time from your "known good" reference value.
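To make the second phase concrete, here is a minimal sketch of the comparison it implies: given the "known good" baseline response time from phase one and measurements taken at increasing session counts, find the largest user count that stays within a chosen tolerance. All numbers, the tolerance, and the function name are hypothetical, not from any tool:

```python
# Sketch (hypothetical data): find the highest concurrent-user count on a
# terminal server whose measured response time stays within a tolerance
# of the single-user "known good" baseline from the first test phase.

BASELINE_MS = 250.0   # assumed phase-one reference response time
TOLERANCE = 0.20      # allow up to 20% overhead before calling it degraded

# (concurrent users, median response time in ms) -- illustrative numbers only
measurements = [
    (5, 255.0),
    (10, 262.0),
    (20, 281.0),
    (40, 298.0),
    (60, 340.0),
    (80, 415.0),
]

def max_users_within_tolerance(baseline, tolerance, samples):
    """Return the largest user count whose response time is within tolerance."""
    limit = baseline * (1 + tolerance)
    ok = [users for users, ms in samples if ms <= limit]
    return max(ok) if ok else 0

print(max_users_within_tolerance(BASELINE_MS, TOLERANCE, measurements))  # 40
```

With these made-up numbers the server holds 40 sessions before response time exceeds the 20% budget; the 60-user sample is the first to breach it.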
Along the way you will need tools that support your particular application and deployment architecture. The constraining element is likely to be RDP, as it is very thinly supported in the market, pretty much exclusively by the higher-end commercial performance test tools. Most of those will also likely support an interface from your VB app for a pure database performance test, as well as web-based interfaces for your web components. And yes, they will all pretty much hook into the performance monitor on Windows, and can pull database metrics from the major market database servers while the test is ongoing.
Replace ineffective offshore contracts: LoadRunnerByTheHour. Starting @ $19.95/hr USD.
Put us to the test, skilled expertise is less expensive than you might imagine.
Twitter: @LoadRunnerBTH @PerfBytes
Re: Load testing and Remote Desktop
Hi James, thanks for the response, and sorry for the three-month delay in getting back here.
I agree with your breakdown into the two separate questions to effectively test the application.
I guess one simple approach may be to sit with various users of the system and understand how they use the application. From that, build usage models for the application based on the different user groups, then time typical application responses under each usage model. Use these times as the best approximation of expected and acceptable response times. When newer versions of the application are being tested, we can have the users repeat their typical usage scenarios and time the responses, then compare before and after to look for increases in response times.
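That before/after comparison can be reduced to a small script: take several timed runs per usage scenario for each version, compare the medians, and flag any scenario that slowed beyond a threshold. The scenario names, timings, and 10% threshold below are all illustrative assumptions:

```python
# Sketch (illustrative numbers): flag usage scenarios whose median response
# time grew by more than a chosen threshold between two application versions.

from statistics import median

THRESHOLD = 0.10  # flag anything more than 10% slower (assumed tolerance)

# seconds per timed run, per usage scenario -- hypothetical data
before = {
    "order entry":   [1.8, 1.9, 1.7, 1.8],
    "daily report":  [4.2, 4.0, 4.1, 4.3],
    "record search": [0.9, 1.0, 0.9, 0.8],
}
after = {
    "order entry":   [1.9, 1.8, 1.9, 2.0],
    "daily report":  [5.1, 5.0, 5.2, 4.9],
    "record search": [0.9, 0.9, 1.0, 0.9],
}

def regressions(before, after, threshold):
    """Return {scenario: fractional slowdown} for scenarios over threshold."""
    flagged = {}
    for name in before:
        b, a = median(before[name]), median(after[name])
        change = (a - b) / b
        if change > threshold:
            flagged[name] = round(change, 3)
    return flagged

print(regressions(before, after, THRESHOLD))  # {'daily report': 0.217}
```

Using medians rather than single runs smooths out one-off slow samples, which matters when the timings come from humans with stopwatches rather than a tool.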
Then we could use some technology to replicate multiple users on the terminal server. Whatever technology we select needs to be able to measure application response times. As the number of concurrent users grows, we want to watch the response times and compare them against the expected/acceptable values to see if/when degradation occurs.
At the end of your initial response you mention that "they will all pretty much hook into the performance monitor on Windows". Are the higher-end tools, which can be quite expensive, really just looking at the perfmon metrics, or are those metrics just one part of their offering?