Not sure where to start!
Not certain I'm in the right forum - if not, I'd appreciate being pointed to the correct one!
I'm just beginning to look into ways to do performance and load testing for a particular application. However, this is a pretty different application...
It's sort of an ASP.NET application, but not really. It does use ASP.NET, but it doesn't run in a regular browser. The application is actually a Java wrapper around the Gecko engine! So it looks and acts like a native Win32 client-server application, but internally it's ASP.NET, which means involving IIS and C# and all that.
(Side note: I believe this actually started out as a "normal" ASP.NET application, but the owner of the company that develops the app wanted his own unique, proprietary GUI, hence the development of the Java browser shell.)
The problem is that it's pretty sluggish, which shouldn't be too surprising. We're trying to find ways to pinpoint possible bottlenecks, and are having a hard time finding any tools that can be used for this conglomeration of technologies. (I have to stay polite and avoid describing it the way I'd REALLY like to :-( )
Actually, there are two areas that need attention: performance and load testing. I've been trying to use OpenSTA, and have managed to at least record a script that I could then duplicate to simulate multiple users. Unfortunately, the documentation for that seems to assume a certain level of knowledge I don't possess, so I don't know how useful it might turn out to be.
We've also used NProf to some extent to try to ascertain possible performance issues with the .NET code for this application.
Oh, the other thing is that there's no money (read: zero dollars) in the budget for any kind of commercial profiling and testing tools.
Eventually the aim is to have an automated test suite that can provide enough information to enable us to optimize as fully as possible.
I realize this is somewhat disjointed, and I apologize for that. I'm at the level of not even being sure what questions I need to ask, since this is the first time I've ever approached this kind of thing. So any and all comments and suggestions are appreciated!
Re: Not sure where to start!
What is the financial risk to this company if this application does not scale in production? Define in your local currency.
Is the level of investment for performance testing (budget for people and tools) in line with the level of exposed financial risk if the application does not scale?
If the financial risk is low and the level of investment is low, then you may be OK with very little performance testing (if any). On the other hand, if the risk to the company is high and the exposed risk to the downstream customers is also high, then you may need to make the case to your boss for your efforts. If the boss sees zero value in your efforts, as defined by a willingness to spend zero dollars, well, read the tea leaves and find a new position, because you are facing performance antipattern number one: those who don't understand the value of performance testing are in charge of how you do your work. You will also get the blame for any performance-related defect not found and addressed. Your boss will not take the blame for an unwillingness to invest in something he/she sees no value in today.
Since the performance envelope and exposed financial risk for a single user is already known, what you are trying to do with your performance testing efforts is define the exposed financial risk of a scalability problem (or no problem and hence little risk). Perhaps this might get you some budget.
Tools that work well with .NET, especially given your interesting mix of technologies, are going to be more on the commercial side of the house. Sure, you might be able to get OpenSTA to do the job, but you will burn up the delta between buying a tool and getting a freebie in the dollars spent on labor, and on patching and extending OpenSTA to report in the same manner as commercial tools do out of the box.
Re: Not sure where to start!
James, thanks for your cogent and thoughtful response. I appreciate it.
I'll elaborate a little bit: my position is somewhat ambiguous - I'm a contract developer for this company, which is a small startup, hence the limited budget. Therefore I'm not really privy to any financial information, so I really don't know what kind of budget exists for any part of the operation. Nor do I have a QA background at all, except incidentally as it has impacted various software projects I've worked on in my somewhat varied career. So a formalized approach to this is definitely outside my experience.
I suspect the financial risk is not all that high, since it's a vertical market to a somewhat captive audience. So the incentive to spend dollars and man-hours on a QA process is probably less than in some other situations. Wrong-headed, to be sure, but that's the situation.
(Actually, the correct path would be to re-architect the application and remove ASP.NET from the equation altogether, but THAT's not going to happen! :-))
Again, thank you for your response to my post - you've given me a clearer picture of what the focus should be.
Re: Not sure where to start!
I assume you have read the FAQ for Performance and Load Testing on this forum. If not, check it out, you'll find some great starting points there. Here are a few things off the top of my head that may at least give you some ideas if you haven't already discovered them.
Since there is no budget, another free tool you might try is the Microsoft Web Application Stress Tool (available on Microsoft's Download site). I've only played with it, but it looks easy to use and I've heard of other people getting good use out of it. If the free tools don't work for you, you're stuck with either writing your own test harness or getting a bunch of manual users coordinated to hit your application at the same time while you monitor it. A test harness could be as simple as a batch file or script that loads some XML data and sends it to your application, or a database script that loads data into tables. I am not a 'real' programmer, but I've been able to script out some useful solutions like that in a fairly short amount of time. If you have some real programming expertise, then you can make the API calls to simulate activity in your application. I've watched real programmers come up with stuff like that within a day. Very simple solutions, of course, compared to a load test tool, but sometimes something simple is all you need.
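To show how little a homegrown harness can be, here's a minimal sketch in Python (any scripting language would do the same job). The stub HTTP server below is just a stand-in so the example is self-contained; in practice you'd point the worker threads at your real IIS URL. All names, URLs, and user counts here are illustrative, not part of any real tool.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Stand-in for the application under test; replace the URL below
# with your real endpoint (e.g. the IIS URL serving the ASP.NET pages).
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request console logging
        pass

def run_user(url, requests_per_user, timings):
    """Simulate one user: issue sequential GETs and record response times."""
    for _ in range(requests_per_user):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)

def load_test(url, users=5, requests_per_user=10):
    """Run `users` concurrent simulated users and return all timings."""
    timings = []  # list.append is safe to call from multiple threads
    threads = [threading.Thread(target=run_user,
                                args=(url, requests_per_user, timings))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_address[1]
    times = load_test(url, users=5, requests_per_user=10)
    print("requests completed:", len(times))
    print("average response: %.4fs" % (sum(times) / len(times)))
    server.shutdown()
```

Crude as it is, ramping `users` up while watching the average response time already gives you a first scalability curve to show the boss.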
Build a plan to measure and monitor:
Don't get too hung up on coming up with a freeware or homegrown test harness right away; you can build that over time. If you want to get started, then start putting some measuring and monitoring methods in place. Even if you're just one manual user in the application, that still gives you something to measure, at least a benchmark for going forward. If you're finding sluggish performance with only a few users, then you may be able to start uncovering bottlenecks with very little load. Once you figure out how to create load, the real fun comes with monitoring it. Lots of free tools are easy to find for this; you might check out the IIS Diagnostics Toolkit if you're using IIS (again, available on Microsoft's download site). If there's a database involved, there must be built-in tools in your data management software for monitoring. For example, if it's a SQL Server (Transact-SQL) database and you have Enterprise Manager, the SQL Profiler is a fantastic built-in tool to monitor what's happening on the database: transaction times, locks, etc. The application itself probably logs in different areas, perhaps writing to database activity and/or error tables, as well as its own event logs. Check out the server event logs for errors on all servers involved. My experience with complex applications is that if I keep looking, I'll find more places to measure and monitor what's going on.
A lot of people seem to overlook what's built into Windows: Performance Monitor. If you haven't used this, and you're on an XP or Windows Server machine, go to the Run command and type PERFMON. This is an excellent tool for monitoring CPU, memory, and other key areas of a server (or desktop, if you think you have client issues). The Help documentation there even has a good overview of measuring for performance and best practices. Even when we are using LoadRunner, which has fantastic monitoring tools and reports, we often use Perfmon as well. It's a great way to quickly capture data that can later be loaded into a spreadsheet, graphed, analyzed, etc. (If you're using Linux or something else, I am not familiar with what's available, but there might be similar tools built right into the server.)
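As an example of the "capture data, then analyze it later" workflow: once you have Perfmon counter samples in a CSV file (the Windows relog tool can convert its logs to CSV), a short script can boil each counter down to min/average/max. The column names in the sample data below are made up for illustration; a real export uses full counter paths like \\SERVER\Processor(_Total)\% Processor Time, and the exact layout may differ, so treat this as a sketch.

```python
import csv
import io
import statistics

def summarize_counters(csv_text):
    """Summarize a Perfmon-style CSV export: the first column is the
    sample timestamp, the remaining columns are counter values.
    Returns {counter name: (min, avg, max)}; blank samples are skipped."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    counters = {name: [] for name in header[1:]}
    for row in reader:
        for name, value in zip(header[1:], row[1:]):
            if value.strip():
                counters[name].append(float(value))
    return {name: (min(vals), statistics.mean(vals), max(vals))
            for name, vals in counters.items() if vals}

# Illustrative sample data; real exports use full counter paths.
SAMPLE = '''"(PDH-CSV 4.0) Time","Processor % Time","Available MBytes"
"04/01/2005 10:00:00","12.5","512"
"04/01/2005 10:00:15","80.0","498"
"04/01/2005 10:00:30","95.5","470"
'''

if __name__ == "__main__":
    for name, (lo, avg, hi) in summarize_counters(SAMPLE).items():
        print("%s: min=%.1f avg=%.1f max=%.1f" % (name, lo, avg, hi))
```

From there it's one step to dump the summaries into a spreadsheet and graph them alongside your transaction times.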
Careful on analyzing and reporting results:
Once you have some results captured, such as individual transaction times, CPU and memory spikes, page faults, average response times at the user front end, etc., you'll have to play with your data to make sure you collect and present it in a useful way. Don't just look at averages; look at things like the standard deviation, which measures how widely the individual datapoints are spread around the mean. You might find that as the load grows, the average response time seems to climb in a nice smooth curve, but if you factor in the standard deviation, you may see huge spikes that come into play at a certain load level, or with a certain type of activity, which could indicate that more of your users are experiencing unacceptable response times, or that some other negative effect is occurring elsewhere. I've found that loading transaction times into Excel and mapping them to a scatter graph is a great way to see the effect of the spread. Be careful of oversimplified overall averages. One great article I read recently pointed out that I could put one of your feet into a bucket of ice-cold water at 0 degrees, and the other into a bucket of scalding water at 150 degrees, and tell you that on average you're a comfortable 75 degrees. Watch for this in your averages, where a certain number of extremes in the response times can mislead everyone if you present the number as an overall average.
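To make the average-versus-spread point concrete, here's a tiny Python sketch using made-up response times: two datasets with the same mean, where only the standard deviation reveals that one of them has spikes your users would certainly notice.

```python
import statistics

# Two sets of response times in seconds (figures invented for
# illustration). Both average out the same, but the second set
# contains spikes that the average alone completely hides.
steady = [2.0, 2.1, 1.9, 2.0, 2.0, 2.1, 1.9, 2.0]
spiky  = [0.5, 0.4, 0.6, 0.5, 0.5, 7.5, 0.5, 5.5]

for name, times in (("steady", steady), ("spiky", spiky)):
    print("%s: mean=%.2fs stdev=%.2fs max=%.2fs" %
          (name, statistics.mean(times),
           statistics.stdev(times), max(times)))
```

Report only the mean and both workloads look identical; add the standard deviation (or a scatter graph of the raw times) and the second workload's 7.5-second outliers jump out immediately.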
Good luck. Sorry if that’s a bit unfocused, but you said you were just getting started and looking for comments. If you have any more specific questions or problems I would be happy to give some more specific answers.
Re: Not sure where to start!
I don't know what Phil got from your article, but I found it really helpful. The part on the standard deviation is worth thinking about.
I am also new to performance testing, and such knowledge is like finding a gold coin. I am struggling to understand the graph outputs from the OpenSTA tests, and any further assistance in interpreting the results would be the rest of the gold treasure.
Thank you! :-)