I would still monitor at least those statistics for future analysis. If I keep the statistics for an x+1 user load, and something goes wrong at x+2 users tomorrow, I have a baseline to compare against. Besides, you can automate the statistics collection, so it isn't a big pain anyway.
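To illustrate what automating that collection might look like, here is a minimal sketch in Python using only the standard library. It assumes a Unix host; the `collect_stats` name, the CSV layout, and the choice of load average plus peak RSS as the sampled metrics are my own for illustration, not anything prescribed above:

```python
import csv
import os
import resource
import time

def collect_stats(path, interval=1.0, samples=5):
    """Sample basic host/process stats at a fixed interval and append
    them to a CSV file for later comparison between test runs.

    Each row: wall-clock timestamp, 1-minute load average,
    peak resident set size of this process (ru_maxrss is reported
    in KB on Linux, but in bytes on macOS).
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "load_1min", "peak_rss"])
        for _ in range(samples):
            load1, _, _ = os.getloadavg()          # Unix-only
            rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            writer.writerow([time.time(), load1, rss])
            time.sleep(interval)
```

Run it in the background alongside the load test (e.g. `collect_stats("run_x1.csv", interval=5, samples=720)` for an hour-long run at five-second resolution), and you end up with a file per run that you can diff against the next one.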
I have a question to pose to all of you performance testers. We do quite a bit of business transaction profiling and user profiling, but one of the things we are struggling with is metrics for performance testing. In every plan I put together, I stress the importance of certain metrics: response times, CPU, memory allocation, session information, backend process time, etc.
What I want to know is this: if I run my tests (load tests that actually model production usage) and my response times are fine, is there a reason to collect all those other metrics? Let's assume my tests exercise the system to x+1 in terms of usage.
We also run reliability tests that check for stability and uptime. Again, if response times are OK, the server doesn't crash, and there are no hangups, is there a reason to collect those other metrics? Or should those metrics be reserved for troubleshooting mode, once you have identified a problem?
Yeah, I would mainly agree with Scott. If, however, you aren't looking at load testing purely from an end-user perspective, then you should still monitor your other metrics.
For example, if the application you are testing is not the only application hosted on the server you're using, then monitoring the other metrics may be a prerequisite of the test!
If you are already running reliability tests which haven't shown any huge memory leaks or instability, then there probably isn't any great added value in doing this, unless you always want the information on hand for any unexpected troubleshooting.