I've been reading a lot about performance test tools lately, and one of the most important aspects, from my point of view, is a tool's ability to analyze problems and bottlenecks. As I understand it, this is done by placing monitors on the servers and databases.
While watching a Mercury demo, I also noticed that their drill-down functionality could pinpoint where in the code the bottleneck is. How is this possible? Surely running a profiler at the same time as a load test can't produce accurate results, right?
You are right: it's not as simple as the tool vendors make it sound. If the bottleneck is CPU, you will see it at 100% utilization, which is obvious. But what if your bottleneck is the file system — do you know how to read that from the monitors? And even if you know the CPU is the bottleneck and the drill-down shows database operations as the hotspot, how do you know whether that is due to (a) a lack of indexes, (b) deadlocks in the database, or (c) other DB-specific issues such as unclosed connections, transactions left open, the DB client cache, etc.? In other words, if DB operations are the bottleneck, how do you know how to fix it — by tuning the database, reorganizing table structures, optimizing the application code or even the architecture, tuning a cache, and so on?
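For the "lack of indexes" case specifically, the database's own query planner is usually a better witness than any load-test monitor: ask it how a slow query will be executed, before and after adding an index. A minimal sketch with SQLite's EXPLAIN QUERY PLAN (the orders table and index name are invented for illustration — the same idea works with EXPLAIN in MySQL/PostgreSQL):

```python
import sqlite3

# Toy schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

def plan(sql):
    # Each EXPLAIN QUERY PLAN row ends with a human-readable "detail" column
    # saying whether SQLite scans the whole table or searches via an index.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"

plan_before = plan(query)  # reports a scan over the whole table
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = plan(query)   # reports a search using the new index

print(plan_before)
print(plan_after)
```

If the plan still reports a full scan under load, no amount of drill-down in the load-test tool itself will tell you that; you have to correlate the tool's "DB time is high" finding with evidence like this from the database side.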