I have used the Web Application Stress Tool (WAST) quite a bit. It really shines on ASP-driven sites because that is what it is geared toward. It works pretty well given that it is free, but it is certainly not the best out there if you want in-depth monitoring or staged loads.
What I like to use it for is to simulate a great deal of background session load while I am using another tool to actually do my monitoring.
I have looked at this tool too and it seems to do quite a lot. Where, specifically, is it weak? From what I can see, this tool may actually be good enough for most load testing requirements.
Another good tool (not free) is Forecast by www.facilita.com.
For me, the weak areas are analysis and reporting. Monitoring is virtually non-existent when it comes to service times and waiting times (i.e., service demand), which even Microsoft has admitted. You can supplement this with Performance Monitor, of course, but network monitoring tied in with the tool would be nice. On the reporting side, I do like the statistical outlier approach it takes from a base level, but I do not like that it lets you remove those outliers, because workload characterization for the Web needs to account for outliers in the form of burst traffic. I also like its ability to check for "race conditions," but I find this does not always hold up when compared against path delays under a given load, even when the load is driven with a static throughput function.
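To make the outlier point concrete, here is a rough sketch (Python, with made-up numbers; the mean-plus-three-sigma cutoff is just a common trimming rule, not necessarily what WAST does) of how removing statistical outliers hides exactly the burst traffic you need to characterize:

import random

random.seed(1)

# Mostly fast responses with occasional bursts -- a crude stand-in
# for the bursty, heavy-tailed traffic real Web sites see.
samples = [random.expovariate(1 / 50.0) for _ in range(950)]          # ~50 ms baseline
samples += [2000 + random.expovariate(1 / 50.0) for _ in range(50)]   # burst spikes

def mean(xs):
    return sum(xs) / len(xs)

# A common trimming rule: drop anything beyond mean + 3 standard deviations.
m = mean(samples)
sd = (sum((x - m) ** 2 for x in samples) / len(samples)) ** 0.5
trimmed = [x for x in samples if x <= m + 3 * sd]

print("mean with bursts kept: %6.1f ms" % mean(samples))
print("mean after trimming:   %6.1f ms" % mean(trimmed))
print("samples dropped:       %d of %d" % (len(samples) - len(trimmed), len(samples)))

The trimmed numbers describe a calm 50 ms site, while the real sustained picture has one request in twenty taking over two seconds.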
I have also run into some discrepancies when using it against Unix boxes, although others have told me they have not seen the same thing. It probably just has to do with the environments we were running in, or perhaps I was doing something wrong.
I do like its bandwidth throttling, and it seems to correspond to reality pretty well. But I find that when you run a staged load with a heavy-tailed workload, the throttling mechanism they use does not always report the correct distribution. Granted, the error margin is pretty small, but it is there, and coupled with the outlier approach they take, sustained stress measures can sometimes be misleading.
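I cannot say what their throttling code actually does, but as a rough sketch of the general problem, here is how any mechanism that effectively clips the slowest transfers will misreport a heavy-tailed distribution (the link speed and cutoff below are hypothetical, and the sizes are synthetic Pareto data, not WAST output):

import random

random.seed(1)

LINK_KBPS = 56.0      # hypothetical throttled link speed
CAP_SECONDS = 15.0    # hypothetical cutoff imposed under the throttle

# Heavy-tailed response sizes in KB (Pareto, alpha < 2, like much Web traffic).
sizes_kb = [4 * random.paretovariate(1.2) for _ in range(10000)]

true_times = [kb * 8 / LINK_KBPS for kb in sizes_kb]   # seconds on the wire
reported = [min(t, CAP_SECONDS) for t in true_times]   # the tail gets clipped

def pctl(xs, p):
    return sorted(xs)[int(p * len(xs))]

for p in (0.50, 0.99):
    print("p%02d  true %6.1f s   reported %6.1f s"
          % (p * 100, pctl(true_times, p), pctl(reported, p)))

The median is untouched, so the error margin looks small, but the 99th percentile is far off, and that tail is exactly where sustained stress measures live.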
It is a very good tool for its price, and I like using it, but I have never been comfortable enough with it to make it the basis of my performance testing effort.
The price is nothing, so for most performance activities I think it is very good, especially combined with the NT Performance Monitor.
It allows a tester to get testing a lot quicker, without the hassle of signing off on buying a new performance tool.
It's a real shame that Microsoft doesn't support open source, as this would be an excellent tool for people to develop and improve.