Hello there,

While doing performance testing on a web system I ran into a problem; maybe someone knows a solution. The situation is the following: I have a web system (.NET + MSSQL). A client reported that one page of the site opened really slowly. We started testing and saw that AVG95 for the page was ~9.5 s; the page itself is around 1 MB. I was testing with 50 users and no think times.

The developer said his server-side logs showed the page opening in about 2 s, and there was the problem: my logs showed 9.5 s, while the logging built into .NET showed 2 s. I can't fully trust those logs, because they only measure what happens inside .NET, not everything IIS does. On a Microsoft support page I found that IIS 7.0 (which I use on the web server) can log a time-taken value per request, but that time can include a network factor when the network is slow and the responses are big. At the time I was on a 100 Mbit network, so I assumed that wasn't a problem. I retested: my testing tool showed ~9 s, the IIS logs ~9 s, the developer's logs ~2 s. Then we moved my computer's connection from a 100 Mbit switch to a 1 Gbit one (ten times the bandwidth) and retested: my log showed ~2 s, IIS ~2 s, the developer's logs ~2 s. Finally, believable results.

So we resolved our immediate problem, but the BIG question remains: how do you measure processing time on IIS without the network factor? I read about TCP buffering, but I can't use it because my web system uses SSL. Does anyone have an idea how to measure processing time on the server without the network factor?
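In case it is useful to anyone comparing the same logs, here is a minimal sketch (Python) of summarizing time-taken values from an IIS W3C extended log. The field names `time-taken` and `cs-uri-stem` are the standard W3C field names IIS writes; the field order is not fixed, so the sketch reads it from the `#Fields:` directive in the log itself. The sample log text and URLs below are made up for illustration.

```python
import statistics

# Hypothetical excerpt of an IIS 7.0 W3C log; real files live under
# %SystemDrive%\inetpub\logs\LogFiles by default and have more fields.
SAMPLE_LOG = """#Software: Microsoft Internet Information Services 7.0
#Fields: date time cs-method cs-uri-stem sc-status time-taken
2012-05-01 10:00:00 GET /slowpage.aspx 200 9500
2012-05-01 10:00:01 GET /slowpage.aspx 200 9400
2012-05-01 10:00:02 GET /fast.aspx 200 120
"""

def time_taken_ms(log_text, uri_filter=None):
    """Collect time-taken values (milliseconds), optionally for one URI."""
    idx_uri = idx_tt = None
    values = []
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            # Field order comes from the log's own #Fields directive.
            fields = line.split()[1:]
            idx_uri = fields.index("cs-uri-stem")
            idx_tt = fields.index("time-taken")
        elif line and not line.startswith("#") and idx_tt is not None:
            parts = line.split()
            if uri_filter and parts[idx_uri] != uri_filter:
                continue
            values.append(int(parts[idx_tt]))
    return values

slow = time_taken_ms(SAMPLE_LOG, "/slowpage.aspx")
print(statistics.mean(slow))  # average time-taken for the slow page, in ms
```

Note that this only averages what IIS recorded; as described above, time-taken itself still includes time spent sending the response to the client, which is exactly the network factor in question.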

Thanks in advance,
Janis K