  1. #1

    Allowing for client-side processing time when reporting response times

    I know this is a topic that comes up quite frequently, but I'm interested in people's thoughts on how to handle the case we are dealing with at the moment. It's one of my long posts, I'm afraid, but it's quite a complex topic.

    We are testing a browser-based application with a fairly rich user interface (lots of Javascript). The response time we are interested in is the overall user experience time - the time from the user action (mouse click or keystroke) to completion of rendering of the response. This time can be thought of as consisting of three components:

    A. Time between the user mouse click or keystroke and the first HTTP request being submitted by the browser to the server.
    B. Time from the first byte of the first HTTP request being sent to the last byte of the last response being received.
    C. Time between receipt of the last byte and completion of rendering of the page.

    Performance testing tools (or at least the ones I know) only measure B - which includes network and server time and (in some tools at least) an allowance for client side processing time interspersed between the various HTTP requests on the page. In the past I have generally taken the approach that B forms the bulk of the response time, and A and C are relatively static, so all I have generally done is explain what is and isn't included in the reported results, along with a hand-wavy estimate of the extent of the variance.

    For the application we are currently testing this is not good enough. Some pages have very significant delays due to client-side processing, and the client-side delay time is not constant. The most extreme example we have found takes 7.5 seconds for A, ~1 second for B and ~0.5 seconds for C. Drilling down on where the time is going has produced some interesting results.

    The bulk of the time for A is due to the execution of a generic Javascript routine (which forms part of the application framework and is therefore executed for every page in the application). The time to execute is directly related to the size of the *current* page (i.e. the page we are on, not the one we are navigating to). The routine walks the entire DOM, which for the 7.5 second page means 3,500 objects (yes, it's a very big page).

    Another surprise was finding how much this time varied with different specification client PCs. On a 2.4 GHz Celeron the time is 7.5 seconds. On a 2.8 GHz Pentium 4 it's 3.5 seconds.

    Another complication in trying to allow for client-side time is that C (which is basically the final portion of browser rendering time) may vary depending on server and network response time, because rendering happens asynchronously. The browser can start rendering the page as soon as the start of the page is returned. If delivery of the final page elements is delayed by server or network delays, the page may be almost completely rendered before the last element is delivered, so C may be negligible. On the other hand, if the entire response is delivered effectively in "one chunk", the browser has to do all rendering after the point at which a performance testing tool has stopped timing.

    Whilst the 7.5 second example is an aberration due to an extreme page size, and is being corrected by reconfiguring the specific page, the client-side contribution to response times is still considerable on most pages. Given that the performance requirements for this application are relatively stringent (generally 2 second 95th percentile response times) we clearly can't just ignore client-side processing time. However, with client-side time not being constant, and no direct performance testing tool support for measuring (or emulating) it, we are debating how best to cater for it.

    Options I can think of are:

    1. Supplementing our virtual users with GUI scripts driving the browser and emulating individual users on dedicated PCs. I've taken this approach once or twice in the past, but it can double the script development effort, and getting a statistically significant sample can be a challenge (co-ordinating enough PCs running GUI test scripts really adds complexity to the process).
    2. Make manual measurements of the overall response time and adjust the reported results accordingly. Once again, getting a statistically significant sample is difficult - especially when client-side time is so dependent on page size and server response time. Also, as the numbers reported by the performance testing tool will not include client-side time, it can be difficult to communicate the results effectively - results reported directly from the tool either have to be massaged or explained each time.
    3. As 2, but add emulated client-side delays into the virtual user script timers, so that results reported from the tool already contain an allowance for client-side time. Part A could be calculated as a function of page size, based on a sample of manual results (though this would be difficult). Allowing for part C accurately looks even more difficult.
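    To make option 3 concrete, a minimal sketch of how an emulated client-side delay could be derived from DOM size might look like this. The function name and coefficients here are hypothetical placeholders - in practice they would be fitted from a sample of manual measurements, not assumed:

    ```javascript
    // Hypothetical sketch of option 3: derive an emulated client-side delay
    // from page characteristics. The coefficients are placeholders to be
    // fitted from a sample of manual measurements, not real figures.
    function estimateClientDelayMs(domObjectCount, msPerObject, fixedOverheadMs) {
      // Part A scaled roughly linearly with DOM size in the measurements
      // described above (~2 ms per object on the slower client PCs).
      return fixedOverheadMs + domObjectCount * msPerObject;
    }

    // Example: the extreme 3,500-object page at 2 ms/object with an assumed
    // 50 ms fixed overhead.
    var delay = estimateClientDelayMs(3500, 2, 50); // 7050 ms, close to the observed 7.5 s
    ```

    A value like this could then be added to the virtual user's transaction timer so the tool's reported figures already carry the allowance.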
    I'm interested to know how others handle this, or any views on the best approach. I'd also be keen to hear about any tools which can help with accounting accurately for client-side time.

    As a matter of interest, the project architect had a bit of a Google for tools to measure the time breakdown, didn't find much, so wrote one. It's a very nifty bit of VBA, which uses low-level Windows APIs to hook IE events and give the breakdown between A, B and C - all nicely reported in a spreadsheet showing the URL of each page visited and the response breakdown. So we have the basis for measuring the times for options 2 and 3 if that's the way we decide to go.

    Apologies for the length of the post.

  2. #2
    Moderator JakeBrake's Avatar
    Join Date
    Dec 2000
    St. Louis - Year 2025

    Re: Allowing for client-side processing time when reporting response times

    Apologies are unnecessary! This is a rare and robust post that explains the issues clearly! (wishing others would do this :-) )

    As you have alluded to, this amounts to a significant amount of work. Your last option regarding the VBA seems to be the best way to go. On a different plane of thinking, the ultimate fix for the poor client-side time has nothing to do with measuring what you already know to be unsatisfactory. How would I handle it? I would push back upstream into design and challenge it heavily using a standard tool - a design review. Why? If it takes 3.5 to 7.5 seconds, that to me is unacceptable, especially with those microprocessor specs.

    Design review food:
    1) are there round trips being made to the server whilst this extreme page is being rendered?
    2) can the page be broken into multiple pages?
    3) does this page go beyond recommended usability guidelines for control density, etc.?
    4) can some of the client-side stuff be allocated to the server?
    5) ... and so on.

    I'm assuming 1) memory is sufficient, including virtual, 2) no other CPU-cycles-robbing services or apps are running, and the PC is configured to give max time to foreground processes.

    BTW, Celerons are graphically challenged anyway.

    Some considerations:
    What is the typical configuration of the graphics card on the test platforms? Would your end-users have high-end graphics cards? It sounds like this JavaScript is graphically demanding.

    Are these scripts combined into a single client-side file? It doesn't sound like they are. Does it make sense to do such a thing?
    What is the typical PC profile of your end-user? If they are running on lesser computing power, they would appear to be in trouble. Anyway, that should all be taken into account.

    Does your organization state what they will support for PC configurations? If you are measuring best case, then - you probably already know the issues in this context.

    Which browsers are supported? That will make a huge difference as they vary in footprint size.

    Are the cached js measurements the same as a first time visit to the page(s)?

    All for now from me... my brain is numbed from empathy! :-)


  3. #3

    Re: Allowing for client-side processing time when reporting response times

    Thanks for the thoughts.

    You are quite right about needing to push back on the 7.5 seconds of javascript - and we are already onto it (a bit more on that below). However the fact that one bit of inappropriate javascript could consume almost 4 times the total response time budget for a page before the request had even left the PC just made me realise the danger of relying too much on the results reported by the performance testing tool - and how easily client-side processing taking half a second would slip under the radar. Hence trying to think a bit harder about how to handle the issue generically.

    A few comments on your specific suggestions, then I'll briefly explain the issue taking 7.5 seconds - it's a nice little case study.

    PCs across the organisation span the range of the machines I quoted (and of course the developers have top-end machines, so are less aware of the issue of slow javascript!). The set of browsers supported is IE6. Times quoted were on a dedicated machine with ample memory and nothing else running, with all javascript already loaded locally. There were no server round-trips in the time quoted, as you'll see below, it was solid local CPU.

    The 7.5 second issue is very simple, and partly resolved. This is a package implementation, so many aspects of the implementation involve configuring table-driven features. The configuration chosen for a particular function involved displaying a long list for the user to choose from (about 650 rows, with several columns in the list). This resulted in a 500 KB page with 3,500 objects in the DOM. Having identified the performance issue, the design was revisited and a package enhancement agreed to provide a smarter search function which avoids the need for the long list and reduces the impact significantly.

    But the javascript causing the delay is a generic component of the framework, which fires for every page and takes about 2 ms per object in the DOM. Many pages still have hundreds of objects, so the javascript may still be taking several hundred ms.

    And what does the javascript do? It sets the mouse pointer to be an hourglass. So the user clicks on the big page. Nothing happens (or at least nothing appears to happen). The user clicks again. Still nothing. 7.5 seconds after the initial click the mouse briefly becomes an hourglass, before turning straight back to a pointer as the response page is rendered 1 second later. And a message pops up telling the user not to be so impatient as to click again while the browser is busy!
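    As an illustration only (this is not the package's actual code), the gap between the per-element approach and a constant-time alternative can be sketched like this, assuming the routine is essentially assigning a cursor style to every element it visits:

    ```javascript
    // Illustrative sketch - not the package's actual code. It contrasts a
    // per-element DOM walk (O(n), ~2 ms per object as measured above) with
    // setting the cursor once on the body and letting descendants inherit it.
    function setBusyCursorPerElement(elements) {
      for (var i = 0; i < elements.length; i++) {
        elements[i].style.cursor = "wait"; // one assignment per DOM object
      }
    }

    function setBusyCursorOnce(body) {
      body.style.cursor = "wait"; // CSS cursor inherits, so one assignment suffices
    }
    ```

    On the 3,500-object page the first form accounts for the 7.5 seconds; the second would be effectively instantaneous regardless of page size.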

    The debate on that particular feature of the package continues. ;-)

  4. #4
    Super Member SteveO's Avatar
    Join Date
    Jul 2004
    St. Louis, MO, USA

    Re: Allowing for client-side processing time when reporting response times

    Yeah, the VBA tool sounds very intriguing!!

    I'm facing this issue as well with an application. Unfortunately for me it's a 3rd party web interface so we don't have the luxury of forcing code changes as easily as if it were developed in house.

    We've had to supplement our measured response times (covering the A and B components) with manual testers working in parallel (for the C component) to find out how much client-side processing time (we refer to it as presentation-layer latency) is typically stacked onto the server-side response times.

    A handful of GUI users was also considered but in the interest of time (our time!) we're approaching that as a last resort.

  5. #5

    Re: Allowing for client-side processing time when reporting response times

    Richard, just an idea. Do you have the option to add some logging to the application (client-side JavaScript) code? For example, log when each UI item starts and finishes displaying, for you to analyze (time, synchronization, etc.). Perhaps my http://www.testingreflections.com/node/view/3837 could be of some help (including the comments).
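    The kind of client-side logging suggested here might be sketched as follows (function names are hypothetical; the marks would be placed in the click handler, before the request is sent, and at the end of the rendering routine, then collected for analysis):

    ```javascript
    // Hypothetical client-side instrumentation sketch: record a timestamped
    // mark at each point of interest and compute the deltas afterwards.
    var timingLog = [];

    function mark(label, now) {
      // "now" is injectable for testing; it defaults to the wall clock.
      timingLog.push({ label: label, t: now !== undefined ? now : new Date().getTime() });
    }

    function breakdown(log) {
      // Turn consecutive marks into named intervals, e.g. "click->request": 7500.
      var out = {};
      for (var i = 1; i < log.length; i++) {
        out[log[i - 1].label + "->" + log[i].label] = log[i].t - log[i - 1].t;
      }
      return out;
    }
    ```

    With marks at the click, the first request, the last byte, and end of rendering, the resulting intervals correspond directly to the A, B and C components described at the top of the thread.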
    The art of a constructive conflict perceived as a destructive diagnosis.

  6. #6

    Re: Allowing for client-side processing time when reporting response times

    Why would the traditional GUI Virtual User (as implemented in LoadRunner) be insufficient in this case? Historically this type of user has been used to track client side weight for a commonly named transaction, such as "Login." The use of the GUI Virtual User has somewhat fallen to the side as web has become more ubiquitous, but I expect to see a resurgence as more code is pushed to the browser for execution.
    James Pulley

    Replace ineffective offshore contracts, LoadRunnerByTheHour. Starting @ $19.95/hr USD.

    Put us to the test, skilled expertise is less expensive than you might imagine.

    Twitter: @LoadRunnerBTH @PerfBytes

  7. #7

    Re: Allowing for client-side processing time when reporting response times

    My experiences say that Load Generation tools are good for determining TTLB (Time To Last Byte received), but not rendering time. GUI automation tools are good for determining end-user response time including rendering, but not for generating load.

    If you need both, I recommend a blend of the two methods that makes sense in your specific case.
    Scott Barber
    Chief Technologist, PerfTestPlus
    Executive Director, Association for Software Testing
    Co-Author, Performance Testing Guidance for Web Applications

    If you can see it in your mind...
    you will find it in your life.

  8. #8

    Re: Allowing for client-side processing time when reporting response times

    Testing with GUI tools is definitely the best way to get an accurate answer - it's just an awful lot of additional script development effort that is hard to justify (and fit into the timetable). (The application we are testing is one of the most resistant to automated testing I've encountered - and I know that the functional test team have been having similar issues with attempting GUI scripting to the ones we've hit with VU scripts.)

    So I have just been trying to think of other vaguely defensible ways of including a reasonable allowance for client-side time into reported (VU Based) figures which are currently misleading.

    So I'm sure you are right, James and Scott, it's just that we probably won't have time to do it.

    One last thought. It strikes me that a worthwhile feature of a performance testing tool would be to report on the breakdown of end user response time seen at *recording* time (into parts A, B and C). Even if there is no attempt to emulate that in a VU playback, at least showing the breakdown at recording time could highlight a potential issue to be followed up (i.e. provide the hint that GUI scripts would be justified in this case). Clearly this would only be possible in tools where the recording method involves intercepting Windows events (as opposed to capturing the network traffic). I guess some tools may already do this?

  9. #9

    Re: Allowing for client-side processing time when reporting response times

    Yeah, it is a lot of extra work. Sometimes we can take a few measurements to come up with an estimate of the client-side overhead, but that only works relatively well when the overhead across the majority of pages for a particular site is similar.

    I agree with you that it would be extremely valuable for a performance testing tool to be able to, say, capture 1% of the playback load at a GUI level.
    Scott Barber
    Chief Technologist, PerfTestPlus
    Executive Director, Association for Software Testing
    Co-Author, Performance Testing Guidance for Web Applications

    If you can see it in your mind...
    you will find it in your life.

  10. #10

    Re: Allowing for client-side processing time when reporting response times

    Hi Richard and all

    I'm working on a very rich browser-based app at present that does a lot of ActiveX and XML parsing on the client side while going through a typical business scenario.
    The functional guys are using QTP, and it will probably be quicker to run one of their end-to-end scripts to measure rendering time.
    On projects where there is no Mercury budget, I was actually looking at something like Watir with some timer functions to time when the pages are fully loaded.
    Martin Croft
    Select Red Ltd




Copyright BetaSoft Inc.