
I need to know the anticipated load time for a web page. Is there a standard calculation for this?

For example, suppose I have a web page containing 5 text boxes, 2 list boxes, 3 images of 1 KB each, and 1 table with 2 rows and 4 columns. How do I estimate the anticipated load time for this page?

I would be thankful if someone could give an answer to this.

------------------

Hi,

In short, the answer is 'no'. Page load time depends on a number of factors, such as internet congestion, available bandwidth, and server load. The best approach is to pick a connection rate (e.g. 56K), load the page at a fixed frequency (say, every hour on the hour) over the course of at least one day (I would recommend one week), then calculate an average from the results and use that as the baseline page load time.
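A minimal Python sketch of this measurement approach, assuming the standard library's `urllib` is good enough for a rough timing (the URL and sample count below are placeholders; in practice you would schedule the fetches hourly over a day or more):

```python
import time
import urllib.request

def fetch_time(url):
    """Wall-clock seconds to download the page body once (no client cache)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def baseline(samples):
    """Average a list of per-fetch timings into a baseline load time."""
    return sum(samples) / len(samples)

# Usage (would hit the network):
#   timings = [fetch_time("http://example.com/") for _ in range(5)]
#   print(f"Baseline: {baseline(timings):.3f} s")
```

Note this only times the raw download of one resource; a full-page measurement would also fetch every embedded image and script the page references.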

John.

------------------

John O'Neill.
Quality Automation Ltd.
www.quality-automation.com

There are formulas available for such calculations. Look in the books "Capacity Planning for Web Performance" and "Scaling for E-Business", both by Menascé and Almeida.
Mind you now, these formulas are for laboratory conditions and blackboards; the real world has a lot more variables in it.

------------------
-- Mike --

You can get a first order approximation quite easily.

Let N be the number of unique objects, S the total size of all the objects in bytes, L your network latency in milliseconds, and B the network's bandwidth in bps.

The data being transmitted is approximately S * 1.1, the 10% increase allowing for network overhead. The number of bits to send is eight times this number.

The time to transmit this information (not allowing for client side parsing etc) is therefore:

( S * 1.1 * 8 ) / B

Now, the latency kicks in for each object (I'm assuming no keep-alives for simplicity). So for N objects (ignoring packet sizes, retries, etc.) the overhead is 3 * L * N milliseconds. Why 3? Because setting up a socket involves (from memory) three TCP transmits.

So, 1st order approximation for the time in seconds is:

T = (3 * L * N / 1000) + ( ( S * 1.1 * 8 ) / B )

Note that since bandwidth is the bottleneck for a 56K connection, client multi-threading can be ignored for a first-order estimate. Using HTTP 1.1 and keep-alives reduces the latency factor somewhat, but for slow links (dial-up latency can be 200 ms or more) it's clear that the number of unique objects on a page is a performance killer, and this effect can be much larger than sheer size.

Bear in mind that this is pretty much the best performance you can expect. For example, it assumes an infinitely fast client (i.e. the time does not account for parsing the HTML to determine which other objects the page requires, nor for rendering). On the other hand, it also assumes that none of the objects are in the client's cache, so subsequent visits by the user may well be faster if some of the objects are static.
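The first-order approximation above can be sketched as a small Python function. The example numbers are illustrative only: 4 objects (the HTML page plus the 3 images from the original question), a guessed ~30 KB total payload, 200 ms dial-up latency, and a 56 kbps modem.

```python
def estimated_load_time(num_objects, total_bytes, latency_ms, bandwidth_bps):
    """First-order page load estimate in seconds:
    T = (3 * L * N / 1000) + (S * 1.1 * 8) / B

    - Setup cost: 3 TCP transmits per object, no keep-alives.
    - Transfer cost: payload plus ~10% protocol overhead, 8 bits per byte.
    """
    setup = 3 * latency_ms * num_objects / 1000          # seconds
    transfer = (total_bytes * 1.1 * 8) / bandwidth_bps   # seconds
    return setup + transfer

t = estimated_load_time(num_objects=4, total_bytes=30_000,
                        latency_ms=200, bandwidth_bps=56_000)
print(f"Estimated load time: {t:.1f} s")
# prints: Estimated load time: 7.1 s
```

Notice how the per-object setup term (2.4 s here) is a large fraction of the total, which is the "number of objects is a performance killer" point above.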

Hope this helps (I think I have the math more or less correct; it's based on Menascé and Almeida).

Phil

PS I agree with Mike that a good book for this type of modeling is "Scaling for E-Business" by Menascé and Almeida.

[This message has been edited by Phil Hollows (edited 06-19-2001).]

Thanks a lot for your replies.

Now I have a better idea of this subject.

Thanks once again.

Bye
Venky
