There is no quick answer that I'm aware of... It will depend on the test tool used and on the actual scripts executed: how each script is set up, how much memory it consumes for parsing, and so on.
In QALoad, one script may give me 80 users per machine, while with a different script I can only get 40 before resources become critical.
From your question it sounds like memory is currently your limiting factor. My favorite anecdote: we had scripts limited to 80 users per PC due to memory usage. We doubled the RAM (from 1 GB to 2 GB), but I was only able to add about 10 users per PC, because we then started hitting the processor utilization limit instead.
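The anecdote above boils down to taking the tighter of two ceilings. Here is a minimal sketch; the per-VU memory and CPU costs are made-up illustrative figures, not QALoad measurements:

```python
def max_vus(free_mem_mb, mem_per_vu_mb, free_cpu_pct, cpu_per_vu_pct):
    """Per-host capacity is the tighter of the memory and CPU ceilings."""
    by_mem = free_mem_mb // mem_per_vu_mb
    by_cpu = free_cpu_pct // cpu_per_vu_pct
    return int(min(by_mem, by_cpu))

# Hypothetical costs: 12 MB and 1% CPU per virtual user.
print(max_vus(1024, 12, 90, 1.0))  # 1 GB box: memory is the ceiling
print(max_vus(2048, 12, 90, 1.0))  # 2 GB box: now CPU is the ceiling
```

Doubling RAM only helps until the other resource becomes the bottleneck, which is exactly what happened in the story above.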
Lots of variables - trial and error is the only way I know to determine this.
As noted, your virtual user's system weight is determined by many factors, not the least of which is the programming skill of the testing staff building the scripts. Poor programming skills result in poor management of resources, which in turn results in heavier virtual users. As an example, I have spent the past couple of days pinning down two resource leaks in a Windows Sockets virtual user that incorporates a publicly available algorithm. Had I left it as it was, my load generator would have collapsed partway through execution, because blocks of memory were being allocated but never returned to the global pool. We ask server-side developers to be cognizant of how their code uses resources and scales; we should be just as cognizant when we produce test code.
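The leak pattern described above is easy to reproduce in any language. A minimal sketch (class and method names are illustrative, not from the actual Windows Sockets script): one virtual user allocates a fresh buffer every iteration and never releases it, the other reuses a single buffer, so its footprint stays flat no matter how long the test runs.

```python
class LeakyVirtualUser:
    """Allocates per iteration and never releases: footprint grows forever."""
    def __init__(self):
        self.history = []

    def iteration(self, payload: bytes):
        # Each call keeps a new copy of the payload alive; nothing is
        # ever returned to the pool, so memory climbs with every loop.
        self.history.append(bytearray(payload))


class FrugalVirtualUser:
    """Reuses one fixed buffer: footprint is constant regardless of runtime."""
    def __init__(self, buf_size: int = 4096):
        self.buf = bytearray(buf_size)

    def iteration(self, payload: bytes):
        # Overwrite the same buffer in place; no per-iteration allocation.
        self.buf[:len(payload)] = payload
```

Run both for a few hundred thousand iterations and watch the process footprint: the leaky version grows without bound, which is exactly what eventually takes down a load generator mid-test.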
Test practices can also limit how many users you can get on a box before the test design itself impacts the running of the virtual users. For example, if you turn on logging for every virtual user in your test, you will turn your file system into a bottleneck for the entire test. Run your virtual users on an already busy host and you could starve them of network and/or CPU, preventing them from running. Another example: collapse the think time and iteration models, and those hundreds of users per host could drop to only a couple of dozen (or fewer).
Virtual user architecture does play a role here as well. Some virtual user types can only be executed one per host, others due to their natural resource requirements could potentially be executed in the thousands on the same host.
As others have noted, this will vary greatly depending on the tool you are using and the scenarios you are simulating.
The most reliable way is to measure it! Run a test with 10 VUs and measure the memory consumption, then run a test with 20 VUs and measure it again. The difference gives you the incremental memory requirement for every additional 10 VUs (assuming usage scales roughly linearly).
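The two-measurement approach above can be sketched in a few lines. The readings here (600 MB at 10 VUs, 850 MB at 20 VUs, 2 GB of free RAM) are hypothetical; substitute your own measurements:

```python
def mem_per_vu(mem_at_10_mb, mem_at_20_mb):
    """Incremental cost of one VU from two measurements 10 VUs apart."""
    return (mem_at_20_mb - mem_at_10_mb) / 10.0

def vus_that_fit(free_mem_mb, baseline_mb, per_vu_mb):
    """Extrapolate capacity, assuming roughly linear memory growth."""
    return int((free_mem_mb - baseline_mb) // per_vu_mb)

per_vu = mem_per_vu(600, 850)              # 25.0 MB per VU
baseline = 600 - 10 * per_vu               # tool overhead with 0 VUs
print(vus_that_fit(2048, baseline, per_vu))
```

The extrapolation only holds while memory stays the limiting factor; as the anecdotes above show, CPU (or the file system, if logging is on) may become the ceiling first, so re-measure as you scale up.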