Have any of you old-timers found that the computing platforms of today have improved *mdrv's performance to the point where you've needed a remedy to prevent *mdrv from bursting?

Discussion points:

It seems that years ago (on P1s and P2s) the firing of Vuser threads had enough time between them that you would see a relatively smooth pattern when watching a test with hits-per-second as a window into execution; not many distinct peaks or valleys were apparent. On faster CPUs (1 GHz and up) with sufficient RAM, the same graph at the same granularity shows distinct peaks and valleys. Logically, of course, this makes sense: the Vuser threads are executing faster, assuming the NIC(s) are not a bottleneck. The system under test has also gotten faster, allowing a theoretically greater throughput that may in fact amplify the effect.

I've bumped into this and been challenged on it. A reasonable remedy seems to be (or to have been) randomizing think time while using a fixed seed, roughly as in the sketch below.
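For what it's worth, this is the shape of what I mean, as a minimal sketch of a LoadRunner C Vuser. The seed value and the 2-6 second range are just made up for the example; lr_think_time() is the standard think-time call.

    /* vuser_init.c -- seed once per Vuser; the fixed seed keeps the
       "random" pacing repeatable from run to run */
    vuser_init()
    {
        srand(42);                       /* seed value is arbitrary here */
        return 0;
    }

    /* Action.c -- a randomized think time instead of a constant pause,
       so the Vuser threads don't all fire in lockstep */
    Action()
    {
        /* ... transactions ... */

        lr_think_time(2 + rand() % 5);   /* 2..6 seconds; range is illustrative */
        return 0;
    }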

Your thoughts, experiences, opinions??

Thank ye!