This may be in the wrong location, but...
We intend to test Microsoft patches against multiple web-based applications. Test desktop hardware will be exactly the same as production, but test server hardware will, by necessity, always be very different: the test servers will be newer (many production server models are no longer available) but will also be less capable. Is there a method, or even a consensus of opinion, on how to account for these differences? We will not be testing for performance; we are looking for any changes to functionality as a result of patch updates. The end goal is to confidently deploy the numerous patches and updates.
Where are the production servers located? At your company, or distributed within the users' facilities? At any rate, to be safe you need to validate against the oldest server in use, and definitely do performance testing. If your app doesn't work on a slower server, then you have a "BUG!"
In order to duplicate an older server, I would suggest something like VMware, which can be tweaked to simulate the speed and hardware.
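As a rough sketch of what "tweaking" VMware means here: you constrain the VM's resources in its configuration so it roughly approximates the older box. The key names below are from memory and may differ across VMware releases, so treat this as an illustration and verify against your version's documentation:

```ini
; Hypothetical .vmx fragment constraining a VM to approximate an older server.
; Key names and values are assumptions; check your VMware release's docs.
numvcpus = "1"    ; single virtual CPU, matching the old single-socket box
memsize = "512"   ; 512 MB of guest RAM instead of the test host's full amount
```

Note this only approximates capacity (CPU count, RAM), not clock speed or disk latency, so it is a better fit for functional testing than for performance testing.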
Since your test servers are not identical to the production servers, is it feasible to perform an additional (quick) functional test on each production server during off-hours (if there are any)? This could give you better confidence.
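Since the applications are web-based, that quick functional pass could be as simple as a scripted smoke test that hits a list of known endpoints and flags anything that stops responding as expected after a patch. A minimal sketch (the endpoint URLs and expected status codes are placeholders you would fill in for your own apps):

```python
# Minimal post-patch smoke test for web-based applications.
# Endpoints and expected status codes below are illustrative placeholders.
import urllib.error
import urllib.request


def check(url, expected=200, timeout=10):
    """Fetch one URL; return (ok, actual_status). ok is True when the
    response status matches what we expected before the patch."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expected, resp.status
    except urllib.error.HTTPError as e:
        # Server answered with an error code; still comparable to expected.
        return e.code == expected, e.code
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False, None


def smoke_test(endpoints):
    """endpoints: dict mapping URL -> expected status code.
    Returns a list of (url, actual_status) for every failing endpoint;
    an empty list means the quick functional pass succeeded."""
    failures = []
    for url, expected in endpoints.items():
        ok, status = check(url, expected)
        if not ok:
            failures.append((url, status))
    return failures
```

Run it once before the patch to confirm the baseline, then again after; any entry in the failure list is a candidate regression to hand off rather than something you need to diagnose on the spot.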
Other groups carry out full performance and regression testing; this will be a much smaller effort focusing solely on the effects of patches and updates to the underlying OS and associated databases. Quick turnaround: identify potential issues for a production environment that serves thousands of users. We don't even have to fix them, simply screen them and pass them off to the operations/engineering group to figure out the production workaround. Production hardware cannot be touched, and will pretty much be discontinued by the time this effort has begun.