There is also an advanced search at your upper right. If you select it, enter "uat", and choose the Functional Testing forum, you will get roughly 300 hits.
You won't prevent "poor user feedback" unless the software is defect-free AND does what the users expect, in a timely fashion. That is reality, and it would be good to get used to it. It is perfectly fine if they execute tests you already passed. That is also normal. They might have to in order to get at other functions, and they also might discover a "tarnished nugget" that was missed in your own testing.
This is not to say they should not have a plan. They should have a plan and execute it. But they should also have latitude to go off the plan as long as the plan gets completed and is signed off.
During the requirements phase, the users (or their representatives) had some input into what functionality they desired in the application. And, within their inputs, some requirements were more important to them than others. It would be a good idea to make sure that these 'important' requirements are working well, above anything else.
Thanks everyone for your advice. Much appreciated.
I managed to find similar topics on UAT, thanks to JakeBrake, and I have come up with some key points for getting the best result from UAT (in my opinion only):
1. Engage the end users as early as possible and discuss with them what tests they would like to do. This will take some time, as they are not experienced with test scenarios. The test team must then translate these test scenarios into test cases with steps and get the end users' sign-off.
(A lot more work to be done by the test team, but I think it's worthwhile.)
2. Put the end users through training on the application if this is a new product; otherwise, I think it is better to have a demo session prior to UAT.
3. Take the end users through formal testing, which consists of the test cases requested by the end users plus functional test cases from system testing that are relevant to their jobs.
4. Allow time, just a small percentage of the entire UAT, for the end users to "play" with the application, to try out other unusual scenarios and become more familiar with the system.
5. Have a strict UAT schedule and a good supervisor-to-end-user ratio so we can monitor what they are doing and promptly answer their questions.
I am hoping the training and formal testing, plus their requested tests, will answer their questions, and that they will already be familiar with how to use the application, so during "play" time they won't be blindly using the app, which can lead to frustration and ultimately very poor feedback on the application.
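The translation in point 1 above — a user-described scenario turned into a formal test case with steps and a sign-off — could be sketched roughly like this. This is only my own illustration (the class names, the invoice scenario, and the sign-off rule are all assumptions, not anything prescribed in this thread):

```python
# A minimal sketch of point 1: translating an end-user scenario into a
# test case with explicit steps and a sign-off field. All names and the
# invoice example are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class TestStep:
    action: str                     # what the user does
    expected: str                   # what they should see
    passed: Optional[bool] = None   # None until executed in the session

@dataclass
class UatTestCase:
    scenario: str                   # the scenario as the end user described it
    steps: List[TestStep] = field(default_factory=list)
    signed_off_by: Optional[str] = None

    def sign_off(self, user: str) -> None:
        # Only allow sign-off once every step has been executed and passed.
        if not all(s.passed for s in self.steps):
            raise ValueError("all steps must pass before sign-off")
        self.signed_off_by = user

# Hypothetical example: an invoice-entry scenario broken into steps.
case = UatTestCase(
    scenario="Enter and save a new customer invoice",
    steps=[
        TestStep("Open the invoice entry screen",
                 "Blank invoice form is shown"),
        TestStep("Fill in customer and line items, click Save",
                 "Invoice saved and confirmation number displayed"),
    ],
)
for step in case.steps:
    step.passed = True              # recorded during the UAT session
case.sign_off("end user A")
print(case.signed_off_by)           # -> end user A
```

The point of the structure is that the sign-off is tied to the steps actually passing, which mirrors "get the end users' sign-off" at the end of point 1.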
We don't do either 1 or 3 here, and I like the way our UAT sessions are run. We prepare a checklist — just a list of functions and/or changes we would like the users to examine and sign off on. We don't "lead the witness" or provide step-by-step test cases. That way, they exercise the new functionality or changes the same way they would normally do their jobs. It also keeps us from writing test cases for them or duplicating our efforts.