I tried to post this earlier, but I'm not sure the thread ever got created, so apologies if I've posted twice.
I'm currently employed as a Test Analyst, and have been for a few years. I'm told my role is User Acceptance Testing, AND that I should be basing my tests on Requirements and System Design Documents.
But from what I've read UAT is done by the end-users - I'm not an end-user.
And UAT is based on Requirements - but often I only get the design.
I'm told it is called UAT because we tend to (or try to) involve actual end-users in the test design through meetings, and there are some box-ticking exercises where the end-users accept the product based on my testing.
Am I really doing UAT or something else?
UAT is a confusing term, because there is a widespread misconception that UAT is the same as end-to-end testing, which is not the case. This often comes from shops that don't have QA testing directly in production with their users, and that keep the old terminology when they later form a QA department.
UAT is meant more as an exercise in user acceptance / user feedback than a quality exercise. Where end-to-end regression ensures quality, UAT is more about getting feedback on how to improve the product.
By my definition of the term, you are doing something else.
Originally Posted by Pound
But perhaps there is some degree of UAT, since by my definition, the point of UAT is to gain "acceptance" from the users, or user representatives.
Terminology in QA is often loose. Terms in one shop may mean something completely different than another shop.
This might help: All Things Quality: Testing Terms Glossary
Last edited by Joe Strazzere; 11-19-2015 at 03:43 AM.
UAT is a loosely defined term, but with black-box testing it can easily fit into the role of a normal tester: you, the tester, take on the role of the end user. This is very common when testing web sites, as you're defining tests around how a user might enter good information, or misleading/wrong text, characters, etc., into various fields. We all know there are many other types of testing that need to happen, but this should be part of any good test plan.

Another way to look at it is use-case scenarios. Say you want to set up multi-user tests, either for a group of testers in your team or through performance (load-testing) scenarios: you define each user with a certain role, e.g. this user logs in and goes to a certain section, or two users log in at the same time and both try to access and/or write entries simultaneously (an especially important test for an open website). I am not referring to every type of test here, just giving ideas. I usually don't fret over what someone wants to call a type of testing, as long as the code goes out the door as bug-free as possible, because that really is the crux of it.
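To make the "tester in the user role" idea concrete, here is a minimal sketch of black-box field testing in plain Python. The `validate_username` function and its rules are invented for illustration; the point is the shape of the test, trying the good, bad, and misleading inputs a real user might type:

```python
import re

def validate_username(value):
    """Hypothetical field rule: accept 3-20 ASCII letters, digits, or underscores."""
    return bool(re.fullmatch(r"\w{3,20}", value, flags=re.ASCII))

# Good input, wrong input, and edge cases a user might try in the field:
cases = {
    "alice_99": True,    # normal good input
    "ab": False,         # too short
    "a" * 21: False,     # too long
    "": False,           # empty field submitted
    "bob smith": False,  # embedded space
    "<script>": False,   # markup / injection-style text
}

for value, expected in cases.items():
    assert validate_username(value) == expected, repr(value)
```

The same table-driven pattern scales to any field: keep adding the inputs you can imagine a user (or attacker) typing, without caring what the testing is officially called.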
Senior SQA Engineer
That's exactly what UAT is not. Generally, the whole idea of UAT is to gather feedback from outside the engineering box. An engineer (even a QA engineer) cannot simply put on a "user hat" and have that count as UAT; they are too involved with the code and the development to fully empathize with the user or fully understand the user's needs.
Originally Posted by russell5005
An example I ran into was a pricing display module for a car site, which would display the car's price and all the possible discounts that could be applied. My team tested the site and all its modules: we verified that it was correct according to the specs, that the site was responsive, and that it worked as expected, to the best of my team's ability to verify. However, once we handed it over to car dealers to review, they noticed that some of the discounts applied were not legal in certain states. Issues like this are why UAT is so crucial: there is a world of design problems that is not testable from specifications and personal testing experience alone. You need the insight of people in that domain in order to find these issues.
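Once the dealers flagged the problem, that kind of domain rule can at least be captured as a table-driven regression check so it never regresses again. A hypothetical sketch, with all names, states, and discount data invented for illustration (not from the actual car site):

```python
# Discounts that may not legally be advertised in a given state (invented data).
BLOCKED_BY_STATE = {
    "CA": {"dealer_cash"},
    "TX": {"doc_fee_waiver"},
}

def applicable_discounts(state, discounts):
    """Filter out discounts that are not allowed in the given state."""
    blocked = BLOCKED_BY_STATE.get(state, set())
    return [d for d in discounts if d not in blocked]

offered = ["loyalty", "dealer_cash", "doc_fee_waiver"]
assert applicable_discounts("CA", offered) == ["loyalty", "doc_fee_waiver"]
assert applicable_discounts("NY", offered) == offered  # no restrictions known
```

The code is trivial; the hard part, which no spec or test plan could supply, was knowing the rule existed at all. That knowledge came from the users.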
Generally, UAT falls into these categories:
1) Pre-release testing - these are your alpha and beta releases.
2) UX testing - UX will do some runs with potential users and study the users' usage of the app. They might gather user feedback or build user empathy maps to help with new features and improvements.
3) Focus group testing
4) Milestone inspections - in contracted work, it's generally a good idea to review what has been built so far with the customer, in order to prevent veering too far off course.
5) Business Analyst testing on behalf of the user. (This one is very suspect, but sometimes necessary, especially when there are trade secrets or proprietary knowledge involved.)
Also keep in mind that the "user" could be different people from those who are actually using the app. In this case, the consumers are the ones using the app, but the real "user"/customer is the car dealership selling the cars. QA in this case tested from the perspective of the consumer, so they could not have anticipated the issues the real customer would have. The specifications were written by Business Analysts, and yet they couldn't anticipate this issue either. (That's why I say BA testing on behalf of users is suspect.)
Last edited by dlai; 11-20-2015 at 02:00 PM.