design and defect
There's an interesting conversation in "Defect Process Corrupt and there is Resistance to Change" that I think is pretty important. How much does design come into play when testing in QA? Where do you draw the line for QA? Do you assume developers are experts? I'd really like to get more into this kind of debate because it seems important for the QA role, but I'm not sure to what degree design belongs in QA or how it might be started there. Can we discuss?
Re: design and defect
You have touched on what I think is a very important and overlooked area of QA. Design for test and test for design are just two aspects of the QA role in design, usability, and accessibility issues. I am so focused on this aspect of QA that, depending on your personal feelings about the subject, I am either an evangelist for it or a radical zealot.
QA staff have to show that they have the necessary knowledge. That is pivotal. Backing up your work with other design studies, white papers, or book material is highly beneficial. QA has to try to stay focused on industry standards to some degree, but also on Human-Computer Interaction (HCI) studies as well as usability/accessibility issues. This is even more so in the dynamic, real-time world of the Internet. Does this mean you need a degree in psychology? No, but it does mean that you have made some attempt to read and understand the literature on the subject, as well as an attempt at seeing how it can be applied to your environment.
Do I assume developers are experts on design/usability? Absolutely not! Does this mean I think they do a poor job? No. To greater and lesser degrees, I was also a developer (in Java, C++, and Visual Basic), so I feel relatively confident in making the statements that I have. You have to remember that developers tend to develop for themselves. Not consciously, mind you, but they develop within a context, and that context is usually that of an intelligent, computer-literate user with a set of known expectations. Developers will be more impressed by the technical know-how they displayed in solving a particularly thorny memory issue or the suave manner in which they constructed a series of options dialogs, but they will generally not consider the impact of those things on the end-user perception of the product.

A Web developer may drool over the Flash and applet controls he has added to the company's Web site, but there may have been no thought given to download times. It may be great programming. But is it great for the interaction with the user? That is another issue. We also have to keep in mind that programmers do not generally use the software they design except in a minimal fashion, and when they do, they do not have a general user focus in mind; rather, they are confirming that what they did works, not whether what they did was the "correct" way to do it.
There is a quote I like from Alan Cooper. He says:
quote:
"To be a good programmer, one must be sympathetic to the nature and needs of the computer. But the nature and needs of the computer are utterly alien from the nature and needs of the human being who will eventually use it. The creation of software is so intellectually demanding, so all-consuming, that programmers must completely immerse themselves in an equally alien thought process. In the programmer's mind, the demands of the programming process not only supersede any demands from the outside world of users, but the very languages of the two worlds are at odds with each other."
Excellent quote! And, you know what? I would not have it any other way. I do not want to staunch that "can-do" attitude that allows for excellent coding and interesting solutions. I do not want to stifle the creative impulse of the developer or make them change the way they do things. They would probably be less successful developers in that regard. That is why I think a large aspect of QA serves to help developers keep their focus while also helping the project lead see that there are design issues that need to be considered and guidelines that should be followed.
Remember that the key to design/usability/accessibility is the way that the program/Web site/Web-application/etc. displays behavior, provides communication, and informs. Note that this is not the strict province of traditional QA because asking how a program displays behavior is a much different concept than asking how it functions. Asking how the program communicates is much different than asking about the design of dialog boxes or pop-up windows. But I think that it should be a part of QA even if that has not been the traditional role. I will say that I am speaking from a Web-bias here but I have found that this can be applied in the desktop or traditional client-server arena as well.
But, again, all of this means that QA staff have to be up to speed on these issues. Does that mean they should be experts in the programming language used to build the software, or in the application server API in a Web environment? To me, no. But it does mean that they should have a smattering of knowledge about the issues involved with that programming language or that application server. In a Java app it is important to realize the presence of garbage collection routines, just as it is important to realize their absence in C++. Does this mean you have to know how to program them? Not necessarily, but at least know the issues and how they relate to the aforementioned triumvirate of behavior, communication, and information. QA should draw the line if they do not have the required knowledge. But to me QA should be an end-user advocate. The key, of course, is realizing that perfect usability and design principles are generally not achievable or capable of being followed all the time. For example, fully designing a Web site to the W3C recommendations would take a long time, and even the W3C recommends a staged approach. And as far as relying on something like the Microsoft GUI guidelines, these are good up to a point; on the other hand, they are general guidelines, and sometimes you have to go with what makes the behavior, communication, and information more applicable to the customer. (Of course, if your organization is intent on having the "Designed for Microsoft Windows" logo on its product, you are somewhat stuck in that venue.)
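To make the garbage collection point concrete, here is a tiny Java sketch (the file name and scenario are invented) of the kind of issue a tester benefits from knowing about, even without being able to write the fix:

    import java.io.FileWriter;
    import java.io.IOException;

    public class ResourceSketch {
        public static void main(String[] args) throws IOException {
            byte[] buffer = new byte[1024 * 1024]; // a megabyte of heap
            buffer = null; // now eligible for garbage collection; the
                           // equivalent new[] in C++ would leak without
                           // an explicit delete[]

            FileWriter log = new FileWriter("session.log");
            log.write("user clicked OK\n");
            log.close(); // the GC will NOT close this promptly for you;
                         // a forgotten close() exhausts file handles
                         // under load -- a defect worth probing for
        }
    }

Knowing that much is enough to ask development the right questions, even if you never write a line of the product code yourself.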
Also keep in mind that QA has a dual role to play. It is very easy for some QA people with development experience to become overly sympathetic to the needs of the developer and forget the end-user or forget making a defect prevention process part of the development effort. On the other hand, it is easy for some QA people who have no development experience to just believe that developers are "lazy" or "stuck in their ways." There is a dividing line here and that line has to be straddled by QA - this is part of the process of clean communication between development and quality assurance.
I will end with one more quote from Cooper. He said this of interaction designers; I have simply paraphrased him, replacing "interaction designers" with "quality assurance staff."
quote:
"[Quality assurance staff] also work from the outside in, starting from the goals the user is trying to achieve, with an eye toward the broader goals of the business, the capabilities of the technology, and the component tasks."
Re: design and defect
It's taking me a little while to get the direction you're moving in. But then how do you relate defects to all this? It almost seems like you want to redefine "defect." It also sounds like you want to push QA back into the design process. Would you say that it makes sense to define defects in relation to the design process? I guess in some ways this makes sense, but if the interface is causing the major problems, then your design issue only comes in when QA is working with the interface. So really it's just testing as normal.
Re: design and defect
Well, the idea is not so much redefining "defect" but, in some ways, relating the defect to documentation. Presumably you have design documents. If not, you should not be accepting the product into QA. Consider this: a defect is a deviation from that which is explicitly and unambiguously specified in an official specification document. If it is not in a specification, can it be a defect? Obviously it can if it crashes the program, or if there is functionality that just does not work. But what about functionality that does work, but perhaps not entirely correctly? On what do you base the notion that it is not functioning entirely correctly (if it does not cause the program to crash)? If QA says the function works poorly and development says it is fine, who is the arbiter? How is it supposed to function? Again, I am only speaking to design issues since that is what you are bringing up here. I am not saying the above is my definition of defect; I only present it as one possible aspect of the idea of "defect" as it relates to design.
I think QA should be pushed back into the design process because then you can relate defects to that process. But the real key is that you have a better chance of defect prevention and a means of revisiting design later on. Also understand that a lot of this thinking comes from treating the user interface as the sole arena of design/usability, which you sort of hint at in your post. This is most certainly not true to my way of thinking. That view allows many defects in design to be covered up by developers and QA, because essentially what you do is create a type of "wrapper" (the GUI) that subsumes what might be some bad design/usability ideas. Then you get into the context of the wrapper and base your decisions on that, when the wrapper is really just a symptom; the cause is the actual design that necessitated the type of wrapper being used. This also means that when such issues are finally realized you are dealing with a design, code, and wrapper issue, as opposed to just a design issue.
This may sound nebulous, but you could relate it to any environment you want. The HTML pages in a Web environment can serve as a wrapper around the code, say, pages served up via Active Server Pages or an application server. But just looking at a page and testing that the elements on it appear according to what they pull from the application server does not test the logic being used at the application server (such as Java code or what have you). Thus your interface has gotten between you and the problem. You are not testing design now. You are testing code based on the design specification (even if that specification was only verbal).
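To make that distinction concrete, here is a minimal Java sketch (the business rule, class, and method names are all hypothetical) of exercising the server-side logic directly instead of through the page that wraps it:

    // Sketch: test the logic behind a page directly, rather than only
    // checking that the rendered HTML "looks right".
    public class DiscountRuleTest {

        // Hypothetical rule from the design spec:
        // orders of $100 or more get a 10% discount.
        static double applyDiscount(double orderTotal) {
            return (orderTotal >= 100.0) ? orderTotal * 0.90 : orderTotal;
        }

        public static void main(String[] args) {
            // Boundary cases come straight from the specification,
            // not from poking at the interface.
            check(applyDiscount(99.99), 99.99, "just under threshold");
            check(applyDiscount(100.00), 90.00, "exactly at threshold");
            check(applyDiscount(150.00), 135.00, "above threshold");
        }

        static void check(double actual, double expected, String label) {
            boolean ok = Math.abs(actual - expected) < 0.001;
            System.out.println((ok ? "PASS " : "FAIL ") + label);
        }
    }

Note that nothing here touches the HTML wrapper at all; the test is driven by the specification, which is exactly where the defect-versus-specification question lives.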
A lot of this boils down to motivation. What is the motivation for the design? Is it preference? If so, whose? Is it a user-requested feature? If so, which class of user? If it only comes from a few of your power users, you had better think about it. This is the distinction between task-based behavior and goal-based behavior and how that applies to users. Programmers tend to be task-based. Users tend to be goal-based. But you will get power users that are task-based to some degree (or that subsume their goal) because they are sympathetic to the role of developers and know what it takes. They also make underlying assumptions about the technology medium that they feel constrains design decisions. QA, in turn, will be looking at the final interface and not the reasons the interface was designed in the fashion it was. So a lot of this boils down not to "how was this designed" but "what was the rationale for designing it this way."
I am worried this is getting a little bit away from defects which was what you started with so I better stop now.
Re: design and defect
So is this use cases? How do you approach development with these design issues? To give you an idea, I'm in an environment where we do have defects based on design, and development is in a position to listen, and they do, and they help. But the big problem we have isn't motivation; it's how we decide whether we're dealing with personal subjectivity or not. We basically want to decide how that can be decided!
Re: design and defect
Whether use cases help depends on how you use them, no pun intended. Some QA departments treat them the same way they would test cases. If you have a development group that is responsive regarding design issues, a large part of your battle is already fought, and, in that case, assigning defects based on design should be much easier in your environment, as long as you understand and use a common rating system for those defects. As you rightly state, subjectivity is a big issue.
This is somewhat where testing comes in. But to my way of thinking, QA testing (as opposed to other QA work) is only really good at code-based testing (replication of unit testing) or functionality-based testing. QA testing also relies on the interface, and as I said before, that is one step removed from where QA needs to be in this area of design/usability testing. And yet the interface is needed for regular QA testing as well. So how do you get around these two needs? You introduce QA (or portions of it) into the process earlier, as I said before. The actual "how" comes down to design/usability metrics. You can relate these to the design and, ideally, this should be done during the design review. It is these metrics that should be referred to later in reviews or when problems arise. This is how you can introduce some empirical objectivity to counter the claim of personal subjectivity.
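As a rough illustration, here is a small Java sketch (the metric names, targets, and measurements are all invented) of the idea: record the targets agreed to at design review, then compare measured values against them later, so the argument is over numbers rather than taste:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class DesignReviewMetrics {
        public static void main(String[] args) {
            // targets agreed to during the design review
            Map<String, Double> target = new LinkedHashMap<>();
            target.put("seconds to complete order entry", 45.0);
            target.put("errors per order entry task", 1.0);

            // values measured in a later usability session
            Map<String, Double> measured = new LinkedHashMap<>();
            measured.put("seconds to complete order entry", 63.0);
            measured.put("errors per order entry task", 0.5);

            for (String metric : target.keySet()) {
                double t = target.get(metric);
                double m = measured.get(metric);
                String verdict = (m <= t) ? "within target"
                                          : "candidate design defect";
                System.out.println(metric + ": target " + t
                        + ", measured " + m + " -> " + verdict);
            }
        }
    }

The point is not the code; it is that the targets were written down before the argument started.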
Re: design and defect
A topic which is near & dear to my heart!
"How much does design come into play when testing in QA? Where do you draw the line for QA? "
Ideally, there will be quality measures throughout the software lifecycle. Unfortunately, many organizations equate QA with testing and don't take advantage of the opportunity to find defects early in the lifecycle - which is ultimately more cost-effective.
"Do you assume developers are experts? " My experience has shown that they are not. Some common problems include:
- The design solution does not address the users' requirements
- The application is not intuitive to the users
- The application flow does not mesh with the users' normal workflow patterns
- Not accounting for users "misusing" the program
- Missed critical business rules during the requirements-gathering phase
- Exposing programmatic complexity in the interface
There are a whole host of design flaws that should never make it out into production, but they do. Result: application becomes shelfware.
QA can help by becoming the user advocate, learning more about how the application is being coded, how the data model works, etc. Don't be put off by comments like "You're just a tester - why do you need a copy of the data model?"
IMHO, too many organizations minimize the importance of QA ("QA = the day between development completion and shrink-wrap").
Re: design and defect
So both of you feel that QA should be pushed back to earlier in the process. But I guess what I don't get is that this still seems subjective. I know about code analysis, but your suggestions here seem to indicate that code analysis is just one part. This is very frustrating in a way because I'm the one tasked with this.
Re: design and defect
Kristi - take a deep breath. Do not get frustrated. You are verging into territory that is not all that well mapped out yet. Code analysis is one thing, but I think the area you are getting into is more nebulous. It is one I have been working on for close to a year now, and I am only starting to get my thoughts in order. If you want, provide some concrete examples of what you are dealing with and maybe we can all offer some specific advice.
Keep in mind that there is a difference between strict usability metrics and design metrics, but not always much of one. Just going with design, some people use layout: the absolute and relative position of each layout entity, for example, correlated with the transition probability of its use (based on frequency of use). The relevant equation, in this case, is LA = 100 x [(cost of the LA-optimal layout) / (cost of the proposed layout)], where the cost of a layout is the sum over all transitions of (frequency of transition x cost of that transition). (There are variations on that.) Two common methods used are "Design Balance" and "Design Connectivity." Some of this is done to find error-prone modules (thus a form of code analysis or, at least, equivalence partitioning), but it can also be related to actual design issues with the interface. But remember that design issues with the interface are a step removed from the design you want. When you are at the point of the interface, your focus is defect correction and not defect prevention. There is also "preference-based" modeling for design, which is a more user-focused design review and development process.
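Here is a minimal Java sketch of that layout calculation (the transition frequencies and distances are invented for illustration):

    // Sketch: layout appropriateness, LA = 100 * cost(optimal) / cost(proposed).
    // Cost of a layout = sum over transitions of (frequency * cost).
    public class LayoutAppropriateness {

        static double layoutCost(double[] frequency, double[] cost) {
            double total = 0.0;
            for (int k = 0; k < frequency.length; k++) {
                total += frequency[k] * cost[k];
            }
            return total;
        }

        public static void main(String[] args) {
            // how often users move between each pair of controls
            double[] frequency = {0.40, 0.35, 0.25};

            // transition costs (e.g., pixels of mouse travel) in the
            // proposed layout versus the cost-optimal arrangement
            double[] proposed = {300, 500, 150};
            double[] optimal  = {120, 150, 100};

            double la = 100.0 * layoutCost(frequency, optimal)
                              / layoutCost(frequency, proposed);
            System.out.printf("LA = %.1f (100 means the proposed layout is optimal)%n", la);
        }
    }

With those made-up numbers the proposed layout scores about 38, which is exactly the kind of figure you can put in front of a design review instead of "this screen feels awkward."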
But keep in mind that a lot of this has to do with the deeper levels of design implications. Testing code is one way, but large areas of concern are usability and performance. Performance metrics are a dime a dozen and very easy to analyze statistically. Usability might be trickier in some respects, but you can certainly measure it. Usability concerns many things, like accessibility issues (color-blind, deaf, or otherwise impaired users), plus things like users doing something you did not intend, as Tracy mentioned in her post.
A lot of this has to do with realizing that, as I said before, users are goal-based. Programmers (and the systems they design, including the interfaces pasted over those designs) are task-based. Thus there is some friction right away. Basically you have four main ways to do this:
Formally: by using some analysis technique.
Automatically: by using a computerized technique.
Empirically: by using experiments with test users.
Heuristically: by looking at the interface and passing judgement according to one's own opinion.
Finally, you can get into actual Human-Computer Interaction (HCI) metrics such as the following (a sketch of computing a few of these appears after the list):
time to complete a task
percent of task completed
percent of task completed per unit time
ratio of successes to failures
time spent in errors
percent or number of errors
percent or number of competitors better than it
number of commands used
frequency of help and documentation use
time spent using help or documentation
percent of favourable/unfavourable user comments
number of repetitions of failed commands
number of runs of successes and of failures
number of times the interface misleads the user
number of good and bad features recalled by users
number of available commands not invoked
number of regressive behaviours
number of users preferring your system
number of times users need to work around a problem
number of times the user is disrupted from a work task
number of times the user loses control of the system
number of times the user expresses frustration or satisfaction
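For instance, a few of those can be computed from simple session records like this (Java, with invented observation data):

    // Sketch: a few of the HCI metrics above, computed from one
    // usability session. Each row is one observed task attempt:
    // {seconds taken, errors made, 1 = completed / 0 = abandoned}.
    public class SessionMetrics {
        public static void main(String[] args) {
            double[][] attempts = {
                {42, 1, 1},
                {95, 4, 0},
                {38, 0, 1},
                {61, 2, 1},
            };

            double totalTime = 0, totalErrors = 0, successes = 0;
            for (double[] a : attempts) {
                totalTime += a[0];
                totalErrors += a[1];
                successes += a[2];
            }
            int n = attempts.length;

            System.out.printf("mean time to complete a task: %.1f s%n", totalTime / n);
            System.out.printf("percent of tasks completed:   %.0f%%%n", 100.0 * successes / n);
            System.out.printf("errors per attempt:           %.2f%n", totalErrors / n);
            System.out.printf("success-to-failure ratio:     %.2f%n",
                              successes / Math.max(1, n - successes));
        }
    }
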
Of course, a lot of this depends on how you do your testing in this regard. Is it via inspections? Walkthroughs? Usability studies?
A few books you might want to look at: "Software for Use" and "The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design", which are great books for this. If you are interested in theory, I also recommend "The Design of Everyday Things" (by Donald Norman), and "About Face" and "The Inmates are Running the Asylum" (both by Alan Cooper). All of these are available from Amazon. There are also many places on the Web you can go, though they are generally pretty disorganized. You can check out http://www.useit.com/ for Jakob Nielsen's usability material, or more general design-related material such as "A Validation of Object-Oriented Design Metrics as Quality Indicators" located at http://www.computer.org/tse/ts1996/e0751abs.htm
Re: design and defect
I agree with Jeff. It can be overwhelming if you try to tackle everything at once, so take it in smaller, more digestible pieces. Talk to users, management, developers, and DBAs to find out what is working and what isn't. Dig through the defect logs for past issues. When you've compiled a list, dig deeper to determine the root cause of the issues (e.g., missing requirements, lack of unit testing, etc.). Go to management with your analysis and the numbers to back it up (example: 56% of our defects were caused by missing requirements, which points to a need to improve our requirements management process). Obtaining input from all sides that are impacted helps to get buy-in for any process-improvement initiatives. If the first initiative proves successful, you'll have more credibility to move forward with additional improvements.
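If it helps, here is a tiny Java sketch of that kind of defect-log tally (the log entries and root-cause categories are made up; the numbers happen to land near the 56% example above):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class RootCauseTally {
        public static void main(String[] args) {
            // one root-cause label per logged defect (invented data)
            String[] defectLog = {
                "missing requirement", "coding error", "missing requirement",
                "no unit test", "missing requirement", "coding error",
                "missing requirement", "design flaw", "missing requirement",
            };

            // count defects per root cause
            Map<String, Integer> counts = new LinkedHashMap<>();
            for (String cause : defectLog) {
                counts.merge(cause, 1, Integer::sum);
            }

            // print each cause with its share of the total
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                System.out.printf("%-20s %d (%.0f%%)%n", e.getKey(),
                        e.getValue(), 100.0 * e.getValue() / defectLog.length);
            }
        }
    }
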
Good luck and keep us posted.