This is a good article by Jakob Nielsen:
Who has conducted a Field Study? User Tests and Field Studies sound like they're incredibly informative and imperative to the success of a project. However, how often are they actually conducted? Under what circumstances? What did you learn? What would you do differently and why? Why DON'T you do Field Studies?
Have ANY of your design teams EVER been on a Field Study that you know of? How do they qualify that the interface or structure of the functionality is 'usable'?
[This message has been edited by digits71 (edited 01-21-2002).]
Re: Field Studies
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by digits71:
User Tests and Field Studies sound like they're incredibly informative and imperative to the success of a project.<HR></BLOCKQUOTE>
Many projects succeed without doing them, so they are not "imperative" in the categorical sense. They are, however, certainly "incredibly informative," and they can make an organization much more competitive, assuming the field study is done correctly and is balanced against other tasks.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>However, how often are they actually conducted? Under what circumstances?<HR></BLOCKQUOTE>
That probably depends entirely on the organization. For myself, in the various places I have been, it is fairly rare - particularly in Web ventures. In my experience, these studies were usually done by vendors making application software for the desktop. For example, one company I worked at was SPSS, which makes statistical and mathematical software, mainly for universities and large corporations. Another company, Packtion, was Web-based but was going to be acting as an industry-standard portal and, as such, did numerous field studies.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Have ANY of your design teams EVER been on a Field Study that you know of?<HR></BLOCKQUOTE>
At SPSS, design teams did not actually go on the field studies much - something I fought for but did not achieve there. Instead they sent usability engineers and cognitive engineers, who then came back and held reviews with the design team and the development team. At another company, FullAudio, design teams (along with human factors experts) did come with us on field studies, mainly because we were developing hardware as well as software.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>How do they qualify that the interface or structure of the functionality is 'usable'?<HR></BLOCKQUOTE>
As far as "usable" goes, there are numerous metrics you can use to more or less determine this. For example, there are layout complexity and layout appropriateness measures, which are usually ratios of probabilities of transitions between visual components. There are task concordance metrics, task visibility metrics, visual coherence metrics, etc. There are also preference metrics, such as valence, acquisition, facility, interpretation, and so forth. All of these come from usability studies with software, coupled with studies from ethnography, psychology, and anthropology, and are all readily understandable. There are those who balk at including too much science in QA, but none of this is that difficult, and it can easily be applied by the average QA practitioner. The trick is usually getting the average QA practitioner to broaden their horizons somewhat.
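To make the "ratios of probabilities of transitions between visual components" idea concrete, here is a minimal sketch of one possible layout-complexity score. This is an illustrative formulation (Shannon entropy over observed component-to-component transitions), not a metric taken from any specific study; the function name and the sample session data are my own assumptions.

```python
import math
from collections import Counter

def layout_complexity(transitions):
    """Illustrative layout-complexity score: the Shannon entropy (in bits)
    of the observed transitions between visual components. Low entropy
    means users follow a few dominant paths through the layout; high
    entropy means their attention is spread across many different moves."""
    counts = Counter(transitions)            # frequency of each (from, to) pair
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical transitions recorded during sessions (e.g. from click logs):
focused = [("menu", "search"), ("search", "results"), ("results", "detail"),
           ("menu", "search"), ("search", "results"), ("results", "detail")]
scattered = [("menu", "search"), ("search", "help"), ("help", "results"),
             ("results", "menu"), ("menu", "detail"), ("detail", "search")]

print(layout_complexity(focused))    # lower: a few repeated paths
print(layout_complexity(scattered))  # higher: transitions are diffuse
```

A real study would, of course, weight this against task structure and screen real estate; the point is only that such measures are straightforward ratios and sums that any QA practitioner could compute from session data.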
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>What did you learn? What would you do differently and why?<HR></BLOCKQUOTE>
You pretty much always learn something new - at least I always did. I learned, over time, to do much better task analysis: looking at the users' overall goals, their current approach to those goals, their model of the task, their information needs, and how they deal with exceptional circumstances or emergencies. The latter is particularly important and often overlooked. Along the way I also learned a great deal about how people act and think when they use software and Web pages. And I learned the obvious (but often forgotten) fact that using the system changes the users themselves and, more importantly, that as they change they will use the system in new ways you often would not anticipate. Thus "usage pattern" is, I have found, a very malleable concept. I have learned that you should focus not on user-centered design but on usage-centered design, and that the emphasis should be on making designs "intuitable" - not "intuitive."
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>This is a good article by Jakob Nielsen:<HR></BLOCKQUOTE>
I think the "article" is good but oversimplified. As he says: "Most articles on field studies make it seem like they are terribly complicated and require a team of anthropologists." That is true. But it is also a mistake to assume you can just send anybody. Field studies are a type of contextual inquiry and, as such, involve a great deal of conversation as well as observation, so you need someone who is good at both. You do not, of course, need cognitive experts, ethnographers, anthropologists, etc., as some articles assert (and I am glad to see Nielsen makes this point), but you do need someone trained in the nuances of human working habits and contextual situations. You have to be able to ask the appropriate questions of the user without interrupting their work flow, and, as Nielsen notes, you have to avoid influencing their responses in any way. This can be tricky for some people.
In general, however, I have found that much of this can be learned by the average person quite easily. For example, learning user characteristics does not exactly take a degree in cultural anthropology. You simply identify the user-characteristic issues that may affect use of the system to be developed, then address those issues during the field study via contextual cues among representative users. What usually requires more thought is understanding the context of how users work in general, rather than with just your own software or Web site, and then applying that to well-established usability metrics. So there is science behind it, but that does not mean you need a scientist. You can use well-rounded individuals who are willing to think outside the proverbial box.