Start QA Department
Where do you start for a QA organization in your company if you are the first QA? Start with building templates? Defining quality policies? Defining processes? Help?
Re: Start QA Department
This can depend on the organization, the mindsets that exist within it, the skill set of those doing the implementing, etc. What I tend to do is first define what we mean by "quality" and what we mean by "quality assurance". These can become dogmatic terms if they are not given a little concrete substance that your organization can recognize. What I then start doing is looking at what exists in terms of a development life cycle. Or if there is no formal life cycle, then what process is done that, in some fashion, gets a product built. Even in the most haphazard of development situations there are some readily discernible phases. I look at each of those phases as a potential entry point for some aspect of QA.
In other words, at each of those phases there is most likely something that QA can be doing to make sure that the definition of what we agreed to call "quality" is being met. (See why it is important to define "quality" and what "quality assurance" is early on?) I then start to look at what the most troublesome aspects of this whole development life cycle currently are. For example, do developers consistently miss deadlines? Are release dates constantly getting moved? Are changes constantly being introduced that the developers have to scramble around to build in? Are bad builds consistently happening because some code was not checked in? Are the developers constantly trying to figure out just what they are truly supposed to build? Do a lot of defects that were found get forgotten?
Do you see what I am doing by asking all these questions? I am finding out weak spots in the various phases. Then I want to look at why they are weak spots and what we might be able to do to patch up those weak spots. That will usually mean instituting some sort of process at that point in the phase. That, of course, can mean a lot of different things depending upon what the problem is, where in the life cycle it is, etc. Note that these processes will be put into a life cycle where people (like developers, testers, project managers, etc.) already are doing certain practices. You have to take account of that.
We have to realize that there are three basic things we can do when instituting something like QA: (1) make decisions, (2) reflect on the overall situation and on possible courses of actions, and (3) ask questions. The problem is that people often tend to spend most of their time on (1), neglecting the other two to a large extent. This means we need to gather as much information as possible before we start building templates, writing up overarching quality policies, or defining processes.
Also note that we need to do more with the information that we gather than just gather it. We need to arrange it into an overall picture, a model of the reality we are dealing with. That "model" is the way your organization works currently. Formless collections of data about random aspects of a situation merely add to the situation's impenetrability and are no aid to decision making. So we need a cohesive picture that lets us determine what is important and what is unimportant (for now) and what belongs together and what does not – in short, that tells us what our information actually means. This allows us to look at the current environment and plan out a target environment that everyone can agree to. Then comes the work of an implementation plan to get from the current environment to the target environment. The key here is to plan out the route that QA will take to getting itself insinuated into the life cycle and that will allow QA to be efficient and effective. (That second point is often left off.)
Does some of this help you, at least in terms of general thinking? Please do not hesitate to ask for clarifications. Also please bear in mind that these are my viewpoints based on my experiences in similar situations to what you are describing. As such, I do not claim my views are "correct"; I simply claim that they have worked well in the context of the organizations within which I have applied these ideas.
Re: Start QA Department
Thank you Jeff,
Your response is very helpful in determining my path of action.
I have two questions:
1: How do I implement quality for the spiral model for prototypes and development projects?
2: How do I implement quality in a short-term project such as technology evaluation or market analysis?
Re: Start QA Department
> Originally posted by KalpanaBuch:
> 1: How do I implement quality for the spiral model for prototypes and development projects?
Well, again exactly how you do this can depend on a few factors. The first thing you do is determine if the spiral is what you want or need. (It is a good model and many other life cycle approaches derive from the spiral concept.) The key focus of this kind of model is the ability to detect risks early and remove them. This is usually done by iterative approaches to development, sometimes employing prototypes, which means long-term project planning should not be applied in the same way with a spiral life cycle as it might be with other life cycles. Spiral models are usually better suited for rapid development projects or where "time-to-market" priorities dominate pretty much above all else. This has its negatives and positives. The idea is that with this kind of model, as your costs increase, your risks should be decreasing. In other words, you should be measuring your risks.
Implementing a spiral life cycle means knowing what you do along each iteration of the spiral. That is (very roughly):
- Determine objectives, alternatives, and constraints.
- Identify all risks and resolve those of a certain priority.
- Evaluate any alternatives.
- Develop the deliverables for that iteration; verify they are correct.
- Plan the next iteration.
Some key points here are that you must have checkpoints at the end of each iteration. Also, you have to define objective milestones that are verifiable. It is these milestones that you are checking for at the checkpoints.
So as to how you go about implementing it, speaking generally here, you take a staged approach to development that will follow a rough idea of what a spiral iteration means. This means you have to set up the means by which you will identify and resolve risks. In other words, make sure you have risk management techniques and that you are comfortable with how to use them. This is a critical part of the spiral model! It also means making sure that you have some plan in place for how to evaluate alternatives. This is where the prototyping comes in. Make sure you have good planning practices because those are what you will use to determine what should be done on the next iteration. (Remember that in spiral models, the planning, at the detailed level, tends to be from iteration to iteration and not over the whole project.)
To generalize what I said above, when you are implementing any sort of life cycle concept, you want to look at the activities that make up that life cycle. In other words, the actual things you are going to have to do in order to successfully carry out the life cycle. Then you have to make sure you can do these things. Again, the spiral model is largely predicated upon the notion of early detection of major risks and coming up with means to resolve the risks or go a different route such that the risk is not even encountered. This means risk analysis and risk management are must-haves for this kind of activity.
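To make the risk-driven part of a spiral iteration concrete, here is a minimal sketch of risk triage. Everything here is illustrative: the risk names, the probability/impact scores, and the threshold are made-up assumptions, and scoring exposure as probability times impact is just one common convention.

```python
# Hypothetical sketch of one spiral iteration's risk triage.
# Exposure is scored as probability * impact; only risks above a
# chosen threshold are resolved this iteration, the rest are deferred.

def prioritize_risks(risks, threshold=0.3):
    """Rank risks by exposure and split them into those to resolve
    this iteration and those to revisit on the next one."""
    scored = sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)
    resolve_now = [r for r in scored if r["probability"] * r["impact"] >= threshold]
    defer = [r for r in scored if r["probability"] * r["impact"] < threshold]
    return resolve_now, defer

# Example risk register (invented values, not recommendations)
risks = [
    {"name": "unclear requirements", "probability": 0.8, "impact": 0.9},
    {"name": "new database technology", "probability": 0.5, "impact": 0.4},
    {"name": "key developer leaves", "probability": 0.1, "impact": 0.9},
]

resolve_now, defer = prioritize_risks(risks)
print([r["name"] for r in resolve_now])  # the highest-exposure risks
```

At each checkpoint you would re-score the register; if the model is working, total exposure should be trending down as costs accumulate.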
> 2: How do I implement quality in a short-term project such as technology evaluation or market analysis?
It is hard to answer this even generally because it is very contextual to your situation. As far as market analysis goes, it has many activities that you could implement. For example, market segmentation is often used, particularly in short-term projects. Here you segment users into various groups that have similar needs or responses to the type of product you are developing. (This can be done via some educated guesses, outright assumptions, focus groups, experienced marketing help, analysis of competitors with similar products, etc.) This allows for focused targeting, which determines which segments to serve and what specific features to focus on. That also leads to market positioning, which is about how the product should compete with others in the market in terms of what it offers or what it costs (or both). Basically the objective of market segmentation is to accurately meet the needs of selected customers and/or users in a profitable way (profitable for them and for you; the two go hand-in-hand anyway). Again, this can be done short-term or long-term.
Another thing you can look at is sales forecasting, which is the process of organizing and analyzing information about your business and your product in a way that makes it possible to estimate what your sales will be (within given ranges). Here you generally develop a customer profile and determine the trends in your industry (or relative to a given product type in that industry). You then establish the approximate size and location of your planned area of operation. (For the Web this might potentially be global.) "Planned area of operation" just means where your product will be visible from or where it will sell from. (Remember earlier that I mentioned that spiral models are all about decreasing risks as costs increase. You can thus relate sales forecasts to the costs of your project as another way to measure your risks.)
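As a toy illustration of forecasting "within given ranges", here is a sketch that bounds revenue from bounded assumptions. All the numbers (market size, reachable share, conversion rate, price) are invented for the example; the point is only that each input is a range, so the output is a range rather than a single figure.

```python
# Illustrative only: a range estimate for a sales forecast built from
# bounded assumptions rather than point estimates.

def forecast_range(market_size, share_range, conversion_range, unit_price):
    """Return (low, high) revenue estimates from bounded assumptions."""
    low = market_size * share_range[0] * conversion_range[0] * unit_price
    high = market_size * share_range[1] * conversion_range[1] * unit_price
    return low, high

# e.g. 100,000 potential users, 2-5% reachable, 1-3% of those buy, $50 each
low, high = forecast_range(100_000, (0.02, 0.05), (0.01, 0.03), 50)
print(f"${low:,.0f} - ${high:,.0f}")
```

Relating that low bound to your project costs is one simple way to express the risk measure mentioned above.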
There is a lot more to market analysis but it is a very broad topic and in going any further I risk inundating you with useless trivia that you might not want or need.
As far as technology evaluation goes, it depends on the technology that you are referring to. A general procedure that I like to follow is roughly this:
1. Define the project's requirements for a given tool or technology. Identify the capabilities that are most significant to you, the other tools/technologies to which you would expect to connect or interface, and any other issues pertinent to the decision.
2. List ten to fifteen specific factors or criteria that will influence your selection decision.
3. Distribute some amount of "points", say 100, among the selection factors you listed in Step 2, giving more points to the more important factors. In other words, prioritize via some weighting method.
4. Obtain current information about the available tools and rate the candidates against each of your selection factors.
5. Calculate the score for each candidate based on the weight you gave to each factor to see which product or technology solution best appears to fit your needs.
6. Solicit experience reports from other users of each candidate product or technology solution; also look in online discussion forums, newsgroups, etc.
7. If possible, obtain evaluation copies of the two or three top-rated tools in your scheme. Define an evaluation process before you install the candidates.
8. Evaluate the tools using a real project or as close to a simulated mock-up as you can get. After completing your evaluations, adjust your rating scores if necessary and see which tool ranks the highest.
9. To make a decision, combine the ratings, licensing costs, and ongoing costs with information on vendor support, input from current users, and any subjective impressions.
With all this, of course, you must factor in the budget you have and the "time to implement" for the tool. (That is why it is sometimes good to evaluate with as close to a real project as possible; that way if you choose that solution, you have already given it a good implementation test and you are familiar with it in the context of your own system.)
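The weighting-and-scoring steps above (Steps 3 through 5) can be sketched in a few lines. The factor names, weights, and ratings here are made-up examples purely to show the arithmetic, not recommendations for what to measure.

```python
# A minimal sketch of weighted scoring for tool selection.
# Weights total 100 points; each candidate is rated 1-10 per factor.

weights = {"ease of use": 30, "integration": 25, "reporting": 20, "cost": 25}

candidates = {
    "Tool A": {"ease of use": 8, "integration": 6, "reporting": 7, "cost": 5},
    "Tool B": {"ease of use": 6, "integration": 9, "reporting": 8, "cost": 7},
}

def weighted_score(ratings, weights):
    """Sum of rating * weight over all selection factors."""
    return sum(ratings[factor] * w for factor, w in weights.items())

scores = {name: weighted_score(r, weights) for name, r in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```

After the hands-on evaluation (Step 8) you would adjust the ratings and recompute; the spreadsheet version of this is just as good, the point is only that the weighting is explicit and revisable.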
Once again, this was somewhat generalized based on the nature of the questions. If this helps guide your thinking, wonderful. Do not hesitate to formulate more directed questions specific to your environment.
Re: Start QA Department
Thanks Jeff! Your response is greatly appreciated. It is also very helpful.
Re: Start QA Department
I started about 9 months ago as the first QA in our company. Before this I was a professional tester and test coordinator. I am the only one with a testing/quality background (out of 40 people working on software development).
What I did was nothing.... Well, that's what it may have looked like! But what I did was listening, observing, asking questions, and talking with people. We had to change the way we were working (all projects delivered were late and of bad quality). But to change a person, or a department, you first have to change yourself! If you start from Day One telling other people how they should work, what they do wrong, and saying that the only way they can deliver quality is by following your (!) rules, then nothing will change. After three months, they probably won't even hear you anymore....
If you start making improvement plans without cooperating with developers and management, maybe you'll get beautiful plans, maybe even a 100-page report about what and how to change. But then again, if it's only a PLAN......
What I would say is: don't shout and don't write. Just listen, talk, and discuss. Jeff's reply is right: make sure everybody talks the same language and define together what QUALITY is. Sit together and work together. Plan and evaluate in good harmony.
XP - for eXtreme Programming
Test - for being a professional tester
NL - for Netherlands
“None of us is as smart as all of us” - Gerald Weinberg
Re: Start QA Department
I wanted to weigh in here, though I have avoided ISO and QA startup discussions previously.
As a QA professional of 12 years experience in both Manufacturing and Software, I just want to say this to those of you who have offended the Gods and are now facing a QA System startup for your crimes ...
Each day, when you sit down to reconcile the fact that you want to include the latest, greatest, snazziest QA methodologies (and do so according to the ISO document hierarchy) for a bunch of folks who think Defect Density is the measure of the thickness of your skull ... ask yourself this ...
"Will this process truly add value to the company and is it sustainable?"
If you sincerely do this every day and truthfully evaluate the answer, you will save yourself much heartache and frustration.
(Former ISO consultant and still a little nutty because of it )
Re: Start QA Department
The first thing in implementing a QA department is to identify your team and your process.
The TMM (Testing Maturity Model), developed at the Illinois Institute of Technology (IIT), is becoming a leading model for QA. It is a maturity model with five levels, similar to the CMM. Each member of the QA/software test team is assigned ATRs (activities, tasks, and responsibilities). Deliverables and reports are defined, along with resources such as funding, training, management support, etc. The model takes you from Level 1 (undefined and chaotic QA/testing) to Level 5 (a managed, defined, supported QA/software organization).
There are assessments and certifications available.
Hope this helps