I would like to initiate a discussion on RISK ANALYSIS in software quality assessment.
(Being a geologist, I have a pretty good understanding of the quantification of risk and the parameters that go into a final assessment of whether or not to drill a well. The fact that two out of three exploratory wells still turn out to be dry holes, however, is a different story; the explanation lies in the subjective element of interpreting the same set of data differently.)
My question is: in the domain of software QA, is there an a priori distribution of risk - say, 5-10% for interface, 20% for transaction, etc.? If not, how does one get an estimate of where the risk elements are and what range of values they might have? Are there any "litmus tests" that conclusively prove that a certain portion of the software is bug-free?
I am very intrigued by this question and would welcome your input.
And, I must commend the organizers for coming up with such a wonderful forum for discussing professional issues.
Re: RISK ANALYSIS
I don't think we can measure risk in software the same way as in geology. In geology, while there are obviously many variables involved when deciding to drill, the framework is generally the same. Water freezes at the same temperature, atmospheric pressure and gravity are the same, and everyone has the same definition of each of the elements and combinations of elements. In software, everyone and their brother has written an "e-commerce shopping cart." Sure, they have similar attributes, but since each one uses unique code, each will have unique bugs. There could be interesting exceptions with reusable code or open-source software.
From an industry standpoint, someone could attempt a survey, but companies might not like to share that information. Even then, some types of interfaces are more complex than others, some types of transactions are more complex than others, and so on.
From a shop standpoint, metrics could be in place to measure where most bugs are caught. Further data could show which bugs cost the most when they slip into production, which take the longest to fix and which cost the most to fix.
When assessing individual projects, some shops use bug-seeding, where a developer intentionally plants bugs in the program. The theory is that if all of the intentional bugs are found then most of the unintentional bugs will be found as well. This requires some intelligent design of where to plant the bugs. They need to be different types of bugs, in different areas, and so on.
People who oppose this theory say that if you're going to spend this much time, you may as well just start testing. Also, just because you find intentional bugs doesn't mean you found the unintentional bugs. If the developer has to plant bugs, then they know those types of bugs. They will probably also unit test their code for those types of bugs. (Especially if this setup is considered a "trap" for QA, to see if they really do anything. </sarcasm>) But, the unintentional bugs will probably not be the same type as the intentional bugs. They are probably bugs the developer did not think of.
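The estimate behind bug-seeding is usually a capture-recapture style calculation: if testing caught a certain fraction of the seeded bugs, assume it caught roughly the same fraction of the real ones. A minimal sketch (the function name and all the numbers are illustrative, not industry values):

```python
def estimate_remaining_defects(seeded_total, seeded_found, real_found):
    """Capture-recapture style estimate used with bug seeding.

    If testing caught seeded_found of seeded_total planted bugs,
    assume it caught the same fraction of the real bugs too.
    """
    if seeded_found == 0:
        raise ValueError("no seeded bugs found; estimate undefined")
    detection_rate = seeded_found / seeded_total
    estimated_real_total = real_found / detection_rate
    # Remaining = estimated total minus those already found.
    return estimated_real_total - real_found

# Hypothetical run: 25 bugs seeded, 20 recovered, 40 real bugs found.
# Detection rate 0.8 suggests ~50 real bugs total, so ~10 still lurking.
print(estimate_remaining_defects(25, 20, 40))
```

The objection in the post above maps directly onto the model's weak assumption: the formula only holds if seeded and unintentional bugs are equally detectable, which is exactly what the seeding skeptics dispute.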
Another approach is to measure code coverage during testing, which gives a measurement of how many logical branches are executed. But let's face it: on most projects there isn't enough time in the universe to execute every combination of every variable. Instead we lean on equivalence class partitioning.
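Equivalence class partitioning in practice: instead of exhausting every input, pick one representative per class plus the boundary values. A hypothetical example for a field that accepts ages 18 through 65 (the validator and the ranges are made up for illustration):

```python
# Hypothetical validator under test: accepts ages 18-65 inclusive.
def accepts_age(age):
    return 18 <= age <= 65

# Three equivalence classes: below range, in range, above range.
# One representative per class, plus the boundary values 17/18 and 65/66.
test_values = {
    "below":    [5, 17],        # expect rejection
    "in_range": [18, 40, 65],   # expect acceptance
    "above":    [66, 120],      # expect rejection
}

for value in test_values["in_range"]:
    assert accepts_age(value)
for value in test_values["below"] + test_values["above"]:
    assert not accepts_age(value)
print("all partitions behave as expected")
```

Seven test values stand in for the whole integer domain, which is the entire point: any other member of a class is presumed to behave like its representative.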
Re: RISK ANALYSIS
There are many forms of risk, and risk analysis and risk management are active parts of the entire QA role. This is the whole point of being proactive. In general, cutting off one day of QA (not QT) activity early in the project is likely to cost you from three to ten days of activity downstream.

This ties into the development side of things as well. For example, it is a risk that 50% or more of defects are introduced in the requirements and/or design phases. It is a risk that 80% of the major defects occur in 20% of the code (and it is 80% likely that that 20% of the code is the most complex in the entire code base). Thus the famous 80/20 rule, which is, in fact, a statement of risk - particularly if it is ignored! Scope or feature creep has a likelihood of about 40% on most projects. It is taken as a maxim that, on average, source code for a large project changes at a rate of about 10 percent per month. Omitted effort (tasks not stated in estimates, or tasks that were assumed but not stated in requirements) often adds 20% to 30% to a development schedule. Those are some risk factors. The idea now is to come up with ways to mitigate these risks.
As a more specific statement of risk, consider incremental builds or some sort of evolutionary design approach. It is pretty well established that such practices increase the apparent project cost and time by about 5-10%, because there is extra effort involved in doing things incrementally. So that is a risk. But now take it in light of the fact that integration problems and incorrect requirements understanding are usually reduced by about 20-30% under such incremental practices. So you have a risk that may be offset by another parameter in the equation: a balance between cost/time and delivery viability.
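The offset described above can be put into a back-of-the-envelope model. This is a toy sketch, and the assumption that integration rework makes up some fraction of baseline cost (here 25%) is purely hypothetical:

```python
def incremental_net_cost(baseline, overhead_pct, rework_share, rework_reduction_pct):
    """Toy model of the incremental-build tradeoff.

    baseline: total project cost under a big-bang approach
    overhead_pct: extra cost of working incrementally (e.g. 0.05-0.10)
    rework_share: fraction of baseline spent on integration rework
    rework_reduction_pct: how much incremental work cuts that rework
    """
    overhead = baseline * overhead_pct
    savings = baseline * rework_share * rework_reduction_pct
    return baseline + overhead - savings

# Hypothetical numbers: 7% incremental overhead, 25% of project cost
# tied up in integration rework, cut by 25% when built incrementally.
print(incremental_net_cost(100.0, 0.07, 0.25, 0.25))
```

With these invented inputs the two effects nearly cancel, which is the point of the paragraph above: the visible overhead is a risk that can be offset by the less visible savings.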
Also, any form of impact analysis for change requests and defect fixes also speaks to the entire concept of risk management. As far as an a priori distribution, however, besides what I said above, that is hard to say because it depends on the organization and the practices that are being adhered to. Also keep in mind that much risk in an SDLC is written in the form of estimations because you will be estimating the size of a project, the cost of a project, the time it will take to complete a project, etc. and any gray areas or changes to those estimations will usually be because of risks that need to be addressed.
Re: RISK ANALYSIS
I sincerely appreciate the comments posted by Tim.
I do not necessarily agree that in geology we are dealing primarily with constants; if anything, we are dealing with nature, the laws of which are significantly more complex and variable than anything designed by man. However, that is beside the point.
One of the parameters that I do want to borrow from geology is the hydrocarbon saturation coefficient (HSC). This parameter gives the percentage of pore volume occupied by hydrocarbons; in most cases, rocks are expected to be hydrocarbon-bearing when HSC exceeds 0.5 (or 50%).
HSC = porosity * (1 - Sw), where Sw is the portion of pore volume occupied by water.
What I have been wondering is: would it not be helpful if there were an analogous parameter in SQA (for simplicity's sake, let's call it CRL, coefficient of risk level) that would take into account not only the frequency of a specific error but also its severity and priority?
CRL = frequency * severity * priority.
Obviously, the higher the number, the greater the risk. Maybe, with sufficient research, we can come up with cut-off values that would indicate whether a product can be released or not.
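The proposed CRL is straightforward to compute and rank by. A sketch, where the rating scales, the example defect areas, and any cut-off are purely illustrative:

```python
# Sketch of the proposed CRL (coefficient of risk level).
# All scales and figures are illustrative, not industry values.
def crl(frequency, severity, priority):
    """frequency: defect rate (e.g. defects per KLOC);
    severity, priority: ordinal scales, here 1 (low) to 5 (high)."""
    return frequency * severity * priority

defect_areas = {
    "interface":   crl(frequency=0.8, severity=3, priority=2),
    "transaction": crl(frequency=1.5, severity=5, priority=5),
    "reporting":   crl(frequency=2.0, severity=2, priority=1),
}

# Rank areas by risk; a release gate could require max CRL < cutoff.
for area, score in sorted(defect_areas.items(), key=lambda kv: -kv[1]):
    print(f"{area:12s} CRL = {score:.1f}")
```

Note that a raw product like this has no natural units, so any release cut-off would have to be calibrated against a shop's own historical data rather than borrowed from elsewhere.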
BTW, I do want you all to understand that I am an outsider trying to understand your field, and these are simply the musings of a late afternoon when I have some time to spare.
Any input will be highly appreciated.
Re: RISK ANALYSIS
The "CRL" you are looking for is generally called an FMEA - Failure Mode and Effect Analysis. It does account for severity, priority, and weighting schemes. These also account for the detection method and likelihood of detection (i.e., this might correspond to your drilling apparatus and the viability of the land/crust to produce oil in that location). The key is these are more variable than a true science (such as geology) because even though geology, physics, biology, etc. have variables, they are operating within physical constraints dictated by the laws of nature. In the case of risk analysis you are dealing with processes that are operated by human beings and thus the variability is that much greater. That is what an FMEA is supposed to help alleviate.
These also tie into standard Risk Assessment forms which cover risk categories and risk priority numbers.
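The risk priority number in a classic FMEA is the product of three ratings: RPN = severity × occurrence × detection, each conventionally rated 1-10, with detection rated high when a failure is hard to detect. A minimal sketch (the failure modes and ratings are invented for illustration):

```python
# Classic FMEA risk priority number: each factor rated 1-10.
# Detection is rated HIGH when the failure is hard to catch,
# so a hard-to-detect failure raises the RPN.
def rpn(severity, occurrence, detection):
    for factor in (severity, occurrence, detection):
        if not 1 <= factor <= 10:
            raise ValueError("FMEA factors are rated 1-10")
    return severity * occurrence * detection

# Hypothetical failure modes for a shopping-cart checkout.
failure_modes = [
    ("double charge on retry",  9, 3, 6),
    ("cart total rounds wrong", 6, 5, 4),
    ("slow page render",        3, 7, 2),
]

# Worst first: this ordering is what drives mitigation priority.
for name, s, o, d in sorted(failure_modes, key=lambda m: -rpn(m[1], m[2], m[3])):
    print(f"RPN {rpn(s, o, d):3d}  {name}")
```

This is essentially the geologist's proposed CRL with detection difficulty substituted for raw frequency, which is why the FMEA answers the earlier question so directly.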
Re: RISK ANALYSIS
I am delighted to note that my ideas about a "CRL" were not conjured entirely out of thin air and that there are criteria within the SQA industry that address this question.
I am hopeful that these criteria do justice to the complexities of estimating risk in software quality.
What I want to discuss is the oft-quoted 80/20 scenario. If I understand it correctly, 80% of the chances of encountering a problem are confined to the 20% of the code that is most complex. What I fail to understand is: how do we define which 20% is most complex? Isn't the quality of code a direct function of the programmer's ability and experience?
And more importantly, how is the tester expected to plan his tests to ensure that this 20% is comprehensively covered? Is he guided by code coverage, or are there functional aspects that help him draw up a war plan?
The other interesting aspect of this whole question is the observation made by many researchers that the bulk of software quality errors are introduced during the planning stages. Is there a checklist one may follow to ensure that crucial aspects of this process are not overlooked? If there is such a list, where can I find one?
Thanks for your assistance in helping me understand your very interesting profession.
Re: RISK ANALYSIS
The greatest risk to any software project is that Risk Analysis is omitted.
Re: RISK ANALYSIS
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>If I understand it correctly, 80% chances of encountering a problem are confined to 20 % of the code that is most complex. What I fail to understand is - how do we define, which 20 % is most complex?<HR></BLOCKQUOTE>
This can be determined by various measures, such as lines of code, function points, etc. It can also be based on the nature of what the code is doing. It can be as arbitrary as you want it to be, and if you want to take that as being subjective, you can do so. But in reality it is not, or at least not always: anyone who has programmed knows that certain areas of the code base (modules, particular subroutines, etc.) can fairly be called "more complex" than comparatively simple areas.
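One objective measure worth naming here is McCabe's cyclomatic complexity: one plus the number of decision points in a routine. A rough sketch that approximates it for Python source by counting branch keywords (real tools work on the parsed syntax tree; this textual count is only an approximation, and the sample functions are invented):

```python
import re

# Rough cyclomatic-complexity estimate: 1 + number of decision points.
# Counting branch keywords textually is an approximation; production
# tools parse the AST instead of grepping for keywords.
BRANCH_KEYWORDS = r"\b(if|elif|for|while|and|or|except|case)\b"

def approx_complexity(source: str) -> int:
    return 1 + len(re.findall(BRANCH_KEYWORDS, source))

simple = "def save(doc):\n    write(doc)\n"
branchy = (
    "def parse(tok):\n"
    "    if tok is None or tok == '':\n"
    "        return None\n"
    "    for part in tok.split():\n"
    "        if part.isdigit():\n"
    "            yield int(part)\n"
)

print(approx_complexity(simple))   # 1: straight-line code
print(approx_complexity(branchy))  # higher: a risk-hotspot candidate
```

Ranking routines by a score like this is one non-arbitrary way to nominate the "20% most complex" that the 80/20 rule talks about.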
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Isn't the quality of code a direct function of the programmers ability and experience?<HR></BLOCKQUOTE>
Not necessarily - it could be that the area being coded is complex no matter who writes it. Perhaps it involves complex parsing routines (as in natural language programs) or complicated embedded software routines that must communicate with firmware and a display driver of some sort. Obviously those are a little more complex than a save routine for documents or a "print preview" feature.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>And more importantly, how is the tester expected to plan his tests to ensure that this 20 % is comprehensively covered. Is he guided by code coverage or, are there functional aspects that help him draw a war plan?<HR></BLOCKQUOTE>
Well, you can do exactly what you said: code coverage or functional coverage, to name just two. The tester is supposed to plan the tests by using the risk factors as criteria for deciding which areas are the riskiest. (Also bear in mind that unit testing will often point out problematic areas of code that can then be looked at more closely during a system or integration testing phase.) Functional coverage might dictate hitting those areas that are functionally very complex or that require a great deal of interaction with the system (and you can define "system" as broadly as you wish). Something that is functionally complex also, in most cases, has the potential to bring about the most exception situations. Part of your risk, also, will be those areas that your users are most likely to come across and use. So when you have an area that a user is most likely to use and it is functionally complex (indicating it probably has a great deal of code behind it), you do not even really need formal risk analysis to tell you this is an area to concentrate on.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>The other interesting aspect of this whole question is the observation made by many researchers that the bulk of software quality errors are introduced during the planning stages.<HR></BLOCKQUOTE>
This is true. In general, particularly for large projects or those that are heavily integrated, more than 50% of the defects will creep in during the requirements and design phases - before any coding is even done.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Is there a check list that one may follow to ensure that crucial aspects of this process are not overlooked.<HR></BLOCKQUOTE>
Well, there are best practices for what to look for during requirements analysis and specification, as well as in design work. Mainly these are questions that should be asked, or things that should be sought out in the documents, to make sure they are valid. What does valid mean? It varies, but most of the time you want to make sure that the requirements are testable in some fashion - meaning you need a way to determine that the requirement has been fulfilled, not just at the code/execution level but at the design level as well. Each phase feeds into the next.
Karl Wiegers has done a great deal of work on this topic and his book "Software Requirements" is heartily recommended. You might also check out his Web site that has a lot of the concepts from the book on it. The URL is: http://www.processimpact.com/
This also gets into the idea of verification testing (as opposed to validation testing) where you are going to be more concerned with the non-executable portions of the entire process. A good book that delves into this and that does not require a lot of previous knowledge of QA/QT processes is by Edward Kit and is called "Software Testing in the Real World: Improving the Process."
Re: RISK ANALYSIS
My sincere thanks to Mr. Nyman et al., who have done such a wonderful job of describing the many facets of risk analysis. I am amazed at the quality of info presented in this forum; I wish there were anything remotely similar in my field of petroleum geology.
Based on the info posted to date, one can infer that to ensure quality software, one really has to start early - as a matter of fact, before the software is even coded. To me, that essentially implies that by the time a software tester gets involved, the "train has already left the station" and he has to play catch-up. Moreover, it seems to me that he has to be a pretty smart programmer if he has to make a reasonable assessment of code coverage and risk based on code complexity.
Also, even if there is a tester on board, does he participate in the verification and validation procedures, or is that done by folks at the managerial level? (In geology they say it is not only important to do things right but equally important to do the right things.)
It seems to me that well-defined requirements would go a long way toward ensuring software quality, but it also appears that it is not uncommon to be asked to test software for which no functional requirements are provided. In this context, it would appear that a newbie tester is pretty much left to fend for himself and, secondly, would be unable to make a significant contribution simply because he would not have the means or knowledge to enforce software QA.
Re: RISK ANALYSIS
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Based on the info posted to-date, one can infer that to ensure a quality software, one really has to start early - as a matter of fact before the software is even coded. To me, that essentially implies that by the time a software tester gets involved, the "train has already left the station" and he has to play catchup.<HR></BLOCKQUOTE>
Absolutely correct - and well stated. Another way to look at it is that testing of the executable portions of the product is going to be reactive, because you are testing what already exists. Proactive testing is looking at the elements that go into making up the product but that are, generally, non-executable. (I say "generally" because you can consider testing of such things as prototypes, which happens before the final code-base construction has begun.)
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Moreover, it seems to me that he has to be a pretty smart programmer if he has to make a reasonbale assessment of code coverage and risk based on code complexity.<HR></BLOCKQUOTE>
It is not so much that the tester has to be - but they certainly do have to rely on the programmers if they want to do testing based on code coverage. Unit testing (a phrase I mentioned in the last post) will normally be done by developers, so in that case you already have the situation covered in that sense. Complexity is one of those hard things to ascertain, because sometimes a piece of code can be incredibly complex and yet do something relatively simple. Other times you might have a piece of code that is very simple and yet drives processes in the product that are very complex - perhaps it offers a great many permutations that are easy to program but that allow users to make many potential mistakes (or generate exception situations, to put it more programmatically). One way to look at this is to consider code for common dialog controls. In general, if you use the common dialog control provided by Microsoft, you have simple code. But if you decide to rewrite or extend the control, you might have more complex code. Which method is being used might dictate the degree of testing that is done, during both the unit and system test phases. The key is that the functionality being provided has not changed - just the manner in which it is being provided.
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Also, even if there is a tester on board, does he participate in the verification and validation procedures or is it done by folks at the managerial level (In Geology they say, it is not only important to do things right but equally important to do the right things).<HR></BLOCKQUOTE>
This pretty much brings up the dichotomy between Quality Assurance and Quality Testing. The one (QA) is placed at a higher level to make sure that the "right things" are being done - i.e., processes followed, reviews conducted, test plans written, etc. The other (QT) is more to make sure that what was constructed/implemented/coded provides solutions to the requirements that were eventually derived, matches the design that was put together, and functions according to the way that was specified in documentation. The key is that in a robust type of environment QA should be making sure that testing is really just putting the polish on the product - almost acting as a proof-of-concept for the design and specifications. Of course, in reality there is more to it than this...
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>It seems to me that well-defined requirements would go a long way in ensuring software quality but it also appears that it is not uncommon to be asked to test a software for which even no functional requirements are provided.<HR></BLOCKQUOTE>
You are correct on both points. Well-defined requirements are one of the major things that can prevent scope and feature creep. Part of QA's role is making sure that requirements are "testable" in the sense that they are capable of being put to the test (i.e., falsified). Sometimes (and probably most times) this happens by proxy, because if you cannot even write up design and/or functional specifications from the requirements, then they are not stated in a testable fashion. Likewise for functional specifications: if a developer can code something from them, it is a good bet they are testable. (That still does not mean they are "right," however.) And it is true that in many organizations such documents are not produced, or are produced in such a slipshod manner that testers effectively have to deal with getting nothing except the product itself. (Such organizations usually lack a good QA effort or treat QA like it is QT. Alternatively, it can be that the QA people in-house are simply not doing what their job should be.)
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>In this context, it would appear that a newbie tester is pretty much left to fend for himself and secondly, would really be unable to make a significant contribution simply because he would not have the means or knowledge to enforce software QA.<HR></BLOCKQUOTE>
The most significant contribution, in the case of no documentation, is to catch as many critical defects as possible - i.e., those things that just blatantly do not work. This applies to any tester - newbie or not - when they have nothing to go on but the finished product. Of course, talking to developers can give them some idea of how it works. (Of course, that just answers the question of how it works - not if how it works is how it should be working or if it fulfills the requirements that are implied, as opposed to stated.) A lot of times this leads into what I refer to as "tautology testing:" testing that the product does what the product does. This is, for obvious reasons, not the most effective way to test because essentially you end up with the testers and developers designing the application, without any necessary regard for how a product like this should function relative to the goals of its users.
And, yes, many strict testers do not have the means to enforce QA practices - assuming they know them in the first place. I have to say that most of the time, however, I have not found the organization to be at fault per se. Mostly it is that QA practitioners have not been trained in how to negotiate for resources and time, what processes to institute, or how to handle the cultural issues that come up within organizations when quality initiatives/efforts are proposed and/or established.