QA terms glossary
Does anyone have info on where I can get a comprehensive, fairly current QA terms glossary?
Re: QA terms glossary
Here are a few that I use religiously:
From the FDA - yes, it is a bit dated (late '90s) and from the government, but it still works wonderfully.
Test Works has one at Test Works Glossary
And then there is a short/weak one at Cayman Computers Glossary
Re: QA terms glossary
Thanks to both of you for the info.
Re: QA terms glossary
I cannot recall the web site I got this from, but I use it often. Also, I bought a CD off eBay that had 45 docs, forms, presentations, etc. The information was priceless for $20. I think it was titled SDLC Project Methodologies.
Acceptance testing: A software or hardware development test phase designed to demonstrate that the system under test meets requirements. This phase is unique in all test activities in that its purpose is to demonstrate sufficiency and correctness, not to find problems. Acceptance testing is usually the last test phase before a product is released.
Bug: A problem present in the system under test that causes it to fail to meet reasonable expectations. “Reasonable” is usually defined by iterative consensus or management fiat if it is not obvious or defined (in the specifications or requirements documents). Notice that the test team usually sees only the failure (improper behavior); the bug itself is the flaw that causes the failure.
Debugging: The process in which developers determine the root cause of a bug and identify possible fixes. Developers perform debugging activities to resolve a known bug either after development of a sub-system or unit or because of a bug report.
Distributed testing: Testing that occurs at multiple locations, involves multiple teams, or both.
Field-reported bug: A failure in a released, shipping product, usually reported by a customer or a salesperson, that either affects the ability of the customer to use the product or involves side effects that impair the customer’s ability to use other products on the same system.
Functional tests: Tests based on what a computer system, hardware or software, is supposed to do. Such tests are usage-based and functional, at the levels of features, operational profiles, and customer scenarios. Also called black-box tests.
Granularity: Fineness or coarseness of focus. A highly granular test allows the tester to check low-level details; a structural test is very granular. Behavioral tests, which are less granular, provide the tester with information on general system behavior, not details.
Integration testing: A software development test phase (referred to as product testing in hardware development) that finds bugs in the relationships and interfaces between pairs and groups of components in the system under test, often in a staged fashion. This test phase occurs when all the constituent components of the system under test are being integrated.
Peer review: A quality improvement idea common in software development, in which one or more testers read and comment on a test deliverable such as a bug report, a test suite, or a test plan. The reading is followed by a review meeting in which the deliverable is discussed. Based on this discussion, the deliverable is updated, corrected, and re-released.
Pilot testing: In hardware development, a test phase generally following or accompanying acceptance testing, which demonstrates the ability of the assembly line to mass-produce the completely tested, finished system under test. In software development, pilot testing is a test phase that demonstrates the ability of the system to handle typical operations from live customers on live hardware. First customer ship often immediately follows the successful completion of the pilot test phase.
Quality risk: The possibility of undesirable classes of behaviors, or failure modes, in which the system under test does not meet stated product requirements or end users’ reasonable expectations of behavior; in plain terms, the possibility of a bug.
Quality risk management: The process of identifying, prioritizing, and managing quality risks, with the aim of preventing them or detecting and removing them.
Regression: A problem that occurs when, as a result of a change in the system under test, a new revision, Sn+1, contains a defect not present in revisions S1 through Sn. In other words, regression occurs when some previously correct operation misbehaves. (If a new revision contains a new piece of functionality that fails without affecting the rest of the system, this is not considered regression.) Usually you’ll detect regression when test cases that previously passed now yield anomalies.
Regression test gap: For any given change or revision in the system under test, the difference between the areas of test coverage provided by the entire test system and the test coverage provided by the portion of the test system that is actually rerun. For a system release, a regression test gap is the extent to which the final release version of every component and change in the system did not experience the full brunt of the test system.
Regression tests: A set of tests selected to find regression introduced by changes in component, interface, or product functionality, usually associated with bug fixes or new functionality. Regression is a particularly insidious risk in a software maintenance effort because there is seldom time for a full retest of the product, even though seemingly innocuous changes can have knock-on effects in remote areas of functionality or behavior.
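A minimal sketch in Python's unittest, assuming a hypothetical discounted_price function as the changed component: a regression test pins down previously correct behavior, so a change in revision Sn+1 that breaks it shows up as a test case that used to pass and now fails.

```python
import unittest

# Hypothetical unit from the system under test. Suppose revisions
# S1 through Sn computed a 10% discount here; the test below pins
# that behavior down.
def discounted_price(price, rate=0.10):
    return round(price * (1 - rate), 2)

class RegressionTests(unittest.TestCase):
    def test_standard_discount_unchanged(self):
        # This case passed against S1..Sn; if a change in Sn+1
        # alters the result, the new failure signals a regression.
        self.assertEqual(discounted_price(100.00), 90.00)

if __name__ == "__main__":
    unittest.main()
```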
Reporting logs: Raw test output produced by low-level test tools, which is “human-readable” to varying degrees. Examples include text files containing test condition pass/fail results, screen shots, and diagnostics.
Reporting tools: Special test tools that can process reporting logs into reports and charts, given some information about the context in which the log was produced.
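For illustration only (the one-verdict-per-line log format is an assumption, not a standard), a reporting tool can be as small as a script that processes a reporting log into a summary report:

```python
from collections import Counter

# Assumed reporting-log format: "<test name> PASS|FAIL", one per line.
sample_log = """\
login_basic PASS
login_bad_password PASS
report_export FAIL
report_print PASS
"""

def summarize(log_text):
    # Tally verdicts and compute a simple pass rate for the report.
    verdicts = Counter(line.rsplit(None, 1)[1]
                       for line in log_text.splitlines() if line.strip())
    total = sum(verdicts.values())
    return verdicts, 100.0 * verdicts.get("PASS", 0) / total

verdicts, pass_rate = summarize(sample_log)
print(f"{verdicts['PASS']} passed, {verdicts['FAIL']} failed "
      f"({pass_rate:.0f}% pass rate)")
```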
Scalability: The ability of a test component’s parameters of operation to expand without necessitating major changes or fundamental redesign in the test system.
Severity: The absolute impact of a bug on the system under test, regardless of the likelihood of its occurrence under end user conditions. I use a severity scale that ranges from 1 (most severe or dangerous) to 5 (least severe or dangerous).
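As a small sketch of that 1-to-5 scale (the level names are illustrative, not from any standard), severity can be recorded as an enumeration so bug reports use consistent values:

```python
from enum import IntEnum

class Severity(IntEnum):
    # 1 is most severe or dangerous, 5 least, matching the scale above.
    # The names are illustrative labels, not standard terminology.
    DATA_LOSS = 1
    LOSS_OF_FUNCTION = 2
    DEGRADED_FUNCTION = 3
    COSMETIC = 4
    TRIVIAL = 5

# Lower numeric value means higher severity, so comparisons work naturally.
print(Severity.DATA_LOSS < Severity.COSMETIC)  # True
```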
System testing: A software or hardware development test phase that finds bugs in the overall and particular behaviors, functions, and responses of the system under test as a whole operating under realistic usage scenarios. The various system operations are performed once the system is fully integrated.
System under test: The entirety of the product, or system, being tested, which often consists of more than the immediately obvious pieces; abbreviated SUT. Test escapes can arise through misunderstanding the scope of the system under test.
Test case: A sequence of steps, substeps, and other actions, performed serially, in parallel, or in some combination, that creates the desired test conditions that the test case is designed to evaluate.
Test case library: A collection of independent, reusable test cases.
Test case (suite) setup: The steps required to configure the test environment for execution of a test case or a test suite.
Test case (suite) teardown: The steps required to restore the test environment to a “clean” condition after execution of a test case or a test suite.
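A minimal sketch using Python's standard unittest fixtures (the temporary-directory workload is hypothetical): setUp configures the environment before each test case runs, and tearDown restores it to a clean condition afterward.

```python
import shutil
import tempfile
import unittest
from pathlib import Path

class FileExportTests(unittest.TestCase):
    def setUp(self):
        # Setup: create a scratch directory for the test to write into.
        self.workdir = Path(tempfile.mkdtemp())

    def tearDown(self):
        # Teardown: remove the directory so the environment is "clean"
        # for whatever test runs next.
        shutil.rmtree(self.workdir)

    def test_export_creates_file(self):
        target = self.workdir / "report.txt"
        target.write_text("ok")  # stands in for the operation under test
        self.assertTrue(target.exists())

if __name__ == "__main__":
    unittest.main()
```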
Test condition: A system state or circumstance created by proceeding through some combination of steps, substeps, or actions in a test case. The term is sometimes also used to refer to the steps, substeps, or actions themselves.
Test coverage: 1. The extent to which a test system covers, or exercises, the structure (the code or components) of the system under test. The metric is usually expressed as a percentage of the total count of the structural elements being covered, such as lines of code or function points. 2. The extent to which a test system covers, or exercises, the behavior (the operations, activities, functions, and other uses) of the system under test. The extent is measured, albeit qualitatively, against the uses to which the customer base as a whole is likely to subject it. Thorough coverage in both respects is necessary for good testing.
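Sense 1 is just a ratio of covered elements to total elements; as a quick worked example with made-up counts:

```python
# Structural coverage (sense 1): covered elements / total elements.
lines_total = 10_000    # hypothetical lines of code in the system under test
lines_executed = 8_200  # hypothetical lines exercised by the test system
coverage_pct = 100.0 * lines_executed / lines_total
print(f"Line coverage: {coverage_pct:.1f}%")  # -> Line coverage: 82.0%
```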
Test cycle: A partial or total execution of all the test suites planned for a given test phase as part of that phase. A test phase involves at least one cycle (usually more) through all the designated test suites. Test cycles are usually associated with a release of the system under test, such as a build of software or a motherboard. Generally, new releases occur during a test phase, triggering another test cycle.
Test phase: A distinct test subproject that addresses a particular class of quality risks. Test phases often overlap.
Test platform: Any piece of hardware on which a test can be run. The test platform is not necessarily the system under test, especially when testing software.
Test suite: A framework for the execution of a group of test cases; a way of organizing test cases. In a test suite, test cases can be combined to create unique test conditions.
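A minimal sketch with Python's unittest (the case names are hypothetical): a TestSuite is the organizing framework, combining test cases from different classes into one executable group.

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_user(self):
        self.assertTrue(True)  # placeholder for a real check

class ReportTests(unittest.TestCase):
    def test_export(self):
        self.assertTrue(True)  # placeholder for a real check

# The suite organizes cases from different sources and runs them
# as a single group.
suite = unittest.TestSuite()
suite.addTest(LoginTests("test_valid_user"))
suite.addTest(ReportTests("test_export"))
unittest.TextTestRunner().run(suite)
```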
Test system: An integrated and maintainable test environment and reporting system, whose primary purpose is to find, reproduce, isolate, describe, and manage bugs in the software or hardware under test.
Test to fail: The mind-set involved in designing, developing, and executing tests with the aim of finding as many problems as possible. This attitude represents the right way to think about testing.
Test tool: Any general-purpose hardware, software, or hardware/software system used during test case execution to set up or tear down the test environment, to create test conditions, or to measure test results. A test tool is separate from the test case itself.
Test to pass: The mind-set involved in designing, developing, and executing tests with the aim of proving compliance with requirements and correctness of operation. Such an attitude not only misses opportunities to increase product quality but also is demonstrably futile. It represents the wrong way to think about testing (except in the case of acceptance testing).
Unit testing: A software development concept that refers to the basic testing of a piece of code, the size of which is often undefined in practice, although it is usually a function or a subroutine. Unit testing is generally performed by developers.
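A minimal sketch, assuming a hypothetical is_leap_year function as the unit: the developer exercises one small function in isolation, checking typical and boundary inputs.

```python
import unittest

# The unit under test: a single small function (hypothetical example).
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearUnitTests(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_quad_century_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```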