Saturday, August 11, 2007

More details about verification vs. validation

Verification & Validation

Essentially, Quality Testing (QT) breaks down into two aspects: verification testing and validation testing. It is thus possible to say:

QT = (Verification + Validation)

IEEE/ANSI defines verification testing as "The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase." This definition is somewhat vague, but verification is generally thought of as a proactive type of testing. Verification is, to a large extent, the set of Quality Control (QC) activities done throughout the lifecycle that help to assure that interim product deliverables meet their initial specifications. It is also the less understood of the two.

So, for example, if you are "Verifying Functional Design" then you have to consider how that design was produced: user requirements are translated into a set of external, human-visible interfaces. The result is a functional design specification - it describes what the user can see, but nothing of what they cannot see. In verifying that type of document, the goal is to determine how successfully the user requirements were incorporated into the functional design. The requirements document serves as the source document against which the functional design is verified, and the best practice is to look for unwarranted additions or omissions.

Or consider the process of "Verifying Internal Design." Here the functional specification is broken down into a detailed set of internal attributes: data structures, data diagrams, flow diagrams, and so on. The result is an internal design specification - it describes how the product has to be built. During verification of this document, we can begin looking at the limits and constraints of the product - basic boundary conditions - and we also learn about performance considerations and possible failure conditions or scenarios.
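To make the "additions or omissions" check a little more concrete, here is a minimal sketch of how that part of verification might be automated. The file names and the REQ-nnn identifier convention are assumptions for illustration only; nothing in the definition of verification requires them.

```python
# Minimal sketch of a requirements-to-functional-design traceability check.
# Assumes both documents tag items with identifiers of the form "REQ-nnn";
# the file names and ID convention are hypothetical, not from this post.
import re

REQ_ID = re.compile(r"REQ-\d{3}")

def extract_ids(path):
    """Collect the set of requirement IDs mentioned in a text document."""
    with open(path, encoding="utf-8") as f:
        return set(REQ_ID.findall(f.read()))

def verify_traceability(requirements_path, functional_design_path):
    """Report omissions (requirements never covered by the design) and
    unwarranted additions (design references to requirements that do not exist)."""
    required = extract_ids(requirements_path)
    designed = extract_ids(functional_design_path)
    return {
        "omissions": sorted(required - designed),
        "unwarranted_additions": sorted(designed - required),
    }

if __name__ == "__main__":
    report = verify_traceability("requirements.txt", "functional_design.txt")
    print("Requirements missing from the design:", report["omissions"])
    print("Design references with no requirement:", report["unwarranted_additions"])
```

This is the spirit of verification as a proactive activity: the check runs against interim documents, long before there is any product to execute.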

These tend to be considered proactive activities because you are attempting to catch problems as early as possible and you are verifying that the "right" thing is being done. This is different from validation, which does not ask: are we doing the right thing? It asks: are we doing what we said was the right thing? In other words, during verification you are, to a large extent, defining what the "right thing to do" actually is. In validation you are making sure you adhered to your previous definitions. And that means validation is inherently more reactive in nature than verification.

The IEEE/ANSI definition of validation is "The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements."

These requirements can refer to a lot of things. The idea of verification is asking whether or not the requirements were formulated correctly. The idea of validation is asking whether the product was constructed in accordance with those correctly-formulated requirements. In this sense, validation usually refers to the "test phase" of the lifecycle, which assures that the end product (e.g., system, application, etc.) meets stated (and sometimes implied) specifications. There are, in general, eight validation axioms, as they are usually called, and those are given here:

1. Testing can be used to show the presence of errors, but never their absence.
2. One of the most difficult problems in testing is knowing when to stop.
3. Avoid unplanned, non-reusable, throwaway test cases.
4. A necessary part of a test case is a definition of the expected output or result. A comparison should be made of the actual versus the expected result (illustrated in the sketch after this list).
5. Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions. "Invalid" is defined as a condition that is outside the set of valid conditions and should be diagnosed as such by the program being tested.
6. Test cases must be written to generate desired output conditions. Do not think just in terms of inputs. Determine the input required to generate a pre-designed set of outputs.
7. A program should not be tested (except in unit and integration phases) by the person or organization that developed it.
8. The number of undiscovered errors is directly proportional to the number of errors already discovered.
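As a concrete illustration of axioms 4 and 5, the following sketch shows test cases that carry their expected result with them and that deliberately include invalid input that the program should diagnose. The parse_age function and its rules are hypothetical, invented purely for this example.

```python
# Minimal sketch of axioms 4 and 5: every test case defines its expected
# result up front, and invalid inputs are tested alongside valid ones.
# The function under test (parse_age) and its rules are hypothetical.

def parse_age(value):
    """Convert a string to an age in years; reject non-numeric or out-of-range input."""
    age = int(value)          # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

# Each test case pairs an input with its expected outcome (axiom 4),
# and the suite includes invalid as well as valid conditions (axiom 5).
test_cases = [
    ("0",    0),           # valid boundary: minimum age
    ("150",  150),         # valid boundary: maximum age
    ("37",   37),          # typical valid value
    ("-1",   ValueError),  # invalid: below range, should be diagnosed
    ("151",  ValueError),  # invalid: above range, should be diagnosed
    ("abc",  ValueError),  # invalid: not a number, should be diagnosed
]

for raw, expected in test_cases:
    try:
        actual = parse_age(raw)
    except ValueError:
        actual = ValueError
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: parse_age({raw!r}) -> {actual!r}, expected {expected!r}")
```

Notice that the comparison of actual versus expected is part of the test itself, not something left to the tester's judgment after the fact - which is exactly what axiom 4 is asking for.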

