As a matter of policy or good practice, should deficiency reports (DRs) and/or watch items (WITs) always be explicitly declared as such in formal test reports?
How about in cases in which one or more of the following factors are present: (1) the test is an operational assessment in support of development; (2) no formal program office is designated; and/or (3) operational users are not available?
First, let me applaud you for being diligent in looking for relevant policy and direction. In general, most policy and direction either explicitly states a context or implies one through its applicability. Each of the instances you cite — (1) the test is an operational assessment in support of development; (2) no formal program office is designated; and (3) operational users are not available — does indeed address a context. The question becomes what policy, guidance, and best practices apply given that context. It may take a bit of work to flesh out the particular situation, including program/project/effort-specific documentation and the intent of the "testing."
Having said that, if there are objective (or even subjective) standards against which the performance of the system under test is being compared, deficiency reporting would seem applicable. If the observed performance of the system under test (however that is defined) fails to achieve a documented standard, it is the responsibility of the test organization to explicitly state that. Otherwise, how can decisions related to program progress, system employment, tactics/techniques/procedures, etc. be informed?
On the other hand, if the context is early technology maturation, a proof-of-principle demonstration, or some other activity that does not compare performance to a standard, then documenting observed performance without addressing what might be construed as a deficiency in a more mature product may be appropriate. It depends. What should be pursued is a clear and documented understanding of the roles and responsibilities of the test organization before testing begins. This should include the applicability of any relevant guidance, as well as an appropriate degree of detail regarding the analysis and standards used in reporting.