...

In parallel, test progress is monitored on the basis of test status and related measures/metrics/reports, stakeholders receive reports on test status, control/correction directives are issued, and corrective/adaptive actions are taken on the test design, environment, execution, and perhaps even the test-level test plan.

...

  • Test Plan
  • Test Status Report
  • Test Completion Report

Test Level Documentation

XXX Explain

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Test Execution Log
  • Detailed Test Results
  • Test Incident Report

Test Documents

Test Plan

The test plan outlines the operational aspects of executing the test strategy for a particular testing effort. It provides an overview of what the system needs to meet in order to satisfy its intended use, user needs, or the scope of the intended testing effort, and of how the actual validation is to be conducted. The plan outlines the objectives, scope, approach, resources (including people, equipment, facilities, and tools) and the amount of their allocation, methodologies, and schedule of the testing effort. It usually also describes team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and the tracking and reporting process. The test plan may account for one or several test suites, but it does not detail individual test cases.

...

  • Test case ID
  • Test case name
  • Last execution date and time
  • Last execution status (not run, failed, partial, passed) or counts of failed and passed test executions
  • Number of associated defects
  • Brief comment
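
The fields above can be kept in any tabular or structured form; as an illustration only, here is a minimal sketch of one status report row as a Python dataclass (the class and field names are assumptions, not a standard schema):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record for one row of a test status report,
# mirroring the fields listed above.
@dataclass
class TestStatusEntry:
    case_id: str
    case_name: str
    last_executed: Optional[datetime]  # None if the case was never run
    last_status: str                   # "not run", "failed", "partial", "passed"
    defect_count: int                  # number of associated defects
    comment: str = ""

entry = TestStatusEntry(
    case_id="TC-042",
    case_name="Volume discount on final bill",
    last_executed=datetime(2024, 5, 3, 14, 20),
    last_status="failed",
    defect_count=2,
    comment="Discount not applied when the bill is final",
)
print(entry.last_status)  # failed
```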

...

  • Performance, availability, stability, load capacity, efficiency, effectiveness, scalability, response time;
  • Reliability, robustness, fault tolerance, recoverability, resilience;
  • Privacy, security, safety;
  • Configurability, supportability, operability, maintainability, modifiability, extensibility;
  • Testability, compliance, certification;
  • Usability, accessibility, localization, internationalization, documentation;
  • Compatibility, interoperability, portability, deployability, reusability.

...

  • Test case ID or short identifying name
  • Related requirement(s)
  • Requirement type(s)
  • Test level
  • Author
  • Test case description
  • Test bed(s) to be used (if there are several)
  • Environment information
  • Preconditions, prerequisites, prerequisite states, or preexisting initial persistent data
  • Inputs (test data)
  • Execution procedure or scenario or test steps
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Relationship with other use cases
  • Whether the test can be or has been automated
  • Other remarks
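
To make the field list concrete, the following sketch records one test case specification as a plain Python dictionary and checks it for completeness; the schema, key names, and values are illustrative assumptions, not a prescribed format:

```python
# Illustrative test case specification covering the fields listed above.
test_case = {
    "id": "TC-007",
    "name": "Login with valid credentials",
    "requirements": ["REQ-AUTH-1"],
    "level": "system",
    "author": "jdoe",
    "description": "Verify that a registered user can log in.",
    "preconditions": ["user account exists", "service is running"],
    "inputs": {"username": "alice", "password": "correct-horse"},
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Submit the form",
    ],
    "expected_outputs": {"status": "logged in"},
    "automated": True,
}

# A minimal completeness check a review tool might perform.
required = {"id", "steps", "inputs", "expected_outputs"}
missing = required - test_case.keys()
assert not missing, f"incomplete specification: {missing}"
print(len(test_case["steps"]))  # 3
```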

The test case typically comprises several steps that are necessary to assess the tested functionality. The steps should include all necessary actions, including those assumed to be a part of common knowledge.

The test suite is a collection of test cases that are related to the same testing work in terms of goals and associated testing process. There may be several test suites for a particular system, each one grouping together many test cases based on a corresponding goal and functionality, shared preconditions, system configuration, associated common actions, execution sequence of actions or test cases, or reporting requirements. An individual test suite may validate whether the system complies with the desired set of behaviors or fulfills the envisioned purpose or associated use cases, or it may be associated with different phases of the system lifecycle, such as identification of regressions, build verification, or validation of individual components. A test case can be included in several test suites. If test case descriptions are organised along test suites, the overlapping cases should be documented within their primary test suites and referenced elsewhere.
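
For automated software tests, this grouping is directly supported by common frameworks. A minimal sketch using Python's unittest, with invented test names and trivial stand-in logic, shows one test case included in two different suites:

```python
import unittest

class BillingTests(unittest.TestCase):
    def test_normal_bill(self):
        self.assertEqual(100 + 20, 120)   # stand-in for real billing logic

    def test_final_bill(self):
        self.assertEqual(120 - 20, 100)   # stand-in for real billing logic

def smoke_suite():
    # A quick build-verification suite: only one test case.
    suite = unittest.TestSuite()
    suite.addTest(BillingTests("test_normal_bill"))
    return suite

def full_suite():
    # The same test case can be included in several suites.
    suite = unittest.TestSuite()
    suite.addTest(BillingTests("test_normal_bill"))
    suite.addTest(BillingTests("test_final_bill"))
    return suite

result = unittest.TextTestRunner(verbosity=0).run(full_suite())
print(result.testsRun)  # 2
```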

The test procedure defines detailed instructions and the sequence of steps to be followed while executing a group of test cases (such as a test suite) or a single test case. It can give information on how to create the needed test setup, perform the execution, evaluate results, and restore the environment. Test procedures are developed on the basis of the test design specification, in parallel with or as parts of test case specifications. Having a formalised test procedure is very helpful when a diverse set of people is involved in performing the same tests at different times and in different situations, as this supports consistency of test execution and result evaluation. A test procedure can combine test cases based on a certain logical reason, like executing an end-to-end scenario, in which case the order in which the test cases are to be run is also fixed.

The test script is a sequence of instructions that need to be carried out on the tested system in order to execute a test case or test a part of the system functionality. These instructions may be given in a form suitable for manual testing or, in automated testing, as short programs written in a scripting or general-purpose programming language. For software systems or applications, there are test tools and frameworks that allow specification and continuous or repeatable execution of prepared automated tests.
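
As a hedged illustration of such a short program, the following hypothetical script turns the steps of a manual procedure into code; the system under test is faked with a trivial function, and all names are invented:

```python
def system_under_test(a, b):
    # Stand-in for the real system call being exercised.
    return a + b

def run_test_case():
    """Execute one scripted test case and return (status, step log)."""
    log = []
    log.append("step 1: prepare inputs")
    inputs = (2, 3)
    log.append("step 2: invoke the system")
    actual = system_under_test(*inputs)
    log.append("step 3: compare actual vs expected output")
    expected = 5
    status = "passed" if actual == expected else "failed"
    log.append(f"result: {status}")
    return status, log

status, log = run_test_case()
print(status)  # passed
```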

...

This log provides a chronological record of relevant details about the execution of tests, recording which test cases were run, who ran them, in what order, and whether each test passed or failed. A test passed if the actual and expected results were identical; it failed if there was a discrepancy. For each test execution, the versions of the system under test and its components, the test environment, and specifics of the input data are recorded.

...

...

Each test execution record should start with a standardised header; executions should be ordered from the oldest to the newest.

...

XXX grouped by test cases.

...

If this is done but in reality the different test cases were executed interleaved, it is important to be able to trace the actual execution sequence of all tests in order to detect possible interference between tests.

  • Test case ID or short identifying name - may be placed at the top of the group
  • Order of execution number - useful for cross-referencing
  • Date and time
  • Tester - the person who ran the test; may also include observers
  • Test bed/facility - if several test beds are used in testing
  • Environment information - versions of test items, configuration(s) or other specifics
  • Specific setup presets - initial states or persistent data, if any
  • Specific inputs - inputs/test parameters or data that are varied across executions, if any
  • Specific results - outputs, postconditions or final states, if different from the expected; may refer to detailed test results
  • Execution status - passed, failed, or partial (if permitted)
  • Incident reports - if one or several test incident reports are associated with this execution, they are referenced here
  • Comments - notes about the significant test procedure steps, impressions, suspicions, and other observations, if any
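
One possible machine-readable layout for such a record is a JSON line per execution; this sketch is an assumption about one workable schema, with all field names and values invented for illustration:

```python
import json

# Illustrative test execution log record with the attributes listed above.
execution_record = {
    "test_case_id": "TC-042",
    "execution_no": 17,                 # order of execution, for cross-referencing
    "timestamp": "2024-05-03T14:20:00",
    "tester": "jdoe",
    "test_bed": "staging-2",
    "environment": {"system_version": "1.4.2", "db": "postgres 15"},
    "presets": {"customer": "existing, with open bill"},
    "inputs": {"discount_volume": 1000},
    "results": "discount missing on final bill",
    "status": "failed",                 # passed, failed, or partial (if permitted)
    "incident_reports": ["IR-101"],
    "comments": "repeated twice, same outcome",
}

line = json.dumps(execution_record)     # one JSON line appended per execution
print(json.loads(line)["status"])       # failed
```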

If the test execution log is natively maintained or presented in a human-readable format, and there are series of many repeated executions of the same test case with the same values of an attribute (tester, test bed, environment, presets, inputs...), these common values can be documented in the heading of the group, along with the test case ID, or given by reference to a previous execution of the same test case. If there are several typical versions of the environment, presets, or inputs, they may be described in the test case or elsewhere in the test execution log and referenced from the test case executions that use them.

A more detailed description of test setup (configuration) in free text, if needed.

A more detailed description of input data/load, if needed.

Actual outputs may be documented separately in the detailed test results documents, especially if a detailed discussion of the alignment of actual and expected outcomes is needed.

If needed, a more detailed description of possible necessary modifications or adjustments of the standard test procedure for the given test case. This and the previous two paragraphs should provide sufficient information for test reproducibility, i.e. how to create the needed test setup, execute the test, and produce the same or similar results.

Details of actual outputs and results. In case of a slight deviation or partial success or failure, a more detailed description of the declared execution status and detailed observations should be provided.

Classifying the execution status as "partial" may be allowed, but then it must be clarified how to treat such outcomes in feature pass/fail criteria and acceptance criteria.

The test execution log allows the progress of the testing to be checked, as well as providing valuable information for finding out what caused an incident or an unexpected actual outcome.

Detailed Test Results

The detailed test results, which a tester obtains after performing the test, are always documented along with the test case during the test execution phase. After performing the tests, the actual outcome is compared with the expected outcome and any deviations are noted. A deviation, if any, is known as a defect.

They trace how the results measure up against the expected results (postconditions, states, and outputs), obtained from the comparison of actual outcomes to predicted outcomes and the application of the corresponding evaluation criteria.

Test Incident Report

The test incident report is used to document any event that occurs during the testing process and requires investigation. A discrepancy between expected and actual results can occur because the expected results are wrong, the test was wrongly run, or because of inconsistent or unclear requirements, a fault or defect in the system, or a problem with the test environment. The report records all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of the incident upon testing.

The test incident report needs to be a standalone document, so it has to repeat some information that is already recorded in the corresponding test case and test execution log record.

The relationship between the test execution log and the test incident report is not one to one. A failed test may raise more than one incident, and at the same time an incident may be observed in more than one test failure. Taking the Billing project example, if both test cases completely failed, then three test incident reports would be raised:

  • The first would be for failure to produce a normal bill,
  • The second would be for failure to produce a final bill,
  • The third for failure to calculate the volume discount for both the normal and the final bill.

It is important to separate incidents by the features being tested so as to get a good idea of the quality of the system, and allow progress in fixing faults to be checked.

A useful derivative document from the test incident report is a test incident log summarising the incidents and their status. This is not an IEEE 829 document, as all its values can be derived from the test incident reports.

Metadata

The test incident report ID is crucial, as it allows the report to be referenced in the test execution log and issue tracking system. If one or several test incident reports are raised or updated during a single test execution, all of them need to be recorded in the test execution log.

Summary

The summary briefly recapitulates the incident.

Description

The following elements should be provided.

  • Test case ID or short identifying name*
  • Order of execution number*
  • Date and time
  • Testers
  • Test procedure step - where the event occurred*
  • Test bed/facility
  • Environment information
  • Presets*
  • Inputs*
  • Expected results*
  • Actual results*
  • Anomalies - discrepancies, errors or faults that occurred
  • Attempts to repeat - whether they were made, how, and what was the outcome

Test case related details (marked with *) will be omitted if the incident is not linked to a specific test case or scenario. In such situations, a detailed narrative description should be provided.
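
The optional, starred fields can be modelled directly; the following sketch of an incident report record, with invented names and values, omits the test case details and falls back to a narrative when no test case is linked:

```python
def make_incident_report(report_id, summary, anomalies,
                         test_case_id=None, execution_no=None,
                         expected=None, actual=None,
                         repeat_attempts="not attempted"):
    """Build an incident report dict; test case fields are optional."""
    report = {
        "id": report_id,                # referenced from the test execution log
        "summary": summary,
        "anomalies": anomalies,         # discrepancies, errors or faults observed
        "repeat_attempts": repeat_attempts,
    }
    if test_case_id is None:
        # Not linked to a test case: a narrative description is required.
        report["narrative"] = summary
    else:
        report.update({
            "test_case_id": test_case_id,
            "execution_no": execution_no,
            "expected": expected,
            "actual": actual,
        })
    return report

ir = make_incident_report(
    "IR-101", "Volume discount missing on final bill",
    anomalies=["discount field empty"],
    test_case_id="TC-042", execution_no=17,
    expected="10% discount applied", actual="no discount",
    repeat_attempts="repeated twice, same outcome",
)
print(ir["id"])  # IR-101
```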

If an incident is the consequence of a fault or bug, the causing error may have occurred not in the failed execution but in one that was run previously. In the case of apparently random incidents, the previously recorded executions and incidents should be checked in an attempt to recognise the causing pattern. All other related activities, observations, and deviations from the standard test procedure should be included, as they may also help to identify and correct the cause of the incident.

Impact

If known, indicate the impact of the incident on test plans, test design specifications, test case specifications or test procedures.