...

Traceability is the key characteristic of both Evaluation and Test Enactment related artifacts. Repeatability is important for tests related to core requirements and functionality. There is no need to repeat and maintain repeatability of tests that have already been done, unless it is possible to fully automate the test execution or repetition is required for compliance or periodic audits.

...

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Test Execution Log
  • Detailed Test Results
  • Test Incident Report

Test Plan

The test plan outlines the operational aspects of executing the test strategy for a particular testing effort. It provides an overview of what the system needs to meet in order to satisfy its intended use and user needs, the scope of the intended testing effort, and how the actual validation is to be conducted. The plan outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and their allocation, methodologies, and schedule of the testing effort. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and the tracking and reporting process. The test plan may account for one or several test suites, but it does not detail individual test cases.
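As a rough illustration, the operational elements listed above can be captured in a structured form. The following Python sketch uses hypothetical field names and is not a mandated template:

    # Hypothetical structure mirroring the contents of a test plan.
    # Field names are illustrative; no standard mandates this layout.
    from dataclasses import dataclass, field

    @dataclass
    class TestPlan:
        objectives: list[str]
        scope: str
        approach: str
        resources: dict[str, list[str]]   # people, equipment, facilities, tools
        schedule: dict[str, str]          # testing phase -> planned dates
        entry_criteria: list[str]
        exit_criteria: list[str]
        risks: list[str] = field(default_factory=list)
        # Suites in scope; individual test cases are deliberately not listed here.
        test_suites: list[str] = field(default_factory=list)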

...

The schedule of phases should be detailed to the level of certainty attainable with the information available at the time of planning. It should define the timing of individual testing phases, milestones, activities and deliverables, and be based on realistic and validated estimates, particularly as testing is often interwoven with development. Testing is the most likely victim of slippage in upstream activities, so it is a good idea to tie all test dates directly to the completion dates of their related development work. This section should also define when test status reports are to be produced (continuously, periodically, or on demand).

Risks and Contingencies

This section, which complements "Suspension Criteria and Resumption Requirements", defines all risk events, their likelihood, their impact, and the countermeasures to overcome them. Some risks may be testing-related manifestations of the overall project risks. Examples include the lack or loss of personnel at the beginning of or during testing, unavailability or late delivery of required hardware, software, data or tools, delays in training, and changes to the original requirements or designs.

...

The approach for dealing with schedule slippage should be described in the responses to the associated risks. Possible actions include simplification or reduction of non-crucial activities, relaxation of the scope or coverage, elimination of some test cases, engagement of additional resources, or extension of the testing duration.

Test Status Report

The test status report is a one-time interim summary of the results of the execution of testing activities. It may describe the status of all testing activities or be limited to a single test suite. This report, like the test summary report, distills the information obtained during test execution and recorded in test logs and incident reports. It needs to be highly informative and concise, and should not elaborate on minor operational details.
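A minimal sketch of this distillation, assuming a hypothetical record format in which each execution log entry carries a "status" field, could look as follows; real logs would carry more detail:

    # Minimal sketch: condensing raw execution records into a one-line status overview.
    from collections import Counter

    def summarize(executions: list[dict]) -> str:
        """Aggregate per-execution statuses into a concise summary line."""
        counts = Counter(e["status"] for e in executions)  # e.g. "passed"/"failed"
        total = sum(counts.values())
        return f"{total} executions: " + ", ".join(
            f"{n} {status}" for status, n in counts.most_common()
        )

    print(summarize([
        {"test_case": "TC-01", "status": "passed"},
        {"test_case": "TC-02", "status": "failed"},
        {"test_case": "TC-02", "status": "passed"},
    ]))
    # -> "3 executions: 2 passed, 1 failed"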

...

This section refines the approach described in the test plan. It details the included test levels and how the individual features are addressed at those levels.

Specific test techniques to be used are selected and justified. Particular test management, configuration management and incident management tools may be mandated. Code reviews, static code analysis or unit testing tools may support the testing work, along with coverage measurement and performance testing tools. Test automation software tools may be used to generate, prepare and inject data, set up test preconditions, control the execution of tests, and capture outputs. The method for the inspection and analysis of test results should also be identified: the evaluation can be based on visual inspection of behaviours and outputs, or on the use of monitors, assertions, log scanners, pattern matching programs, output comparators, or test automation tools that can capture and process outputs.
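For illustration, a minimal output comparator of the kind mentioned above could look like the following sketch; the function name is hypothetical, and only the standard library is assumed:

    # Sketch of an output comparator: a test passes when the captured output
    # matches the expected output; a unified diff documents any discrepancy.
    import difflib

    def compare_outputs(expected: str, actual: str) -> tuple[bool, str]:
        """Return (passed, diff). The diff is empty when outputs are identical."""
        if expected == actual:
            return True, ""
        diff = "\n".join(difflib.unified_diff(
            expected.splitlines(), actual.splitlines(),
            fromfile="expected", tofile="actual", lineterm="",
        ))
        return False, diff

    passed, diff = compare_outputs("status: OK\n", "status: DEGRADED\n")
    print("PASS" if passed else "FAIL")
    print(diff)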

In order to avoid redundancy, common information related to several test cases or procedures is provided here. It may include details of the test environment or environmental needs, system setup and recovery or reset, and dependencies between the test cases.

...

  • Test case ID or short identifying name
  • Related requirement(s)
  • Requirement type(s)
  • Test level
  • Author
  • Test case description
  • Environment information
  • Test bed(s) to be used (if there are several)
  • Preconditions, prerequisite states or preexisting persistent data
  • Inputs (test data)
  • Execution scenario or test steps
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Relationship with other test cases
  • Whether the test can be or has been automated
  • Other remarks
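The attributes listed above map naturally onto a simple record structure. The following Python sketch uses hypothetical names and is not a prescribed schema:

    # Hypothetical record mirroring the test case specification attributes above.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str
        related_requirements: list[str]
        test_level: str                     # e.g. "unit", "integration", "system"
        author: str
        description: str
        test_bed: str | None = None         # only if several test beds exist
        preconditions: list[str] = field(default_factory=list)
        inputs: dict[str, str] = field(default_factory=dict)
        steps: list[str] = field(default_factory=list)
        expected_postconditions: list[str] = field(default_factory=list)
        expected_outputs: list[str] = field(default_factory=list)
        evaluation_criteria: list[str] = field(default_factory=list)
        related_cases: list[str] = field(default_factory=list)
        automated: bool = False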

...

A Test Environment or Test Bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, test tools, etc. The test plan for a project should enumerate the test bed(s) to be used.
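As a small illustration, a test plan might enumerate its test beds as structured records; the bed names, fields and values below are assumptions, not a standard format:

    # Hypothetical enumeration of test beds, as a test plan might list them.
    test_beds = {
        "tb-local": {
            "os": "Ubuntu 22.04",
            "topology": "single host",
            "product_config": "default",
            "tools": ["pytest", "tcpdump"],
        },
        "tb-staging": {
            "os": "Ubuntu 22.04",
            "topology": "3-node cluster behind a load balancer",
            "product_config": "production-like",
            "tools": ["pytest", "locust"],
        },
    }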

Test Environment Report


Test Execution Log

The test execution log is the record of test case executions and the obtained results, in the order of their running. Along with the test incident reports, the test execution log is a basis for the test status and completion reports. If the testing is organized around scenarios instead of test cases, the general structure of the log is unchanged, except that inputs, states and outputs are replaced with an interpretation of the interactions and results for each segment of the scenario, supported by key excerpts or snapshots of characteristic inputs and outputs.

This log provides a chronological record of the relevant details of test execution, recording which test cases were run, who ran them, in what order, and whether each test passed or failed. A test passed if the actual and expected results were identical; it failed if there was a discrepancy. For each test execution, the versions of the system under test, its components, the test environment, and the specifics of the input data are recorded.

  • Preconditions, prerequisite states or preexisting persistent data
  • Inputs (test data)
  • Execution scenario or test steps
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Other remarks

Each test execution should start with a standardised header. Executions should be ordered from the oldest to the newest and grouped by test cases.

  • Order of execution number - useful for cross-referencing
  • Date and time
  • Test case ID or short identifying name - may be placed at the top of the group
  • Tester - the person who ran the test
  • Test bed/facility - if several test beds are used in testing
  • Environment information - versions of test items
  • Specific setup - if any
  • Specific input - if any
  • Execution status - passed, failed, partial (if permitted)
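A minimal sketch of such a header as a structured record follows; the field names are illustrative only:

    # Hypothetical structured form of the per-execution header described above.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ExecutionHeader:
        order: int                      # order-of-execution number, for cross-referencing
        timestamp: datetime
        test_case_id: str
        tester: str
        status: str                     # "passed", "failed", or "partial" (if permitted)
        test_bed: str | None = None     # only if several test beds are used
        environment: str | None = None  # versions of test items
        setup: str | None = None        # specific setup, if any
        inputs: str | None = None       # specific input, if any

    entry = ExecutionHeader(
        order=42,
        timestamp=datetime(2024, 5, 14, 10, 30),
        test_case_id="TC-LOGIN-03",
        tester="jdoe",
        status="failed",
    )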

If the test execution log is natively maintained or presented in a human-readable format, and there are series of many repeated executions of the same test case with the same values of an attribute (tester, test bed, versions, setup...), these common values can be documented in the heading of the group, along with the test case ID.

The specific setup, if any, may also refer to a previous execution of the same test case.

A more detailed description of test setup (configuration) in free text, if needed.

A more detailed description of input data/load, if needed.

If needed, a more detailed description of modifications or adjustments of the standard test procedure for the given test case. This and the previous two paragraphs should provide sufficient information for test reproducibility, i.e. how to create the needed test setup, execute the test, and produce the same or similar results.

Details of the actual outputs and results. In the case of a slight deviation, or partial success or failure, a more detailed description of the execution status declared above should be provided.

Detailed observations.

The test execution log allows the progress of testing to be checked, and provides valuable information for finding out what caused an incident.


Detailed Test Results

Detailed test results are the basis for the test status and completion reports, compiled from the individual test execution logs and anomaly/incident reports. They constitute the test records: for each test, an unambiguous record of the identities and versions of the component or system under test, the test specification, and the actual outcome.

Test Incident Report

 
