...

Avoid an over-administrative approach; there is no one-size-fits-all. Schematic and formal application of standards or procedure templates “by the book”, this document included, risks petrifying, bureaucratising and over-prescribing the validation and testing process, and impeding innovation. What matters is the quality. However, running production-level services may require formalised processes, traceability or even auditability of validation, and particularly of check procedures.

...

Evaluation Planning

There is a need to become familiar with the context in order to define the scope, organise the development of the test plan, identify and estimate risks, establish the approach towards risks, design the test strategy, determine the staffing profile and schedule, draft the test plan, establish consensus and get approval for the plan, and publish the plan.

...

Evaluation Level Documentation

These are the elements of documentation related to testing of a specific service, product or solution that are intended for communication between those responsible for test management processes (planning, monitoring and control, and completion assessment) and those who actually perform (design, implement, execute and document) the planned tests. On the basis of status and completion reports, control and correction directives are issued which lead to corrective or adaptive actions and changes in test level documentation or even the test plan.

  • Test Plan
  • Test Status Report
  • Test Completion Report

Test Level Documentation

...

These documents are produced by the testing team. They refine the practical details of the design and execution of the work specified in the test plan and capture the relevant information gathered during testing in order to support the evaluation level reports.

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Test Execution Log
  • Detailed Test Results
  • Test Incident Report

...

This is the executive summary part of the plan, which summarises its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects, and collateral benefits or drawbacks. This section should be brief and to the point.

...

This section should be aligned with the level of the test plan, so it may itemise applications or functional areas, or systems, components, units, modules or builds.

...

  • Detailed and prioritised objectives
  • Scope (if not fully defined by lists of items and features)
  • Tools that will be used
  • Needs for specialised training (on testing, the tools used or the system)
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other assumptions, requirements and constraints
  • Overall organisation and timing of the internal processes, phases, activities and deliverables
  • Internal and external communication and organisation of the meetings
  • Number and kinds of test environment configurations (for example, for different testing levels/types)
  • Configuration management for the tested system, used tools and test environment
  • Change management

...

Besides testing tools that interact with the tested system, other tools may be needed, such as those used to match and track scenarios, requirements, test cases, test results, defects and issues, and acceptance criteria. They may be manually maintained documents and tables, or tools specialised to support testing.
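As an illustration only, the following Python sketch shows the kind of requirement-to-test-case traceability such tools maintain; the names (Requirement, TestCase, coverage_gaps) are hypothetical and not prescribed by this document.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        req_id: str
        description: str

    @dataclass
    class TestCase:
        case_id: str
        requirement_ids: list[str]   # requirements this case exercises
        status: str = "not run"      # "passed", "failed" or "not run"

    def coverage_gaps(requirements: list[Requirement],
                      cases: list[TestCase]) -> list[str]:
        """Return IDs of requirements not covered by any test case."""
        covered = {rid for case in cases for rid in case.requirement_ids}
        return [r.req_id for r in requirements if r.req_id not in covered]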

Some assumptions and requirements must be satisfied before the testing is even started. Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted. They may include special hardware, supporting software, test data to be provided, or restrictions on the use of the system during the testing.

Testing can be organised as periodic or continuous until all pass criteria are met, with passing of identified issues to the development team. This requires defining the approach to modification of test items, in terms of regression testing.

The discussion of change management should define how to manage changes to the testing process that may be caused by feedback from the actual testing or by external factors. This includes the handling of the consequences of defects that affect further testing, dealing with requirements or elements that cannot be tested, and dealing with parts of the testing process that may be recognised as useless or impractical.

...

The exit criteria for the testing are also defined, and may be based on the achieved level of completion of tests, a number and severity of defects sufficient for aborting the testing, or code coverage. Some exit criteria may be bound to a specific critical functionality, component or test case. The evaluation team may also decide to end the testing on the basis of available functionality, detected or cleared defects, produced or updated documentation and reports, or the progress of testing.
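As a sketch of how such exit criteria could be evaluated mechanically, assuming illustrative thresholds that a real test plan would replace with its own:

    from dataclasses import dataclass

    @dataclass
    class TestingStatus:
        completed_fraction: float    # share of planned test cases executed
        open_critical_defects: int   # unresolved defects of the highest severity
        code_coverage: float         # measured coverage, 0.0 to 1.0

    def exit_criteria_met(status: TestingStatus,
                          min_completed: float = 0.95,
                          max_critical: int = 0,
                          min_coverage: float = 0.80) -> bool:
        """Thresholds here are assumptions; the real limits come from the test plan."""
        return (status.completed_fraction >= min_completed
                and status.open_critical_defects <= max_critical
                and status.code_coverage >= min_coverage)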

If testing is organised into phases or parallel or sequential activities, the transitions between them may be gated by corresponding exit/entry criteria.

...

The test execution log is the record of test case executions and the obtained results, in the order of their running. Along with test incident reports, the test execution log is the basis for test status and completion reports. If the testing is organised around scenarios instead of test cases, the general structure of the log is unchanged, except that inputs, states and outputs are replaced with an interpretation of the interactions and results for each segment of the scenario, supported by key excerpts or snapshots of characteristic inputs and outputs.

This log provides a chronological record of relevant details about the execution of tests, recording which test cases were run, who ran them, in what order, and whether each test passed or failed. A test passed if the actual and expected results were identical; it failed if there was a discrepancy. For each test execution, the versions of the system under test, its components, the test environment, and the specifics of the input data are recorded.

Each test execution should start with a standardised header. The executions should be ordered from the oldest to the newest. Optionally, the recorded executions may be grouped by test case, but if this is done while in reality the different test cases were executed interchangeably, it is important to maintain the ability to trace the actual execution sequence of all test cases in order to detect possible interference between them. A minimal sketch of such a log record follows the field list below.

  • Test case ID or short identifying name - may be placed at the top of the group
  • Order of execution number - useful for cross-referencing
  • Date and time
  • Testers - the person who ran the test; may also include observers
  • Test bed/facility - if several test beds are used in testing
  • Environment information - versions of test items, configuration(s) or other specifics
  • Specific presets - initial states or persistent data, if any
  • Specific inputs - inputs/test parameters or data that are varied across executions, if any
  • Specific results - outputs, postconditions or final states, if different from the expected; may refer to detailed test results
  • Execution status - passed, failed, partial (if permitted)
  • Incident reports - if one or several test incident reports are associated with this execution, they are referenced here
  • Comments - notes about the significant test procedure steps, impressions, suspicions, and other observations, if any
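A minimal sketch of such a log record, assuming Python data classes and illustrative field names that mirror the list above:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class TestExecutionRecord:
        """One entry in the test execution log (illustrative structure)."""
        test_case_id: str
        execution_order: int
        executed_at: datetime
        testers: list[str]
        test_bed: str | None = None
        environment: dict[str, str] = field(default_factory=dict)  # versions, configuration
        presets: str | None = None                    # initial states or persistent data
        inputs: str | None = None                     # varied inputs/test parameters
        results: str | None = None                    # only if different from the expected
        status: str = "passed"                        # "passed", "failed" or "partial"
        incident_report_ids: list[str] = field(default_factory=list)
        comments: str = ""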

If the test execution log is natively maintained or presented in a human-readable format, and there are series of repeated executions of the same test case that share the same values of an attribute (tester, test bed, environment, presets, input...), these common values can be documented in the heading of the group, along with the test case ID.

If there are several typical versions of the environment, presets or inputs, they may be described in the test case or elsewhere in the test execution log and referenced in the test case executions that use them.

A more detailed description of test setup (configuration) in free text, if needed.

A more detailed description of input data/load, if needed.

This reduces the clutter. However, any particular variances in the configuration, input data, and results need to be documented. The actual outputs may be captured separately in the detailed test results documents, especially if an in-depth discussion of the alignment of actual and expected outcomes is needed.

In case of a deviation or partial success or failure, a more detailed description of any necessary modifications or adjustments of the standard test procedure for the given test case should be provided.

Details of actual outputs and results. In case of a slight deviation or a partial success or failure, a more detailed explanation of the declared execution status should be provided.

If the test design specification permits classifying the execution status as "partial", it must also clarify how to treat such outcomes within the feature pass/fail criteria and acceptance criteria.
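One possible reading of such a rule, sketched in Python under the assumption that any failure fails the feature and that "partial" outcomes are tolerated only up to a small, arbitrary fraction:

    def feature_passes(execution_statuses: list[str],
                       max_partial_fraction: float = 0.1) -> bool:
        """Assumed aggregation rule; the actual rule belongs in the test design
        specification."""
        if not execution_statuses or "failed" in execution_statuses:
            return False
        partial = execution_statuses.count("partial")
        return partial / len(execution_statuses) <= max_partial_fraction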

The data captured in the test execution log allows the progress of the testing to be checked and provides valuable information for finding out what caused an incident. This data, along with other test level specifications and reports, should be sufficient to reproduce individual tests, that is, to recreate the needed setup, execute the test and produce the same or similar results.

 

Detailed Test Results

Detailed test results, which a tester obtains after performing the test, are always documented along with the test case during the test execution phase. After performing the tests, the actual outcome is compared with the expected outcome and the deviations are noted. A deviation, if any, is known as a defect.

...

The test incident report is used to document any event that occurs during the testing process that requires investigation. A discrepancy between expected and actual results can occur because the expected results are wrong, the test was wrongly run, or due to inconsistent or unclear requirements, a fault or defect in the system, or a problem with the test environment. The report should provide all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. All other related activities, observations, and deviations from the standard test procedure should be included, as they may also help to identify and correct the cause of the incident. The report also includes, if possible, an assessment of the impact of the incident upon testing.

The test incident report needs to be a standalone document, so it provides some pieces of information that are already recorded in the corresponding test case and test execution log record.

The relationship between the Test Log and the Test Incident Report is not one to one. A failed test may raise more than one incident, while an incident may occur in more than one test failure. Taking the Billing project example, if both test cases completely failed, then three Test Incident Reports would be raised:

  • The first would be for failure to produce a normal bill,
  • The second would be for failure to produce a final bill,
  • The third would be for failure to calculate the volume discount for both the normal and the final bill.

The testers should, according to their knowledge and understanding, try to identify unique incidents and associate them with the tested features or originating test items. This gives a good indication of the quality of the system and its components, allows progress in fixing faults to be checked, and allows their improvement to be monitored.

A useful derivative document from the Test Incident Report is a Test Incident Log that summarises the incidents and their status. This is not an IEEE 829 document, as all its values can be derived from the Test Incident Reports.
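Since the incident log only summarises the incident reports, it can be derived mechanically. A minimal Python sketch, assuming each report carries hypothetical "id", "status" and "feature" fields:

    from collections import Counter

    def incident_log_summary(incident_reports: list[dict]) -> dict:
        """Summarise incident reports into a simple incident log."""
        return {
            "total": len(incident_reports),
            "by_status": dict(Counter(r["status"] for r in incident_reports)),
            "by_feature": dict(Counter(r["feature"] for r in incident_reports)),
        }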

Metadata

The test incident report ID is crucial, as it allows the report to be referenced in the test execution log and the issue tracking system. If one or several test incident reports are raised or updated during a single test execution, all of them need to be recorded in the test execution log. A minimal sketch of such a report record follows the field list below.

...

  • Test case ID or short identifying name*
  • Order of execution number*
  • Date and time
  • Testers
  • Associated requirement/feature/test items
  • Test procedure step - where the event occurred*
  • Test bed/facility
  • Environment information
  • Presets*
  • Inputs*
  • Expected results*
  • Actual results*
  • Anomalies - discrepancies, errors or faults that occurred
  • Attempts to repeat - whether they were made, how, and what was the outcome
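A minimal sketch of such a report record, assuming Python data classes; the field names mirror the list above, and the fields marked with * are optional so that the report can stand alone even when no specific test case is involved:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class TestIncidentReport:
        """Standalone incident record (illustrative structure)."""
        report_id: str                          # referenced from the test execution log
        occurred_at: datetime
        testers: list[str]
        related_items: list[str]                # requirements, features or test items
        anomalies: str                          # discrepancies, errors or faults observed
        repeat_attempts: str = ""               # whether/how repeated and the outcome
        test_case_id: str | None = None         # * omitted if not linked to a test case
        execution_order: int | None = None      # *
        procedure_step: str | None = None       # *
        presets: str | None = None              # *
        inputs: str | None = None               # *
        expected_results: str | None = None     # *
        actual_results: str | None = None       # *
        test_bed: str | None = None
        environment: dict[str, str] = field(default_factory=dict)
        impact_on_testing: str = ""             # assessment of impact, if known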

Test case related details (marked with *) will be omitted if the incident is not linked to a specific test case or scenario. In such situations, a detailed narrative description should be provided.

If an incident is the consequence of a fault or bug, the causing error may have occurred not in the failed execution, but in one that was run previously. In the case of apparently random incidents, the previously recorded executions and incidents should be checked in an attempt to recognise the causing pattern. All other related activities, observations, and deviations from the standard test procedure should be included, as they may also help to identify and correct the cause of the incident.

Impact

If known, indicate the impact of the incident on test plans, test design specifications, test case specifications or test procedures.