...

Standards, tools, and best practices should be applied judiciously, adapted to the specific case, goal, available resources, and local culture. This document is intended as support for establishing common ground, as a customisable toolbox, and as a supporting guide. It relates to the existing GN documents, testing standards, and software testing practices. The recent standardisation efforts build upon practices and experiences from different fields of engineering and industry; there is no need to reinvent the wheel when existing expertise, tools, practices, and processes can be used. Actual testing, even with automated execution, is designed and performed under human supervision.

There is no guarantee that any specification can ensure a fully repeatable testing practice applicable in all possible contexts. The inherent fluidity of the environment and goals, and the complexity of the tested system, are likely to preclude any exhaustive formalisation.

Avoid an over-administrative approach; there is no one-size-fits-all. Schematic, formal application of standards or procedure templates "by the book", this document included, risks petrifying, bureaucratising, and over-prescribing the validation and testing process, and impeding innovation. What matters is quality. However, running production-level services may require formalised processes, traceability, or even auditability of validation, and particularly of check procedures.

Generic Testing Process Diagram

General Testing and Validation Governance

XXX This document provides GN and, more specifically, GN4.1 SA4 with a generalised technical specification for testing and validation. It can be seen as a contribution to the general specification of the GN approach to validation and testing, on the basis of the overall testing policy (for implementation and management of services - any sources/refs?), and as a (tentative?) technical expression of the shared testing strategy.

...

Traceability is the key characteristic of both Evaluation and Test Enactment related artifacts. Repeatability is required for tests related to core requirements and functionality. There is no need to repeat tests, or to maintain their repeatability, once they have been completed, unless the test execution can be fully automated or repetition is required for compliance or periodic audits.

...

It recapitulates the evaluation of the test items. Although the test completion report can reflect the structure of the test status report, details that were only temporarily significant can be omitted from it. The table rows or subsections should correspond to the test items or scenarios listed in the test design specification. The summary should indicate the versions of the items that were tested, as well as the testing environment used. For each item, it should briefly explain what was tested and what the outcome was.

...

A requirement is a description of a necessary capability, feature, functionality, characteristic or constraint that the system must meet or be able to perform. It is a statement that identifies a necessary quality of a system for it to have value and utility to a user, customer, organization, or other stakeholder. It is necessary for the fulfillment of one or several use cases or usage scenarios (in scenario testing).

...

The test case specifications are produced after the test design specification is prepared. The test case specification is a detailed elaboration of a test case identified in the test design specification; it includes a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. A single test case is sometimes associated with several requirements. It may be partially or fully automated.
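A test case specification of this kind can be captured as structured data. The following sketch is illustrative only; the field names (case ID, covered requirements, steps, expected results, automation flag) are assumptions, not mandated by this document or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal sketch of a test case specification record.

    Field names are illustrative, not taken from any standard.
    """
    case_id: str
    title: str
    requirement_ids: list                              # one case may cover several requirements
    preconditions: list = field(default_factory=list)  # preparation required before the test
    steps: list = field(default_factory=list)          # ordered actions to perform
    expected_results: list = field(default_factory=list)
    automated: bool = False                            # may be partially or fully automated

# Hypothetical example record
tc = TestCase(
    case_id="TC-042",
    title="Login with valid credentials",
    requirement_ids=["REQ-7", "REQ-12"],   # a single case associated with two requirements
    preconditions=["test user account exists"],
    steps=["open login page", "enter valid credentials", "submit"],
    expected_results=["user dashboard is displayed"],
)
print(tc.case_id, len(tc.requirement_ids))
```

Keeping the covered requirement identifiers on each case is one simple way to support the traceability discussed earlier.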

...

The test case typically comprises several steps that are necessary to assess the tested functionality. The described steps should include all necessary actions, including those assumed to be part of common knowledge.

...

Test case specifications provide more elaborate details about the data needed and should be sufficient to support the actual collection or generation of adequate test data. During the second stage, the selected tools are used to prepare these data for the execution of all use cases, including their injection into the system or interactions during the test procedure steps. At the same time, the corresponding expected test outputs are defined and, if possible, automated methods for comparing the baseline test data against actual results. The limitations of the test data and supporting tools are identified, along with how to mitigate them during the test execution and in the interpretation and evaluation of the results. Finally, measures that ensure the usability and relevancy of the test data throughout the testing process need to be conducted. They include data maintenance, for example to support changes in the system, but also data versioning and backup. The decisions and knowledge produced during the preparation of the test data are captured in the second stage of writing of this report.
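An automated comparison of baseline test data against actual results can be as simple as a field-by-field diff that records every deviation. This is a minimal sketch under assumed dictionary-shaped data; real test tools usually offer much richer comparison and reporting:

```python
def compare_results(expected, actual):
    """Compare expected outputs against actual outputs, field by field.

    Returns a list of deviation records; an empty list means the
    actual results matched the baseline.
    """
    deviations = []
    for key, exp in expected.items():
        act = actual.get(key, "<missing>")
        if act != exp:
            deviations.append({"field": key, "expected": exp, "actual": act})
    return deviations

# Hypothetical baseline and captured output
expected = {"status": "200", "items": 3}
actual = {"status": "200", "items": 2}
print(compare_results(expected, actual))
```

The resulting deviation list is exactly the kind of trace that the detailed test results and incident reports described below can build on.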

Test Environment Report

This document describes the test environment. It is typically produced in two stages.

First, the requirements for the test bed implied by the test plan, test design specification and individual test cases are put together, and the initial test environment setup is designed. The test bed requirements related to the test level, system features and requirements, test items, test data, testing scenarios and procedures, and the chosen support, measurement and monitoring tools are put together. Security, safety and regulatory concerns are also considered. Policies and arrangements for sharing the test bed and allocated resources with other teams or users are established.

The initial test bed design can be a simple deployment diagram or a test bed implementation project, but it should cover all elements of the setup, including hardware, software, network topology and configuration of hardware, external equipment, system software, other required software, test tools, the system under test and the individual test items. If some needed components are not immediately available, a staged implementation schedule or workarounds need to be devised. A walkthrough of at least the most important requirements and test cases needs to be performed in order to validate the proposed design.

Test Environment or Test Bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, test tools, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

The report is updated after the test environment is set up. A smoke test can be performed. The limitations of the test environment and supporting tools are identified, along with how to mitigate them during the test execution and in the interpretation and evaluation of the results. The maintenance plans, responsibilities and arrangements are established. If envisioned in the test plan, this includes upgrades of current versions of test items, upgrading of used resources and network topology, and updating of corresponding configurations. The initial design is updated to reflect the test environment as it was built. The decisions and knowledge produced during the implementation of the test bed are captured.

Test Execution Log

The test execution log is the record of test case executions and the obtained results, in the order of their running. Along with the test incident reports, the test execution log is a basis for the test status and completion reports. These documents allow direct checking of the progress of the testing and provide valuable information for finding out what caused an incident.

...

Detailed Test Results

Detailed test results are the actual outputs, assertions, and system and monitoring logs produced during the execution of tests. They are documented along with the test case during the test execution phase: after the tests are performed, the actual outcome is compared with the expected outcome and the deviations are noted. They should be at least paired with the corresponding test execution log records. Their format may depend on the test tools used to capture them. In addition, the detailed results may encompass the reports produced by test automation tools that compare the baseline test data against the actual results and highlight all noted deviations. Such reports are valuable traces of how the obtained results measure up to the expected results (postconditions, states and outputs) under the corresponding evaluation criteria, and can be used in assessing the execution status and in writing the test execution log and test incident reports.
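Applying evaluation criteria to a list of noted deviations can be sketched as deriving a verdict, where some deviations may be tolerated by the criteria. The `tolerated` parameter is an illustrative simplification of what real evaluation criteria might express:

```python
def verdict(deviations, tolerated=()):
    """Derive an execution verdict from a list of deviation records.

    `tolerated` names fields whose deviations are acceptable under the
    evaluation criteria (an assumed, simplified representation).
    """
    blocking = [d for d in deviations if d["field"] not in tolerated]
    return "pass" if not blocking else "fail"

# Hypothetical deviation noted during execution
devs = [{"field": "latency_ms", "expected": 100, "actual": 120}]
print(verdict(devs, tolerated=("latency_ms",)))  # deviation tolerated by criteria
print(verdict(devs))                             # same deviation, strict criteria
```

The derived verdict is what gets written into the test execution log, while the deviation records themselves remain in the detailed results as supporting evidence.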

Test Incident Report

The test incident report is used to document any event that occurs during the testing process and requires investigation. A discrepancy between expected and actual results can occur because the expected results are wrong, the test was wrongly run, or due to inconsistent or unclear requirements, a fault or defect in the system, or a problem with the test environment. The report should provide all details of the incident, such as the actual and expected results, when it failed, and any supporting evidence that will help in its resolution. All other related activities, observations, and deviations from the standard test procedure should be included, as they may also help to identify and correct the cause of the incident. The report also includes, if possible, an assessment of the impact of the incident upon testing.
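The fields such a report typically carries can be summarised as a record structure. The field names and the example values below are illustrative assumptions, not taken from this document or any specific standard:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IncidentReport:
    """Sketch of the fields a test incident report typically carries."""
    incident_id: str
    case_id: str               # test case during which the incident occurred
    observed: str              # actual result
    expected: str              # expected result
    occurred_at: str           # when it failed
    evidence: list = field(default_factory=list)   # logs, screenshots, dumps
    impact_on_testing: Optional[str] = None        # assessment, if possible

# Hypothetical incident record
ir = IncidentReport(
    incident_id="IR-7",
    case_id="TC-043",
    observed="HTTP 500",
    expected="HTTP 200",
    occurred_at="2024-05-01T10:12:00Z",
    evidence=["server.log excerpt"],
    impact_on_testing="blocks dependent test cases",
)
print(ir.incident_id, ir.case_id)
```

Keeping the triggering test case identifier on the incident record links it back to the test execution log, which is what makes the incident traceable during resolution.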

...