...

This "framework" outlines a structured approach to testing, which is aligned with the major standards in the field, such as ISO/IEC/IEEE 29119, IEEE 829 and BS 7925, complements ITIL recommendations and processes, but is at the same time in agreement line with the common language and practices in testing. It currently meant to support the GÉANT SA4 software service validation and testing process, but may grow into a tool for strengthening of the common ground between software developers and testers and the computer networking community, and . Such a tool can be used in consolidation of evaluations of various products, solutions and solutions developed and applied within the GÉANT the GÉANT project.

Streamlining of the testing process for software products has been the subject of significant standardisation effort in the last two decades. As software becomes an intrinsic part of almost every system, an effort has been made to align its validation with the needs, practices and experiences of many industries and different fields of engineering. However, this field is still rapidly evolving, and there is currently strong opposition to its formalisation. There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. The inherent fluidity of the environment and goals, and the complexity of tested systems, are likely to preclude any exhaustive codification.

...

The descriptions and outlines of documentation artifacts proposed here, and the related comments, are of an informative and guiding nature. This material does not provide any guidelines for the organisational or strategic levels of testing-related documentation. However, the top-level policy and strategy may define the overall approach to validation and testing on the basis of documents such as this one.

...

Test management primarily deals with the evaluation level documentation. The evaluation level artifacts are related to the testing of a specific service, product or solution, and are intended for communication between those responsible for test management processes (planning, monitoring and control, and completion assessment) and those who actually design, implement, execute and document the planned tests. The evaluation level documentation consists of:

...

The actual testing is initially specified through the development of the test plan. In order to produce the test plan, its authors need to familiarise themselves with the context in order to define the scope, organise the plan's development, identify and estimate risks, establish an approach towards them, design the strategy for the particular testing, and determine the staffing profile and schedule; after the plan is drafted, they need to establish consensus or obtain approval for it, and distribute it to all interested parties.

The stakeholders monitor the progress through test status and completion reports in order to appraise the test status and assess the related measures and metrics. They issue directives for corrective and adaptive actions on test design, environment and execution, which result in changes to the test level documentation. Ultimately, these measures may lead to modification of the test plan.

...

The enactment consists of test design and preparation, and the execution of all planned tests. Its processes produce and update the test level documentation.

...

Traceability is the key characteristic of both test management and test enactment processes and artifacts. Repeatability is highly desirable in tests related to core requirements and functionalities. For less important or more involved features and requirements, there is no need to maintain the repeatability of tests that were already performed, unless it is possible to fully automate the test execution, or repeatability is required for compliance or periodic audits.

...

All documents should start with the following common elements (metadata):

  • Organisational numeric identifier of the document (may be omitted if the document name is used as the identifier)
  • Descriptive and identifying name of the document
  • Name of testing (sub)project or use case, if not clear from document name
  • Document date
  • Version number
  • Author or authors and their contact information
  • Version history (table with version numbers, dates, contributors and descriptions of changes)
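
As an illustration, these common elements could also be captured in machine-readable form, which makes documents easier to validate and index. The following is a minimal sketch in Python; the class and field names are hypothetical and would need to be adapted to the organisation's conventions.

    from dataclasses import dataclass, field

    @dataclass
    class DocumentMetadata:
        name: str                # descriptive and identifying document name
        date: str                # document date, e.g. in ISO 8601 form
        version: str             # version number
        authors: list[str]       # author(s) and their contact information
        identifier: str | None = None  # numeric identifier; omitted if the name is used
        project: str | None = None     # testing (sub)project or use case, if needed
        # version history entries: (version, date, contributor, description of changes)
        history: list[tuple[str, str, str, str]] = field(default_factory=list)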

...

The test plan outlines the operational aspects of executing the test strategy for the particular testing effort. It provides an overview of what the system needs to meet in order to satisfy its intended use, user needs, or the scope of the intended testing effort, and of how the actual validation is to be conducted. The plan outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and their allocation, methodologies, and the schedule of the testing effort. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and tracking and reporting processes. The test plan may account for one or several test suites, but it does not detail individual test cases.

As a pivotal document, the test plan is an instrument of mutual understanding between the testing team, development teams and management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may lead to further changes in the documents that specify the test design, test cases, and data and environment requirements. Given the multifaceted and overarching nature of the test plan, and in order to avoid unnecessary backpropagation of changes, it should not over-specify the implementation details that are to be articulated in subordinate test level documents.

...

The test status report is a one-time interim summary of the results of the execution of testing activities. It may describe the status of all testing activities or be limited to a single test suite. This report, like the test summary report, distils the information obtained during test execution and recorded in test logs and incident reports. It must be highly informative and concise, and should not elaborate on minor operational details.

Depending on the approach defined in the test plan, it may be produced periodically, on completion of milestones or phases, or on demand. If produced periodically, it may serve as a basis for continuous tracking through progress charts. Individual status reports are also sources for the test completion report.

The report should start with the standard metadata in condensed form, but without the version history, since the document itself is considered to be a quick one-time snapshot.

Summary

The document can start with the totals of passed, failed and pending test cases, scenarios or tests, and of identified defects (if individually tracked). It may also show the coverage of test cases or code, the consumption of resources, and other established progress metrics, in numbers and, if possible, charts. Close to this summary, a comprehensive assessment should be provided.
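
Such totals can be derived directly from the execution records rather than counted by hand. The sketch below is only an illustration, assuming each record carries a status field with the values used here; all names are hypothetical.

    from collections import Counter

    def summarise_status(records):
        """Tally passed/failed/pending outcomes and a basic progress metric."""
        totals = Counter(r["status"] for r in records)
        executed = totals["passed"] + totals["failed"]
        progress = executed / len(records) if records else 0.0
        return {
            "passed": totals["passed"],
            "failed": totals["failed"],
            "pending": totals["pending"],
            "progress": round(progress, 2),  # share of planned cases executed
        }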

...

If needed, the document provides evaluations and recommendations based on the interim results and incidents encountered since the previous status report. It may also signal red flags that deserve the recipients' attention. It should report the resolution of issues that were highlighted in the previous status report, but there is no need to report issues that were already reported as solved.

Activities

The summary of activities conducted since the previous status report is optional. If present, it should appear in all status reports.

...

The test completion or summary report is a management report that brings together the key information uncovered by the accomplished tests. It recapitulates the results of the testing activities and indicates whether the tested system is fit for purpose, according to whether it has met the acceptance criteria defined in the project plan. This is the key document in deciding whether the quality of the system and of the performed testing are sufficient to allow taking the decision or step that was the reason for the testing effort. Although the completion report provides a working assessment of the success or failure of the system under test, the final decision is made by the evaluation team.

...

It recapitulates the evaluation of the test items. Although the test completion report can reflect the structure of the test status report, the details that were only temporarily significant can be omitted from it. The table rows or subsections should correspond to the test items or scenarios listed in the test design specification. The summary should indicate the versions of the items that were tested, as well as the testing environment used. For each item, it should briefly explain what was tested and what the outcome was.

...

This section refines the approach described in the test plan. It provides the details of the included test levels and explains how the individual features are addressed at those levels.

...

If the test design includes some deviations from the test plan, they have to be described here.

Test Cases

...

The test case refines the criteria that need to be met in order to consider some system feature, set of features or use case as working. It is the smallest unit of testing and is sometimes colloquially referred to as a Test. A single test case may be included in several test suites or related to a requirement associated with several use cases. If different test levels have separate test design specifications, a single test case may be present in several design specifications.
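
These relationships can be made explicit if test cases are kept as structured records. A minimal sketch with hypothetical names: the same case can then be referenced from several suites and design specifications without duplication.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str        # unique across suites and design specifications
        feature: str        # feature, feature set or use case being verified
        steps: list[str]    # procedure steps to execute
        expected: str       # expected outcome that defines a pass
        requirements: list[str] = field(default_factory=list)  # traced requirements
        suites: list[str] = field(default_factory=list)        # suites that include it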

...

Test case specifications provide more elaborate details about the data needed and should be sufficient to support the actual collection or generation of adequate test data. During the second stage, the selected tools are used to prepare these data for the execution of all use cases, including their injection into the system or the interactions during the test procedure steps. At the same time, the corresponding expected test outputs are defined and, if possible, automated methods for comparing the baseline test data against the actual results. The limitations of the test data and supporting tools are identified, along with an explanation of how to mitigate them during test execution and in the interpretation and evaluation of results. Finally, measures that ensure the usability and relevancy of test data throughout the testing process need to be put in place. They include data maintenance, for example to support changes in the system, but also data versioning and backup. The decisions and knowledge produced during the preparation of the test data are captured.
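
Where the expected outputs can be stored as data, the comparison of baseline and actual results can be automated. A minimal sketch, assuming results are representable as JSON; the file layout and function name are illustrative only.

    import json

    def compare_with_baseline(actual, baseline_path):
        """Compare actual results against the stored expected baseline.

        Returns (passed, mismatches), where mismatches lists the differing keys.
        """
        with open(baseline_path) as f:
            expected = json.load(f)
        mismatches = [key for key in expected if actual.get(key) != expected[key]]
        return (not mismatches, mismatches)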

...

The requirements for the test bed implied by the test plan, test design specification and individual test cases are put together, and the initial test environment setup is designed. The test bed requirements related to the test level, system features and requirements, test items, test data, testing scenarios and procedures, and the chosen support, measurement and monitoring tools are brought together. Security, safety and regulatory concerns are also considered. Policies and arrangements for sharing the test bed and allocated resources with other teams or users are established.
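
Keeping the collected requirements in a single machine-readable description of the test bed can help in reviewing them and in agreeing sharing arrangements. The structure and all values below are purely illustrative.

    testbed = {
        "test_level": "system",
        "test_items": ["service-api 2.1"],        # items and versions under test
        "tools": ["load-generator", "monitoring-agent"],
        "test_data": ["baseline/users.json"],
        "constraints": {                          # security, safety, regulatory
            "isolated_network": True,
            "anonymised_data": True,
        },
        "sharing": {                              # arrangements with other teams
            "reserved_for": "validation team",
            "schedule": "working days 09:00-17:00",
        },
    }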

...

The test execution log is the record of test case executions and the obtained results, in the order of their running. Along with the test incident reports, the test execution log is a basis for the test status and completion reports. These documents allow direct checking of the progress of the testing and provide valuable information for finding out what caused an incident.

This log provides a chronological record of the relevant details about the execution of tests, recording which test cases were run, who ran them, in what order, and whether the tests passed or failed. A test passed if the actual and expected results were identical; it failed if there was a discrepancy. If the test design specification permits classifying the execution status as "partial", it must also clarify how such outcomes are treated within the feature pass/fail criteria and acceptance criteria.
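
A chronological log with these details could be maintained as follows; a minimal sketch with hypothetical field names, where appending preserves the running order.

    from datetime import datetime, timezone

    def log_execution(log, case_id, tester, status):
        """Append one execution record; the list order is the running order."""
        assert status in ("passed", "failed", "partial")
        log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "tester": tester,
            "status": status,  # identical results: passed; discrepancy: failed
        })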

...

The test incident report is used to document any event that occurs during the testing process and requires investigation. A discrepancy between expected and actual results can occur because the expected results are wrong, the test was wrongly run, or because of inconsistent or unclear requirements, a fault or defect in the system, or a problem with the test environment. The report should provide all details of the incident, such as the actual and expected results, when the test failed, and any supporting evidence that will help in its resolution. All other related activities, observations and deviations from the standard test procedure should be included, as they may also help to identify and correct the cause of the incident. The report should also include, if possible, an assessment of the impact of the incident upon testing.

The test incident report needs to be a standalone document, so it repeats information that is already recorded in the corresponding test case and test execution log record.

A failed test may raise more than one incident, while the same incident may occur in more than one test failure. The testers should, according to their knowledge and understanding, try to identify unique incidents and associate them with the tested features or originating test items. This will provide a good indication of the quality of the system and its components, and allow monitoring of their improvement.
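
To keep the report standalone while preserving this traceability, an incident record can duplicate the key facts and reference the originating artifacts. A minimal sketch with hypothetical names; a single incident may point to several failed executions.

    from dataclasses import dataclass, field

    @dataclass
    class IncidentReport:
        incident_id: str
        summary: str      # what happened and when
        expected: str     # expected result, copied from the test case
        actual: str       # actual result, copied from the log record
        affected_features: list[str] = field(default_factory=list)
        failed_executions: list[str] = field(default_factory=list)  # log record ids
        impact: str = ""  # assessment of impact upon testing, if possible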

...