Testing Framework

Table of Contents

...

Introduction

Testing is a cognitive, critical and often creative process. It builds upon experience, discipline and persistence, but also benefits from thinking outside the box, and from the wise application of standards, tools and best practices, adapted to a specific case, goal, available resources and local culture.

This document serves as a support for establishing a common ground, a customisable toolbox and a supporting guide. It builds upon the existing GN documents, testing standards and software testing practices, acknowledging that the recent standardisation efforts draw upon practices and experiences from different fields of engineering and industries. It does not reinvent the wheel, but uses the existing expertise, tools, practices and processes. Actual testing, even with automated execution, is designed and performed by humans.

Figure: Generic Testing Process Diagram

...

This framework outlines a structured approach to testing that is aligned with the major standards in the field, such as ISO/IEC/IEEE 29119, IEEE 829 and BS 7925, and complements ITIL recommendations and processes, while remaining in line with the common language and practices in testing. It is currently meant to support the GÉANT SA4 software service validation and testing process, but may grow into a tool for strengthening the common ground between software developers, testers and the computer networking community. Such a tool can be used to consolidate the evaluations of the various products and solutions developed and applied within the GÉANT project.

Tailoring Approach to Testing

Streamlining of the testing process for software products has been the subject of a significant standardisation effort over the last two decades. As software becomes an intrinsic part of almost every system, a testing effort has to align its validation with the needs, practices and experiences of many industries and different fields of engineering. However, this field is still rapidly evolving, and there is currently strong opposition to its formalisation. There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. Often, the inherent fluidity of the environment and goals, combined with the complexity of tested systems, is likely to preclude any exhaustive codification.

This material should be seen as a highly customisable toolkit. It is intended to prevent reinventing the wheel, and to digest the existing expertise and practices. An over-administrative approach must be avoided, as there is no single size that fits all. Schematic and formal application of the standards, procedures or templates “by the book” (this document included) may petrify, bureaucratise and over-prescribe the validation and testing process and impede innovation. What matters is the quality. Actual testing, even with automated execution, is designed and supervised by humans. On the other hand, the running of production-level services does require formalised processes, traceability or even auditability of validation, and unification of control procedures.

General Testing and Validation Governance

Although this document can be seen as a contribution to the overall GÉANT approach to validation and testing, it is beyond its scope to discuss the general validation and testing policy and strategy for implementation and management of services. Such policy and strategy may be further discussed and refined by the project management and stakeholders.

Both the strategy and the supporting guidelines should be periodically reviewed and updated on the basis of the experience from actual evaluations, organisational and environmental changes, and the evolution of general policies and goals.

As a technical approach to validation and testing, this material supports the practical organisation and documentation of individual evaluations. Each evaluation consists of two groups of processes: test management and test enactment.

The descriptions and outlines of the documentation artifacts proposed here, and the related comments, are of an informative and guiding nature. This material does not provide guidelines for the organisational or strategic level of testing-related documentation. However, the top-level policy and strategy may define the overall approach to validation and testing on the basis of documents such as this one.

Test Management

Test management primarily deals with the evaluation level documentation. The evaluation level artifacts are related to the testing of a specific service, product or solution, and are intended for communication between those responsible for the test management processes (planning, monitoring and control, and completion assessment) and those who actually design, implement, execute and document the planned tests. The evaluation level documentation consists of:

  • Test Plan
  • Test Status Report
  • Test Completion Report

The actual testing is initially specified through the development of the test plan. In order to produce the test plan, its authors need to familiarise themselves with the context in order to define the scope, organise the development of the test plan, identify and estimate risks, establish an approach towards them, design the strategy for the particular testing, and determine the staffing profile and schedule; after the plan is drafted, they need to establish consensus on it or get it approved, and distribute it to all interested parties.

The stakeholders monitor the progress through test status and completion reports in order to appraise the test status and assess the related measures and metrics. They issue directives for corrective and adaptive actions on test design, environment and execution, which result in changes in the test level documentation. Ultimately, these measures may lead to a modification of the test plan.

The test enactment may also provide some suggestions or inputs for updates of the general approach and strategy on testing and validation. It is up to those responsible for test management to further articulate and propagate such initiatives.

Test Enactment

The enactment consists of test design and preparation and execution of all planned tests. Its processes produce and update the test level documentation.

Test design includes the specification of requirements with the conditions for their fulfilment. The conditions may also be contextual and include the repeatability of tests or the ability to present the results. It is followed by the specification of individual test cases and of the procedure for their execution. The subsequent steps are test environment set-up, test execution and the related reporting. The feedback from environment set-up and execution may lead to an update of the original design. For example, constraints in the capabilities of the test environment may lead to an update of the test cases. Alternatively, the actual preparation of test cases and environment may require additional elaboration and refinement of the test cases. On the other hand, the significance or priority of some requirements may lead to a modification of the test environment in order to enable the execution of the corresponding test cases. Both may lead to a modification of the test procedure.
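To make the relationship between requirements, test cases and execution records more concrete, the following minimal Python sketch models a test case specification and one execution log entry. All identifiers (TC-003, REQ-007) and field names are illustrative assumptions, not part of any mandated template:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestCase:
    # Hypothetical fields; adapt to the test case specification in use.
    case_id: str              # e.g. "TC-003"
    name: str
    requirement_refs: list    # requirement IDs this case traces back to
    steps: list = field(default_factory=list)
    expected_result: str = ""

def log_execution(case: TestCase, status: str) -> dict:
    """Produce one test execution log entry for a finished run."""
    assert status in {"not run", "failed", "partial", "passed"}
    return {
        "case_id": case.case_id,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "status": status,
        "requirements": case.requirement_refs,
    }

tc = TestCase("TC-003", "Login with expired password", ["REQ-007"])
entry = log_execution(tc, "passed")
```

Keeping the requirement references inside each logged entry is what later allows the status and completion reports to be assembled without revisiting the design documents.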

Figure: Test Enactment Processes Diagram

The test level documentation is produced by the testing team. It refines the practical details of the design and execution of the work specified in the test plan and captures all relevant information gathered during testing in order to support the evaluation level reports. It consists of:

  • Test Design Specification
  • Test Case Specification
  • Test Data Report
  • Test Environment Report
  • Test Execution Log
  • Detailed Test Results
  • Test Incident Report

Traceability is the key characteristic of both test management and test enactment processes and artifacts. Repeatability is highly desirable in tests related to core requirements and functionalities. For less important features and requirements, there is no need to maintain repeatability of tests that were already done, unless it is possible to fully automate the test execution, or if it is required for compliance or periodic audits.
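Once requirements and test cases carry stable identifiers, the traceability described above can be checked mechanically. A minimal sketch, assuming hypothetical REQ-/TC- identifiers rather than any prescribed scheme:

```python
# Which requirements have no covering test case?
# The IDs below are illustrative placeholders.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Test case -> requirements it covers (from the test design specification).
coverage = {
    "TC-001": {"REQ-001"},
    "TC-002": {"REQ-001", "REQ-003"},
}

def uncovered(requirements, coverage):
    """Return the sorted requirement IDs not covered by any test case."""
    covered = set().union(*coverage.values()) if coverage else set()
    return sorted(requirements - covered)

print(uncovered(requirements, coverage))  # ['REQ-002']
```

A check like this is most useful for the core requirements whose tests are to remain repeatable.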

Test Documentation

The documentation described below provides a hierarchy of artifacts that is aligned with common industry practices and is generally compatible with the IEEE 829 standard and its successor ISO/IEC/IEEE 29119-3, which even provides detailed templates for both traditional (sequential and iterative) and agile approaches.

Metadata

All documents should start with the following common elements (metadata):

  • Organizational numeric identifier of the document - it may be omitted if the document name is used as the identifier
  • Descriptive and identifying name of the document
  • Name of testing (sub)project or use case, if not clear from document name
  • Document date
  • Version number
  • Author or authors and their contact information
  • Version history (table with version numbers, dates, contributors and change descriptions)

Version histories, dates and authors are not needed if the master documents are kept up to date in an already highly versioned environment, such as a CMS or a wiki. However, the main or corresponding author and the document date need to be visible in self-contained standalone snapshots that are published on the web or shared by email.
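The common metadata elements listed above can be captured as a simple structure. The field names below, and the sample values, are illustrative only and do not constitute a mandated schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VersionEntry:
    version: str
    date: str
    contributor: str
    changes: str

@dataclass
class DocumentMetadata:
    name: str                      # descriptive and identifying document name
    date: str                      # document date
    version: str
    authors: List[str]             # authors and their contact information
    doc_id: Optional[str] = None   # may be omitted if the name is the identifier
    project: Optional[str] = None  # testing (sub)project or use case
    history: List[VersionEntry] = field(default_factory=list)

# A hypothetical test plan document:
meta = DocumentMetadata(
    name="Example Service Test Plan",
    date="2016-03-01",
    version="1.1",
    authors=["J. Doe <j.doe@example.org>"],
    history=[VersionEntry("1.0", "2016-02-01", "J. Doe", "Initial draft")],
)
```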

References/Supporting Documents/Literature

The list of documents with their identifiers, names, version numbers and hyperlinks to individual documents should be provided.

At least the reference to the project plan should be present in its subordinate documents. There is no need to reference other documents that represent the common background that is already listed in the project plan. Only the references that are crucial for understanding need to be provided in lower level documents, as well as those that point to the relevant external documentation that is not already referenced in higher level documents.

Test Documents

...

Test Plan

The test plan outlines the operational aspects of executing the test strategy for the particular testing effort. It provides an overview of the requirements the system needs to meet in order to satisfy its intended use, user needs or the scope of the intended testing effort, and of how the actual validation is to be conducted. The plan outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and the amount of their allocation, as well as the methodologies and schedule of the testing effort. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and tracking and reporting processes. The test plan may account for one or several test suites, but it does not detail individual test cases.

As a pivotal document, the test plan is an instrument of mutual understanding between the testing team, the development teams and the management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may lead to further changes in the documents that specify the test design, test cases, and data and environment requirements. Given the multifaceted and overarching nature of the test plan, and in order to avoid unnecessary backpropagation of changes, it should not over-specify the implementation details that are to be articulated in the subordinate test level documents.

...

The descriptive name should briefly, in three to six words, express which system is tested, the target aspects, features or components, and the level, type or purpose of the conducted testing.

It should be immediately followed by a separate description of the testing level (unit, integration, system, acceptance) and/or type or subtype (functional, non-functional, alpha, beta, performance, load, stress, usability, security, conformance, compatibility, resilience, scalability, volume, regression…).

References/Supporting Documents

All documents that support the test plan should be listed. They may include:

  • Project plan
  • Product plan
  • Related test plans
  • Requirements specifications
  • High level design document
  • Detailed design document
  • Development and testing standards
  • Methodology guidelines and examples
  • Organisational standards and guidelines
  • Source code, documentation, user guides, implementation records

...

Key terms and acronyms used in the document, the target domain and the testing are described here. The glossary facilitates communication and helps in avoiding confusion.

Introduction

This is the executive summary part of the plan which summarises its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects and collateral benefits or drawbacks. This section should be brief and to the point.

...

The purpose of this section is to list the individual features, with their significance or risk from the user perspective. It is a listing of what is to be tested from the users’ viewpoint in terms of what the system does. The individual features may be operations, scenarios and functionalities that are to be tested across all or within individual tested sub-systems. Features may be rated according to their importance or risk.

An additional list of the features that will not be tested may be included, along with the reasons. For example, it may be explained that a feature will not be available or completely implemented at the time of testing. This may prevent possible misunderstandings and the wasting of effort in tracking defects that are not related to the plan.

Together with the list of test items, this section describes the scope of testing.

Test Items

This is the description, from the technical point of view, of the items to be tested: hardware, software and their combinations. Version numbers and configuration requirements may be included where needed, as well as the delivery schedule for critical items.

This section should be aligned with the level of the test plan, so it may itemise applications or functional areas, or systems, components, units, modules or builds.

For some items, their critical areas or the associated risks may be highlighted, such as those related to origin, history, recent changes, novelty, complexity, known problems, documentation inadequacies, failures, complaints or change requests. There probably were some general concerns and issues that triggered the testing process, such as a history of defects, poor performance, changes in the team and so on, that could be directly associated with some specific items. Other concerns that need to be mentioned may be related to safety, importance and impact on users or clients, regulatory requirements, etc. The key concern may also be a general misalignment of the system with its intended purpose, or vague, inadequately captured or understood requirements.

The items that should not be tested may also be listed.

Approach

This section describes the strategy of the test plan that is appropriate for its level and is in agreement with other related plans. It may extend the background and contextual information provided in the introduction.

...

  • Detailed and prioritised objectives
  • Scope (if not fully defined by the lists of items and features)
  • Tools that will be used
  • Needs for specialised trainings (on testing, used tools or the system)
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other assumptions, requirements and constraints
  • Overall organisation and timing schedule of the internal processes, phases, activities and deliverables
  • Internal and external communication and organisation of the meetings
  • Number and kinds of test environment configurations (for example, for different testing levels/types)
  • Configuration management for the tested system, used tools and test environment
  • Change management

...

Besides the testing tools that interact with the tested system, other tools may be needed, such as those used to match and track scenarios, requirements, test cases, test results, defects and issues, and acceptance criteria. They may be manually maintained documents and tables, or tools specialised for testing support.

Some assumptions and requirements must be satisfied before the testing is even started. Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted. They may include special hardware, supporting software, test data to be provided, or restrictions in the use of the system during the testing.

...

The discussion of change management should define how to manage the changes of the testing process that may be caused by the feedback from the actual testing or by external factors. This includes handling the consequences of defects that affect further testing, dealing with requirements or elements that cannot be tested, and dealing with the parts of the testing process that may be recognised as useless or impractical.

...

The exit criteria for the testing are also defined; they may be based on the achieved level of completion of the tests, on a number and severity of defects sufficient for aborting the testing, or on code coverage. Some exit criteria may be bound to a specific critical functionality, component or test case. The evaluation team may also decide to end the testing on the basis of the available functionality, detected or cleared defects, produced or updated documentation and reports, or the progress of testing.
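Exit criteria of this kind can be expressed as a simple predicate. The thresholds below are illustrative assumptions, not values mandated by this framework:

```python
def exit_criteria_met(completion, open_blockers, coverage,
                      min_completion=0.95, max_open_blockers=0,
                      min_coverage=0.80):
    """Illustrative exit check: test completion ratio, open blocking
    defects and code coverage, each against a configurable threshold."""
    return (completion >= min_completion
            and open_blockers <= max_open_blockers
            and coverage >= min_coverage)

exit_criteria_met(0.97, 0, 0.85)   # True: all thresholds met
exit_criteria_met(0.97, 1, 0.85)   # False: a blocking defect is still open
```

Criteria bound to a specific critical functionality or test case would be added as further conjuncts of the same predicate.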

...

This section describes what is produced by the testing process. The deliverables may be the subject of quality assessment before their final approval or acceptance and, besides all the elements of the test documentation that are described here, may also include the test data used during testing, test scripts, code for the execution of tests in testing frameworks, and the outputs from test tools.

...

This is the specification of the staff profiles and skills needed to deliver the plan. Depending on the profile of the personnel, it should also detail the needed trainings on the tested system, the elements of the test environment and the test tools.

...

This section specifies the personal responsibilities for the approvals, processes, activities and deliverables described by the plan. It may also detail the responsibilities in the development and modification of the elements of the test plan.

...

The schedule of phases should be detailed to the level of certainty that is attainable with the information available at the moment of planning. It should define the timing of the individual testing phases, milestones, activities and deliverables, and be based on realistic and validated estimates, particularly as the testing is often interwoven with the development. Testing is the most likely victim of slippage in the upstream activities, so it is a good idea to tie all test dates directly to the completion dates of their related developments. This section should also define when the test status reports are to be produced (continuously, periodically, or on demand).

Risks and Contingencies

This section, which complements “Suspension Criteria and Resumption Requirements”, defines all risk events, their likelihood, impact and the countermeasures to overcome them. Some risks may be testing-related manifestations of the overall project risks. Examples of risks are the lack or loss of personnel at the beginning of or during testing, the unavailability or late delivery of required hardware, software, data or tools, delays in training, or changes to the original requirements or designs.

...

The approach for dealing with schedule slippages should be described in responses to associated risks. Possible actions include simplification or reduction of non-crucial activities, relaxation of the scope or coverage, elimination of some test cases, engagement of additional resources, or extension of testing duration.

Test Status Report

The test status report is a one-time interim summary of the results of the execution of tests. It may describe the status of all testing activities or be limited to a single test suite. This report, as well as the test completion report, distils the information obtained during test execution and recorded in the test logs and incident reports. It must be highly informative and concise, and should not elaborate on minor operational details.

Depending on the approach defined in the test plan, it may be produced periodically, on completion of milestones or phases, or on demand. If periodic, it may be a base for continuous tracking through progress charts. Individual status reports are also sources for the test completion report.

The report should start with the standard metadata in a condensed form, but without the version history, since the document itself is considered to be a quick one-time snapshot.

Summary

The document can start with the totals of passed, failed and pending test cases, scenarios or tests, and of the identified defects (if individually tracked). It may also show the coverage of test cases or code, the consumption of resources and other established progress metrics, in numbers and, if possible, charts. Close to this summary, a comprehensive assessment should be provided.

...

  • Test case ID
  • Test case name
  • Date and time of the last execution
  • The last execution status (not run, failed, partial, passed) or counts of failed and passed test executions
  • Number of associated defects
  • Brief comment

If there are more than a dozen items, consider grouping and sub-totalling the table rows according to the test suites, test items or scenarios (as listed in the project plan), test types or areas. If a single report addresses several suites, each should have a separate test status report, or at least its own totals and details tables.
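The totals and the per-suite subtotals of such a details table can be derived directly from the execution records. A sketch in Python, with hypothetical records whose "suite" and "status" fields mirror the table columns described above:

```python
from collections import Counter

# Hypothetical execution records.
records = [
    {"suite": "auth", "case": "TC-001", "status": "passed"},
    {"suite": "auth", "case": "TC-002", "status": "failed"},
    {"suite": "api",  "case": "TC-010", "status": "passed"},
    {"suite": "api",  "case": "TC-011", "status": "not run"},
]

# Overall totals for the summary section.
totals = Counter(r["status"] for r in records)

# Per-suite subtotals for a grouped details table.
subtotals = {}
for r in records:
    subtotals.setdefault(r["suite"], Counter())[r["status"]] += 1

print(totals["passed"])             # 2
print(subtotals["auth"]["failed"])  # 1
```

Periodic reports built this way can also feed progress charts by tracking the totals over time.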

...

If needed, the document provides evaluations and recommendations based on the interim results and the incidents encountered since the previous status report. It may also signal red flags that deserve the recipients’ attention. It should report the resolution of the issues that were highlighted in the previous status report, but there is no need to repeat the issues that were already reported as solved.

Activities

The summary of the activities conducted since the previous status report is optional. If present, it should exist in all status reports.

Test Completion Report

The test completion or summary report is a management report that brings together the key information uncovered by the accomplished tests. It recapitulates the results of the testing activities and indicates whether the tested system has met the acceptance criteria defined in the project plan. This is the key document in deciding whether the quality of the system and of the performed testing is sufficient to allow taking the decision or step that was the reason for the testing effort. Although the completion report provides a working assessment of the success or failure of the system under test, the final decision is made by the evaluation team.

This document reports all pertinent information about the testing, including an assessment of how well the testing has been done, the number of incidents raised, and the outstanding events. It must describe all deviations from the original test plan, their justifications and their impacts. The provided data should be sufficient for the assessment of the quality of the testing effort.

The provided narrative should be more elaborate than in the test status reports.

...

It recapitulates the evaluation of the test items. Although the test completion report can reflect the structure of the test status report, the details that were only temporarily significant can be omitted from it. The table rows or subsections should correspond to the test items or scenarios listed in the test design specification. The summary should indicate the versions of the items that were tested, as well as the testing environment used. For each item, it should briefly be explained what was tested and what the outcome was.

...

This is a comprehensive assessment of the conducted testing. It should also indicate the areas that may require further investigation and testing.

...

This is a comprehensive interpretation of the test results. It includes a description of the issues or defects discovered during the testing, and also describes the unexpected results and problems that occurred. For resolved incidents, their resolutions should be summarised. For unresolved test incidents, an approach to their resolution should be proposed.

...

Propose the decisions regarding the tested system and suggest further actions on the basis of the acceptance criteria, the quality of the test process, the test results and the outcomes for the individual test items. Provide recommendations for the improvement of the system or of future testing that result from the conducted testing.

Activities

The summary of activities conducted during testing should record what testing was done and how long it took. In order to facilitate improvement of future test planning, it should be in a form that does not require later thorough investigation of the plans and records of the conducted testing.

Test Design Specification

The test design specification addresses the test objectives by refining the features to be tested, the testing approach, test cases, procedures and pass criteria. This document also establishes groups of related test cases.

Features to Be Tested

This section describes the features or requirements, and combinations of features, that are the subject of testing and, if some features are closely related to test items, expresses these associations. Each feature is elaborated through its characteristics and attributes, with references to the original documentation where the feature is detailed. The requirement descriptors include ID/short name, type, description and risks. The references lead to the associated requirements or feature specifications in the system/item requirement specification or design description (in the original development documentation). If the test design specification covers several levels or types of testing, the associated levels or types are noted for each individual requirement, feature or test item.

...

The use case is a high-level description of a specific system usage, or a set of system behaviours or functionalities. It should not be mistaken for a UML use case. It implies, from the end-user perspective, a set of tests that need to be conducted in order to consider the system as operational for a particular use. It therefore usually describes the system usage, features and related requirements that are necessary for utilisation of the system by end users on a regular basis.

The requirement is a description of a necessary capability, feature, functionality, characteristic or constraint that the system must meet or be able to perform. It is a statement that identifies a necessary quality of a system for it to have value and utility to a user, customer, organisation or other stakeholder. It is necessary for the fulfilment of one or several use cases or usage scenarios (in scenario testing).

The high-level requirements include business, architectural and stakeholder/user requirements. There are also some transitional requirements that are only relevant during the implementation of the system. The detailed requirements are defined on the basis of the identified high-level features or requirements. Some of them are consequences of the system's functions, services and operational constraints, while others pertain to the application domain.

A functional requirement defines a specific behaviour or function. It describes what the system does, but not how it is done in terms of implementation, quality, or performance.

A non-functional requirement specifies the quality criteria used to assess the characteristics or properties the system should possess (what the system is). Typical non-functional requirements include:

...

In classical engineering and waterfall software engineering, requirements are inputs into the design stages of development. The requirements specification is an explicit set of requirements to be met by the system, and is therefore usually produced quite early in its development. However, when iterative or agile methods of software development are used, the system requirements are incrementally developed in parallel with design and implementation.

The requirements specification is an important input into the testing process, as it lays out all requirements that were, hopefully, addressed during the system development, so the tests to be performed should trace back to them. Without access to the requirements from the development, the requirements that are directly associated with testing should be formulated during its planning. If an agile methodology was used for development, these requirements can reflect the completed Scrum epics, user stories and product backlog features, or "done" Kanban board user story and feature cards.

The individual requirements need to be mutually consistent, consistent with the external documentation, verifiable, and traceable towards high-level requirements or stakeholder needs, but also towards the test cases.

...

Scenario testing is a higher-level approach to testing of complex systems that is not based on test cases, but on working through realistic and complex stories reflecting user activities. These stories may consist of one or several user stories, which capture what a user does or needs to do as a part of his or her job function, expressed through one or more sentences in the everyday or domain language. The tester who follows the scenario must interpret the results and judge whether they can be considered a pass or a failure. This interpretation may require backing by domain experts. The term should be distinguished from test procedure and test case scenario.

...

This section refines the approach described in the test plan. The details of the included test levels are provided, along with how the individual features are addressed at those levels.

Specific test techniques to be used are selected and justified. Particular test management, configuration management and incident management tools may be mandated. Code reviews, static and dynamic code analysis or unit testing tools may support the testing work. Test automation software tools may be used to generate, prepare and inject data, set up test preconditions, control the execution of tests, and capture outputs. The method for the inspection and analysis of test results should also be identified. The evaluation can be done on the basis of visual inspection of behaviours and outputs, or through the usage of instruments, monitors, assertions, log scanners, pattern matching programs, output comparators, or coverage measurement and performance testing tools.

In order to avoid redundancy, common information related to several test cases or procedures is provided here. It may include details of the test environment or environmental needs, system setup and recovery or reset, and dependencies between the test cases.

If the test design includes some deviations from the test plan, they have to be described here.

Test Cases

...

The test case refines the criteria that need to be met in order to consider some system feature, set of features or the entire use case as working. It is the smallest unit of testing and is sometimes colloquially referred to as a test. A single test case may be included in several test suites or related to a requirement associated with several use cases. If different test levels have separate test design specifications, a single test case may be present in several design specifications.

The selection of the test cases may be the result of an analysis that provides a rationale for a particular battery of test cases associated with a single requirement. For example, the same feature may be tested with distinct test cases that cover valid and invalid inputs and the subsequent successful or negative outcomes. This distinction is made in terms of system responses and not testing outcomes, as the reporting of an error may actually indicate the passing of a test. The logic behind the selection of test cases should be described here.

A feature from the test design specification may be tested in more than one test case, and a test case may test more than one feature. The test cases should cover all features, that is, each feature should be tested at least once. The relationship between the requirements/features and test cases is summarised in the Requirements/Test Cases Traceability Matrix, which is usually placed in a separate document that is updated with the evolution of the requirements and the test design specification document, but also with the refinement of individual test cases. It enables both forward and backward traceability, as it shows how the test cases need to be modified after a change of requirements, and vice versa. It is used to verify whether all the requirements have corresponding test cases, and to identify for which requirement(s) a particular test case has been written. The Requirements/Test Cases Traceability Matrix is a table in which requirements and test cases are paired, thus ensuring their mutual association and coverage. Since there are usually more test cases than requirements, the requirements are placed in columns, and test cases in rows. The requirements are identified by their IDs or short names and can be grouped by their types, while the test cases can be grouped into sections according to levels: unit, integration, system, and acceptance.
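As a minimal illustration, the pairing behind the traceability matrix can be kept in a machine-readable form and checked in both directions. The requirement and test case identifiers below (REQ-*, TC-*) are hypothetical examples, not part of any GN specification.

```python
# Sketch of a Requirements/Test Cases Traceability Matrix as a mapping.
# Each test case lists the requirement(s) it covers; all IDs are invented.
coverage = {
    "TC-01": ["REQ-AUTH"],
    "TC-02": ["REQ-AUTH", "REQ-AUDIT"],
    "TC-03": ["REQ-PERF"],
}

requirements = ["REQ-AUTH", "REQ-AUDIT", "REQ-PERF", "REQ-BACKUP"]

def untested_requirements(requirements, coverage):
    """Forward traceability: requirements with no associated test case."""
    covered = {req for reqs in coverage.values() for req in reqs}
    return [r for r in requirements if r not in covered]

def cases_for_requirement(requirement, coverage):
    """Backward traceability: test cases written for a given requirement."""
    return [tc for tc, reqs in coverage.items() if requirement in reqs]

print(untested_requirements(requirements, coverage))  # ['REQ-BACKUP']
print(cases_for_requirement("REQ-AUTH", coverage))    # ['TC-01', 'TC-02']
```

In this toy data set, the check immediately reveals that REQ-BACKUP has no test case, which is exactly the coverage gap the matrix is meant to expose.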

Feature Pass/Fail Criteria

This specifies the criteria to be used to determine whether the feature or a group of features has passed or failed, on the basis of the results of individual test cases.
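One simple form of such a criterion, sketched below under the assumption that a feature passes only if every test case associated with it has passed; real criteria may weight cases or distinguish mandatory from optional ones, and the feature names are invented.

```python
# Aggregate feature pass/fail status from individual test case results.
# Rule assumed here: a feature passes only if all of its cases passed.
results = {
    "login form": ["pass", "pass"],
    "password reset": ["pass", "fail"],
}

def feature_status(case_results):
    """Derive the feature verdict from its test case verdicts."""
    return "pass" if all(r == "pass" for r in case_results) else "fail"

for feature, case_results in results.items():
    print(feature, feature_status(case_results))
# login form pass
# password reset fail
```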

Test Case Specification

The test case specifications are produced after the test design specification has been prepared. The test case specification is a detailed elaboration of a test case identified in the test design specification and includes a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. A single test case is sometimes associated with several requirements. It may be partially or fully automated.

...

For a system without preexisting formal requirements, the test cases can be written based on the system's desired or usual operation, or the operation of similar systems. In this case, they may be a result of the decomposition of a high-level scenario, which provides a story and a description of the setting that explain the system and its operation to the tester. Alternatively, test cases may be omitted altogether and replaced with scenario testing, which substitutes a sequence or group of test cases, as they may be hard to precisely formulate and maintain with the evolution of the system.

...

  • Test case ID or short identifying name
  • Related requirement(s)
  • Requirement type(s)
  • Test level
  • Author
  • Test case description
  • Test bed(s) to be used (if there are several)
  • Environment information
  • Preconditions, prerequisites, states or initial persistent data
  • Inputs (test data)
  • Execution procedure or scenario
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Relationship with other use cases
  • Whether the test can be or has been automated
  • Other remarks
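To keep test cases uniform across a specification, the descriptors above can be captured in a lightweight record structure. The sketch below covers only a subset of the listed attributes, and all field values are hypothetical examples.

```python
from dataclasses import dataclass, field

# Minimal test case record covering a subset of the listed descriptors.
# IDs, levels and values are invented examples.
@dataclass
class TestCase:
    case_id: str
    requirements: list
    level: str                 # e.g. "unit", "integration", "system"
    description: str
    preconditions: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_outputs: dict = field(default_factory=dict)
    automated: bool = False

tc = TestCase(
    case_id="TC-07",
    requirements=["REQ-AUTH"],
    level="system",
    description="Reject login after three failed password attempts",
    preconditions=["user account exists and is active"],
    inputs={"attempts": 3, "password": "wrong"},
    expected_outputs={"account_state": "locked"},
    automated=True,
)
print(tc.case_id, tc.level, tc.automated)  # TC-07 system True
```

A uniform structure of this kind also makes it straightforward to generate the traceability matrix and test execution log entries from the same source.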

The test case typically comprises several steps that are necessary to assess the tested functionality. The explained steps should include all necessary actions, including those assumed to be a part of common knowledge.

The test suite is a collection of test cases that are related to the same testing work in terms of goals and associated testing process. There may be several test suites for a particular system, each one grouping together many test cases based on a corresponding goal and functionality or shared preconditions, system configuration, associated common actions, execution sequence of actions or test cases, or reporting requirements. An individual test suite may validate whether the system complies with the desired set of behaviours or fulfils the envisioned purpose or associated use cases, or be associated with different phases of the system lifecycle (such as identification of regressions, build verification, or validation of individual components). A test case can be included in several test suites. If test case descriptions are organised along test suites, the overlapping cases should be documented within their primary test suites and referenced elsewhere.

The test procedure defines detailed instructions and the sequence of steps to be followed while executing a group of test cases (such as a test suite) or a single test case. It can give information on how to create the needed test setup, perform the execution, evaluate the results and restore the environment. The test procedures are developed on the basis of the test design specification, in parallel with or as parts of test case specifications. Having a formalised detailed test procedure is very helpful when a diverse set of people is involved in performing the same tests at different times and in different situations, as this supports consistency of test execution and result evaluation. The test procedure can combine test cases based on a certain logical reason, such as executing an end-to-end situation, in which case the order in which the test cases are to be run is also fixed.

The test script is a sequence of instructions that need to be carried out on the tested system in order to execute a test case, or to test a part of the system functionality. These instructions may be given in a form suitable for manual testing or, in automated testing, as short programs written in a scripting or general purpose programming language. For software systems or applications, there are test tools and frameworks that allow the specification and continuous or repeatable execution of prepared automated tests.
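As an illustration of the automated form, the following short script uses Python's built-in unittest framework; the function under test and its expected behaviour are invented for the example and stand in for a real test item.

```python
import unittest

# Hypothetical function under test, standing in for a real test item.
def normalise_username(raw):
    return raw.strip().lower()

class NormaliseUsernameTest(unittest.TestCase):
    """Automated test script: each method is one scripted check."""

    def test_strips_whitespace(self):
        self.assertEqual(normalise_username("  Alice "), "alice")

    def test_lowercases(self):
        self.assertEqual(normalise_username("BOB"), "bob")

# Run the scripted checks programmatically, as a CI runner would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormaliseUsernameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed" if result.wasSuccessful() else "failed")  # passed
```

Frameworks of this kind make the script repeatable: the same checks can be re-run after every change to the test item, which is the basis for regression testing.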

Test Data Report

This document describes the data used in testing. It is typically produced in two stages.

First, the data requirements implied by the test plan and test design are put together. They include requirements related to type, range, representativeness, quality, amount, validity, consistency and coherency of test data. Additional concerns may be related to the sharing of test data with the development team or even end users.

The set test levels and use cases determine the tools and means that are used for the collection, generation and preparation of test data. If data are not entirely fabricated, but are extracted from existing databases or services and can be associated with real services, processes, business entities or persons, then the policies and technical procedures for their anonymisation, obfuscation or protection may need to be established. Otherwise, the approach to the creation of adequate input data should be established, and the validity of test results obtained with such inputs assessed.
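A toy sketch of one such technical procedure: replacing direct identifiers in extracted records with stable pseudonyms, so that records remain linkable across test runs without exposing personal data. The field names and the hash-based scheme are illustrative assumptions, not a prescribed GN procedure.

```python
import hashlib

# Toy pseudonymisation of extracted test data: replace direct identifiers
# with stable, non-reversible pseudonyms. Field names are hypothetical.
def pseudonymise(record, fields=("name", "email")):
    out = dict(record)
    for f in fields:
        if f in out:
            digest = hashlib.sha256(str(out[f]).encode()).hexdigest()[:8]
            out[f] = f"{f}-{digest}"
    return out

row = {"name": "Alice Example", "email": "alice@example.org", "plan": "basic"}
print(pseudonymise(row))
```

Because the pseudonym is derived deterministically from the original value, the same person maps to the same pseudonym in every extracted record, preserving referential consistency of the test data.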

Test case specifications provide more elaborate details about the data needed and should be sufficient to support the actual collection or generation of adequate test data. During the second stage, the selected tools are used to prepare these data for the execution of all use cases, including their injection into the system or interactions during the test procedure steps. At the same time, the corresponding expected test outputs are defined and, if possible, automated methods for comparing the baseline test data against the actual results. The limitations of the test data and supporting tools are identified, and it is explained what to do in order to mitigate them during the test execution and in the interpretation and evaluation of the results. Finally, the measures that ensure the usability and relevancy of test data throughout the testing process are specified. They include data maintenance, for example to support the changes in the system, but also data versioning and backup. The decisions and knowledge produced during the preparation of the test data are captured.

Test Environment Report

This document describes the test environment. It is typically produced in two stages.

First, the test bed requirements implied by the test plan, test design specification and individual test cases are put together, and the initial test environment setup is designed. These requirements relate to the test level, system features and requirements, test items, test data, testing scenarios and procedures, and the chosen support, measurement and monitoring tools. Security, safety and regulatory concerns are also considered. Policies and arrangements for sharing the test bed and allocated resources with other teams or users are established.

The initial test bed design can be a simple deployment diagram or a test bed implementation project, but it should cover all elements of the setup, including the topology and configuration of hardware, external equipment, system software, other required software, test tools, the system under test and individual test items. If some needed components are not immediately available, a staged implementation schedule or workarounds need to be devised. A walkthrough through at least the most important requirements and test cases needs to be performed in order to validate the proposed design.

The test environment or test bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, test tools, etc. The test plan for a project should enumerate the test bed(s) to be used.

The report is updated after the test environment is set up. A smoke test can be performed. The limitations of the test environment and supporting tools are identified, and it is explained what to do in order to mitigate them during the test execution and in the interpretation and evaluation of the results. The maintenance plans, responsibilities and arrangements are established. If envisioned in the test plan, this includes upgrades of current versions of test items, upgrading of used resources and network topology, and updating of corresponding configurations. The initial design is updated to reflect the test environment as it was built. The decisions and knowledge produced during the implementation of the test bed are captured.

Test Execution Log

The test execution log is the record of test case executions and the obtained results, in the order of their running. Along with the test incident reports, the test execution log is a base for the test status and completion reports. These documents allow direct checking of the progress of the testing and provide valuable information for finding out what caused an incident.

This log provides a chronological record of relevant details about the execution of tests, recording which test cases were run, who ran them, in what order, and whether each test passed or failed. A test passed if the actual and expected results were identical; it failed if there was a discrepancy. If the test design specification permits classifying the execution status as "partial", it must also clarify how to treat such outcomes within the feature pass/fail criteria and acceptance criteria.

...

  • Test case ID or short identifying name - may be placed at the top of the group
  • Order of execution number - useful for cross-referencing
  • Date and time
  • Testers - person who ran the test; may also include observers
  • Test bed/facility - if several test beds are used in testing
  • Environment information - versions of test items, configuration(s) or other specifics
  • Specific presets - initial states or persistent data, if any
  • Specific inputs - inputs/test parameters or data that are varied across executions, if any
  • Specific results - outputs, postconditions or final states, if different from the expected; may refer to detailed test results
  • Execution status - passed, failed, partial (if permitted)
  • Incident reports - if one or several test incident reports are associated with the execution, they are referenced here
  • Comments - notes about the significant test procedure steps, impressions, suspicions, and other observations, if any
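The record fields above map naturally onto an append-only structured log, for example one JSON object per execution. The helper and all field values below are invented for illustration; the field names follow the list above.

```python
import datetime
import json

# Build one structured record for the test execution log.
# Field names follow the list above; all values are invented examples.
def log_record(case_id, order, tester, status,
               incident_reports=None, comments=""):
    return {
        "test_case": case_id,
        "order": order,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "tester": tester,
        "status": status,                  # "passed", "failed", "partial"
        "incident_reports": incident_reports or [],
        "comments": comments,
    }

record = log_record("TC-07", 12, "jdoe", "failed",
                    incident_reports=["IR-3"],
                    comments="account not locked after third attempt")
print(json.dumps(record))  # one JSON line, ready to append to the log
```

Keeping the log in such a form makes the later aggregation into test status and completion reports a matter of simple filtering and counting rather than manual re-reading.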

...

If there are several typical versions of the environment, presets or inputs, they may be described in the test case or elsewhere in the test execution log and referenced in the test case executions that use them. This reduces the clutter. However, any particular variances in the configuration, input data, and results need to be documented. The actual outputs may be separately captured in the detailed test results, especially if an in-depth discussion of the alignment of actual and expected outcomes is needed.

...

If the testing is organised around scenarios instead of test cases, the general structure of the log is unchanged, except that inputs, states and outputs are replaced with interpretation of the interactions and results for each segment of the scenario, supported by key excerpts or snapshots of characteristic inputs and outputs.

Detailed Test Results

The detailed test results are the actual outputs, assertions and system and monitoring logs produced during the execution of tests. They should be at least paired with the corresponding test execution log records. Their format may depend on the test tools that are used to capture them. In addition, the detailed results may encompass the reports produced by test automation tools that compare the baseline test data against the actual results, which highlight all noted deviations. Such reports are valuable traces of how the obtained results measure up against the expected results (postconditions, states and outputs) and can be used in the assessment of the execution status and the writing of the test execution log and test incident reports.
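The core of such a comparison report can be sketched as a field-by-field diff of expected (baseline) against actual results; the keys and values below are illustrative.

```python
# Minimal output comparator: report every field where the actual result
# deviates from the expected baseline. Keys and values are examples.
def deviations(expected, actual):
    diffs = {}
    for key in expected.keys() | actual.keys():
        if expected.get(key) != actual.get(key):
            diffs[key] = {"expected": expected.get(key),
                          "actual": actual.get(key)}
    return diffs

expected = {"status_code": 200, "account_state": "locked"}
actual = {"status_code": 200, "account_state": "active"}
print(deviations(expected, actual))
# {'account_state': {'expected': 'locked', 'actual': 'active'}}
```

An empty result means the execution can be recorded as passed; a non-empty one lists exactly the discrepancies that belong in the execution log and, if investigation is needed, in a test incident report.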

Test Incident Report

The test incident report is used to document any event that occurs during the testing process and requires investigation. A discrepancy between expected and actual results can occur because the expected results are wrong, the test was wrongly run, or due to inconsistent or unclear requirements, a fault or defect in the system, or a problem with the test environment. The report should provide all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. All other related activities, observations, and deviations from the standard test procedure should be included, as they may also help to identify and correct the cause of the incident. The report also includes, if possible, an assessment of the impact of the incident upon testing.

The test incident report needs to be a standalone document, so it repeats pieces of information that are already recorded in the corresponding test case and test execution log record.

A failed test may raise more than one incident, while an incident may occur in more than one test failure. The testers should, according to their knowledge and understanding, try to identify unique incidents and associate them with the tested features or the originating test items. This will provide a good indication of the quality of the system and its components, and allow monitoring of their improvement.

...