
Testing Framework

Tailoring Approach to Testing

Testing is a cognitive, critical and often creative process.

It relies on the wise application of standards, tools and best practices, adapted to the specific case, goal, available resources and local culture. This document serves as a support for establishing common ground, as a customisable toolbox, and as a supporting guide. It should be read in relation to the existing GN documents, testing standards and software testing practices, acknowledging that recent standardisation efforts try to build upon practices and experiences from different fields of engineering and industry. Do not reinvent the wheel: use the existing expertise, tools, practices and processes. Actual testing, even with automated execution, is designed and performed by humans.

There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. The inherent fluidity of the environment and goals, and the complexity of the tested system, are likely to preclude any exhaustive formalization.

Avoid an over-administrative approach; there is no one-size-fits-all. Schematic and formal application of standards or procedure templates “by the book”, this document included, threatens to petrify, bureaucratise and over-prescribe the validation and testing process and to impede innovation. What matters is quality. However, running production-level services may require formalized processes, traceability, or even auditability of validation, and particularly of check procedures.

Generic Testing Process Diagram

General Testing and Validation Governance

This document provides GN and, more specifically, GN4.1 SA4 with a generalised technical specification for testing and validation. It can be seen as a contribution to the general specification of the GN approach to validation and testing, on the basis of the overall testing policy (for implementation and management of services - any sources/refs?), and as a (tentative?) technical expression of the shared testing strategy.

The overall policy and strategy, as well as this technical document, must be further discussed and agreed by stakeholders and published.

It should be periodically reviewed and updated on the basis of the experience from the actually performed tests, organizational and environmental changes, and the evolution of overall policies and goals. It is (one of / input for) the general GN-level testing-related policy and strategy documents.

Evaluation Planning

It is necessary to become familiar with the context in order to define the scope, organize the development of the test plan, identify and estimate risks, establish the approach towards risks, design the test strategy, determine the staffing profile and schedule, draft the test plan, establish consensus on and get approval for the plan, and publish the plan.

Traceability is the key characteristic of both Evaluation and Test Enactment related artifacts. Repeatability is required for tests related to core requirements and functionality. There is no need to repeat, or maintain the repeatability of, tests that were already done, unless they can be fully automated or are required for compliance or periodic audits.

Test Enactment

Test design includes the specification of requirements with conditions for their fulfilment. The conditions may also be contextual and include repeatability or presentability. It is followed by the specification of individual test cases and the procedure for their execution. The subsequent steps are test environment set-up, test execution and related reporting. The feedback from environment set-up and execution may lead to an update of the original design. For example, constraints in the capabilities of the test environment may lead to an update of the test cases, or the actual preparation of the test cases and environment may require their additional elaboration and refinement. On the other hand, the significance/priority of some requirements may lead to modification of the test environment in order to enable the execution of the corresponding test cases. Both will lead to modification of the test procedure.

In parallel, monitoring of test progress is performed on the basis of the test status and related measures/metrics/reports, stakeholders receive reports on the test status, and control/correction directives are issued, leading to corrective/adaptive actions on the test design, environment, execution, and perhaps even the evaluation-level test plan.

The test enactment may also provide suggestions or inputs for updates of the general Testing and Validation Process and Strategy.

Test Documentation

The test documentation is based on common industry practices, with a strong influence from IEEE 829 and its sequel/update ISO/IEC/IEEE 29119-3, which even provides templates for both traditional (sequential and iterative) and agile approaches.

All documents should start with the following common elements (metadata):

  • Organizational numeric identifier of the document
  • Descriptive identifying name
  • Version number
  • Authors and their contacts
  • Version history (table with version numbers, dates, contributors and descriptions of changes)

Organisational/Strategic Level

Policy and Strategy documents, which also mandate the overall approach to validation and testing on the basis of documents such as this one.

Evaluation Level Documentation

Evaluation Level Documentation comprises the elements of documentation related to the testing of a specific service, product or solution that are intended for communication between those responsible for test management processes (planning, monitoring and control, and completion assessment) and those who actually perform (design, implement, execute and document) the planned tests. On the basis of status and completion reports, control and correction directives are issued, which lead to corrective or adaptive actions and changes in test level documentation or even the test plan.

  • Test Plan
  • Test Status Report
  • Test Completion Report

Test Level Documentation

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Detailed test results (the basis for status and completion reports, derived from individual test execution logs and anomaly/incident reports)

Test Plan

The Test Plan details the operational aspects of executing the test strategy. It outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and the amount of their allocation, methodologies, and the schedule of intended testing activities. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and the reporting and tracking process. The Test Plan may account for one or several test suites, but it does not detail individual test cases.

It is an instrument of mutual understanding between the testing team, the development team and management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may lead to further changes in the documents specifying the test design, test cases, and data and environment requirements.

The recommended structure of the test plan is as follows.

Metadata

The descriptive name should briefly express what system is tested, which aspects, features or components are covered, what kind of testing is conducted, and for which purpose.

It should be immediately followed by a separate description of the testing level (unit, integration, system, acceptance) and/or type or subtype (functional, non-functional, alpha, beta, performance, load, stress, usability, security, conformance, compatibility, resilience, scalability, volume, regression…).

References/Supporting Documents

This is the list of all documents that support the test plan. Document identifiers/names, version numbers and hyperlinks of the individual documents should be provided. Documents that can be referenced include:

  • Project plan
  • Product plan
  • Related test plans
  • Requirements specifications
  • High level design document
  • Detailed design document
  • Development and testing standards
  • Methodology guidelines and examples
  • Organisational standards and guidelines
  • Source code, documentation, user guides, implementation records

Glossary

Terms and acronyms used in the document, the target domain, and testing are described here as needed. The glossary facilitates communication and helps eliminate confusion.

Introduction

This is the executive summary part of the plan which summarizes its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects and collateral benefits or drawbacks. This section should be brief and to the point.

Test Items

This is a description, from the technical point of view, of the items to be tested, such as hardware, software and their combinations. Version numbers and configuration requirements may be included where needed, as well as the delivery schedule for critical items.

This section should be aligned with the level of the test plan, so it may itemize applications or functional areas, or systems, components, units, modules or builds.

For some items, their critical areas or associated risks may be highlighted, such as those related to origin, history, recent changes, novelty, complexity, known problems, documentation inadequacies, failures, complaints or change requests. There were probably some general concerns and issues that triggered the testing process, such as a history of defects, poor performance, or changes in the team, which can be directly associated with some specific items. Other concerns that need to be mentioned can be related to safety, importance and impact on users or clients, regulatory requirements, etc. Alternatively, the reason may be a general misalignment of the system with the intended purpose, or vague, inadequately captured or understood requirements.

Sometimes, the items that should not be tested can also be listed.

Features to be Tested

The purpose of the section is to describe individual features and their significance or risk from the user perspective. It is a listing of what is to be tested from the users’ viewpoint in terms of what the system does. The individual features may be operations, scenarios, and functionalities that are to be tested across all or within individual tested sub-systems. Features may be rated according to their importance or risk.

An additional list of features that will not be tested may be included, along with the reasons. For example, it may be explained that a feature will not be available or completely implemented at the time of testing. This may prevent possible misunderstandings and waste of effort in tracking defects that are not related to the plan.

Approach

This section describes the testing strategy that is appropriate for the level of the plan and in agreement with other related plans. It may extend the background and contextual information provided in the introduction.

Rules and processes that should be described include:

  • Detailed and prioritised objectives
  • Scope (if not fully defined by the lists of items and features)
  • Tools that will be used
  • Needs for specialized trainings
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other requirements and constraints
  • Overall organisation and timing of the internal processes, phases, activities and deliverables
  • Internal and external communication and organisation of the meetings
  • Configuration management for the tested system, used tools and overall test environment
  • Number and kind of different tested configurations
  • Change management

For example, the objectives may be to determine whether the delivered functionalities work in the usage or user scenarios or use cases, whether all functionalities required for the work are present, whether all predefined requirements are met, or even whether the requirements are adequate.

Besides testing tools that interact with the tested system, other tools may be needed, such as those used to match and track scenarios, requirements, test cases, test results, defects, issues and acceptance criteria. These may be manually maintained documents and tables, or tools specialized to support testing.

Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted. They may include special hardware, supporting software, test data to be provided, or restrictions on the use of the system during testing.

Testing can be organized as periodic or continuous until all pass criteria are met, with passing of identified issues to the development team. This requires defining the approach to modification of test items, in terms of regression testing.

The discussion of change management should define how to manage the changes of the testing process that may be caused by the feedback from the actual testing or by external factors. This includes handling the consequences of detected defects that affect further testing, but also dealing with requirements or elements that cannot be tested, as well as with parts of the testing process that may be recognized as useless or impractical.

Some elements of the approach are detailed in subsequent sections.

Item (and Phase) Criteria

This section describes the process and overall standards for evaluating the test results, not the detailed pass criteria for each individual item, feature or requirement.

The final decisions may be made by a dedicated evaluation team comprising various stakeholders and representatives of testers and developers. The team evaluates and discusses the data from the testing process to make a pass/fail decision that may be based on the benefits, utility, detected problems, their impact and risks.

The exit criteria for the testing are also defined, and may be based on the achieved level of completion of tests, the number and severity of defects sufficient for the abortion of testing, or code coverage. Some exit criteria may be bound to a specific critical functionality, component or test case. The evaluation team may also decide to end the testing on the basis of available functionality, detected or cleared defects, produced or updated documentation and reports, or the progress of testing.

If testing is organized into phases or parallel or sequential activities, the transitions between them may be gated by corresponding exit/entry criteria.

If the testing runs out of time or resources before the completion or is aborted by stakeholders or the evaluation team, the conclusions about the quality of the system may be rather limited, and this may be an indication of the quality of the testing itself.

Suspension Criteria and Resumption Requirements

These criteria are used to determine in advance whether the testing should be prematurely suspended or ended before the plan has been completely executed, and when the testing can be resumed or started, that is, when the problems that caused the suspension have been resolved.

The reasons for suspension can be the failure of the test item (for example, a software build that is the subject of the test) to work properly due to critical defects which seriously prevent or limit testing progress, the accumulation of non-critical defects to the point where the continuation of testing has no value, the client’s changes of the requirements, system or environment downtime, or the inability to provide some critical component or resource at the time indicated in the project schedule.

It may be required to perform a smoke test before the full resumption of the tests.

Issues noticed during testing are often a consequence of other previously noticed defects, so the continuation of testing after a certain number of defects have been identified in the affected functionality or item is a waste of resources, particularly if it is obvious that the system or the item cannot satisfy the pass criteria.

Deliverables

This section describes what is produced by the testing process. These deliverables may be the subject of quality assessment before their final approval or acceptance and, besides all the elements of test documentation described here, may also include test data used during testing, test scripts, code for the execution of tests in testing frameworks, and outputs from test tools.

Activities

This section details testing activities and tasks, along with their dependencies and estimates of their duration and the resources required.

Staffing and Training Needs

This is the specification of the people and skills needed to deliver the plan. It should also describe the training that needs to be conducted on the tested system, the elements of the test environment, and the test tools.

Responsibilities

This section specifies personal responsibilities for approvals, processes, activities and deliverables described by the plan. It may also detail responsibilities in the development and modification of the elements of the test plan.

Schedule

The schedule of phases should be detailed to the level of certainty that is attainable with the information available at the moment of planning. It should define the timing of individual testing phases, milestones, activities and deliverables, and be based on realistic and validated estimates, particularly as the testing is often interwoven with development. Testing is the most likely victim of slippage in other activities, so it is a good idea to tie all test dates directly to the completion dates of their related developments.

This section may also describe the approach for dealing with specific slippages, which may include simplification or reduction of some non-crucial activities, relaxation of the scope or coverage, elimination of some test cases, engagement of additional resources, or extension of the testing duration.

Risks and Contingencies

This section defines all other risk events, their likelihood and impact, and the countermeasures to overcome them. Some risks may be testing-related manifestations of the overall project risks. Examples of risks are the lack or loss of personnel at the beginning of or during testing, unavailability or late delivery of required hardware, software, data or tools, delays in training, or changes to the original requirements or designs.

The risks can be described using the usual format of the risk register, with attributes such as:

  • Category
  • Risk name
  • Responsible (tracker)
  • Associated phase/process/activity
  • Likelihood (low/medium/high)
  • Impact (low/medium/high)
  • Mitigation strategy (avoid/reduce/accept/share or transfer)
  • Response action
  • Actionee
  • Response time

Although some contingency events may actually be opportunities, it is, due to the limited scope of testing within the wider project or service delivery context, quite unlikely that opportunities related to the testing process itself will occur. However, the outcome of testing or some of its results may offer opportunities related to the subject of testing, which may be enhanced, exploited or shared.

Test Status Report

The Test Status Report is a one-time interim summary of the results of the execution of testing activities. It may describe the status of all testing activities or be limited to a single test suite. This report needs to be highly informative and concise, and should not elaborate minor operational details.

Depending on the approach defined in the test plan, it may be produced periodically, on completion of milestones or phases, or on demand. If periodic, it may be a base for continuous tracking through progress charts. Individual status reports are also sources for the test completion report.

The document can start or end with the totals of passed, failed and pending test cases, scenarios or tests, and of identified defects (if individually tracked). It may also report the coverage of test cases or code, the consumption of resources, and other established progress metrics. Close to this summary, a comprehensive assessment should be provided.

If needed, it provides evaluations and recommendations based on the interim results and incidents encountered during the testing. It may also signal the red flags that deserve recipients’ attention. It should report the resolution of issues that were highlighted in the previous status report, but there is no need to report the issues that were already reported as solved.

The summary of activities conducted since the previous status report is also optional. If present, it should exist in all status reports.

The details of the progress are expressed in the form of a table describing the outcome of the execution of individual test cases or scenarios, cumulatively since the start of testing. Typical columns to be used are:

  • Test case ID
  • Test case name
  • Last execution date
  • The last execution status (not run, failed, partial, passed) or counts of failed and passed test executions
  • Number of associated defects
  • Brief comment

The listed test cases, scenarios or procedures can be grouped by test suites (if the report addresses several suites), test types or areas.
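As an illustration only, the following sketch (in Python) shows how the per-test-case rows and the summary totals of a status report could be derived from raw execution records. The record fields, test case identifiers and dates are invented for the example and do not prescribe any particular tool.

# Illustrative sketch: aggregating hypothetical execution records into the
# per-test-case rows and summary totals of a Test Status Report.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date


@dataclass
class ExecutionRecord:
    test_case_id: str
    test_case_name: str
    run_date: date
    status: str          # "passed", "failed", "partial" or "not run"
    defects: int = 0     # defects raised by this execution


def status_table(records):
    """Build one row per test case, keeping the latest execution status."""
    rows = {}
    for rec in sorted(records, key=lambda r: r.run_date):
        row = rows.setdefault(rec.test_case_id, {
            "Test case ID": rec.test_case_id,
            "Test case name": rec.test_case_name,
            "Number of associated defects": 0,
        })
        row["Last execution date"] = rec.run_date.isoformat()
        row["Last execution status"] = rec.status
        row["Number of associated defects"] += rec.defects
    return list(rows.values())


def totals(rows):
    """Summary counts of passed/failed/pending test cases for the report header."""
    counts = defaultdict(int)
    for row in rows:
        counts[row["Last execution status"]] += 1
    return dict(counts)


if __name__ == "__main__":
    records = [
        ExecutionRecord("TC-01", "Login with valid credentials", date(2016, 3, 1), "failed", 1),
        ExecutionRecord("TC-01", "Login with valid credentials", date(2016, 3, 3), "passed"),
        ExecutionRecord("TC-02", "Password reset", date(2016, 3, 2), "failed", 2),
    ]
    print(totals(status_table(records)))   # e.g. {'passed': 1, 'failed': 1}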

Test Completion Report

The Test Completion Report (test summary report) is a management report that brings together the key information uncovered by the accomplished tests. It recapitulates the results of the testing activities and indicates whether the tested system is fit for purpose, according to whether or not it has met the acceptance criteria defined in the project plan. This is the key document in deciding whether the quality of the system and of the performed testing are sufficient to allow taking the step that follows the testing. Although it provides a working assessment of the success or failure of the system under test, the final decision is made by the evaluation team.

This document reports all pertinent information about the testing, including an assessment of how well the testing has been done, the number of incidents raised and outstanding, and, crucially, an assessment of the quality of the system. Details of what was done and how long it took are also recorded for use in future planning.

The test summary report can have a structure similar to the test status report, but some details can be omitted from its tables. On the other hand, the narrative can be more elaborate, and it should also bring together all significant issues that were reported in individual status reports.

The provided data should also be sufficient for the assessment of the quality of the testing effort. In order to improve future test planning, it should record what testing was done and how long it took.

Test Design Specification

The Use Case is a high-level description of a specific system usage, or a set of system behaviours or functionalities. It should not be mistaken for a UML use case. It implies, from the end-user perspective, a set of tests that need to be conducted in order to consider the system as operational for a particular type of use. It therefore usually describes the system usage, features and related requirements that are necessary for the utilization of the system by end users on a regular basis.

A Requirement is a high-level description of a necessary capability, feature, functionality, characteristic or constraint that the system must meet or be able to perform. It is a statement that identifies a quality of a system that is necessary for it to have value and utility for a user, customer, organization or other stakeholder. It is necessary for the fulfillment of one or several Use Cases or usage scenarios (in Scenario Testing).

The requirements specification is an explicit set of requirements to be satisfied by the system which is to be developed, and is therefore produced quite early in its development. Such a specification may be a direct input for the testing process, as it lays out all the requirements that were, hopefully, addressed during the system development.

Functional

Non-functional

Scenario Testing is a higher-level approach to the testing of complex systems that is not based on Test Cases, but on working through realistic and complex stories reflecting user activities. These stories may consist of one or several user stories, which capture what a user does or needs to do as part of his or her job function, expressed through one or more sentences in everyday or domain language. The tester who follows the scenario must interpret the results and evaluate whether they can be considered a pass or a failure. This interpretation may require backing by domain experts. This term should be distinguished from the Test Procedure and the Test Case scenario.

Test Case Specification

The Test Case is a specification of criteria that need to be met in order to consider some system feature, set of features or Use Case as working. It is the smallest unit of testing and is sometimes colloquially referred to as a Test. It should include a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. A single test case is sometimes associated with several requirements. It may be partially or fully automated.

A formal written test case is characterised by known preconditions, inputs, expected outputs and postconditions, which are worked out before the execution. The test case may comprise a single step or several steps that are necessary to assess the tested functionality.
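A minimal sketch of such a formal test case, expressed as an automated test in pytest style, is given below. The tested object (a hypothetical Cart class) and its operations are invented purely to illustrate the structure: precondition set-up, inputs, expected output and postcondition checks.

# Illustrative, self-contained example of a formal test case (pytest style).
# The Cart class is a hypothetical stand-in for the system under test.
import pytest


class Cart:
    """Minimal stand-in for the tested functionality."""
    def __init__(self):
        self.items = {}

    def add(self, name, price, quantity=1):
        if price < 0 or quantity < 1:
            raise ValueError("invalid item")
        self.items[name] = (price, quantity)

    def total(self):
        return sum(price * qty for price, qty in self.items.values())


@pytest.fixture
def empty_cart():
    # Precondition: the cart exists and is empty.
    return Cart()


def test_total_of_two_items(empty_cart):
    # Inputs / test steps
    empty_cart.add("pen", 1.50, quantity=2)
    empty_cart.add("notebook", 3.00)
    # Expected output
    assert empty_cart.total() == 6.00
    # Postcondition: both items remain registered in the cart.
    assert set(empty_cart.items) == {"pen", "notebook"}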

For a system without preexisting formal requirements, the Test Cases can be written based on the system’s desired or usual operation, or on the operation of similar systems. In this case, they may be a result of the decomposition of a high-level scenario, which is a story or setting description used to explain the system and its operation to the tester. Alternatively, Test Cases may be omitted altogether and replaced with scenario testing, which substitutes a sequence or group of Test Cases, as they may be hard to precisely formulate and maintain with the evolution of the system.

The Test Suite is a collection of Test Cases that are related to the same testing work in terms of goals and associated testing process. There may be several Test Suites for a particular system, each one grouping together many Test Cases based on corresponding goal and functionality or shared preconditions, system configuration, associated common actions, execution sequence of actions or Test Cases, or reporting requirements.

An individual Test Suite may validate whether the system complies with the desired set of behaviors or fulfills the envisioned purpose or associated Use Cases, or be associated with different phases of system lifecycle, such as identification of regressions, build verification, or validation of individual components.

A Test Case can be included in several Test Suites. If Test Case descriptions are organised along Test Suites, the overlapping cases should be documented within their primary Test Suites and referenced elsewhere.
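As a sketch of how this grouping might look in practice, the example below uses pytest markers (an assumption; any comparable mechanism in another framework would serve) to assign invented test functions to named suites. A test case carrying several markers belongs to several suites, and a suite is selected at run time with, for example, pytest -m smoke.

# Illustrative sketch: grouping Test Cases into Test Suites via pytest markers.
# Marker names ("smoke", "regression") are invented and would normally be
# registered in pytest.ini; the test bodies are placeholders.
import pytest


@pytest.mark.smoke
@pytest.mark.regression
def test_service_starts():
    # Included in both the build-verification (smoke) and regression suites.
    assert True


@pytest.mark.regression
def test_previously_fixed_defect_stays_fixed():
    # Included only in the regression suite.
    assert True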

A Test Procedure defines the detailed instructions and the sequence of steps to be followed while executing a group of Test Cases (such as a Test Suite) or a single Test Case. It can give information on how to create the needed test setup, perform the execution and evaluate the results for a given test case. Having a formalised test procedure is very helpful when a diverse set of people is involved in performing the same tests at different times and in different situations, as this supports consistency of test execution and result evaluation. A Test Procedure is a combination of Test Cases based on a certain logical reason, like executing an end-to-end situation. The order in which the Test Cases are to be run is fixed.

The Test Script is a sequence of instructions that need to be carried out on the tested system in order to execute a test case, or to test a part of the system functionality. These instructions may be given in a form suitable for manual testing or, in automated testing, as short programs written in a scripting or general-purpose programming language. For software systems or applications, there are test tools and frameworks that allow the specification and continuous or repeatable execution of prepared automated tests.
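The following is a minimal sketch of such an automated test script in Python: it carries out the steps of a single test case against a running service and reports the outcome. The health-check URL and the expected response are assumptions made only for the example.

# Illustrative test script: exercises one hypothetical endpoint of the system
# under test and reports pass/fail via the exit code.
import sys
import urllib.request

SERVICE_URL = "http://localhost:8080/health"   # assumed endpoint of the tested system


def run_script():
    # Step 1: perform the action that the manual instructions would describe.
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=5) as response:
            status = response.status
            body = response.read().decode()
    except OSError as error:
        print(f"FAIL: service not reachable ({error})")
        return False
    # Step 2: compare the actual outcome with the expected one.
    if status == 200 and "ok" in body.lower():
        print("PASS")
        return True
    print(f"FAIL: unexpected status {status} or body {body!r}")
    return False


if __name__ == "__main__":
    sys.exit(0 if run_script() else 1)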

The Traceability Matrix is a table or standalone document matching Requirements and Test Cases, thus ensuring their mutual association and coverage. It enables both forward and backward traceability, as it simplifies determining which test cases need to be modified upon a change of requirements, and vice versa. It is used to verify whether all the requirements have corresponding test cases, and to identify for which requirement(s) a particular test case has been written. Requirements are typically placed in columns and Test Cases in rows.
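A minimal sketch of such a matrix, kept as a simple mapping with invented requirement and test case identifiers, is shown below; it also demonstrates the two traceability checks (uncovered requirements and test cases without an associated requirement).

# Illustrative traceability matrix as a mapping from requirement IDs to the
# test case IDs that cover them (all identifiers are invented).
coverage = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],                  # requirement without any test case
}

all_test_cases = {"TC-01", "TC-02", "TC-03", "TC-04"}

# Forward traceability: every requirement should have at least one test case.
uncovered_requirements = [req for req, cases in coverage.items() if not cases]

# Backward traceability: every test case should trace back to a requirement.
traced_cases = {case for cases in coverage.values() for case in cases}
orphan_test_cases = sorted(all_test_cases - traced_cases)

print("Requirements without test cases:", uncovered_requirements)   # ['REQ-03']
print("Test cases without requirements:", orphan_test_cases)        # ['TC-04']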

Test Data Requirements

Test Data Report

Test Environment Requirements

The Test Environment or Test Bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, test tools, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Environment Report

Detailed test results

(The basis for status and completion reports, derived from individual test execution logs and anomaly/incident reports.)

(Test records - for each test, an unambiguous record of the identities and versions of the component or system under test, the test specification and actual outcome.)

 

The Test Log is the record of Test Case executions and the obtained results, in the order of their running.
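A possible minimal form of such a log is sketched below: one JSON line per executed test case, recording the identity and version of the system under test, the test case and the actual outcome. The field names, file location and values are assumptions made for the example.

# Illustrative sketch of an append-only test log (one JSON object per line).
import json
from datetime import datetime, timezone

LOG_FILE = "test_log.jsonl"   # hypothetical log location


def log_execution(system, version, test_case_id, outcome, notes=""):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_under_test": system,
        "version": version,
        "test_case_id": test_case_id,
        "outcome": outcome,        # e.g. "passed", "failed", "blocked"
        "notes": notes,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_execution("example-portal", "1.4.2", "TC-01", "passed")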


 

 
