Testing Framework

Table of Contents

...

Introduction

Testing is a cognitive, critical and often creative process. It builds upon experience, discipline and persistence, but also benefits from thinking outside the box. It benefits from the wise application of standards, tools and best practices, adapted to the specific case, goal, available resources and local culture.

This "framework" outlines a structured approach to testing that is aligned with the major standards in the field, such as ISO/IEC/IEEE 29119, IEEE 829 and BS 7925, and complements ITIL recommendations and processes, while remaining in line with the common language and practices in testing. It is currently meant to support the GÉANT SA4 software service validation and testing process, but may grow into a tool for strengthening the common ground between software developers and testers and the computer networking community. Such a tool can be used in consolidating the evaluations of various products and solutions developed and applied within the GÉANT project.

Tailoring Approach to Testing

Streamlining the testing process for software products has been the subject of significant standardisation effort in the last two decades. As software becomes an intrinsic part of almost every system, testing has had to align with the needs, practices and experiences of many industries and different fields of engineering. However, this field is still rapidly evolving and there is currently strong opposition to its formalisation. There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. Often, the inherent fluidity of the environment and goals, combined with the complexity of tested systems, is likely to preclude any exhaustive codification.

This material should be seen as a highly customisable toolkit. It is intended to prevent reinventing the wheel and to digest the existing expertise, tools, practices and processes. An over-administrative approach must be avoided, as there is no single size that fits all. Schematic and formal application of standards, procedures or templates “by the book” (this document included) may petrify, bureaucratise and over-prescribe the validation and testing process and impede innovation. What matters is the quality. Actual testing, even with automated execution, is designed and supervised by humans. On the other hand, running production-level services does require formalised processes, traceability or even auditability of validation, and unification of control procedures.

...

Although this document can be seen as a contribution to the overall GÉANT approach to validation and testing, it is beyond its scope to discuss the general validation and testing policy and strategy for implementation and management of services. Such policy and strategy may be further discussed and refined by the project management and stakeholders.

Both the strategy and supporting guidelines should be periodically reviewed and updated on the basis of experience from actual evaluations, organisational and environmental changes, and the evolution of general policies and goals.

...

The actual testing is initially specified through the development of the test plan. In order to produce the test plan, its authors need to familiarise themselves with the context in order to define the scope, organise the development of the plan, identify and estimate risks, establish an approach towards them, design the strategy for the particular testing, and determine the staffing profile and schedule; after the plan is drafted, they need to establish consensus or obtain approval for it and distribute it to all interested parties.

...

Test design includes the specification of requirements with conditions for their fulfilment. The conditions may also be contextual and include repeatability of tests or the ability to present the results. It is followed by the specification of individual test cases and the procedure for their execution. The subsequent steps are test environment set-up, test execution and related reporting. The feedback from environment set-up and execution may lead to an update of the original design. For example, constraints in the capabilities of the test environment may lead to an update of test cases. Alternatively, the actual preparation of test cases and environment may require additional elaboration and refinement of test cases. On the other hand, the significance or priority of some requirements may lead to modification of the test environment in order to enable execution of the corresponding test cases. Both may lead to modification of the test procedure.

...

The test level documentation is produced by the testing team. It refines the practical details of the design and execution of the work specified in the test plan and captures relevant information gathered during testing in order to support the evaluation level reports. It consists of:

  • Test Design Specification
  • Test Case Specification
  • Test Data Report
  • Test Environment Report
  • Test Execution Log
  • Detailed Test Results
  • Test Incident Report

...

  • Organisational numeric identifier of the document - it may be omitted if the document name is used as the identifier
  • Descriptive and identifying name of the document
  • Name of testing (sub)project or use case, if not clear from document name
  • Document date
  • Version number
  • Author or authors and their contact information
  • Version history (a table with version numbers, dates, contributors and descriptions of changes)

Version histories, date(s) and authors are not needed if the master documents are kept up to date in an already highly versioned environment, such as a CMS or wiki. However, the main or corresponding author and the document date need to be visible in self-contained standalone snapshots that are published on the web or shared by email.

...

The list of documents, with their identifiers, names, version numbers and hyperlinks to the individual documents, should be provided.

At least the reference to the project plan should be present in its subordinate documents. There is no need to reference other documents that represent the common background already listed in the project plan. Only the references that are crucial for understanding need to be provided in lower-level documents, as well as those that point to relevant external documentation that is not already referenced in higher-level documents.

...

The test plan outlines the operational aspects of the execution of the test strategy for the particular testing effort. It provides an overview of the requirements the system needs to meet in order to satisfy its intended use, user needs or the scope of the intended testing effort, and of how the actual validation is to be conducted. The plan outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and their allocation, methodologies and schedule of the testing effort. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and tracking and reporting processes. The test plan may account for one or several test suites, but it does not detail individual test cases.

As a pivotal document, the test plan is an instrument of mutual understanding between the testing and development teams and the management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may lead to further changes in documents that specify test design, test cases, and data and environment requirements. Given the multifaceted and overarching nature of the test plan and in order to avoid unnecessary backpropagation of changes, it should not over-specify the implementation details that are to be articulated in subordinate test level documents.

...

The descriptive name should briefly, in three to six words, state which system is tested, the target aspects, features or components, and the level, type or purpose of the conducted testing.

...

Key terms and acronyms used in the document, the target domain and testing are described here. The glossary facilitates communication and helps in avoiding confusion.

Introduction

This is the executive summary part of the plan which summarises its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects and collateral benefits or drawbacks. This section should be brief and to the point.

Features to Be Tested

The purpose of this section is to list individual features, their significance and risks from the user perspective. It is a listing of what is to be tested from the users’ viewpoint in terms of what the system does. The individual features may be operations, scenarios and functionalities that are to be tested across all or within individual tested sub-systems. Features may be rated according to their importance or risk.

...

This section should be aligned with the level of the test plan, so it may itemise applications or functional areas, or systems, components, units, modules or builds.

For some items, their critical areas or associated risks may be highlighted, such as those related to origin, history, recent changes, novelty, complexity, known problems, documentation inadequacies, failures, complaints or change requests. Probably there had been some general concerns and issues that triggered the testing process, such as a history of defects, poor performance, changes in the team, etc., that could be directly associated with some specific items. Other concerns that may need to be mentioned can be related to safety, importance and impact on users or clients, regulatory requirements, etc. The key concern may be a general misalignment of the system with its intended purpose, or vague, inadequately captured or understood requirements.

The items that should not be tested may also be listed.

Approach

This section describes the test strategy that is appropriate for the level of the plan and in agreement with other related plans. It may extend the background and contextual information provided in the introduction.

...

  • Detailed and prioritised objectives
  • Scope (if not fully defined by the lists of items and features)
  • Tools that will be used
  • Needs for specialised trainings (on testing, used tools or the system)
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other assumptions, requirements and constraints
  • Overall organisation and timing schedule of the internal processes, phases, activities and deliverables
  • Internal and external communication and organisation of the meetings
  • Number and kinds of test environment configurations (for example, for different testing levels/types)
  • Configuration management for the tested system, used tools and test environment
  • Change management

...

Besides the testing tools that interact with the tested system, other tools may be needed, such as those used to match and track scenarios, requirements, test cases, test results, defects and issues, and acceptance criteria. They may be manually maintained documents and tables, or tools specialised to support testing.

Some assumptions and requirements must be satisfied before the testing is even started. Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted. They may include special hardware, supporting software, test data to be provided, or restrictions on the use of the system during the testing.

...

The discussion of change management should define how to manage changes of the testing process that may be caused by feedback from the actual testing or by external factors. This includes handling the consequences of defects that affect further testing, dealing with requirements or elements that cannot be tested, and dealing with parts of the testing process that may be recognised as useless or impractical.

...

The exit criteria for the testing are also defined, and may be based on the achieved level of completion of tests, the number and severity of defects sufficient for aborting the testing, or code coverage. Some exit criteria may be bound to a specific critical functionality, component or test case. The evaluation team may also decide to end the testing on the basis of available functionality, detected or cleared defects, produced or updated documentation and reports, or the progress of testing.
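Exit criteria of this kind can be made mechanically checkable. The sketch below is purely illustrative; the thresholds, field names and record layout are assumptions, not something this framework prescribes:

```python
# Illustrative sketch: evaluating simple exit criteria for a test cycle.
# The thresholds and record fields are hypothetical examples.

def exit_criteria_met(results, min_completion=0.95, max_open_severe=0):
    """results: list of dicts with 'status' ('passed'/'failed'/'not run')
    and 'severe_defects' (open severe defects raised by the test)."""
    executed = [r for r in results if r["status"] != "not run"]
    completion = len(executed) / len(results) if results else 0.0
    open_severe = sum(r["severe_defects"] for r in results)
    return completion >= min_completion and open_severe <= max_open_severe

results = [
    {"status": "passed", "severe_defects": 0},
    {"status": "failed", "severe_defects": 1},
    {"status": "passed", "severe_defects": 0},
]
print(exit_criteria_met(results))  # False: a severe defect is still open
```

A real implementation would draw these records from the test execution log and apply whatever criteria the plan actually defines.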

...

This section describes what is produced by the testing process. The deliverables may be the subject of quality assessment before their final approval or acceptance and, besides all the elements of test documentation that are described here, may also include test data used during testing, test scripts, code for execution of tests in testing frameworks, and outputs from test tools.

...

This is the specification of the staff profiles and skills needed to deliver the plan. Depending on the profiles of the personnel, it should also detail the needed trainings on the tested system, elements of the test environment and test tools.

...

The schedule of phases should be detailed to the level which ensures certainty that is attainable with the information available at the moment of planning. It should define the timing of individual testing phases, milestones, activities and deliverables, and be based on realistic and validated estimates, particularly as testing is often interwoven with development. Testing is the most likely victim of slippage in the upstream activities, so it is a good idea to tie all test dates directly to the completion dates of their related developments. This section should also define when the test status reports are to be produced (continuously, periodically, or on demand).

...

This section, which complements "Suspension Criteria and Resumption Requirements", defines all risk events, their likelihood, impact and the countermeasures to overcome them. Some risks may be testing-related manifestations of the overall project risks. Examples of risks are the lack or loss of personnel at the beginning of or during testing, unavailability or late delivery of required hardware, software, data or tools, delays in training, or changes to the original requirements or designs.

...

The test status report is a one-time interim summary of the results of the execution of tests. It may describe the status of all testing activities or be limited to a single test suite. This report, as well as the test summary report, distils the information obtained during test execution and recorded in test logs and incident reports. It must be highly informative and concise and should not elaborate minor operational details.

...

  • Test case ID
  • Test case name
  • Date and time of the last execution
  • The last execution status (not run, failed, partial, passed) or counts of failed and passed test executions
  • Number of associated defects
  • Brief comment

If there are more than a dozen items, consider grouping and sub-totalling the table rows according to test suites, test items or scenarios (as listed in the project plan), test types or areas. If a single report would address several suites, each suite should instead have a separate test status report, or at least its own totals and details tables.
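The grouping and sub-totalling described above can be sketched as follows; the row fields, suite names and statuses are hypothetical examples, not part of the framework:

```python
# Illustrative sketch: grouping test status rows by suite and sub-totalling.
from collections import Counter
from itertools import groupby

rows = [
    {"suite": "login",  "case": "TC-01", "status": "passed"},
    {"suite": "login",  "case": "TC-02", "status": "failed"},
    {"suite": "search", "case": "TC-10", "status": "passed"},
    {"suite": "search", "case": "TC-11", "status": "passed"},
]

rows.sort(key=lambda r: r["suite"])           # groupby needs sorted input
for suite, group in groupby(rows, key=lambda r: r["suite"]):
    totals = Counter(r["status"] for r in group)
    print(suite, dict(totals))
# login {'passed': 1, 'failed': 1}
# search {'passed': 2}
```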

...

The test completion or summary report is a management report that brings together the key information uncovered by the accomplished tests. It recapitulates the results of the testing activities and indicates whether the tested system is fit for purpose according to the acceptance criteria defined in the project plan. This is the key document in deciding whether the quality of the system and of the performed testing are sufficient for the decision that follows the testing effort. Although the completion report provides a working assessment of the success or failure of the system under test, the final decision is made by the evaluation team.

This document reports all pertinent information about the testing, including an assessment of how well the testing has been done, the number of incidents raised and outstanding events. It must describe all deviations from the original test plan, their justifications and their impacts. The provided data should be sufficient for the assessment of the quality of the testing effort.

The provided narrative should be more elaborate than in the test status reports.

...

This is a comprehensive assessment of the conducted testing. It should also indicate the areas that may require further investigation and testing.

...

This is a comprehensive interpretation of the test results. It includes a description of issues or defects discovered during the testing. It should also describe the unexpected results and problems that occurred during the testing. For resolved incidents, their resolutions should be summarised. For unresolved test incidents, an approach to their resolution should be proposed.

...

Propose the decisions regarding the tested system and suggest further actions on the basis of the acceptance criteria, the quality of the test process, the test results and the outcomes for individual test items. Provide recommendations for improvement of the system or future testing that result from the conducted testing.

Activities

The summary of activities conducted during testing should record what testing was done and how long it took. In order to facilitate the improvement of future test planning, it should be in a form that does not require a later thorough investigation of the plans and records of the conducted testing.

...

The use case is a high-level description of a specific system usage, or a set of system behaviours or functionalities. It should not be mistaken for a UML use case. It implies, from the end-user perspective, a set of tests that need to be conducted in order to consider the system as operational for a particular use. It therefore usually describes the system usage, features and related requirements that are necessary for the utilisation of the system by end users on a regular basis.

...

The high-level requirements include business, architectural and stakeholder/user requirements. There are also some transitional requirements that are only relevant during the implementation of the system. The detailed requirements are defined on the basis of the identified high-level features or requirements. Some of them are consequences of the system’s functions, services and operational constraints, while others pertain to the application domain.

A functional requirement defines a specific behaviour or function. It describes what the system does, but not how it is done in terms of implementation, quality or performance.

A non-functional requirement specifies the quality criteria used to assess the characteristics or properties the system should possess (what it is). Typical non-functional requirements include:

...

In classical engineering and waterfall software engineering, requirements are inputs into the design stages of development. The requirements specification is an explicit set of requirements to be met by the system, and is therefore usually produced quite early in its development. However, when iterative or agile methods of software development are used, the system requirements are incrementally developed in parallel with design and implementation.

The requirements specification is an important input into the testing process, as it lays out all the requirements that were, hopefully, addressed during the system development, so the tests to be performed could be traced back to them. Without access to the requirements from the development, the requirements that are directly associated with testing should be formulated during its planning. If an agile methodology was used for development, these requirements can reflect the completed Scrum epics, user stories and product backlog features, or "done" Kanban board user stories and feature cards.

The individual requirements need to be mutually consistent, consistent with the external documentation, verifiable, and traceable both towards the high-level requirements or stakeholder needs and towards the test cases.

...

Scenario testing is a higher-level approach to testing of complex systems that is not based on test cases, but on working through realistic and complex stories reflecting user activities. These stories may consist of one or several user stories, which capture what a user does or should do as part of his/her job function, expressed through one or more sentences in everyday or domain language. The tester who follows the scenario must interpret the results and judge whether they can be considered a pass or a failure. This interpretation may require backing by domain experts. This term should be distinguished from test procedure and test case scenario.

...

Specific test techniques to be used are selected and justified. Particular test management, configuration management and incident management tools may be mandated. Code reviews, static and dynamic code analysis or unit testing tools may support the testing work. Test automation tools may be used to generate, prepare and inject data, set up test preconditions, control the execution of tests, and capture outputs. The method for the inspection and analysis of test results should also be identified. The evaluation can be based on visual inspection of behaviours and outputs, or on the use of instruments, monitors, assertions, log scanners, pattern-matching programs, output comparators, or coverage measurement and performance testing tools.

In order to avoid redundancy, common information related to several test cases or procedures is provided here. It may include details of the test environment or environmental needs, system setup and recovery or reset, and dependencies between the test cases.

...

The test case refines the criteria that need to be met in order to consider some system feature, set of features or the entire use case as working. It is the smallest unit of testing and is sometimes colloquially referred to as a test. A single test case may be included in several test suites or related to a requirement associated with several use cases. If different test levels have separate test design specifications, a single test case may be present in several design specifications.

The selection of the test cases may be the result of an analysis that provides a rationale for a particular battery of test cases associated with a single requirement. For example, the same feature may be tested with distinct test cases that cover valid and invalid inputs and subsequent successful or negative outcomes. This distinction is made in terms of system responses and not testing outcomes, as reporting of an error may actually indicate passing of a test. The logic behind the selection of test cases should be described here.

A feature from the test design specification may be tested in more than one test case, and a test case may test more than one feature. The test cases should cover all features, that is, each feature should be tested at least once. The relationship between the requirements/features and test cases is summarised in the Requirements/Test Cases Traceability Matrix, which is usually placed in a separate document that is updated with the evolution of the requirements and the test design specification document, but also with the refinement of individual test cases. It enables both forward and backward traceability, as it shows which test cases need to be modified after a change of requirements, and vice versa. It is used to verify whether all the requirements have corresponding test cases, and to identify for which requirement(s) a particular test case has been written. The Requirements/Test Cases Traceability Matrix is a table in which requirements and test cases are paired, thus ensuring their mutual association and coverage. Since there are always more test cases than requirements, the requirements are placed in columns, and the test cases in rows. The requirements are identified by their IDs or short names and can be grouped by their types, while the test cases can be grouped into sections according to levels: unit, integration, system and acceptance.
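The two directions of traceability can be sketched in a few lines; the requirement and test case IDs below are hypothetical, and a real matrix would live in the separate traceability document rather than in code:

```python
# Illustrative sketch: a requirements/test-cases traceability check.

coverage = {            # test case ID -> requirement IDs it exercises
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-3"},
}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Forward traceability: every requirement should have at least one test case.
covered = set().union(*coverage.values())
uncovered = requirements - covered
print("uncovered requirements:", sorted(uncovered))   # ['REQ-4']

# Backward traceability: which requirement(s) a test case was written for.
print("TC-02 traces to:", sorted(coverage["TC-02"]))  # ['REQ-1', 'REQ-2']
```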

...

The test case specifications are produced after the test design specification has been prepared. The test case specification is a detailed elaboration of a test case identified in the test design specification and includes a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. A single test case is sometimes associated with several requirements. It may be partially or fully automated.

...

For a system without preexisting formal requirements, the test cases can be written based on the system’s desired or usual operation, or the operation of similar systems. In this case, the test cases are a result of the decomposition of a high-level scenario. This scenario provides a story and a description of the setting that explain the system and its operation to the tester. Alternatively, test cases may be omitted altogether and replaced with scenario testing, which substitutes a sequence or group of test cases, as they may be hard to precisely formulate and maintain with the evolution of the system.

...

  • Test case ID or short identifying name
  • Related requirement(s)
  • Requirement type(s)
  • Test level
  • Author
  • Test case description
  • Test beds to be used (if there are several)
  • Environment information
  • Preconditions, prerequisites, states or initial persistent data
  • Inputs (test data)
  • Execution procedure or scenario
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Relationship with other use cases
  • Whether the test can be or has been automated
  • Other remarks
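The fields listed above can be captured as a structured record, which helps keep test case specifications uniform across a team. The sketch below is purely illustrative; the field names, types and example values are assumptions, not mandated by this framework:

```python
# Illustrative sketch: a test case specification as a structured record.
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    case_id: str
    name: str
    requirements: list              # related requirement IDs
    level: str                      # unit / integration / system / acceptance
    preconditions: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_outputs: dict = field(default_factory=dict)
    automated: bool = False
    remarks: str = ""

tc = TestCaseSpec(
    case_id="TC-01",
    name="Login with valid credentials",
    requirements=["REQ-1"],
    level="system",
    inputs={"user": "alice", "password": "secret"},
    expected_outputs={"result": "logged in"},
    automated=True,
)
print(tc.case_id, tc.level)  # TC-01 system
```

Such records can then feed the test execution log and the traceability matrix directly, instead of being re-entered by hand.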

...

The test suite is a collection of test cases that are related to the same testing work in terms of goals and the associated testing process. There may be several test suites for a particular system, each one grouping together many test cases based on a corresponding goal and functionality or shared preconditions, system configuration, associated common actions, execution sequence of actions or test cases, or reporting requirements. An individual test suite may validate whether the system complies with the desired set of behaviours or fulfils the envisioned purpose or associated use cases, or be associated with different phases of the system lifecycle (such as identification of regressions, build verification, or validation of individual components). A test case can be included in several test suites. If test case descriptions are organised along test suites, the overlapping cases should be documented within their primary test suites and referenced elsewhere.

The test procedure defines the detailed instructions and the sequence of steps to be followed while executing a group of test cases (such as a test suite) or a single test case. It can give information on how to create the needed test setup, perform the execution, evaluate the results and restore the environment. The test procedures are developed on the basis of the test design specification, in parallel with or as parts of the test case specifications. Having a formalised, detailed test procedure is very helpful when a diverse set of people is involved in performing the same tests at different times and in different situations, as this supports consistency of test execution and result evaluation. A test procedure can combine test cases based on a certain logical reason, such as executing an end-to-end situation, in which case the order in which the test cases are to be run is also fixed.

...

First, the data requirements implied by the test plan and test design are put together. They include requirements related to type, range, representativeness, quality, amount, validity, consistency and coherency of test data. Additional concerns may be related to the sharing of test data with the development team or even end users.

The set test levels and use cases determine the tools and means that are used for the collection, generation and preparation of test data. If data are not entirely fabricated, but are extracted from existing databases or services and can be associated with real services, processes, business entities or persons, then policies and technical procedures for their depersonalisation, obfuscation or protection may need to be established. Otherwise, the approach to the creation of adequate input data should be established, and the validity of test results obtained with such inputs assessed.

Test case specifications provide more elaborate details about the data needed and should be sufficient to support the actual collection or generation of adequate test data. During the second stage, the selected tools are used to prepare these data for the execution of all use cases, including their injection into the system or interactions during the test procedure steps. At the same time, the corresponding expected test outputs are defined and, if possible, automated methods for comparing the baseline test data against the actual results. The limitations of the test data and supporting tools are identified, and it is explained what to do in order to mitigate them during the test execution and in the interpretation and evaluation of the results. Finally, the measures that ensure the usability and relevancy of test data throughout the testing process are specified. They include data maintenance (for example, to support changes in the system), but also data versioning and backup. The decisions and knowledge produced during the preparation of the test data are captured.
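An automated comparison of baseline test data against actual results, as mentioned above, can be as simple as the following sketch; the keys and values are hypothetical examples:

```python
# Illustrative sketch: comparing baseline (expected) test data against
# actual results and reporting deviations.

def compare(baseline, actual):
    """Return a list of (key, expected, got) tuples for every deviation."""
    deviations = []
    for key, expected in baseline.items():
        got = actual.get(key, "<missing>")
        if got != expected:
            deviations.append((key, expected, got))
    return deviations

baseline = {"status_code": 200, "items": 3, "currency": "EUR"}
actual = {"status_code": 200, "items": 2, "currency": "EUR"}

for key, expected, got in compare(baseline, actual):
    print(f"{key}: expected {expected}, got {got}")
# items: expected 3, got 2
```

The reported deviations then feed the detailed test results and, where warranted, test incident reports.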

...

  • Test case ID or short identifying name - may be placed at the top of the group
  • Order of execution number - useful for cross-referencing
  • Date and time
  • Testers - the person who ran the test; may also include observers
  • Test bed/facility - if several test beds are used in testing
  • Environment information - versions of test items, configuration(s) or other specifics
  • Specific presets - initial states or persistent data, if any
  • Specific inputs - inputs/test parameters or data that are varied across executions, if any
  • Specific results - outputs, postconditions or final states, if different from the expected; may refer to detailed test results
  • Execution status - passed, failed, partial (if permitted)
  • Incident reports - if one or several test incident reports are associated with the execution, they are referenced here
  • Comments - notes about the significant test procedure steps, impressions, suspicions, and other observations, if any

...

If there are several typical versions of the environment, presets or inputs, they may be described in the test case or elsewhere in the test execution log and referenced in the test case executions that use them. This reduces clutter. However, any particular variances in the configuration, input data and results need to be documented. The actual outputs may be separately captured in the detailed test results, especially if an in-depth discussion of the alignment of actual and expected outcomes is needed.

...

Detailed Test Results

Detailed test results are the actual outputs, assertions, and system and monitoring logs produced during the execution of tests. They should at least be paired with the corresponding test execution log records. Their format may depend on the test tools that are used to capture them. In addition, the detailed results may encompass the reports produced by test automation tools that compare the baseline test data against the actual results and highlight all noted deviations. Such reports are valuable traces of how the obtained results measure up against the expected postconditions, states and outputs, and can be used in the assessment of the execution status and the writing of the test execution log and test incident reports.

...