...

The structure presented here follows common industry practice and is strongly influenced by IEEE 829 and its successor, ISO/IEC/IEEE 29119-3, which even provides templates for both traditional (sequential and iterative) and agile approaches.

All documents should start with the following common elements:

  • Organizational numeric identifier of the document
  • Descriptive identifying name
  • Version number
  • Authors and their contacts
  • Version history (table with version numbers, dates, contributors and descriptions of changes)
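
For illustration, a version history table might look like this (all entries are placeholders):

Version | Date       | Contributors           | Description of changes
0.1     | 2024-02-01 | A. Author              | Initial draft
1.0     | 2024-03-01 | A. Author, B. Reviewer | Approved first release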

Organisational/Strategic Level

...

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Detailed test results (the basis for status and completion reports, built from individual test execution logs and anomaly/incident reports)

Test Plan

Test Plan: A document describing the operational aspects of executing the test strategy. It outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and their allocation, methodologies and schedule of intended testing activities. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations and the reporting and tracking process. The Test Plan may account for one or several test suites, but it does not detail individual test cases. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. (Ref. IEEE Std 829.)

It is an instrument of mutual understanding among the testing team, the development team and management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may in turn require changes to the documents specifying test design, test cases, and data and environment requirements.

The recommended structure of the test plan is as follows.

Metadata

The descriptive name should briefly express what system is tested, which of its aspects, features or components are covered, and what kind of testing is conducted or for what purpose (e.g., "Online Store Checkout: Performance Test Plan").

It should be immediately followed by a separate description of the testing level (unit, integration, system, acceptance) and/or type or subtype (functional, non-functional, alpha, beta, performance, load, stress, usability, security, conformance, compatibility, resilience, scalability, volume, regression…).

References/Supporting Documents

A list of all documents that support the test plan. Document identifiers/names, version numbers and hyperlinks of each individual document should be provided. Documents that can be referenced include:

  • Project plan
  • Product plan
  • Related test plans
  • Requirements specifications
  • High level design document
  • Detailed design document
  • Development and testing standards
  • Methodology guidelines and examples
  • Organisational standards and guidelines
  • Source code, documentation, user guides, implementation records

Introduction

This is the executive summary part of the plan: it summarizes the plan's purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects, and collateral benefits or drawbacks. This section should be brief and to the point.

Test Items

A description, from the technical point of view, of the items to be tested, such as hardware, software and their combinations. Version numbers and configuration requirements may be included where needed, as well as the delivery schedule for critical items.

This section should be aligned with the level of the test plan, so it may itemize applications or functional areas, or systems, components, units, modules or builds.

For some items, their critical areas or associated risks may be highlighted, such as those related to origin, history, recent changes, novelty, complexity, known problems, documentation inadequacies, failures, complaints or change requests. The general concerns and issues that triggered the testing process, such as a history of defects, poor performance or changes in the team, can often be directly associated with specific items. Other concerns that need to be mentioned may relate to safety, importance and impact on users or clients, regulatory requirements, etc. The reason may also be a general misalignment of the system with its intended purpose, or vague, inadequately captured or poorly understood requirements.

Sometimes, the items that should not be tested can also be listed.

Features to be Tested

The purpose of this section is to describe individual features and their significance or risk from the user perspective: a listing of what is to be tested, in terms of what the system does, from the users' viewpoint. The individual features may be operations, scenarios and functionalities to be tested across all or within individual tested sub-systems. Features may be rated according to their importance or risk.

An additional list of features that will not be tested (and why) may prevent possible misunderstandings and feature creep.

Approach

This section describes the strategy of the test plan, appropriate for the level of the plan and in agreement with other related plans. It may describe the background to a greater extent than the introduction does.

Rules and processes that should be described include:

  • Detailed and prioritised objectives
  • Scope (if not fully defined by the lists of items and features)
  • Tools that will be used
  • Needs for specialized training
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other requirements and constraints
  • Organisation and timing of the internal processes, phases, activities and deliverables
  • Organisation of the meetings
  • Configuration management for the tested system, used tools and overall test environment
  • Number and kind of different tested configurations
  • Change management

For example, the objectives may be to determine whether the delivered functionalities work in the usage or user scenarios or use cases, whether all functionalities required for the work are present, whether all predefined requirements are met, or even whether the requirements themselves are adequate.

Besides testing tools that interact with the tested system, other tools may be needed, such as those used to match and track scenarios, requirements, test cases, test results, defects, issues and acceptance criteria. These may be manually maintained documents and tables, or specialized tools that support testing.
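
As a minimal sketch of such a tracking instrument, the following Python fragment builds a requirements-to-test-cases traceability matrix; the data model, identifiers and statuses are hypothetical, invented for illustration rather than taken from any standard or tool.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        case_id: str
        requirement_ids: list    # requirements this test case covers
        status: str = "not run"  # e.g. "passed", "failed", "blocked"

    def traceability_matrix(requirements, test_cases):
        """Map each requirement ID to the IDs of the test cases covering it."""
        matrix = {req: [] for req in requirements}
        for case in test_cases:
            for req in case.requirement_ids:
                matrix.setdefault(req, []).append(case.case_id)
        return matrix

    requirements = ["REQ-1", "REQ-2", "REQ-3"]
    cases = [
        TestCase("TC-01", ["REQ-1"], "passed"),
        TestCase("TC-02", ["REQ-1", "REQ-2"], "failed"),
    ]

    for req, covering in traceability_matrix(requirements, cases).items():
        print(req, "->", covering or "NOT COVERED")  # exposes untested requirements

Even this trivial mapping makes coverage gaps visible; real projects typically keep the same relation in a test-management tool or a maintained spreadsheet.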

Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted.

Testing can be organized as periodic or continuous until all pass criteria are met, with identified issues passed to the development team. This requires defining the approach to retesting modified test items, i.e., regression testing.
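
For example, assuming a Python project tested with pytest, a regression subset is often selected with a custom marker so that it can be rerun after every modification of the test items; the marker name and the tested function below are illustrative assumptions, not pytest built-ins.

    import pytest

    def authenticate(user: str, password: str) -> bool:
        """Stand-in for the real system under test."""
        return password == "correct-password"

    # The "regression" marker would be registered in pytest.ini:
    #   [pytest]
    #   markers = regression: rerun after every change to the tested items
    @pytest.mark.regression
    def test_login_still_accepts_valid_credentials():
        # Guards a previously fixed defect against reappearing.
        assert authenticate("alice", "correct-password")

Running pytest -m regression then executes only the marked tests.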

The discussion of change management should define how to manage changes to the testing process caused by feedback from the actual testing or by external factors. This includes handling the consequences of detected defects that affect further testing, but also dealing with requirements or elements that cannot be tested, and with parts of the testing process that turn out to be useless or impractical.

Item (and Phase) Criteria

This section describes the process and overall standards for evaluating the test results, not the detailed pass criteria for each individual item, feature or requirement.

The final decisions may be made by a dedicated evaluation team comprising various stakeholders and representatives of the testers and developers. The team evaluates and discusses the data from the testing process to reach a pass/fail decision, which may be based on the benefits, utility, detected problems, their impact and risks.

The exit criteria for the testing are also defined here. They may be based on the achieved level of test completion, on a number and severity of defects sufficient to abort the testing, or on code coverage. Some exit criteria may be bound to a specific critical functionality, component or test case. The evaluation team may also decide to end the testing based on the available functionality, detected or cleared defects, produced or updated documentation and reports, or the progress of testing.
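
As a sketch of how such exit criteria might be checked mechanically, the fragment below gates on test completion, open critical defects and coverage; all thresholds and parameter names are project-specific assumptions made up for illustration.

    def exit_criteria_met(completed_ratio: float,
                          open_critical_defects: int,
                          statement_coverage: float) -> bool:
        """True when all example thresholds are satisfied."""
        return (completed_ratio >= 0.95          # 95% of planned tests executed
                and open_critical_defects == 0   # no unresolved critical defects
                and statement_coverage >= 0.80)  # 80% statement coverage reached

    print(exit_criteria_met(0.97, 0, 0.85))  # True: testing may conclude
    print(exit_criteria_met(0.97, 2, 0.85))  # False: critical defects still open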

If testing is organized into phases or parallel or sequential activities, the transitions between them may be gated by corresponding exit/entry criteria.

If the testing runs out of time or resources before completion, or is aborted by the stakeholders or the evaluation team, the conclusions about the quality of the system may be rather limited, and this may be an indication of the quality of the testing itself.

Test Environment

Test Environment or Test Bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, test tools, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
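
A minimal sketch of such an enumeration, kept as data so the plan and any automation can reference the same definitions (all names, platforms and sizes below are invented placeholders):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TestBed:
        name: str
        os: str
        hardware: str
        extra_software: tuple  # other applications, system software, test tools

    TEST_BEDS = (
        TestBed("bed-01", "Ubuntu 22.04", "4 vCPU / 8 GB RAM", ("nginx", "selenium")),
        TestBed("bed-02", "Windows 11", "8 vCPU / 16 GB RAM", ("IIS",)),
    )

    for bed in TEST_BEDS:
        print(f"{bed.name}: {bed.os} on {bed.hardware}")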

Test Status Report

Test Completion Report

...