...

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Detailed test results (the basis for status and completion reports, derived from individual test execution logs and anomaly/incident reports)

Test Plan

The test plan outlines the operational aspects of executing the test strategy for the particular testing effort. It provides an overview of what the system needs to meet in order to satisfy its intended use, user needs, or the scope of the intended testing effort, and of how the actual validation is to be conducted. The plan outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and the amount of their allocation, methodologies and schedule of the testing effort. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations, and the tracking and reporting process. The test plan may account for one or several test suites, but it does not detail individual test cases.

As a pivotal document, the test plan is an instrument of mutual understanding between the testing team, the development team and management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may lead to further changes in the documents that specify test design, test cases, and data and environment requirements. Given the multifaceted and overarching nature of the test plan, and in order to avoid unnecessary backpropagation of changes, it should not over-specify the implementation details that are to be articulated in subordinate test-level documents.

...

This is the executive summary part of the plan which summarizes its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects and collateral benefits or drawbacks. This section should be brief and to the point.

Test Items

A description, from the technical point of view, of the items to be tested, such as hardware, software and their combinations. Version numbers and configuration requirements may be included where needed, as well as the delivery schedule for critical items.

This section should be aligned with the level of the test plan, so it may itemize applications or functional areas, or systems, components, units, modules or builds.

For some items, their critical areas or associated risks may be highlighted, such as those related to origin, history, recent changes, novelty, complexity, known problems, documentation inadequacies, failures, complaints or change requests. There were probably some general concerns and issues that triggered the testing process, such as a history of defects, poor performance, or changes in the team, that can be directly associated with specific items. Other concerns that need to be mentioned may be related to safety, importance and impact on users or clients, regulatory requirements, etc. Or the reason may be a general misalignment of the system with the intended purpose, or vague, inadequately captured or poorly understood requirements.

Sometimes, the items that should not be tested can also be listed.

Features to be Tested

The purpose of the section is to list individual features and their significance or risk from the user perspective. It is a listing of what is to be tested from the users’ viewpoint in terms of what the system does. The individual features may be operations, scenarios, and functionalities that are to be tested across all or within individual tested sub-systems. Features may be rated according to their importance or risk.

An additional list of features that will not be tested may be included, along with the reasons. For example, it may be explained that a feature will not be available or completely implemented at the time of testing. This may prevent possible misunderstandings and wasted effort in tracking defects that are not related to the plan.

Together with the list of test items, this section describes the scope of testing.

Approach

This section describes the strategy of the test plan, appropriate for the level of the plan and in agreement with other related plans. It may extend the background and contextual information provided in the introduction.

...

  • Detailed and prioritised objectives
  • Scope (if not fully defined by the lists of items and features)
  • Tools that will be used
  • Needs for specialized training (on testing, the tools used, or the system)
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other assumptions, requirements and constraints
  • Overall organisation and timing of the internal processes, phases, activities and deliverables
  • Internal and external communication and organisation of the meetings
  • Number and kinds of test environment configurations (for example, for different testing levels/types)
  • Configuration management for the tested system, used tools and overall test environment
  • Number and kind of different tested configurations
  • Change management

For example, the objectives may be to determine whether the delivered functionalities work in the usage or user scenarios or use cases, whether all functionalities required for the work are present, whether all predefined requirements are met, or even whether the requirements are adequate.

Besides testing tools that interact with the tested system, other tools may be needed, such as those used to match and track scenarios, requirements, test cases, test results, defects and issues, and acceptance criteria. These may be manually maintained documents and tables, or tools specialized to support testing.
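
For illustration, such tracking can be as simple as a table linking test cases to requirements, execution results and raised defects. The following minimal Python sketch uses hypothetical identifiers and fields, not a prescribed format:

    # A minimal, manually maintained tracking table linking test cases to
    # requirements, execution results and raised defects. In practice this
    # may live in a spreadsheet or a specialized test management tool.
    tracking = [
        {"case": "TC-001", "requirements": ["REQ-10"], "result": "pass", "defects": []},
        {"case": "TC-002", "requirements": ["REQ-10", "REQ-12"], "result": "fail", "defects": ["DEF-7"]},
        {"case": "TC-003", "requirements": ["REQ-15"], "result": "not run", "defects": []},
    ]

    # Report test cases that did not pass, with the affected requirements
    # and the defects that still need to be tracked.
    for row in tracking:
        if row["result"] != "pass":
            print(row["case"], "-", row["result"],
                  "| requirements:", ", ".join(row["requirements"]),
                  "| defects:", ", ".join(row["defects"]) or "none")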

Some assumptions and requirements must be satisfied before the testing is even started. Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted. They may include special hardware, supporting software, test data to be provided, or restrictions on the use of the system during the testing.

...

The discussion of change management should define how to manage changes of the testing process that may be caused by feedback from the actual testing or by external factors. This includes the handling of the consequences of detected defects that affect further testing, but also dealing with requirements or elements that cannot be tested, and with parts of the testing process that may be recognized as useless or impractical.

Some elements of the approach are further detailed in subsequent sections.

...

This is the specification of the people and skills needed to deliver the plan. Depending on the profile of the personnel, it should also detail the needed training on the tested system, the elements of the test environment, and the test tools.

Responsibilities

This section specifies personal responsibilities for the approvals, processes, activities and deliverables described by the plan. It may also detail responsibilities for the development and modification of the elements of the test plan.

...

  • Performance, availability, stability, load capacity, efficiency, effectiveness, scalability, response time
  • Reliability, robustness, fault tolerance, resilience, recoverability
  • Privacy, security, safety
  • Configurability, supportability, operability, maintainability, modifiability, extensibility
  • Testability, compliance, certification
  • Usability, accessibility, localization, internationalization, documentation
  • Compatibility, interoperability, portability, deployability, reusability

In classical engineering and waterfall software engineering, requirements are inputs into the design stages of development. The requirements specification is an explicit set of requirements to be satisfied by the system, and is therefore usually produced quite early in its development. However, when iterative or agile methods of software development are used, the system requirements are incrementally developed in parallel with design and implementation.

The requirements specification is an important input into the testing process, as it lays out all requirements that were, hopefully, addressed during the system development, so the tests to be performed should trace back to them. Without access to the requirements produced during development, the requirements that are directly associated with testing should be formulated during its planning. If an agile methodology was used for development, these requirements can reflect the completed Scrum epics, user stories and product backlog features, or the "done" Kanban board user story and feature cards.

The individual requirements need to be mutually consistent, consistent with the external documentation, verifiable, and traceable both towards high-level requirements or stakeholder needs and towards the test cases.

The requirements are the basis for the development of test cases.
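
For instance, the traceability between requirements and test cases can be checked mechanically. The following sketch, with hypothetical identifiers, flags requirements not covered by any test case and test cases that trace to unknown requirements:

    # Hypothetical trace links between requirements and test cases.
    requirements = {"REQ-10", "REQ-12", "REQ-15"}
    traces = {
        "TC-001": {"REQ-10"},
        "TC-002": {"REQ-10", "REQ-12"},
    }

    # Requirements with no test case tracing back to them.
    covered = set().union(*traces.values())
    print("Requirements without a test case:", sorted(requirements - covered))

    # Test cases that trace to unknown or dropped requirements.
    for case, reqs in traces.items():
        unknown = reqs - requirements
        if unknown:
            print(case, "traces to unknown requirements:", sorted(unknown))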

Scenario testing is a higher-level approach to the testing of complex systems that is not based on test cases, but on working through realistic and complex stories reflecting user activities. These stories may consist of one or several user stories, which capture what a user does or needs to do as part of his or her job function, expressed through one or more sentences in everyday or domain language. The tester who follows the scenario must interpret the results and evaluate whether they can be considered a pass or a failure. This interpretation may require backing by domain experts. The term should be distinguished from test procedure and test case scenario.

...

  • Test case ID or short identifying name
  • Related requirement(s)
  • Requirement type(s)
  • Test level
  • Author
  • Test case description
  • Environment information
  • Test bed(s) to be used (if there are several)
  • Preconditions, prerequisite states or preexisting persistent data
  • Inputs (test data)
  • Execution scenario or test steps
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Relationship with other test cases
  • Whether the test can be or has been automated
  • Other remarks
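
As an illustration of how such a record might be kept in machine-readable form, the following Python sketch mirrors the fields listed above; the structure and all field names are hypothetical, not a prescribed format:

    from dataclasses import dataclass, field
    from typing import List, Optional

    # One possible record structure mirroring the test case fields above.
    @dataclass
    class TestCase:
        case_id: str                      # test case ID or short identifying name
        related_requirements: List[str]   # related requirement(s)
        requirement_types: List[str]
        test_level: str
        author: str
        description: str
        environment: str                  # environment information
        test_bed: Optional[str]           # test bed to be used, if there are several
        preconditions: List[str]          # prerequisite states or preexisting data
        inputs: List[str]                 # test data
        steps: List[str]                  # execution scenario or test steps
        expected_postconditions: List[str]
        expected_outputs: List[str]
        evaluation_criteria: str
        related_test_cases: List[str] = field(default_factory=list)
        automated: bool = False           # whether the test can be or has been automated
        remarks: str = ""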

...