
Testing Framework

Tailoring Approach to Testing

Testing is a cognitive, critical and often creative process.

It requires the wise application of standards, tools and best practices, adapted to the specific case, goal, available resources, and local culture. This document is intended to support the establishment of common ground and to serve as a customisable toolbox and supporting guide. It relates to the existing GN documents, testing standards, and software testing practices. It acknowledges that recent standardisation efforts try to build upon practices and experiences from different fields of engineering and industry. Do not reinvent the wheel; use the existing expertise, tools, practices, and processes. Actual testing, even with automated execution, is designed and performed by humans.

There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. Often, the inherent fluidity of the environment and goals and the complexity of the tested system are likely to preclude any exhaustive formalisation.

Avoid an over-administrative approach; there is no one-size-fits-all. Schematic and formal application of standards or procedure templates "by the book", this document included, risks petrifying, bureaucratising, and over-prescribing the validation and testing process, and impeding innovation. What matters is quality. However, running production-level services may require formalised processes, traceability or even auditability of validation, and particularly of check procedures.

Generic Testing Process Diagram

General Testing and Validation Governance

This document provides GN and, more specifically, GN4.1 SA4 with a generalised technical specification for testing and validation. It can be seen as a contribution to the general specification of the GN approach to validation and testing, on the basis of the overall testing policy (for implementation and management of services; any sources/refs?), and as a (tentative?) technical expression of the shared testing strategy.

The overall policy and strategy, as well as this technical document, must be further discussed and agreed by stakeholders and published.

It should be periodically reviewed and updated on the basis of the experience from the actual performed tests, organisational and environmental changes, and the evolution of overall policies and goals. It is (one of / input for) the general GN-level testing-related policy and strategy documents.

Evaluation Planning

It is necessary to become familiar with the context in order to define the scope, organise the development of the test plan, identify and estimate risks, establish the approach towards risks, design the test strategy, determine the staffing profile and schedule, draft the test plan, establish consensus and get approval for the plan, and publish the plan.

Traceability is the key characteristic of both Evaluation and Test Enactment related artifacts. Repeatability is required for tests related to core requirements and functionality. There is no need to repeat, or to maintain the repeatability of, tests that were already done, unless it is possible to fully automate them or they are required for compliance or periodic audits.

Test Enactment

Test design includes the specification of requirements with conditions for their fulfillment. The conditions may also be contextual and include repeatability or presentability. It is followed by the specification of individual test cases and the procedure for their execution. The subsequent steps are test environment set-up, test execution and related reporting. The feedback from environment set-up and execution may lead to an update of the original design. For example, constraints in the capabilities of the test environment may lead to an update of the test cases. Alternatively, the actual preparation of test cases and environment may require additional elaboration and refinement of the test cases. On the other hand, the significance or priority of some requirements may lead to a modification of the test environment in order to enable the execution of the corresponding test cases. Both will lead to a modification of the test procedure.

In parallel, the monitoring of test progress is performed on the basis of test status and related measures/metrics/reports; stakeholders receive reports on test status, and control/correction directives are issued and corrective/adaptive actions are taken on the test design, environment, execution, and perhaps even the evaluation level test plan.

The test enactment may also provide suggestions or inputs for updates of the general Testing and Validation Process and Strategy.

Test Documentation

The test documentation is based on common industry practices, with a strong influence from IEEE 829 and its sequel/update ISO/IEC/IEEE 29119-3, which even provides templates for both traditional (sequential and iterative) and agile approaches.

Metadata

All documents should start with the following common elements (metadata):

  • Organizational numeric identifier of the document, which may be omitted if the document name is used as the identifier
  • Descriptive and identifying name of the document
  • Name of testing (sub)project or use case, if not clear from document name
  • Document date
  • Version number
  • Author or authors and their contacts
  • Version history (table with version numbers, dates, contributors and descriptions of changes)

Version history, date(s) and authors are not needed for documents whose masters are kept up to date in an already highly versioned environment, such as a CMS or wiki. However, the main or corresponding author and the document date need to be visible in self-contained standalone snapshots that are published on the web or shared by email.
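As an illustration only, the metadata elements above can be captured in a small structure so that standalone snapshots can be checked automatically for the elements that must stay visible; the field names in this sketch are assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class VersionEntry:
    version: str
    date: str           # e.g. "2016-03-01"
    contributors: str
    changes: str

@dataclass
class DocumentMetadata:
    name: str                    # descriptive, identifying name
    identifier: str = ""         # may be omitted if the name is the identifier
    project: str = ""            # testing (sub)project or use case
    date: str = ""
    version: str = ""
    authors: list = field(default_factory=list)   # "Name <contact>" strings
    history: list = field(default_factory=list)   # VersionEntry items

    def missing_snapshot_fields(self):
        """Elements that must be visible in a standalone snapshot."""
        required = {"name": self.name, "date": self.date,
                    "authors": ", ".join(self.authors)}
        return [key for key, value in required.items() if not value]
```

A wiki master would leave `date`, `authors` and `history` to the platform, while a snapshot exported for the web or email would be required to pass the check.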

References/Supporting Documents/Literature

A list of documents with their identifiers, names, version numbers and hyperlinks to the individual documents should be provided.

At least a reference to the project plan should be provided in its subordinate documents. There is no need to reference other documents that represent the common background already listed in the project plan. In lower-level documents, only the references that are crucial for their understanding need to be provided, in order to point to the relevant external documentation that is not already referenced in higher-level documents.

Organisational/Strategic Level

Policy and strategy documents, which also mandate the overall approach to validation and testing on the basis of documents such as this one.

Evaluation Level Documentation

Evaluation Level Documentation comprises the elements of documentation related to the testing of a specific service, product or solution that are intended for communication between those responsible for test management processes (planning, monitoring and control, and completion assessment) and those who actually perform (design, implement, execute and document) the planned tests. On the basis of status and completion reports, control and correction directives are issued, which lead to corrective or adaptive actions and changes in test level documentation or even the test plan.

  • Test Plan
  • Test Status Report
  • Test Completion Report

Test Level Documentation

  • Test Design Specification
  • Test Case Specification
  • Test Data Requirements
  • Test Data Report
  • Test Environment Requirements
  • Test Environment Report
  • Detailed test results (the base for status and completion reports, derived from individual test execution logs and anomaly/incident reports)

Test Plan

The test plan provides an overview of what the system needs to meet in order to satisfy its intended use, user needs, or the scope of the intended testing effort, and how the actual validation is to be conducted. It details the operational aspects of executing the test strategy for the particular testing. It outlines the objectives, scope, approach, resources (including people, equipment, facilities and tools) and the amount of their allocation, methodologies and the schedule of the testing effort. It usually also describes the team composition, training needs, entry and exit criteria, risks, contingencies, test cycle details, quality expectations and the tracking and reporting process. The test plan may account for one or several test suites, but it does not detail individual test cases.

As a pivotal document, the test plan is an instrument of mutual understanding between the testing team, the development team and management. In case of major impediments or changes, it should be updated as needed and communicated to all concerned. Such updates may lead to further changes in the documents specifying the test design, test cases, and data and environment requirements. Given the multifaceted and overarching nature of the test plan, and in order to avoid unnecessary backpropagation of changes, it should not over-specify the implementation details given in subordinate test level documents.

The recommended structure of the test plan is as follows.

Metadata

The descriptive name should briefly, in three to six words, express what system is tested, the targeted aspects, features or components, and the level, type or purpose of the conducted testing.

It should be immediately followed by separate description of the testing level (unit, integration, system, acceptance) and/or type or subtype (functional, non-functional, alpha, beta, performance, load, stress, usability, security, conformance, compatibility, resilience, scalability, volume, regression…).

References/Supporting Documents

List of all documents that support the test plan. Documents that can be referenced include:

  • Project plan
  • Product plan
  • Related test plans
  • Requirements specifications
  • High level design document
  • Detailed design document
  • Development and testing standards
  • Methodology guidelines and examples
  • Organisational standards and guidelines
  • Source code, documentation, user guides, implementation records

Glossary

Key terms and acronyms used in the document, target domain and testing are described here. The glossary facilitates communication and helps in eliminating confusion.

Introduction

This is the executive summary part of the plan which summarizes its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects and collateral benefits or drawbacks. This section should be brief and to the point.

Test Items

A description, from the technical point of view, of the items to be tested, such as hardware, software and their combinations. Version numbers and configuration requirements may be included where needed, as well as the delivery schedule for critical items.

This section should be aligned with the level of the test plan, so it may itemize applications or functional areas, or systems, components, units, modules or builds.

For some items, their critical areas or associated risks may be highlighted, such as those related to origin, history, recent changes, novelty, complexity, known problems, documentation inadequacies, failures, complaints or change requests. There have probably been some general concerns and issues that triggered the testing process, such as a history of defects, poor performance, changes in the team and so on, that can be directly associated with some specific items. Other concerns that may need to be mentioned can be related to safety, importance and impact on users or clients, regulatory requirements etc. Or the reason may be a general misalignment of the system with the intended purpose, or vague, inadequately captured or understood requirements.

Sometimes, the items that should not be tested can also be listed.

Features to be Tested

The purpose of the section is to list individual features and their significance or risk from the user perspective. It is a listing of what is to be tested from the users’ viewpoint in terms of what the system does. The individual features may be operations, scenarios, and functionalities that are to be tested across all or within individual tested sub-systems. Features may be rated according to their importance or risk.

An additional list of features that will not be tested may be included, along with the reasons. For example, it may be explained that a feature will not be available or completely implemented at the time of testing. This may prevent possible misunderstandings and waste of effort in tracking defects that are not related to the plan.

Together with the list of test items, this section describes the scope of testing.

Approach

This section describes the strategy of the test plan that is appropriate for the level of the plan and in agreement with other related plans. It may extend the background and contextual information provided in the introduction.

Rules and processes that should be described include:

  • Detailed and prioritised objectives
  • Scope (if not fully defined by the lists of items and features)
  • Tools that will be used
  • Needs for specialized trainings
  • Metrics to be collected and granularity of their collection
  • How the results will be evaluated
  • Resources and assets to be used, such as people, hardware, software, and facilities
  • Amounts of different types of testing at all included levels
  • Other requirements and constraints
  • Overall organisation and timing of the internal processes, phases, activities and deliverables
  • Internal and external communication and organisation of the meetings
  • Configuration management for the tested system, used tools and overall test environment
  • Number and kind of different tested configurations
  • Change management

For example, the objectives may be to determine whether the delivered functionalities work in the usage or user scenarios or use cases, whether all functionalities required for the work are present, whether all predefined requirements are met, or even whether the requirements are adequate.

Besides testing tools that interact with the tested system, other tools may be needed, like those used to match and track scenarios, requirements, test cases, test results, defects and issues, and acceptance criteria. They may be manually maintained documents and tables, or tools specialized to support testing.

Any special requirements or constraints of the testing in terms of the testing process, environment, features or components need to be noted. They may include special hardware, supporting software, test data to be provided, or restrictions on the use of the system during the testing.

Testing can be organized as periodic or continuous until all pass criteria are met, with passing of identified issues to the development team. This requires defining the approach to modification of test items, in terms of regression testing.

The discussion of change management should define how to manage the changes of the testing process that may be caused by the feedback from the actual testing or due to external factors. This includes the handling of the consequences of detected defects that affect further testing, but also dealing with requirements or elements that cannot be tested as well as parts of testing process that may be recognized as useless or impractical.

Some elements of the approach are detailed in subsequent sections.

Item (and Phase) Criteria

This section describes the process and overall standards for evaluating the test results, not the detailed pass criteria for each individual item, feature or requirement.

The final decisions may be made by a dedicated evaluation team comprised of various stakeholders and representatives of testers and developers. The team evaluates and discusses the data from the testing process to make a pass/fail decision that may be based on the benefits, utility, detected problems, their impact and risks.

The exit criteria for the testing are also defined, and may be based on the achieved level of completion of tests, the number and severity of defects sufficient for aborting the testing, or code coverage. Some exit criteria may be bound to a specific critical functionality, component or test case. The evaluation team may also decide to end the testing on the basis of available functionality, detected or cleared defects, produced or updated documentation and reports, or the progress of testing.
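As a hedged sketch only, exit criteria of this kind can be expressed as a simple programmatic check; the thresholds and severity names used here are hypothetical examples, not values prescribed by this document:

```python
def exit_criteria_met(executed, total, open_defects,
                      min_completion=0.95, max_open=None):
    """Check example exit criteria for a test campaign.

    executed, total -- counts of executed vs. planned test cases
    open_defects    -- open defect counts per severity, e.g. {"critical": 0}
    max_open        -- tolerated open defects per severity (assumed limits)
    """
    if max_open is None:
        max_open = {"critical": 0, "major": 2}
    completion = executed / total if total else 0.0
    if completion < min_completion:
        return False  # required completion level not yet achieved
    # every per-severity limit on open defects must be respected
    return all(open_defects.get(severity, 0) <= limit
               for severity, limit in max_open.items())
```

A real plan would state its own completion level, severities and limits; code coverage, or criteria bound to specific critical test cases, could be added as further conditions.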

If testing is organized into phases or parallel or sequential activities, the transitions between them may be gated by corresponding exit/entry criteria.

If the testing runs out of time or resources before the completion or is aborted by stakeholders or the evaluation team, the conclusions about the quality of the system may be rather limited, and this may be an indication of the quality of the testing itself.

Suspension Criteria and Resumption Requirements

These criteria are used to determine in advance whether the testing should be prematurely suspended or ended before the plan has been completely executed, and when the testing can be resumed or started, that is, when the problems that caused the suspension have been resolved.

The reason for the suspension can be the failure of the test item (for example, a software build that is the subject of the test) to work properly due to critical defects which seriously prevent or limit testing progress, the accumulation of non-critical defects to the point where the continuation of testing has no value, the client's changes of the requirements, system or environment downtime, or the inability to provide some critical component or resource at the time indicated in the project schedule.

It may be required to perform a smoke test before the full resumption of the tests.

Issues noticed during testing are often a consequence of other previously noticed defects, so continuing testing after a certain number of defects have been identified in the affected functionality or item wastes resources, particularly if it is obvious that the system or the item cannot satisfy the pass criteria.

Deliverables

This section describes what is produced by the testing process. These deliverables may be the subject of quality assessment before their final approval or acceptance and, besides all the elements of test documentation that are described here, may also include the test data used during testing, test scripts, code for the execution of tests in testing frameworks, and outputs from test tools.

Activities/Tasks

This section outlines the testing activities and tasks, describes their dependencies, and estimates their duration and required resources.

Staffing and Training Needs

This is the specification of the people and skills needed to deliver the plan. It should also describe any training on the tested system, the elements of the test environment and the test tools that needs to be conducted.

Responsibilities

This section specifies personal responsibilities for the approvals, processes, activities and deliverables described by the plan. It may also detail responsibilities in the development and modification of the elements of the test plan.

Schedule

The schedule of phases should be detailed to the level which ensures the certainty attainable with the information available at the moment of planning. It should define the timing of individual testing phases, milestones, activities and deliverables, and be based on realistic and validated estimates, particularly as testing is often interwoven with development. Testing is the most likely victim of slippage in upstream activities, so it is a good idea to tie all test dates directly to the completion dates of their related developments.

Risks and Contingencies

This section, which complements "Suspension Criteria and Resumption Requirements", defines all risk events, their likelihood, their impact and the countermeasures to overcome them. Some risks may be testing-related manifestations of overall project risks. Some examples of risks are the lack or loss of personnel at the beginning of or during testing, unavailability or late delivery of required hardware, software, data or tools, delays in training, or changes to the original requirements or designs.

The risks can be described using the usual format of the risk register, with attributes such as:

  • Category
  • Risk name
  • Responsible (tracker)
  • Associated phase/process/activity
  • Likelihood (low/medium/high)
  • Impact (low/medium/high)
  • Mitigation strategy (avoid/reduce/accept/share or transfer)
  • Response action
  • Actionee
  • Response time
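To illustrate, a risk register with these attributes can be kept as simple records and ordered by an ordinal likelihood-impact score for review; the entries and the scoring scheme below are made-up examples, not risks identified by this document:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, impact):
    """Ordinal priority score (likelihood x impact), in the range 1..9."""
    return LEVELS[likelihood] * LEVELS[impact]

# Hypothetical register entries; a real one would carry all listed attributes.
register = [
    {"name": "loss of a key tester", "likelihood": "medium", "impact": "high",
     "mitigation": "reduce", "response": "cross-train a backup tester"},
    {"name": "late delivery of test hardware", "likelihood": "high", "impact": "high",
     "mitigation": "avoid", "response": "order spare units early"},
    {"name": "delay in tool training", "likelihood": "low", "impact": "medium",
     "mitigation": "accept", "response": "reschedule the training slot"},
]

# Review and track risks starting from the highest-priority ones.
register.sort(key=lambda r: risk_score(r["likelihood"], r["impact"]),
              reverse=True)
```

A product of two three-level scales is deliberately coarse; it is only meant to order the register for discussion, not to replace the evaluation of each risk.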

Although some contingency events may actually be opportunities, it is, due to the limited scope of testing within the wider project or service delivery context, quite unlikely that opportunities related to the testing process itself will occur. However, the outcome of testing or some of its results may offer opportunities related to the subject of testing, which may be enhanced, exploited or shared.

The approach for dealing with schedule slippages should be described in responses to associated risks. Possible actions include simplification or reduction of non-crucial activities, relaxation of the scope or coverage, elimination of some test cases, engagement of additional resources, or extension of testing duration.

Test Status Report

The Test Status Report is a one-time interim summary of the results of the execution of testing activities. It may describe the status of all testing activities or be limited to a single test suite. This report, as well as the Test Completion (Summary) Report, distils the information obtained during test execution and recorded in test logs and incident reports. It needs to be highly informative and concise, and should not elaborate minor operational details.

Depending on the approach defined in the test plan, it may be produced periodically, on completion of milestones or phases, or on demand. If periodic, it may be a base for continuous tracking through progress charts. Individual status reports are also sources for the test completion report.

The top of the report should contain the standard metadata in a condensed form, but without the version history, since the document itself is considered to be a quick snapshot.

Summary

The document can start with totals of passed, failed and pending test cases, scenarios or tests, and identified defects (if individually tracked). It may also show the coverage of test cases or code, consumption of resources and other established progress metrics, in numbers and, possibly, charts. Close to this summary, a comprehensive assessment should be provided.

The details of the progress are expressed in a form of a table describing the outcome of execution of individual test cases or scenarios, cumulatively since the start of testing. Typical columns to be used are:

  • Test case ID
  • Test case name
  • Last execution date
  • Last execution status (not run, failed, partial, passed) or counts of failed and passed test executions
  • Number of associated defects
  • Brief comment

If there are more than a dozen items, consider grouping and subtotaling the table rows according to the test suites, test items or scenarios (as listed in the project plan), test types or areas. If a single report addresses several suites, each suite should ideally have a separate test status report, or at least its own totals and details tables.
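As a small illustrative sketch (the suite names, test case IDs and statuses are invented), the per-suite subtotals and the overall totals of such a details table can be derived directly from the execution records:

```python
from collections import Counter

# Each record: (test suite, test case ID, last execution status)
executions = [
    ("login",   "TC-01", "passed"),
    ("login",   "TC-02", "failed"),
    ("reports", "TC-10", "passed"),
    ("reports", "TC-11", "not run"),
    ("reports", "TC-12", "passed"),
]

def totals_per_suite(records):
    """Subtotal execution statuses per suite, plus an overall total."""
    per_suite = {}
    for suite, _case_id, status in records:
        per_suite.setdefault(suite, Counter())[status] += 1
    overall = sum(per_suite.values(), Counter())  # sum of all suite counters
    per_suite["TOTAL"] = overall
    return per_suite
```

Periodic reports built this way lend themselves to continuous tracking through progress charts, as the same counters can be plotted over time.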

Observations and Highlights

If needed, the document provides evaluations and recommendations based on the interim results and incidents encountered during the testing. It may also signal the red flags that deserve recipients’ attention. It should report the resolution of issues that were highlighted in the previous status report, but there is no need to report the issues that were already reported as solved.

Activities

The summary of activities conducted since the previous status report is optional. If present, it should exist in all status reports.

Test Completion Report

The test completion or summary report is a management report that brings together the key information uncovered by the accomplished tests. It recapitulates the results of the testing activities and indicates whether the tested system is fit for purpose, according to whether it has met the acceptance criteria defined in the project plan. This is the key document in deciding whether the quality of the system and of the performed testing are sufficient to allow the decision or step that was the reason for the testing effort. Although the completion report provides a working assessment of the success or failure of the system under test, the final decision is made by the evaluation team.

This document reports all pertinent information about the testing, including an assessment of how well the testing has been done, the number of incidents raised, and outstanding events. It must describe all deviations from the original test plan, the justifications for them, and their impacts. The provided data should be sufficient for assessing the quality of the testing effort.

The provided narrative should be more elaborate than in the test status reports.

Summary

It recapitulates the evaluation of the test items. Although the test completion report can reflect the structure of the test status report, the details that were only temporarily significant can be omitted from it. The table rows or subsections should correspond to the test items or scenarios listed in the test design specification. The summary should indicate the versions of the items that were tested, as well as the testing environment used. For each item, it should be briefly explained what was tested and what the outcome was.

Test Assessment

This is a comprehensive assessment of the conducted testing. It should also point at the areas that may require further investigation and testing.

Variances

All variations and discrepancies from the original test plan should be noted here. This section can also provide an assessment of differences between the test environment and the operational environment and their effect on the test results.

Test Results

This is a comprehensive interpretation of the test results. It includes a description of the issues or defects discovered during the testing, as well as the unexpected results and problems that occurred. For resolved incidents, the resolutions should be summarised. For unresolved test incidents, an approach to their resolution should be proposed.

Evaluation and Recommendations

Propose the decisions regarding the tested system and suggest further actions on the base of the acceptance criteria, quality of the test process, test results and outcomes for individual test items. Provide recommendations for improvement of the system or testing that result from the reported testing.

Activities

The summary of activities conducted during testing should record what testing was done and how long it took. In order to facilitate the improvement of future test planning, it should be in a form that does not require a later thorough investigation of the plans and records of the conducted testing.

Test Design Specification

Test design specification addresses the test objectives by refining the features to be tested, testing approach, test cases, procedures and pass criteria. This document also establishes groups of related test cases.

Features to be Tested

This section describes the features or requirements, and combinations of features, that are the subject of testing and, if some features are closely related to test items, expresses these associations. Each feature is elaborated through its characteristics and attributes, with references to the original documentation where the feature is detailed. These references lead to the associated requirements or feature specifications in the system/item requirement specification or design description (in the original development documentation). If the test design specification covers several levels or types of testing, the associated levels or types are noted for each individual requirement, feature or test item.

The features may be grouped into a few key applications, use cases or scenarios. If such grouping is made, a single feature or requirement may be present in several groups.

The use case is a high-level description of a specific system usage, or a set of system behaviours or functionalities. It should not be mistaken for a UML use case. It implies, from the end-user perspective, a set of tests that need to be conducted in order to consider the system as operational for a particular use. It therefore usually describes the system usage, features and related requirements that are necessary for the utilization of the system by end users on a regular basis.

The requirement is a description of a necessary capability, feature, functionality, characteristic or constraint that the system must meet or be able to perform. It is a statement that identifies a necessary quality of a system for it to have value and utility to a user, customer, organization, or other stakeholder. It is necessary for the fulfillment of one or several use cases or usage scenarios (in scenario testing).

Higher-level requirements include business, architectural and stakeholder/user requirements. There are also some transitional requirements that are only relevant during the implementation of the system. On the basis of the identified high-level features or requirements, the detailed requirements are defined. Some of them are the consequence of the system's functions, services and operational constraints, while others pertain to the application domain.

A functional requirement defines a specific behavior or function (what the system does; not how, in terms of implementation, quality, or performance).

A non-functional requirement specifies the quality criteria used to assess the characteristics or properties the system should possess (what it is). Typical non-functional requirements include:

  • Performance, availability, stability, load capacity, efficiency, effectiveness, scalability, response time
  • Reliability, robustness, fault tolerance, resilience, recoverability;
  • Privacy, security, safety;
  • Configurability, supportability, operability, maintainability, modifiability, extensibility;
  • Testability, compliance, certification;
  • Usability, accessibility, localization, internationalization, documentation;
  • Compatibility, interoperability, portability, deployability, reusability.

The requirements specification is an explicit set of requirements to be satisfied by the system, and is therefore usually produced quite early in its development. Such a specification may be a direct input for the testing process, as it lays out all the requirements that were, hopefully, addressed during the system development. Alternatively, the requirements that are directly associated with testing can be formulated without prior access to the requirements produced during development.

The individual requirements need to be mutually consistent, consistent with the external documentation, verifiable, and traceable towards high-level requirements or stakeholder needs, but also towards the test cases.

The requirements are the base for development of test cases.

Scenario testing is a higher-level approach to the testing of complex systems that is not based on test cases, but on working through realistic and complex stories reflecting user activities. These stories may consist of one or several user stories, which capture what a user does or needs to do as part of his or her job function, expressed through one or more sentences in everyday or domain language. The tester who follows the scenario must interpret the results and evaluate whether they can be considered a pass or a failure. This interpretation may require backing by domain experts. This term should be distinguished from test procedure and test case scenario.

Approach Refinements

This section refines the approach described in the test plan. The specific test techniques to be used are selected and justified. The method for the inspection and analysis of test results is identified (for example, visual inspection of behaviours and outputs, use of instruments, comparator or pattern matching programs, or test automation tools that can capture and process outputs).

Details of the included test levels, and of how the individual features are addressed at those levels, are also provided.

In order to avoid redundancy, common information related to several test cases or procedures is provided here. It may include details of the test environment or environmental needs, system setup, recovery or reset, and dependencies between the test cases.

If the test design includes some deviations from the test plan, they are described here.

Test Cases

Individual test cases are identified here. After the identifier, a brief description of the test case and associated test procedure is provided. Associated test levels or other important common attributes may also be recorded.

A single test case may be included in several test suites or related to a requirement associated with several use cases. If different test levels have separate test design specifications, a particular test case may appear in several of them.

The selection of test cases may be the result of an analysis that provides a rationale for a particular battery of test cases. For example, the same feature may be tested with distinct test cases that cover valid and invalid inputs and the subsequent successful or negative outcomes. This distinction is made in terms of system responses, not testing outcomes, as the reporting of an error may actually indicate that a test has passed.
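
The distinction between system responses and testing outcomes can be sketched as follows. The validator below is purely hypothetical; the point is that for the invalid input, the test passes precisely because the system reports an error.

```python
def validate_port(value):
    """Hypothetical system feature: accept TCP port numbers 1-65535."""
    port = int(value)              # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def run_valid_input_case():
    # Positive test case: valid input, successful system response expected.
    return validate_port("8080") == 8080

def run_invalid_input_case():
    # Negative test case: an error is the expected system response,
    # so raising ValueError means the test case PASSES.
    try:
        validate_port("99999")
    except ValueError:
        return True                # error correctly reported -> pass
    return False                   # no error -> fail

print(run_valid_input_case(), run_invalid_input_case())
```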

The relationship between the requirements/features and test cases is summarised in the Traceability Matrix, which is usually placed in a separate document that is updated as the requirements and test design specification evolve and as individual test cases are refined. It enables both forward and backward traceability: it simplifies identifying which test cases need to be modified when a requirement changes, and vice versa. It is used to verify that all requirements have corresponding test cases, and to identify for which requirement(s) a particular test case has been written. The Traceability Matrix is a table in which requirements and test cases are paired, thus ensuring their mutual association and coverage. Since there are usually more test cases than requirements, the requirements are placed in columns and the test cases in rows.
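
As a minimal sketch (with made-up requirement and test case identifiers), the matrix can be kept as a mapping from requirements to covering test cases, which directly supports both traceability directions described above:

```python
# Hypothetical identifiers; each requirement maps to the test cases covering it.
matrix = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-02"],       # TC-02 covers two requirements
    "REQ-03": [],              # coverage gap: no test case yet
}

# Forward traceability: requirements without any corresponding test case.
uncovered = [req for req, cases in matrix.items() if not cases]

# Backward traceability: the requirement(s) a given test case was written for.
def requirements_for(case_id):
    return [req for req, cases in matrix.items() if case_id in cases]

print(uncovered)                    # ['REQ-03']
print(requirements_for("TC-02"))    # ['REQ-01', 'REQ-02']
```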

Feature Pass/Fail Criteria

This section specifies the criteria used to determine whether a feature or a group of features has passed or failed, on the basis of the results of the individual test cases.

Test Case Specification

The Test Case is a specification of the criteria that need to be met in order to consider some system feature, set of features or Use Case as working. It is the smallest unit of testing and is sometimes colloquially referred to as a Test. It should include a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. A single test case is sometimes associated with several requirements. It may be partially or fully automated.

A formal written test case is characterised by known preconditions, inputs, expected outputs and postconditions, which are worked out before execution. The test case may comprise a single step or several steps that are necessary to assess the tested functionality.
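
These characteristics can be captured in a simple structured record. The sketch below uses an invented login example; the field names mirror the elements listed above rather than any particular tool's schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    description: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)          # (input, expected output) pairs
    postconditions: list = field(default_factory=list)

# Hypothetical test case worked out before execution.
tc = TestCase(
    identifier="TC-01",
    description="Login with valid credentials",
    preconditions=["user account 'alice' exists"],
    steps=[("submit alice / correct password", "dashboard page shown")],
    postconditions=["session created for 'alice'"],
)
print(tc.identifier, len(tc.steps))
```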

For a system without preexisting formal requirements, the Test Cases can be written based on system’s desired or usual operation, or operation of similar systems. In this case, they may be a result of decomposition of a high-level scenario, which is a story or setting description used to explain the system and its operation to the tester. Alternatively, Test Cases may be omitted altogether and replaced with scenario testing, which substitutes a sequence or group of Test Cases, as they may be hard to precisely formulate and maintain with the evolution of the system.

The Test Suite is a collection of Test Cases that are related to the same testing work in terms of goals and associated testing process. There may be several Test Suites for a particular system, each one grouping together many Test Cases based on corresponding goal and functionality or shared preconditions, system configuration, associated common actions, execution sequence of actions or Test Cases, or reporting requirements.

An individual Test Suite may validate whether the system complies with the desired set of behaviors or fulfills the envisioned purpose or associated Use Cases, or be associated with different phases of system lifecycle, such as identification of regressions, build verification, or validation of individual components.

A Test Case can be included in several Test Suites. If Test Case descriptions are organised along Test Suites, the overlapping cases should be documented within their primary Test Suites and referenced elsewhere.
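
The overlap between suites can be illustrated with plain sets of test case identifiers (the suite names and IDs below are invented for illustration):

```python
# Test cases identified by ID, grouped into overlapping suites.
smoke_suite      = {"TC-01", "TC-02"}
regression_suite = {"TC-01", "TC-02", "TC-05", "TC-09"}

# TC-01 and TC-02 belong to both suites; in the documentation they would
# be described in their primary suite and only referenced from the other.
shared = smoke_suite & regression_suite
print(sorted(shared))   # ['TC-01', 'TC-02']
```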

A Test Procedure defines detailed instructions and the sequence of steps to be followed while executing a group of Test Cases (such as a Test Suite) or a single Test Case. It can give information on how to create the needed test setup, perform the execution and evaluate the results for a given test case. Having a formalised test procedure is very helpful when a diverse set of people performs the same tests at different times and in different situations, as this supports consistency of test execution and result evaluation. A Test Procedure combines Test Cases on the basis of some logical rationale, such as executing an end-to-end situation. The order in which the Test Cases are to be run is fixed.
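
A minimal sketch of such a procedure, with placeholder setup, teardown and case functions (all names are invented; each case function returns True for pass, False for fail):

```python
def setup_environment():
    # Placeholder: deploy the system under test, load test data, etc.
    pass

def reset_environment():
    # Placeholder: restore the initial state after the run.
    pass

def tc_login():  return True     # placeholder test case results
def tc_upload(): return True
def tc_logout(): return True

def run_procedure(cases):
    """Execute the cases in their fixed order, recording each result."""
    setup_environment()
    results = {}
    try:
        for case in cases:       # the order is fixed: an end-to-end flow
            results[case.__name__] = "pass" if case() else "fail"
    finally:
        reset_environment()      # reset even if a case raises
    return results

print(run_procedure([tc_login, tc_upload, tc_logout]))
```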

The Test Script is a sequence of instructions that need to be carried out on the tested system in order to execute a test case, or to test a part of the system's functionality. These instructions may be given in a form suitable for manual testing or, in automated testing, as short programs written in a scripting or general-purpose programming language. For software systems and applications, there are test tools and frameworks that allow the specification and continuous or repeatable execution of prepared automated tests.
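
As one concrete example of such a framework, Python's standard unittest module lets a test script be written and re-run repeatedly; the `add` function below is only a stand-in for real system functionality:

```python
import unittest

def add(a, b):
    """Stand-in for the system functionality under test."""
    return a + b

class AddFeatureTest(unittest.TestCase):
    def test_valid_inputs(self):
        self.assertEqual(add(2, 3), 5)

    def test_invalid_input_reports_error(self):
        # The error response is the expected behaviour for invalid input.
        with self.assertRaises(TypeError):
            add(2, None)

# Load and run the test case class; repeatable on every build.
suite = unittest.TestLoader().loadTestsFromTestCase(AddFeatureTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```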


Test Data Requirements

Test Data Report

Test Environment Requirements

Test Environment or Test Bed is an execution environment configured for testing. It may consist of specific hardware, an operating system, a network topology, the configuration of the product under test, other application or system software, test tools, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Environment Report

Detailed test results

(Basis for status and completion reports, built on individual test execution logs and anomaly/incident reports.)

(Test records - for each test, an unambiguous record of the identities and versions of the component or system under test, the test specification, and the actual outcome.)


The Test Log is the record of Test Case executions and the obtained results, in the order in which they were run.
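
A minimal sketch of such a log, as an append-only list that preserves the running order (case IDs and the record layout are illustrative, not prescribed by any standard):

```python
from datetime import datetime, timezone

test_log = []

def log_execution(case_id, outcome):
    # Append in execution order; the log preserves the running sequence.
    test_log.append({
        "case": case_id,
        "outcome": outcome,
        "time": datetime.now(timezone.utc).isoformat(),
    })

log_execution("TC-01", "pass")
log_execution("TC-02", "fail")
print([entry["case"] for entry in test_log])   # ['TC-01', 'TC-02']
```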

