...

Streamlining of the testing process for software products has been the subject of significant standardisation effort in the last two decades. As software becomes an intrinsic part of almost every system, there has been an effort to align its validation with the needs, practices and experiences of many industries and different fields of engineering. However, this field is still rapidly evolving, and there is currently strong opposition to its formalisation. There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. Often, the inherent fluidity of the environment and goals, and the complexity of the tested systems, are likely to preclude any exhaustive codification.

...

Test management primarily deals with the evaluation level documentation. These artifacts relate to the testing of a specific service, product or solution and are intended for communication between those responsible for test management processes (planning, monitoring and control, and completion assessment) and those who actually design, implement, execute and document the planned tests. The evaluation level documentation consists of:

...

The enactment consists of test design, and the preparation and execution of all planned tests. Its processes produce and update the test level documentation.

Test design includes the specification of requirements with conditions for their fulfilment. The conditions may also be contextual and include repeatability or presentability. It is followed by the specification of individual test cases and the procedure for their execution. The subsequent steps are test environment set-up, test execution and related reporting. The feedback from environment set-up and execution may lead to an update of the original design. For example, constraints in the capabilities of the test environment may lead to an update of the test cases. Alternatively, the actual preparation of test cases and the environment may require additional elaboration and refinement of the test cases. On the other hand, the significance or priority of some requirements may lead to modification of the test environment in order to enable execution of the corresponding test cases. Both may lead to modification of the test procedure.
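
As an illustration of how a requirement, its condition for fulfilment and the derived test cases fit together, the following is a minimal sketch assuming pytest; the unit under test, the requirement identifier and the VAT figures are invented for the example.

```python
import pytest

# Hypothetical unit under test, included so the sketch is self-contained.
def gross_total(net: float, vat_rate: float = 0.20) -> float:
    return round(net * (1.0 + vat_rate), 2)

# Illustrative requirement "REQ-042: order totals must include 20% VAT".
# Its condition for fulfilment is expressed as parametrised test cases.
@pytest.mark.parametrize("net, expected", [
    (100.00, 120.00),  # nominal value
    (0.00, 0.00),      # boundary condition: empty order
])
def test_gross_total_fulfils_req_042(net, expected):
    """Condition for fulfilment: gross = net + 20% VAT, rounded to cents."""
    assert gross_total(net) == expected
```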

...

Version histories, date(s) and authors are not needed if the master documents are kept up to date in an already highly versioned environment, such as a CMS or wiki. However, the main or corresponding author and the document date need to be visible in self-contained standalone snapshots that are published as such on the web or shared by email.

...

It should be immediately followed by a separate description of the testing level (unit, integration, system, and acceptance) and/or type or subtype (functional, non-functional, alpha, beta, performance, load, stress, usability, security, conformance, compatibility, resilience, scalability, volume, regression…).

...

This is the executive summary part of the plan, which summarises its purpose, level, scope, effort, costs, timing, relation to other activities and deadlines, expected effects and collateral benefits or drawbacks. This section should be brief and to the point.

Features to Be Tested

The purpose of this section is to list the individual features and their significance or risk from the user perspective: what is to be tested, from the users' viewpoint, in terms of what the system does. The individual features may be operations, scenarios, and functionalities to be tested across all tested sub-systems or within individual ones. Features may be rated according to their importance or risk.
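
One hedged sketch of such a listing, with invented feature names, sub-systems and a simple three-level risk rating, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str        # what the system does, in the users' terms
    subsystem: str   # where the feature is exercised
    risk: str        # illustrative scale: "high" / "medium" / "low"

FEATURES_TO_BE_TESTED = [
    Feature("User sign-in", "auth", "high"),
    Feature("Order checkout", "billing", "high"),
    Feature("Monthly usage report", "reporting", "low"),
]

# Rank features so the highest-risk ones are considered first.
RISK_RANK = {"high": 0, "medium": 1, "low": 2}
for f in sorted(FEATURES_TO_BE_TESTED, key=lambda f: RISK_RANK[f.risk]):
    print(f"{f.risk:>6}  {f.subsystem:<10}  {f.name}")
```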

...

This section describes what is produced by the testing process. These deliverables may be the subject of quality assessment before their final approval or acceptance and, besides all the elements of test documentation described here, may also include the test data used during testing, test scripts, code for the execution of tests in testing frameworks, and outputs from test tools.

...

A requirement is a description of a necessary capability, feature, functionality, characteristic or constraint that the system must meet or be able to perform. It is a statement that identifies a necessary quality of a system for it to have value and utility to a user, customer, organisation, or other stakeholder. It is necessary for the fulfilment of one or several use cases or usage scenarios (in scenario testing).

The high-level requirements include business, architectural and stakeholder/user requirements. There are also some transitional requirements that are relevant only during the implementation of the system. On the basis of the identified high-level features or requirements, the detailed requirements are defined. Some of them are a consequence of the system's functions, services and operational constraints, while others pertain to the application domain.

...

First, the data requirements implied by the test plan and test design are put together. They include requirements related to the type, range, representativeness, quality, amount, validity, consistency and coherency of the test data. Additional concerns may relate to the sharing of test data with the development team or even end users.
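
Some of these requirements can be made machine-checkable. The following is a minimal sketch under assumed requirements; the field names, value range and minimum row count are invented for illustration.

```python
import csv

MIN_ROWS = 100                                  # amount of test data
ALLOWED_STATUSES = {"new", "paid", "shipped"}   # validity constraint

def check_test_data(path: str) -> list[str]:
    """Return a list of violations of the assumed data requirements."""
    problems = []
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if len(rows) < MIN_ROWS:
        problems.append(f"only {len(rows)} rows, need at least {MIN_ROWS}")
    for i, row in enumerate(rows):
        try:
            amount_ok = 0 <= float(row["amount"]) <= 10_000   # range check
        except (KeyError, ValueError):
            amount_ok = False
        if not amount_ok:
            problems.append(f"row {i}: amount missing or out of range")
        if row.get("status") not in ALLOWED_STATUSES:         # validity check
            problems.append(f"row {i}: unknown status {row.get('status')!r}")
    return problems
```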

...

The report is updated after the test environment is set up. A smoke test can be performed. The limitations of the test environment are identified, together with an explanation of how to mitigate them during test execution and in the interpretation and evaluation of the results. The maintenance plans, responsibilities and arrangements are established. If envisioned in the test plan, this includes upgrades of the current versions of test items, upgrading of the used resources and network topology, and updating of the corresponding configurations. The initial design is updated to reflect the test environment as it was built. The decisions and knowledge produced during the implementation of the test bed are captured.
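
A smoke test of a freshly set-up environment can be as simple as probing each service, as in this sketch; the service names, endpoint URLs and health-check convention are assumptions, not part of the report.

```python
import urllib.request

SERVICES = {
    "web frontend": "http://test-env.example.local/health",
    "api":          "http://test-env.example.local/api/health",
}

def smoke_test() -> bool:
    """Probe each service once; report and return overall availability."""
    ok = True
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                good = resp.status == 200
                print(f"{name}: HTTP {resp.status}")
        except OSError as exc:          # covers URLError and HTTPError
            good = False
            print(f"{name}: unreachable ({exc})")
        ok = ok and good
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if smoke_test() else 1)
```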

Test Execution Log

The test execution log is the record of test case executions and the obtained results, in the order of their running. Along with the test incident reports, the test execution log is a base for the test status and completion reports. These documents allow direct checking of the progress of the testing and provide valuable information for finding out what caused an incident.
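
One possible shape for such a log is an append-only record with a link back to incident reports, as in this sketch; the field names and identifier formats are invented for the example.

```python
import csv
import datetime
import os

LOG_FIELDS = ["timestamp", "test_case_id", "tester", "result", "incident_id"]

def log_execution(path, test_case_id, tester, result, incident_id=""):
    """Append one execution record; incident_id links to an incident report."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "test_case_id": test_case_id,
            "tester": tester,
            "result": result,          # e.g. "pass", "fail", "blocked"
            "incident_id": incident_id,
        })

log_execution("execution_log.csv", "TC-017", "jdoe", "fail", "INC-003")
```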

...

If there are several typical versions of the environment, presets or inputs, they may be described in the test case or elsewhere in the test execution log and referenced in the test case executions that use them. This reduces clutter. However, any particular variances in the configuration, input data, and results need to be documented. The actual outputs may be separately captured in the detailed test results, especially if an in-depth discussion of the alignment of actual and expected outcomes is needed.
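
The referencing idea can be sketched as follows, with each execution record naming a shared preset and documenting only its own variances; the preset identifiers and fields are illustrative.

```python
# Shared environment presets, described once and referenced by identifier.
PRESETS = {
    "ENV-A": {"db": "postgres-15", "locale": "en_GB", "browser": "firefox"},
    "ENV-B": {"db": "postgres-15", "locale": "de_DE", "browser": "chromium"},
}

executions = [
    # Each record names a preset; only variances are written out in full.
    {"test_case": "TC-017", "preset": "ENV-A", "variance": None},
    {"test_case": "TC-018", "preset": "ENV-A", "variance": {"locale": "fr_FR"}},
]

for e in executions:
    # Effective configuration = preset overridden by the documented variance.
    config = {**PRESETS[e["preset"]], **(e["variance"] or {})}
    print(e["test_case"], config)
```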

...