...

Test status report is a one-time interim summary of the results of the execution of tests. It may describe the status of all testing activities or be limited to a single test suite. This report, as well as the test summary report, distils the information obtained during test execution and recorded in test logs and incident reports. It must be highly informative and concise and should not elaborate on minor operational details.

...

The test case refines the criteria that need to be met in order to consider some system feature, a set of features or an entire use case as working. It is the smallest unit of testing and is sometimes colloquially referred to as a test. A single test case may be included in several test suites or related to a requirement associated with several use cases. If different test levels have separate test design specifications, a single test case may be present in several design specifications.

The selection of the test cases may be the result of an analysis that provides a rationale for a particular battery of test cases associated with a single requirement. For example, the same feature may be tested with distinct test cases that cover valid and invalid inputs and the corresponding successful or negative outcomes. This distinction is made in terms of system responses, not testing outcomes, as reporting of an error may actually indicate passing of a test. The logic behind the selection of test cases should be described here.
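As a hedged illustration of this selection logic, the pytest-style sketch below pairs a valid-input case and an invalid-input case for the same hypothetical withdrawal feature (the `withdraw` function and its limits are assumptions made for this example, not part of any system described here):

```python
import pytest

# Hypothetical feature under test: a withdrawal that must not exceed the balance.
def withdraw(balance: float, amount: float) -> float:
    """Return the new balance, or raise ValueError for an invalid request."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

# Valid input, successful outcome expected.
def test_withdraw_valid_amount():
    assert withdraw(balance=100.0, amount=40.0) == 60.0

# Invalid input: the test passes when the system correctly reports an error.
def test_withdraw_overdraft_rejected():
    with pytest.raises(ValueError):
        withdraw(balance=100.0, amount=140.0)
```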

A feature from the test design specification may be tested in more than one test case, and a test case may test more than one feature. The test cases should cover all features, that is, each feature should be tested at least once. The relationship between the requirements/features and test cases is summarised in the Requirements/Test Cases Traceability Matrix, which is usually placed in a separate document that is updated as the requirements and the test design specification document evolve, but also as individual test cases are refined. It enables both forward and backward traceability, as it simplifies identifying which test cases need to be modified after a change of requirements, and vice versa. It is used to verify that all requirements have corresponding test cases, and to identify the requirement(s) for which a particular test case has been written.

The Requirements/Test Cases Traceability Matrix is a table in which requirements and test cases are paired, thus ensuring their mutual association and coverage. Since there are usually more test cases than requirements, the requirements are placed in columns and the test cases in rows. The requirements are identified by their IDs or short names and can be grouped by their types, while the test cases can be grouped into sections according to levels: unit, integration, system and acceptance.
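A minimal sketch of how such a matrix can be kept as structured data and checked automatically in both directions (the requirement and test case IDs are purely illustrative):

```python
# Illustrative requirement and test case IDs.
traceability = {
    "REQ-001": ["TC-UNIT-01", "TC-SYS-03"],
    "REQ-002": ["TC-INT-02"],
    "REQ-003": [],  # not yet covered by any test case
}

# Forward traceability: requirements without a corresponding test case.
uncovered = [req for req, cases in traceability.items() if not cases]

# Backward traceability: the requirement(s) a given test case was written for.
def requirements_for(case_id: str) -> list[str]:
    return [req for req, cases in traceability.items() if case_id in cases]

print("Uncovered requirements:", uncovered)
print("TC-UNIT-01 traces to:", requirements_for("TC-UNIT-01"))
```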

...

The test case specifications are produced after the test design specification has been prepared. The test case specification is a detailed elaboration of a test case identified in the test design specification and includes a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. A single test case is sometimes associated with several requirements. It may be partially or fully automated.

...

For a system without preexisting formal requirements, the test cases can be written based on the system's desired or usual operation, or the operation of similar systems. In this case, they may be the result of decomposition of a high-level scenario, which provides a story or description of the setting that explains the system and its operation to the tester. Alternatively, test cases may be omitted altogether and replaced with scenario testing, which substitutes a sequence or group of test cases, as they may be hard to precisely formulate and maintain with the evolution of the system.

...

  • Test case ID or short identifying name
  • Related requirement(s)
  • Requirement type(s)
  • Test level
  • Author
  • Test case description
  • Test bed(s) to be used (if there are several)
  • Environment information
  • Preconditions, prerequisites, states or initial persistent data
  • Inputs (test data)
  • Execution procedure or scenario
  • Expected postconditions or system states
  • Expected outputs
  • Evaluation parameters/criteria
  • Relationship with other use cases
  • Whether the test can be or has been automated
  • Other remarks
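One possible way to capture these fields in a machine-readable form is a simple record such as the following Python sketch; the field set and values are illustrative assumptions rather than a prescribed template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    """Structured sketch of the fields listed above; values below are illustrative."""
    case_id: str
    related_requirements: list[str]
    level: str                      # unit, integration, system or acceptance
    author: str
    description: str
    test_bed: str = ""
    preconditions: list[str] = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    procedure: list[str] = field(default_factory=list)
    expected_outputs: dict = field(default_factory=dict)
    automated: bool = False
    remarks: str = ""

example = TestCaseSpec(
    case_id="TC-SYS-03",
    related_requirements=["REQ-001"],
    level="system",
    author="J. Doe",
    description="A withdrawal above the balance is rejected with an error message.",
    inputs={"balance": 100.0, "amount": 140.0},
    expected_outputs={"error": "invalid withdrawal amount"},
)
```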

...

The test suite is a collection of test cases that are related to the same testing work in terms of goals and the associated testing process. There may be several test suites for a particular system, each one grouping together many test cases based on a corresponding goal and functionality, or on shared preconditions, system configuration, associated common actions, execution sequence of actions or test cases, or reporting requirements. An individual test suite may validate whether the system complies with the desired set of behaviours or fulfils the envisioned purpose or associated use cases, or it may be associated with different phases of the system lifecycle (such as identification of regressions, build verification, or validation of individual components). A test case can be included in several test suites. If test case descriptions are organised along test suites, the overlapping cases should be documented within their primary test suites and referenced elsewhere.
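In code-level testing, test suites are often expressed through the grouping mechanism of the test framework; the pytest-style sketch below uses markers as suites (the marker names and test functions are assumptions for illustration):

```python
import pytest

# Markers play the role of test suites; they should also be registered in pytest.ini.
# The same test case can carry several markers and thus belong to several suites.
@pytest.mark.smoke
@pytest.mark.regression
def test_login_with_valid_credentials():
    ...

@pytest.mark.regression
def test_login_rejects_unknown_user():
    ...

# A single suite can then be run with, for example:  pytest -m smoke
```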

The test procedure defines detailed instructions and the sequence of steps to be followed while executing a group of test cases (such as a test suite) or a single test case. It can give information on how to create the needed test setup, perform the execution, evaluate the results and restore the environment. The test procedures are developed on the basis of the test design specification, in parallel with or as parts of the test case specifications. Having a formalised detailed test procedure is very helpful when a diverse set of people performs the same tests at different times and in different situations, as this supports consistency of test execution and result evaluation. A test procedure can combine test cases for a particular logical reason, such as executing an end-to-end scenario, in which case the order in which the test cases are run is also fixed.
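A test procedure for an end-to-end run can be sketched as an ordered list of steps that a harness executes strictly in sequence; the step identifiers below are hypothetical:

```python
# Hypothetical end-to-end procedure: the order of steps is fixed, and each step
# is either a setup/teardown action or a reference to a test case.
PROCEDURE = [
    ("setup",     "restore the database snapshot and start the service"),
    ("TC-INT-02", "create a new customer account"),
    ("TC-SYS-03", "attempt a withdrawal above the balance"),
    ("teardown",  "stop the service and archive the logs"),
]

def run_procedure(steps, execute):
    """Execute the steps strictly in the given order; `execute` is supplied by the harness."""
    for step_id, instruction in steps:
        execute(step_id, instruction)
```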

...

First, the data requirements implied by the test plan and test design are put together. They include requirements related to type, range, representativeness, quality, amount, validity, consistency and coherency of test data. Additional concerns may be related to the sharing of test data with the development team or even end users.

The set test levels and use cases determine the tools and means that are used for the collection, generation and preparation of test data. If the data are not entirely fabricated, but are extracted from existing databases or services and can be associated with real services, processes, business entities or persons, then policies and technical procedures for their depersonalisation, obfuscation or protection may need to be established. If the data are generated, then the approach to the creation of adequate input data should be established, and the validity of test results obtained with such inputs assessed.
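A minimal sketch of such a depersonalisation step, assuming customer records with identifying `name` and `email` fields, could pseudonymise values with a keyed hash so that real persons cannot be recovered from the test data:

```python
import hashlib
import hmac

SECRET_KEY = b"kept-outside-the-test-data"  # assumption: managed separately from the data

def pseudonymise(value: str) -> str:
    """Replace an identifying value with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def depersonalise_record(record: dict) -> dict:
    """Copy a record while masking the fields that identify real persons."""
    masked = dict(record)
    for field_name in ("name", "email"):
        if field_name in masked:
            masked[field_name] = pseudonymise(masked[field_name])
    return masked

print(depersonalise_record({"name": "Jane Roe", "email": "jane@example.org", "balance": 120.0}))
```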

Test case specifications provide more elaborate details about the data needed and should be sufficient to support the actual collection or generation of adequate test data. During the second stage, the selected tools are used to prepare these data for the execution of all use cases, including their injection into the system or interactions during the test procedure steps. At the same time, the corresponding expected test outputs are defined and, if possible, automated methods for comparing the baseline test data against actual results. The limitations of test data and supporting tools are identified, together with an explanation of how to mitigate them during the test execution and in the interpretation and evaluation of the results. Finally, the measures that ensure usability and relevancy of test data throughout the testing process are specified. They include data maintenance, for example to support changes in the system, but also data versioning and backup. The decisions and knowledge produced during the preparation of the test data are captured.
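Such an automated comparison can be as simple as the following sketch, which assumes that the baseline and the actual outputs are stored as flat JSON objects:

```python
import json

def compare_outputs(baseline_path: str, actual_path: str) -> list[str]:
    """Return human-readable deviations between baseline and actual test outputs."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(actual_path) as f:
        actual = json.load(f)

    deviations = []
    for key, expected in baseline.items():
        if key not in actual:
            deviations.append(f"missing output: {key}")
        elif actual[key] != expected:
            deviations.append(f"{key}: expected {expected!r}, got {actual[key]!r}")
    for key in actual.keys() - baseline.keys():
        deviations.append(f"unexpected output: {key}")
    return deviations
```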

...

  • Test case ID or short identifying name - may be placed at the top of the group
  • Order of execution number - useful for cross-referencing
  • Date and time
  • Testers - person(s) who ran the test, may also include observers
  • Test bed/facility - if several test beds are used in testing
  • Environment information - versions of test items, configuration(s) or other specifics
  • Specific presets - initial states or persistent data, if any
  • Specific inputs - inputs/test parameters or data that are varied across executions, if any
  • Specific results - outputs, postconditions or final states, if different from the expected; may refer to detailed test results
  • Execution status - passed, failed, partial (if permitted)
  • Incident reports - if one or several test incident reports are associated with the execution, they are referenced here
  • Comments - notes about the significant test procedure steps, impressions, suspicions, and other observations, if any
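One way to record an execution log entry with these fields, assuming the log is kept as a CSV file, is sketched below; all values are illustrative:

```python
import csv
import os
from datetime import datetime

log_path = "test_execution_log.csv"          # assumed location of the execution log
write_header = not os.path.exists(log_path)  # write the header only for a new file

log_entry = {
    "case_id": "TC-SYS-03",
    "run_order": 7,
    "timestamp": datetime(2024, 5, 14, 10, 32).isoformat(),
    "testers": "J. Doe (observer: A. Smith)",
    "test_bed": "staging-2",
    "environment": "build 1.4.2, default configuration",
    "status": "failed",
    "incident_reports": "IR-015",
    "comments": "service restarted between steps 3 and 4",
}

with open(log_path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=log_entry.keys())
    if write_header:
        writer.writeheader()
    writer.writerow(log_entry)
```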

...

If there are several typical versions of the environment, presets or inputs, they may be described in the test case or elsewhere in the test execution log and referenced in the test case executions that use them. This reduces clutter. However, any particular variances in the configuration, input data and results need to be documented. The actual outputs may be separately captured in the detailed test results, especially if an in-depth discussion of the alignment of actual and expected outcomes is needed.

...

Detailed Test Results

Detailed test results are the actual outputs, assertions, and system and monitoring logs produced during the execution of tests. They should at least be paired with the corresponding test execution log records. Their format may depend on the test tools used to capture them. In addition, the detailed results may encompass the reports produced by test automation tools that compare the baseline test data against actual results and highlight all noted deviations. Such reports are valuable traces of how the obtained results measure up against expected postconditions, states and outputs, and can be used in the assessment of the execution status and in writing the test execution log and test incident reports.

...