...

Tailoring Approach to Testing

Testing is a cognitive, critical and often creative process. It builds upon experience, discipline and persistence, but also benefits from thinking outside the box. It benefits from the wise application of standards, tools and best practices, adapted to the specific case, goal, available resources and local culture.

This "framework" outlines a structured approach to testing, which is aligned with the major standards in the field, such as ISO/IEC/IEEE 29119, IEEE 829 and BS 7925, complements ITIL recommendations and processes, but is at the same time in agreement with the common language and practices in testing. It currently meant to support the GÉANT SA4 software service validation and testing process, but may grow into a a tool for strengthening of the common ground between software developers and testers and the computer networking community, and be used in consolidation of evaluations of various products, solutions and solutions developed and applied within the GÉANT project.

Streamlining the testing process for software products has been the subject of significant standardisation effort over the last two decades. As software becomes an intrinsic part of almost every system, an effort has been made to align its validation with the needs, practices and experiences of many industries and different fields of engineering. However, this field is still rapidly evolving, and there is currently strong opposition to its formalisation. There is no guarantee that any specification can ensure a fully repeatable testing practice that is applicable in all possible contexts. Often, the inherent fluidity of the environment and goals, and the complexity of the tested systems, are likely to preclude any exhaustive codification.

Therefore, the material presented here should be seen as a customisable toolbox. It is intended to prevent reinventing the wheel, and to distil the existing expertise, tools, practices and processes. An overly administrative approach must be avoided, as there is no single size that fits all. Schematic and formal application and implementation of standards, procedures or templates "by the book", this document included, may petrify, bureaucratise and over-prescribe the validation and testing process and impede innovation. What matters is the quality. Actual testing, even with automated execution, is designed and supervised by humans. On the other hand, running production-level services does require formalised processes, traceability or even auditability of validation, and unification of control procedures.

...

Although this document can be seen as a contribution to the overall GÉANT approach to validation and testing, it is beyond its scope to discuss the general validation and testing policy and strategy for the implementation and management of services. Such policy and strategy may be further discussed and refined by the project management and stakeholders.

...

The descriptions and outlines of documentation artifacts proposed here, and the related comments, are of an informative and guiding nature. This material does not provide guidelines for the organisational or strategic level of testing-related documentation. However, the top-level policy and strategy may mandate and define the overall approach to validation and testing on the basis of documents such as this one.

...

  • Organisational numeric identifier of the document; may be omitted if the document name is used as the identifier
  • Descriptive and identifying name of the document
  • Name of testing (sub)project or use case, if not clear from document name
  • Document date
  • Version number
  • Author or authors and their contacts
  • Version history (table with version numbers, dates, contributors and descriptions of changes)
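
For illustration, a header carrying these elements might look like the following (all identifiers, names, dates and contacts are hypothetical):

    Document ID:      SA4-T2-DOC-012
    Name:             Test Plan for Service X Validation
    (Sub)project:     Service X acceptance testing
    Date:             2016-05-10
    Version:          1.2
    Authors:          J. Doe <j.doe@example.org>

    Version history:
    Version   Date         Contributor   Description of changes
    1.0       2016-03-01   J. Doe        Initial version
    1.1       2016-04-12   A. Roe        Schedule and risks updated
    1.2       2016-05-10   J. Doe        Pass/fail criteria refined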

Version histories, dates and authors are not needed for documents whose masters are kept up to date in an already highly versioned environment, such as a CMS or a wiki. However, the main or corresponding author and the document date need to be visible in self-contained standalone snapshots that are published on the web or shared by email.

...

References/Supporting Documents

A list of all documents that support the test plan. Documents that can be referenced should be listed. They may include:

  • Project plan
  • Product plan
  • Related test plans
  • Requirements specifications
  • High level design document
  • Detailed design document
  • Development and testing standards
  • Methodology guidelines and examples
  • Organisational standards and guidelines
  • Source code, documentation, user guides, implementation records

...

An additional list of features that will not be tested may be included, along with the reasons. For example, it may be explained that a feature will not be available or completely implemented at the time of testing. This may prevent possible misunderstandings and wasted effort in tracking defects that are not related to the plan.

Together with the list of test items, this section describes the scope of testing.

Test Items

This is a description, from the technical point of view, of the items to be tested, such as hardware, software and their combinations. Version numbers and configuration requirements may be included where needed, as well as the delivery schedule for critical items.
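
An illustrative test item entry (the names, version and schedule are hypothetical) might read:

    Item:            Provisioning web service (software)
    Version:         2.3.1
    Configuration:   Deployed on the reference virtual machine with default settings
    Delivery:        Release candidate due one week before the start of system testing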

...

This section specifies personal responsibilities for the approvals, processes, activities and deliverables described by the plan. It may also detail responsibilities in the development and modification of the elements of the test plan.

...

The schedule of phases should be detailed to the level of certainty attainable with the information available at the time of planning. It should define the timing of individual testing phases, milestones, activities and deliverables, and be based on realistic and validated estimates, particularly as testing is often interwoven with development. Testing is the most likely victim of slippage in the upstream activities, so it is a good idea to tie all test dates directly to the completion dates of their related developments. This section should also define when the test status reports are to be produced (continuously, periodically, or on demand).

Risks and Contingencies

This section, which complements "Suspension Criteria and Resumption Requirements", defines all risk events, their likelihood, their impact and the countermeasures to overcome them. Some risks may be testing-related manifestations of the overall project risks. Examples of risks are the lack or loss of personnel at the beginning of or during testing, unavailability or late delivery of required hardware, software, data or tools, delays in training, and changes to the original requirements or designs.
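
An illustrative fragment of such a risk description (all entries are hypothetical) might look like this:

    Risk event                        Likelihood   Impact   Countermeasure
    Loss of a key tester              Low          High     Cross-train a second team member
    Late delivery of test hardware    Medium       High     Reserve fallback lab equipment
    Change of original requirements   Medium       Medium   Re-baseline the plan, re-run affected tests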

...

If there are more than a dozen items, consider grouping and subtotalling the table rows according to the test suites, test items or scenarios (as listed in the project plan), test types or areas. If a single report would otherwise address several suites, each suite should instead have a separate test status report, or at least its own totals and details tables.
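
For example, the totals of a report covering two suites might be grouped and subtotalled as follows (all figures are hypothetical):

    Suite / totals    Passed   Failed   Blocked   Total
    Suite A               12        2         1      15
    Suite B                8        0         2      10
    Overall               20        2         3      25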

...

The summary of activities conducted during testing should record what testing was done and how long it took. In order to facilitate the improvement of future test planning, it should be in a form that does not require a later thorough investigation of the plans and records of the conducted testing.

...

The test design specification addresses the test objectives by refining the features to be tested, the testing approach, the test cases, procedures and pass criteria. This document also establishes groups of related test cases.

Features to Be Tested

This section describes the features or requirements, and the combinations of features, that are the subject of testing and, if some of the features are closely related to test items, expresses these associations. Each feature is elaborated through its characteristics and attributes, with references to the original documentation where the feature is detailed. The requirement descriptors include the ID/short name, type, description and risks. The references lead to the associated requirements or feature specifications in the system/item requirement specification or design description (in the original development documentation). If the test design specification covers several levels or types of testing, the associated levels or types are noted for each individual requirement, feature or test item.
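
A hypothetical requirement descriptor of this form might read:

    ID/short name:   REQ-07 / secure-login
    Type:            Functional
    Description:     The system authenticates users before granting access to the service
    Risks:           Medium; incorrect handling may lock out valid users
    Reference:       Requirements specification, section 4.2
    Level(s):        System, acceptance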

...

The higher levels of requirements include business, architectural and stakeholder/user requirements. There are also some transitional requirements that are only relevant during the implementation of the system. On the basis of the identified high-level features or requirements, the detailed requirements are defined. Some of them are the consequence of the system's functions, services and operational constraints, while others pertain to the application domain.

A functional requirement defines a specific behaviour or function (what the system does, not how it does it in terms of implementation, quality or performance). For example, "the service authenticates users before granting access" is functional, while "authentication completes within two seconds" is a non-functional (performance) requirement.

...

The requirements specification is an important input into the testing process, as it lays out all the requirements that were, hopefully, addressed during the system development, so the tests to be performed should trace back to them. Without access to the requirements from the development, the requirements that are directly associated with testing should be formulated during its planning. If an agile methodology was used for development, these requirements can reflect the completed Scrum epics, user stories and product backlog features, or the "done" user story and feature cards of a Kanban board.

The individual requirements need to be mutually consistent, consistent with the external documentation, verifiable, and traceable both towards the high-level requirements or stakeholder needs and towards the test cases.

...

A feature from the test design specification may be tested in more than one test case, and a test case may test more than one feature. The test cases should cover all features, that is, each feature should be tested at least once. The relationship between the requirements/features and the test cases is summarised in the Requirements/Test Cases Traceability Matrix, which is usually placed in a separate document that is updated as the requirements and the test design specification evolve, but also as individual test cases are refined. It enables both forward and backward traceability, as it simplifies identifying which test cases need to be modified when requirements change, and vice versa. It is used to verify whether all the requirements have corresponding test cases, and to identify for which requirement(s) a particular test case has been written. The Requirements/Test Cases Traceability Matrix is a table in which requirements and test cases are paired, thus ensuring their mutual association and coverage. Since there are usually more test cases than requirements, the requirements are placed in columns and the test cases in rows. The requirements are identified by their IDs or short names and can be grouped by their types, while the test cases can be grouped into sections according to levels: unit, integration, system and acceptance.
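
A minimal illustrative fragment of such a matrix (all requirement and test case IDs are hypothetical), with an "x" marking that a test case covers a requirement:

                REQ-01   REQ-02   REQ-03
    TC-U-01        x
    TC-U-02                 x
    TC-I-01        x        x
    TC-S-01                          x

Every column containing at least one "x" confirms that the corresponding requirement is covered by at least one test case.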

Feature Pass/Fail Criteria

...

The test suite is a collection of test cases that are related to the same testing work in terms of goals and the associated testing process. There may be several test suites for a particular system, each one grouping together many test cases based on a corresponding goal and functionality, or on shared preconditions, system configuration, associated common actions, the execution sequence of actions or test cases, or reporting requirements. An individual test suite may validate whether the system complies with the desired set of behaviours or fulfils the envisioned purpose or associated use cases, or it may be associated with different phases of the system lifecycle, such as the identification of regressions, build verification, or the validation of individual components. A test case can be included in several test suites. If test case descriptions are organised along test suites, the overlapping cases should be documented within their primary test suites and referenced elsewhere.
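
As a minimal sketch of how such a grouping might look in an automated setting, using Python's standard unittest library (the test names and the authenticate() stand-in are hypothetical):

    import unittest

    def authenticate(user, password):
        # Stand-in for the system under test (hypothetical).
        return password == "correct-password"

    class LoginTests(unittest.TestCase):
        """Hypothetical test cases for a login feature."""

        def test_valid_credentials(self):
            self.assertTrue(authenticate("user", "correct-password"))

        def test_invalid_credentials(self):
            self.assertFalse(authenticate("user", "wrong-password"))

    def regression_suite():
        # A suite grouping test cases around a shared goal (regression checks);
        # the same test cases could also be included in other suites.
        suite = unittest.TestSuite()
        suite.addTest(LoginTests("test_valid_credentials"))
        suite.addTest(LoginTests("test_invalid_credentials"))
        return suite

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(regression_suite())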

...

Detailed test results are the actual outputs, assertions, and system and monitoring logs produced during the execution of tests. They should at least be paired with the corresponding test execution log records. Their format may depend on the test tools used to capture them. In addition, the detailed results may encompass the reports produced by test automation tools that compare the baseline test data against the actual results and highlight all noted deviations. Such reports are valuable traces of how the obtained results measure up against the expected postconditions, states and outputs, and can be used in the assessment of the execution status and in the writing of the test execution log and test incident reports.
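
As an illustration of the kind of comparison such tools perform, a minimal Python sketch (the file names are hypothetical) that diffs baseline data against actual output and reports the deviations:

    import difflib

    def compare_to_baseline(baseline_path, actual_path):
        # Compare expected (baseline) test data against the actual results,
        # returning the deviations in unified diff form; an empty result
        # means the actual output matches the baseline.
        with open(baseline_path) as f:
            baseline = f.readlines()
        with open(actual_path) as f:
            actual = f.readlines()
        return list(difflib.unified_diff(baseline, actual,
                                         fromfile="baseline", tofile="actual"))

    # Hypothetical usage; the deviations would be paired with the test
    # execution log records and referenced from test incident reports:
    # for line in compare_to_baseline("expected_output.txt", "run_42_output.txt"):
    #     print(line, end="")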

...