
The GÉANT maturity model framework comprises several items:

  • Maturity levels are well-rounded descriptions of maturity that can be elaborated for any specific field or target area. By elaborating a maturity model, this generic scale is adapted for a field or target area by analysing and detailing the related subjects, goals, processes, common stages, practices and tools. The proposed generic scale has five levels that can be easily identified and matched in other developed maturity models, even if the characteristics of the levels are expressed in terms that are specific to some topic.
  • Target areas are thematic subjects that need to be determined in order to refine the field of interest and focus the work on the improvement of teams or software development projects, their processes and practices, and on the development of related instruments that should direct, support or measure the advancement. These areas should be cohesive, relevant at the specific point in time, complete, fine-grained and independent, while the individual elements and goals within them should be close to each other. It would be difficult to capture maturity in a wide area, as the coverage would have to be either vague or sparse. If a target area comprises a wide spectrum of elements and goals, it is quite likely that the resulting recommendations would not be actionable and that teams would not be able to master all elements, but would rather intentionally focus on just a few. The selected target areas should already be recognised as relevant and, ideally, familiar to the audience. Whether they should be thematically close, and how detailed they should be, is determined by practical needs. If close, they support each other in terms of maturity and mutual dependence. These areas may already be identified or acknowledged in other maturity models, which can be reused or adapted if suitable. The Guide to the Software Engineering Body of Knowledge (SWEBOK Guide), maintained by the IEEE Computer Society, provides a suitable taxonomy of the field. It can be used as a frame of reference, but the actual names and scopes of areas should rather be expressed in terms aligned with the actual understanding of the target audience, in this case software developers. The actual scope of the areas used should be narrowed down to relevant and attainable goals. Most maturity models focus on one target area.
  • Relevant subjects (a conceptual cloud or vocabulary, previously referred to as parameters) are topics and concepts – a wider range of elements pertaining to the individual domain and its areas. They could be presented and described for individual or several target areas, but may also come from various frameworks, guidelines, other maturity models' subjects, etc. The listed items may include sub-disciplines, processes, practices, team characteristics, tools, success indicators, quality attributes, etc., or even stand for several of these at the same time. Various classifications could be developed as needed for these “things”, but this is not necessary as long as they are used as a reference checklist for the identification and development of maturity elements. As long as the list is manageable, it may be associated with several areas. The relevant subjects may also come from a reference taxonomy used to identify target areas, but should primarily consist of lower-level items in that taxonomy.
  • Specific goals are concerns associated with one target area that may be observed and analysed across several or all levels of the maturity model. Sometimes these goals may be similar in nature or closely related across several areas. Existing maturity models often analyse one or several closely related goals. In order to identify and develop these goals, it may be useful to observe the goals established for nearby areas. Often, elements from various maturity levels that aim at one specific goal are progressively dependent: in order to meet some maturity element at a certain level, a corresponding element from the preceding level must be met. The simplest maturity models collapse target areas and specific goals into one and observe their subject without further thematic decomposition, which results in a one-dimensional maturity scale instead of a maturity matrix. Others explicitly address them by listing and elaborating each specific goal in every cell of the maturity matrix (that is, as an element of a maturity level).
  • Common stages and elements are higher-order common phases and sub-phases of work and are therefore quite useful in the development of process-related maturity models. The example provided here lists stages such as preparation, design, delivery and closing, with each of them comprising several elements. In addition to relevant subjects, which are very specific and only occasionally useful, stages and elements can be used to refine specific goals in almost any target area. They actually represent a simple model of how things are usually done, and as such can even be related to the PRINCE2 or PMBoK project decomposition. The actual coverage of elements with specific goals for a given target area may be sparse or localised, as only some stages or elements may be relevant and thus present. Stages and their elements are not executed in a waterfall manner; they may partially overlap or be repeated in iterative loops, and stage elements may be executed in a slightly different order. Since only the work of the team is observed, they do not include handover or management of external work.
  • Elements of maturity levels (within specific target areas) are the elements that are indicative of specific levels of maturity within one target area. They are actualisations of maturity levels in the specific target areas, in which the generic descriptions of levels are replaced with direct explanations that use subjects and specific goals of the target area, such as "linkage of code changes to requirements, features, or detected or addressed defects". They can be interpreted as concrete and direct recommendations, indicators, assessment criteria, or even maturity prerequisites. If not present, they become detailed goals associated with a specific maturity level. They are developed by analysing which concrete elements of the target area should be associated with the individual maturity levels as defined in a more general way. In order to simplify this analysis, the individual elements could be sought by observing how each specific goal should be satisfied at each level. They may also be formulated by looking at the adjacent levels and related target areas and by referring to the list of relevant subjects. Since some specific goals or concepts present in several target areas may be shared or similar, it is also possible for elements of levels from various target areas to be mutually dependent or related. In most maturity models, these elements are expressed as individual items, bullets or short statements within the maturity matrix.
  • Maturity matrix is the pivotal element of most maturity models, and all the above elements serve to develop this one. Its rows are maturity levels, whereas each column represents one target area. In their intersections are the elements of maturity levels for each specific target area, expressed as bullets or short statements; further details can be explained separately. A minimal data-structure sketch of such a matrix is given below the list.
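To make the structure of the matrix concrete, the following sketch shows one possible representation as a simple data structure. This is a hypothetical illustration, not part of the model itself: the "Change management" area and the element texts are only examples, loosely drawn from the level descriptions later on this page.

    # Hypothetical sketch: the maturity matrix as a mapping from
    # (maturity level, target area) to the list of elements in that cell.
    from typing import Dict, List, Tuple

    LEVELS = ["Heroic", "Stabilizing", "Integrated", "Controlled", "Optimising"]

    MaturityMatrix = Dict[Tuple[str, str], List[str]]

    matrix: MaturityMatrix = {
        ("Integrated", "Change management"): [
            "Changes to the plans are coordinated, verified and approved",
            "Decisions are documented",
        ],
        ("Controlled", "Change management"): [
            "The change management process is strictly followed",
            "Documentation is maintained and verified for each release",
        ],
    }

    def cell(level: str, area: str, m: MaturityMatrix) -> List[str]:
        """Return the elements of one maturity level for one target area."""
        return m.get((level, area), [])

    print(cell("Controlled", "Change management", matrix))

A one-dimensional maturity scale, as used by the simplest models, would correspond to keeping a single target area in such a structure.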

Applications of the GÉANT maturity model

The primary goal of our maturity model is currently rather specific and related to the improvement of software development within GÉANT, by identifying and assessing practices and gaps and suggesting what should be improved in each particular case. However, this may change with time, in the same way that the target areas are expected to develop and evolve. Many other maturity models focus on the evaluation and ranking of organisations. Others try to map the processes that are identified and elaborated within a framework or toolkit, such as ITIL, to the levels of maturity, probably with measurement as their ultimate goal. Other maturity models do not target organisations or services but technologies, products, assurances or certain qualities. Even the GÉANT software maturity model could be applied in different areas and with different goals.

The landscape of possible uses:

  • Self-assessment
  • Intelligence and information gathering
  • Identification, capture and classification of practices or knowledge
  • Development of guidelines, best practices or educational materials
  • Planning of training
  • Evaluation – does not necessarily translate into ranking on a discrete or continuous scale, as a descriptive evaluation may suffice and be even more useful
  • Support for related initiatives
  • Assigning maturity levels – there are different interpretations of whether all or only some elements of a level need to be met (see the sketch after this list)
  • Ranking
  • Development of conformance criteria
  • Certification
  • Communication and validation
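The item on assigning maturity levels can be made concrete with a small, hedged sketch that contrasts a strict reading (a level is reached only when all of its elements are met) with a threshold reading (a stated fraction of the elements suffices). The evidence values and threshold below are purely illustrative and are not prescribed by the model.

    # Hypothetical sketch: two readings of "assigning maturity levels".
    from typing import Dict, List

    LEVELS = ["Heroic", "Stabilizing", "Integrated", "Controlled", "Optimising"]

    def assigned_level(met: Dict[str, List[bool]], threshold: float = 1.0) -> str:
        """Return the highest level whose elements are met up to the threshold,
        requiring every lower level to be satisfied as well."""
        reached = "Heroic"  # ad-hoc work is already at the initial level
        for level in LEVELS[1:]:
            checks = met.get(level, [])
            if checks and sum(checks) / len(checks) >= threshold:
                reached = level
            else:
                break
        return reached

    # Example: all "Stabilizing" elements met, 2 of 3 "Integrated" elements met.
    evidence = {"Stabilizing": [True, True], "Integrated": [True, True, False]}
    print(assigned_level(evidence))                  # strict reading -> Stabilizing
    print(assigned_level(evidence, threshold=0.6))   # threshold reading -> Integrated

Which reading is appropriate depends on the application: ranking and certification usually call for the strict reading, while self-assessment may tolerate the threshold one.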

The evaluation of maturity can be bound to individual specific goals or, more broadly, to the maturity of a team in a certain target (process) area.

Instruments/tools/artifacts and their use in action

One application may employ several steps, actions and instruments.

Our maturity model, therefore, must be able to provide a stable and widely applicable reference frame. The more fundamental a concept or classification is, the more stable it should be. Refinement and delineation are welcome, as long as the foundations and basic assumptions are preserved. On this basis, the instruments that may be developed include:

  • Surveys
  • Best practices and guidelines
  • Reference models for specific areas
  • Assessment materials
  • Scoring or maturity level assignment systems for use on surveys or assessment results – they may cover one or several target areas (a minimal scoring sketch follows this list)
  • Certification schemes for organisations and individuals
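As a hedged illustration of the scoring item above, the sketch below reduces per-area survey answers, assumed to be given on a 0–4 scale matching the five levels, to one level per target area, yielding a maturity profile rather than a single rank. The area names and the choice of the median are assumptions made only for this example.

    # Hypothetical sketch of a scoring system for survey results:
    # per-area answers (0-4) are reduced to one level name per target area.
    from statistics import median
    from typing import Dict, List

    LEVELS = ["Heroic", "Stabilizing", "Integrated", "Controlled", "Optimising"]

    def profile(answers: Dict[str, List[int]]) -> Dict[str, str]:
        """Map each target area's 0-4 answers to the median level name."""
        return {area: LEVELS[int(median(scores))] for area, scores in answers.items()}

    print(profile({
        "Requirements management": [2, 3, 2],
        "Testing": [1, 1, 2],
    }))
    # -> {'Requirements management': 'Integrated', 'Testing': 'Stabilizing'}

A descriptive evaluation could present such a profile directly, while ranking or certification schemes would add further aggregation and conformance rules on top of it.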

The instruments that are developed can be used to address one or several applications.

The scope for teams and projects is defined by the target areas and the relevant levels. Sometimes an area may not be of interest to a particular team, and the actual target level may not need to be the highest one: if something is done only once, the heroic level is sufficient, while anything more would be over-engineering.

Generalised SMM maturity levels

Most maturity models use the five levels listed below, with slight variations in naming, but some also introduce an additional lowest level to mark the complete lack of anything on the matter, in terms of non-existent awareness, methods or values. From a practical perspective, such an empty level is rarely defined and used, since as soon as a group starts to discuss or deal with a topic, something is going on and it is already at the initial level of ad-hoc, chaotic and heroic efforts.

Heroic (Would)

Doing for the first time and ad-hoc, learning by doing. Having some ideas about the area of work, from the literature or others' narrative, but experiencing it for the first time. Knowing some individual facts and having ideas about concepts, but without a firm grasp. Participants are possibly aware of the issues, have some raw facts, or are in possession of some objective or axiomatic information. 

Work, processes and tasks are not defined and are being established on the basis of ongoing development and, partially, external influences. They are unpredictable, uncontrolled, and reactive. The overall experience is characterised by surprises.

Stabilizing (Know-what)

Has been there before, has learnt from the past, and the effort is likely to work, but should still refine or additionally assert the experience; could elaborate and give own examples if asked. A sense of meaning is associated with the information, some comparison or analysis has been done, and there is a belief in having some knowledge and understanding of the key concepts. Able to match, compare or convert the existing information from the domain.

The knowledge is anecdote-based and applicable in particular and limited contexts. There are references to previous attempts, in which team members have taken part.

Tasks are defined, and the organisation of work depends on experience, quality people and their intuition. Tasks are managed but not standardised. There are occasional hiccups, failures and disappointments.

Integrated (Know-how, Know-what-for)

In possession of explicit and possibly generalised knowledge that can be readily applied to new situations without extensive adaptation of past particulars; able to explain, provide support or contextualise. Oriented towards what is informative and useful.

The knowledge is documented and codified and can be transferred to new team members without the specifics of past examples; it is associated with applicable indicative cases, immediate causes, goals or consequences. Most of the relevant concepts are well captured and internalised. Some best practices exist; the knowledge used may be externally provided but is tentatively applied internally. Processes, policies, procedures and standards are defined. Documentation that defines them does exist, but may not be up to date, and changes may not be tracked. Accountability is clear and distributed in line with the processes.

The team may track the overall delivery, but not its individual elements.

Changes to the plans are coordinated, verified and go through management approval. Decisions are documented. Changes are tracked and managed.

Controlled (Know-how-much, Know-where)

Collective application of knowledge is in action. Actively using the possessed and captured knowledge, assessing and enforcing a specific level of usage by measuring and managing the application of knowledge. Well-informed about the actual applicability of knowledge, mapping it to practice in novel ways, comfortable when not applying it for clear reasons.

Processes and tasks are fully defined, well established, controlled, measured and analysed. Performance and other quantitative aspects are covered so that inefficiencies and bottlenecks can be identified. The team is comfortable with changing quantitative aspects and understands their limitations. Mistakes are identified as early as possible.

The knowledge is routinely contextualised and evaluated but is not regularly updated or reassessed beyond the scope of the immediate application. Documentation is structured, maintained and verified for each release. The change management process is strictly followed.

Optimising (Know-why, Know-art)

In full control of the knowledge, continuously updating and improving the codified knowledge, comfortable with changing its qualitative aspects or not applying it due to subtle underlying reasons.

Comfortable in expressing understanding and having insight into the underlying nature of the knowledge or the matter in terms of drivers, deeper purposes, principles, values, or generalised patterns across individual knowledge elements.

All repetitive tasks are automated up to the limits of feasibility and technology. Processes are continuously updated on the basis of the feedback obtained. There is ongoing self-remediation, self-learning and optimisation.

