This dimension describes the architectural and technological capabilities that are necessary to develop, establish and continually evolve an OAV environment capable of implementing the organisation’s objectives. It focuses on OAV platform flexibility and interoperability, modularity and APIs, virtualisation technologies, modelling capabilities, AI-powered data analytics, and efficient, adaptable use of resources.

Several subdimensions are defined for the Architecture and Technology dimension:

Each subdimension is assessed against six maturity stages, in order: None, Ad Hoc, Use Case, Integrated, Proactive and Self-*. For each subdimension and stage, a short characterisation is followed by a more detailed description.

Components

None: closed separated components used for different purposes

A traditional siloed approach is used where different services and types of devices are managed independently. Specialised components are used for management of groups of devices (usually vendor-provided tools). Vendor-issued suites may be used for advanced network management. There are some functional and data overlaps between components.

Ad Hoc: data extracted from one component to be used in another

Sporadic cases of component integration are implemented on a small scale. Examples include pushing relevant information to a monitoring component to autostart monitoring after service provisioning, or attempting to auto-generate a service-specific report using data available in several independent components.
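As a hedged illustration of this kind of one-off integration, the sketch below pushes a freshly provisioned service to a monitoring tool so that monitoring starts automatically; the endpoint URL, payload fields and check names are hypothetical and will differ per monitoring product.

```python
# Illustrative sketch only: the monitoring API endpoint and payload fields are
# hypothetical and will differ per tool (vendor NMS, open-source monitor, etc.).
import requests

MONITORING_API = "https://monitoring.example.org/api/v1/hosts"  # hypothetical URL
API_TOKEN = "..."  # credentials obtained out of band

def register_service_for_monitoring(service_name: str, address: str) -> None:
    """Push a freshly provisioned service endpoint so monitoring starts automatically."""
    payload = {
        "name": service_name,
        "address": address,
        "checks": ["ping", "http"],  # hypothetical check names
    }
    response = requests.post(
        MONITORING_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

# Called at the end of a provisioning workflow:
# register_service_for_monitoring("customer-vpn-042", "203.0.113.10")
```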

Use Case: components integrated as needed per use case

All components that are used in a defined use case or project are now being integrated so that they can easily exchange information. Any functional duplication is being avoided and decisions are being made on which component is going to provide which functionality. Data deduplication is being investigated for the identified components. A vendor-neutral approach is emerging. The organisation is moving away from the siloed architecture.

Integrated: all components within the organisation are integrated, platform is established

Organisation-wide functional components, each providing well-defined functionalities (e.g. a single source of truth), are defined in a vendor-neutral manner. To implement complex behaviour, multiple functionalities can be combined in a process. All service and device management can be done using these functional components. In this way, the organisation has abandoned the siloed approach and is now establishing an OAV platform that responds to user requests and network managers' actions.

Proactive: components are available as services listed in a service catalogue; platform is fully functional

All functional components can now be used as flexible puzzle pieces, where each functionality is advertised in a component catalogue. Dynamic integration is possible by searching for components and combining them using an intelligent orchestrator. Each component not only exposes a management API but also provides notification capabilities, thus supporting the definition of process choreography through message exchanges and leading to higher levels of collaboration.
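A minimal sketch of capability-based lookup in such a component catalogue is shown below; the catalogue structure, component names and notification topics are assumptions, not a reference to a specific product.

```python
# Minimal illustration of capability-based lookup in a component catalogue.
# The catalogue entries below are made up for the example.
CATALOGUE = [
    {"name": "inventory", "capabilities": ["topology", "source-of-truth"],
     "api": "https://inventory.example.org/api", "notifications": "inventory.events"},
    {"name": "provisioner", "capabilities": ["l2vpn-provisioning"],
     "api": "https://provisioner.example.org/api", "notifications": "provisioning.events"},
]

def find_components(capability: str) -> list[dict]:
    """Return every catalogue entry advertising the requested capability."""
    return [entry for entry in CATALOGUE if capability in entry["capabilities"]]

# An orchestrator could discover a provisioning component at run time:
# find_components("l2vpn-provisioning")
```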

Self-*: intelligent self-discovery

All platform components are now easily discoverable by internal and external parties. They intelligently respond to the addition or removal of other components and are adaptable to a changing environment, e.g. auto-discovery of relevant information stored in federated inventories on the fly.

APIs

None: APIs are not available or not used

The organisation does not use APIs to interact with components/tools. Only available GUIs and readily available exporting/importing facilities are used. Many of the tools/components might not expose APIs at all.

Ad Hoc: few APIs investigated and used

Ad hoc interest in component APIs is rising. Some existing APIs are now being used for small automated tasks. Available APIs may follow different paradigms (REST, SOAP, etc.).

Use Case: additional APIs developed as needed per use case

An API analysis is performed for the components chosen to implement the selected process. Additional APIs or API extensions/wrappers may be developed for certain components to facilitate automation and data exchange. The organisation is becoming aware that a standardised approach to APIs is needed. A common API definition guide is being developed.
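The sketch below illustrates one possible shape of such a wrapper, mapping a vendor-specific interface onto the organisation's common API conventions; the endpoint and field names are hypothetical.

```python
# Sketch of an API wrapper that hides a vendor-specific interface behind the
# organisation's common conventions; endpoints and field names are hypothetical.
import requests

class VendorNmsWrapper:
    """Expose a vendor NMS through a small, standardised surface."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def get_device(self, device_id: str) -> dict:
        # Vendor endpoint and field names differ per product; map them onto the
        # common data model agreed in the API definition guide.
        raw = self.session.get(f"{self.base_url}/nbi/devices/{device_id}", timeout=10).json()
        return {
            "id": raw.get("deviceId"),
            "hostname": raw.get("sysName"),
            "vendor": raw.get("manufacturer"),
            "mgmt_address": raw.get("mgmtIp"),
        }
```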

Integrated: Northbound APIs / Southbound APIs

Following the API guidebook, all components are now exposing standardised (preferably open) APIs that are based on a common data model. Using the available APIs, the organisation is able to define workflows in both directions: from user to network (Southbound) and from network to user (Northbound).

Proactive: Eastbound APIs / Westbound APIs

The organisation exposes external APIs that can be used by customers and partners. Standardised open East-West API specifications are used for these purposes. An API Gateway is established with full access control and accountability, redirecting calls to the internal APIs. A Notification Broker (following the Pub/Sub paradigm) is implemented for all services.
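A toy, in-memory sketch of the Pub/Sub pattern behind such a Notification Broker is given below; a production deployment would use a real message broker sitting behind the API Gateway.

```python
# Toy in-memory notification broker illustrating the Pub/Sub pattern.
from collections import defaultdict
from typing import Callable

class NotificationBroker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

broker = NotificationBroker()
broker.subscribe("service.activated", lambda e: print("notify partner:", e))
broker.publish("service.activated", {"service_id": "42", "status": "up"})
```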

Self-*: self-adaptive

The organisation uses adaptive APIs that dynamically adjust their control and display features to react in real time to different user, system or environment states. Machine-to-machine communication is fully supported.

Compatibility

None: not important

Compatibility between different tools/management components is not a requirement, nor is it investigated. Compatibility of network devices is only considered with regard to the ability to implement services.

Ad Hoc: becoming aware of compatibility issues

As ad hoc integration attempts start to occur, compatibility issues surface and workarounds are proposed. This awareness influences future considerations regarding new hardware/software purchases.

Use Case: components are compatible within a use case

While attempting to automate particular processes, the compatibility between the identified/involved components is addressed. Customised extensions or additional software implementations are used to enable interoperability between the components in question. The use of common models is being considered, and data duplication issues are starting to be addressed.

Integrated: organisation-wide compatibility

All components and tools that are used to implement the processes in the organisation are now fully compatible and interoperable. Standards and models are chosen to ensure compatibility with development efforts in progress or future procurements.

Proactive: multi-domain interoperability

The organisation is now interoperable with other partners and can easily exchange information with their systems. Remote (controlled) auto-triggering of its processes is also provided to the partners based on common agreements, standards and regulations.

Self-*: seamless interoperability

The organisation can establish interoperability agreements with a selected partner on the fly, intelligently adapting the communication channel between the organisations. Smart contracts can be used to define agreements.

Virtualisation

None: minimal, scattered virtual objects

The use of virtualisation technology is minimal - there is no common virtualisation platform in use. There may be VPNs in place, a few proprietary vertically integrated boxes available in the network or individually managed VMs hosting a few services.

Ad Hoc: experimenting with virtualisation technologies

Awareness of the need to choose common virtualisation platforms is rising. Examples are used to understand virtualisation abstractions and implications. There are a number of isolated virtual deployments of standalone VMs, containers and/or VNFs.

Use Case: common virtual infrastructures available

A common virtualisation platform is chosen and implemented. There is now support for multi-vendor VNFs and horizontal scalability of virtual objects. Some services are transferred to virtualised hosting. There is a distinct difference between virtual and physical objects.

Integrated: hybrid = physical + virtual

Physical and virtual components are now interoperable and hybrid services can be implemented. Network service chaining is in use for production network services. Virtual network services can be orchestrated using an NFV orchestrator. Implementations are being scaled vertically.
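Purely as an illustration of the idea, the sketch below represents a hybrid service chain as an ordered list of physical and virtual elements; a real deployment would express this as an NFV descriptor handed to the orchestrator, and the element names and images here are made up.

```python
# Purely illustrative data structure for a hybrid network service chain; a real
# deployment would use an NFV descriptor consumed by the orchestrator.
from dataclasses import dataclass

@dataclass
class ChainElement:
    name: str
    kind: str                 # "physical" or "virtual"
    image: str | None = None  # VNF image, only meaningful for virtual elements

service_chain = [
    ChainElement("edge-router", kind="physical"),
    ChainElement("firewall", kind="virtual", image="vfw:3.2"),
    ChainElement("load-balancer", kind="virtual", image="vlb:1.7"),
]

def instantiate(chain: list[ChainElement]) -> None:
    """Walk the chain in order; virtual elements would be handed to the NFV orchestrator."""
    for element in chain:
        action = "deploy VNF" if element.kind == "virtual" else "configure device"
        print(f"{action}: {element.name}")
```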

Proactive: common virtual layer integrating different technologies, VMs, containers, NFV, SDN…

A common virtual layer is available that provides complete end-to-end network visibility, overarching different technologies and implementations. The organisation has a fully virtualised environment with auto-scaling support: desktop, application, network, storage, and data virtualisation.

Self-*: federated virtual layers

The common virtualisation layer transcends the organisation domain/control and federates with other virtual infrastructures that belong to partner organisations in a fully transparent manner using next-gen virtualisation technologies.

Security

None: minimal security

Prevention-oriented tools (e.g. firewalls, antivirus) are in place and configured manually. There is no defined security architecture, and changes are implemented on individual systems. Logs are stored locally and managed by separate security tools.

Ad Hoc: may not have all of the security tools and capabilities

Basic network security perimeters are established, where trust is mostly assumed within the owned network. Log data and security event statistics are exported to a centralised server. Endpoint security is starting to be implemented in a consistent manner. There is no common framework to cover all OAV-related security aspects.

Use Case: good cyber hygiene

A centralised architecture has been developed where all log data and security logs are made available in one location. Automated configuration of prevention-oriented security equipment is the norm. Security policies (password expiration, password hardening, etc.) are enforced. Mandated network forensics data is gathered.

Integrated: a robust suite of tools to implement security controls

A centralised architecture has been realised with endpoint/edge security devices, analysis of the monitoring data for security threat analytics, endpoint forensics and prioritisation of alarms. Security equipment has been commissioned with automatic configuration based on threat and risk assessment in a reactive manner. The target in this stage is to secure all physical and virtual assets in areas of higher risk. The compliance posture is resilient and highly effective. All Security Operations Centre (SOC) functionalities are in place.

Proactive: next-gen security

With the growing number of components that need to be secured, a shift towards decentralised architecture is made with distributed security devices. Security Information and Event Management (SIEM) is used to holistically analyse the distributed log events data based on Indicators of Attack (IOA) and Indicators of Compromise (IOC) for known threats. Targeted advanced analytics for anomaly detection and endpoint forensics is used. Security equipment setup is based on threat and risk evaluation with automatic configuration in a proactive manner.
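As a simplified illustration of IOC matching in such a pipeline, the sketch below checks individual log events against known-bad indicators; the indicator values and event fields are made up for the example.

```python
# Simplified illustration of IOC matching as performed within a SIEM pipeline;
# the indicator values and event format below are made up for the example.
KNOWN_BAD_IPS = {"198.51.100.23", "203.0.113.77"}
KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}

def match_iocs(event: dict) -> list[str]:
    """Return the indicators of compromise found in a single log event."""
    hits = []
    if event.get("src_ip") in KNOWN_BAD_IPS:
        hits.append(f"bad source IP {event['src_ip']}")
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        hits.append(f"known malicious file hash {event['file_hash']}")
    return hits

# match_iocs({"src_ip": "198.51.100.23", "file_hash": None})
# -> ["bad source IP 198.51.100.23"]
```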

Self-*: anticipatory

A decentralised architecture with distributed security devices is in place. Advanced analysis of the monitoring data, based on IOA and IOC for known threats, is performed on the security devices using AI/ML and federated learning. The security devices execute alerting with holistic network forensics and Tactics, Techniques, and Procedures (TTPs)-based scenario machine analytics for known threat detection. Advanced machine analytics are used for holistic anomaly detection (e.g. via multi-vector AI/ML-based behavioural analytics). Cross-organisational case management and automation are supported, along with extensive proactive capabilities for threat prediction and threat hunting. The compliance posture is extremely resilient and highly efficient. Security equipment setup is self-configurable.

Modelling abstractions

None: no/simple specification

There is no attempt to define the specifications or abstract models of resources/objects of interest (services, devices, metrics, etc.). The readily available models provided in the tools/components are used with very little or no customisation.

Ad Hoc: minimal custom models defined on a per-need basis

The initial steps in automation and integration lead to the need to define initial data models and specifications that will be used to understand the data flow in/out of components. The models employed are rudimentary with low granularity and very difficult to extend. There are a few attempts to distinguish between logical and physical object modelling.

Use Case: scattered generalised models

The requirement for a common approach to object modelling is recognised. Modular models are being defined. Hierarchical composition and abstractions are considered.

Integrated: common data model

The organisation has a functional data model that can be used to describe physical and logical objects and their relationships. A standardised approach to object specification is used.

Proactive: extensible data model

The data model is now easily extensible with custom object parameters. A highly granular approach is implemented, so that objects can be constructed using a number of hierarchically arranged modules. Multi-layered abstractions can be described, enabling the definition of different views (complete, partial, limited with fine control) of the organisation resources.
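A minimal sketch of what such a hierarchical, extensible model could look like is given below; the class and field names are illustrative rather than taken from a specific standard model.

```python
# Sketch of a hierarchical, extensible object model; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str
    capacity_gbps: int
    custom: dict[str, str] = field(default_factory=dict)  # extension point for custom parameters

@dataclass
class Device:
    hostname: str
    interfaces: list[Interface] = field(default_factory=list)
    custom: dict[str, str] = field(default_factory=dict)

@dataclass
class Service:
    service_id: str
    endpoints: list[Interface] = field(default_factory=list)  # logical view built from physical parts

router = Device("rtr1.example.net",
                interfaces=[Interface("et-0/0/0", 100, custom={"owner": "backbone team"})])
l2_circuit = Service("CIRC-001", endpoints=router.interfaces)
```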

Self-*: self-expandable

The object models can be extended on-the-fly. New abstraction layers can be auto-generated based on a mix of intentions and controls. The model definition is something that can be learned and optimised by the system.

Analytics

None: simple data analysis integrated in readily available tools (e.g. monitoring)

There are no specific analytics tools in use. Only the readily available data visualised in different tools is used. Basic statistical information is provided but not analysed automatically.

Ad Hoc: human-driven

Basic analytic tools are used on gathered historical data by exporting/importing different databases/sources. There is no integration between the analytic tools and the data sources. Analytics is usually performed for the purposes of reporting or capacity management at given time intervals. No real-time analytics is provided.

Use Case: descriptive

Real-time data analytics, together with the relevant statistics, is provided using a centralised analytics tool. Statistical data analysis is performed to understand what happened in the past and what the current performance is, using visualisations, reports and dashboards.

Integrated: diagnostic

Automated root cause analysis is employed. Deeper analysis is performed on the descriptive data to be able to understand why something is happening. The process includes data discovery, data mining, drilling down and drilling through.

Proactive: predictive

The historical data is fed into learning models that analyse trends and patterns. By combining the models with real-time data, predictions of what will happen next are made.
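The sketch below shows the idea on a toy example, fitting a simple linear trend to hypothetical historical utilisation data and forecasting the next interval; real deployments would use richer time-series models.

```python
# Minimal sketch of trend-based prediction on historical data; a linear model is
# used only to keep the example short - the utilisation figures are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical daily link utilisation (percent) for the last two weeks.
history = np.array([41, 43, 42, 45, 47, 46, 49, 50, 52, 51, 54, 56, 55, 58], dtype=float)

X = np.arange(len(history)).reshape(-1, 1)       # time index as the only feature
model = LinearRegression().fit(X, history)

next_day = model.predict([[len(history)]])[0]    # forecast for the next interval
print(f"predicted utilisation tomorrow: {next_day:.1f}%")
```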

Self-*: prescriptive - self-scoring

Engines make smart decisions by analysing what actions can be taken to affect the forecasted outcomes. The predictive data is used to automatically generate options on different courses of actions (prescriptions) together with an impact analysis of applying each option.

AI

None: no AI implementations

AI solutions are not used at all; no AI algorithms are in place for detection, classification or prediction.

Ad Hoc: investigating AI

Initial investigations in the use of AI are starting. Ad hoc AI implementations are available based on specific operations’ needs, usually for reporting purposes.

Use Case: initial AI application - project-based piloting

AI is being piloted in chosen projects. Some of the processes are coupled with AI solutions to enable branching based on classification. All AI deployments are carefully supervised to ensure consistent behaviour.

Integrated: intelligent alerts, uncovering hidden knowledge

There are AI modules integrated in several cross-department processes. AI is used to generate intelligent alarms based on the recognition of hidden patterns, using unsupervised learning approaches.

Proactive: AIOps

AI is used in most of the organisation's processes, not only for classification purposes but also for smart decision making. A central AI platform is used to manage the lifecycle of all AI deployments. A standardised approach to AI development is adopted.

Self-*: intention-based

AI is used as the innovation driver in the organisation, leading to fully autonomous systems that work according to pre-defined intents. AI deployments are characterised by self-learning and auto-optimisation behaviours.

Data

None: scattered structured data

Structured data is collected and stored in a number of databases based on its purpose and on the tools used for the collection. Each datastore follows its own rules regarding data structure, format, naming conventions etc. There is no unified view of all data collected, and combining data from different sources and/or formats is difficult.

Ad Hoc: centralising data store

A centralised data infrastructure, in the form of a data warehouse or similar, is used. Data can be processed using ETL (extract, transform, load) pipelines, where at least minimal data validation is performed when integrating data from multiple systems. A metadata repository is starting to be used. Unstructured data is not collected.
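A small, hedged sketch of such an ETL step with basic validation is shown below; the file, field and table names are assumptions made for the example.

```python
# Sketch of a small ETL step with basic validation before loading into the
# central store; file, field and table names are assumptions for the example.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    cleaned = []
    for row in rows:
        # Minimal validation: skip rows without a device name or with a non-numeric counter.
        if not row.get("device") or not row.get("octets", "").isdigit():
            continue
        cleaned.append((row["device"].strip().lower(), int(row["octets"])))
    return cleaned

def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS traffic (device TEXT, octets INTEGER)")
        conn.executemany("INSERT INTO traffic VALUES (?, ?)", rows)

# load(transform(extract("daily_traffic.csv")))
```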

Use Case: advanced data warehouse

An enterprise data warehouse is used as a data hub supporting business intelligence. Structured data from multiple sources across all organisation processes is stored and used for different purposes such as reporting, analysis, dashboards, etc. Metadata enables end-users to search and understand the data. Pilot data lakes may be in place to take advantage of semi-structured and unstructured data for project-based analytics efforts.

Integrated: automated governed data lake

A governed data lake is used to set up an advanced data environment that supports all types of data and data formats. Raw data is stored for different types of analytics, with schemas applied on the fly. Automation processes related to the whole data lifecycle are being put in place.

Proactive: self-service intelligent data lake

A unified, optimised, intelligent data lake is being used with a virtualisation layer set on top of all data stores in the organisation, creating one cohesive environment. The complete data lifecycle is fully automated, supporting self-service data ingestion and provisioning. Data stewards ensure the quality of the data used by the organisation.

Self-*: data federation

Federated data, either in data lakes or in a data mesh, is in use. Data from various sources in the ecosystem (e.g. other partner organisations) can be accessed and used in combination with the local responsive data lake. Access to and insight into the available data is provided without any special technical support, using fully automated and orchestrated tools that adapt to the needs of business users.
