4. Architecture

4.1. Architecture Overview

[Image: acumos-architecture.png (Acumos platform architecture overview)]

4.2. Component Interactions

The following diagram shows the major dependencies among components of the Acumos architecture, and with external actors. The arrows represent dependencies, e.g. upon APIs or user interfaces, and are directed at the provider of the dependency. Some dependencies are so common that, for diagram clarity, they are not shown directly. These include:

  • collection of logs from all components
  • dependency upon the Common Data Service (shown as a single block of components)

The types of components/actors in the diagram are categorized as:

  • Core Component: components that are developed/packaged by the Acumos project, and provide/enable core functions of the Acumos platform as a service
  • Supplemental Component: components that are integrated from upstream projects, in some cases packaged as Acumos images, and provide supplemental/optional support functions for the platform. These functions may be provided by other components, or omitted from the platform deployment.
  • Platform Dependency: upstream components that are required to support key platform functions such as relational database and docker image creation. The examples shown (Nexus and Docker) may be replaced with other components that are API-compatible, and may be pre-existing or shared with other applications.
  • External Dependency: external systems/services that are required for the related Acumos function to be fully usable

[Image: acumos-architecture-detail.png (detailed view of Acumos components and their interactions)]

4.3. Interfaces and APIs

4.3.1. External Interfaces and APIs

4.3.1.1. E1 - Toolkit Onboarding

The various clients used to onboard models call the APIs of the Onboarding service. See the Onboarding App documentation for details.

4.3.1.2. E2 - Web APIs

The Portal web APIs (E2) are the interface through which users upload their models to the platform. They provide the means to share AI microservices along with information on how they perform. See the following for more information:

4.3.1.3. E3 - OA&M APIs

The OA&M subsystem defines data formats supported by the various logging and analytics support components described under Operations, Admin, and Maintenance (OAM). These are primarily focused on log formats that Acumos components will follow when saving log files that are collected by the logging subsystem.

4.3.1.4. E4 - Admin APIs

The Admin API (E4) provides the interfaces used to configure the site's global parameters. See the following for more information:

4.3.1.5. E5 - Federation APIs

The federation (public) E5 interface is a REST-based API specification. Any system that decides to federate needs to implement this interface, which assumes a pull-based mechanism. As such, only the server side is defined by E5. The server allows clients to poll to discover solutions, and to retrieve solution metadata, solution artifacts and user-provided documents. See the following for more information:
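
As an illustration only, a minimal pull-based client sketch against E5 might look like the following; the peer address, certificate file names, and the /solutions endpoint path are assumptions for this example, and the authoritative API is defined in the Federation Gateway documentation:

  import requests

  # Illustrative peer address and endpoint path; client certificates are assumed,
  # since federation peers authenticate each other over a secure channel.
  GATEWAY = "https://acumos-peer.example.org:9084"
  CERT = ("gateway.crt", "gateway.key")   # client certificate and key presented to the peer
  CA = "acumos-ca.crt"                    # CA bundle used to verify the peer

  # Pull model: this client polls the peer to discover its published solutions.
  resp = requests.get(f"{GATEWAY}/solutions", cert=CERT, verify=CA)
  resp.raise_for_status()
  print(resp.json())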

4.3.1.6. E6 - Deployment APIs

The Deployment subsystem primarily consumes APIs of external systems such as cloud service environments, including Azure, OpenStack, and private kubernetes clouds. The developer guides for the “Deployers” that coordinate model deployment in those specific environments address the specific APIs consumed by those Deployers. See the following for more information:

4.3.1.7. Microservice Generation

The DCAE model API is intended to be used with models destined for ONAP. It builds a DCAE/ONAP microservice and the required artifacts. See the Microservice Generation documentation for details.

4.3.2. Internal Interfaces and APIs

4.3.2.1. Common Data Service

The Common Data Service provides a storage and query micro service for use by system components, backed by a relational database. The API provides Create, Retrieve, Update and Delete (CRUD) operations for system data including users, solutions, revisions, artifacts and more. The microservice endpoints and objects are documented extensively using code annotations that are published via Swagger in a running server, ensuring that the documentation is exactly synchronized with the implementation. View this API documentation in a running CDS instance at a URL like the following, but consult the server’s configuration for the exact port number (e.g., “8000”) and context path (e.g., “ccds”) to use:

http://localhost:8000/ccds/swagger-ui.html
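
As a sketch of how a platform component might call the CDS REST API from Python (the endpoint path, credentials, and paging parameters below are illustrative assumptions; the Swagger page above documents the real endpoints):

  import requests

  BASE = "http://localhost:8000/ccds"      # port and context path are deployment-specific
  AUTH = ("cds_client", "cds_password")    # illustrative credentials

  # Retrieve a page of solution records (illustrative endpoint and paging parameters).
  resp = requests.get(f"{BASE}/solution", params={"page": 0, "size": 20}, auth=AUTH)
  resp.raise_for_status()
  print(resp.json())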

See the following for more information:

4.3.2.2. Hippo CMS

4.3.2.3. Portal Backend

4.3.2.4. Federation Gateway

The federation (local) E5 interface is a REST-based API specification, just like the public interface. This interface provides secure communication services to other components of the same Acumos instance, primarily used by the Portal. The services include querying remote peers for their content and fetching that content as needed. See the following for more information:

4.3.2.5. Microservice Generation

4.3.2.6. Azure Client

The Azure Client exposes two APIs that are used by the Portal-Marketplace to initiate model deployment in the Azure cloud service environment:

  • POST /azure/compositeSolutionAzureDeployment
  • POST /azure/singleImageAzureDeployment

The Azure Client API URL is configured for the Portal-Marketplace in the Portal-FE component template (docker or kubernetes).

See Azure Client API for details.
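
A hedged sketch of how the Portal-Marketplace might invoke the composite-solution endpoint follows; the service URL and the request body fields are illustrative placeholders, not the authoritative schema:

  import requests

  AZURE_CLIENT = "http://acumos-azure-client:9081"   # illustrative URL from the Portal-FE template

  # Illustrative payload; the real field names are defined by the Azure Client API.
  payload = {
      "solutionId": "<solution-uuid>",
      "revisionId": "<revision-uuid>",
  }

  resp = requests.post(f"{AZURE_CLIENT}/azure/compositeSolutionAzureDeployment", json=payload)
  resp.raise_for_status()
  print(resp.status_code)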

4.3.2.7. OpenStack Client

The OpenStack Client exposes two APIs that are used by the Portal-Marketplace to initiate model deployment in an OpenStack service environment hosted by Rackspace:

  • POST /openstack/compositeSolutionOpenstackDeployment
  • POST /openstack/singleImageOpenstackDeployment

The OpenStack Client API URL is configured for the Portal-Marketplace in the Portal-FE component template (docker or kubernetes).

See OpenStack Client API for details.

4.3.2.8. Kubernetes Client

The Kubernetes Client exposes one API that is used by the Portal-Marketplace to provide the user with a downloadable deployment package for a model to be deployed in a private kubernetes service environment:

  • GET /getSolutionZip/{solutionId}/{revisionId}

The Kubernetes Client API URL is configured for the Portal-Marketplace in the Portal-FE component template (docker or kubernetes).

See Kubernetes Client API for details.
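
A minimal sketch of calling this endpoint and saving the returned deployment package; the host name and port are assumptions, while the path is as listed above:

  import requests

  K8S_CLIENT = "http://acumos-kubernetes-client:8082"   # illustrative URL from the Portal-FE template
  solution_id = "<solution-uuid>"
  revision_id = "<revision-uuid>"

  # Download the deployable solution package (a zip archive) for local deployment.
  resp = requests.get(f"{K8S_CLIENT}/getSolutionZip/{solution_id}/{revision_id}", stream=True)
  resp.raise_for_status()
  with open("solution.zip", "wb") as f:
      for chunk in resp.iter_content(chunk_size=8192):
          f.write(chunk)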

4.3.2.9. ELK Stack

The ELK Stack is used to provide the E3 - OA&M APIs via which components publish standard-format log files for aggregation and presentation at operations dashboards.

4.3.2.10. Nexus

The Nexus component exposes two repository services that enable Acumos platform components to store and access artifacts: a Maven (artifact) repository and a docker image registry.

The Maven repository service is accessed via an API exposed through the Nexus Client Java library. The docker repository service is accessed via the Docker Registry HTTP API V2. Both services are configured for clients through URLs and credentials defined in the component template (docker or kubernetes).
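
For example, the standard Docker Registry HTTP API V2 can be queried directly to list the tags of a model microservice image; the registry address, credentials, and image name below are illustrative:

  import requests

  REGISTRY = "http://acumos-nexus.example.org:8001"   # illustrative docker registry URL
  AUTH = ("docker_client", "docker_password")         # illustrative credentials

  # List published tags for an image (Docker Registry HTTP API V2).
  resp = requests.get(f"{REGISTRY}/v2/my-model/tags/list", auth=AUTH)
  resp.raise_for_status()
  print(resp.json())   # e.g. {"name": "my-model", "tags": ["1.0.0"]}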

4.3.2.11. Docker

The docker-engine is the primary service provided by Docker-CE, as used in Acumos. The docker-engine is accessed via the Docker Engine API.

The docker-engine API URL is configured for Acumos components in the template (docker or kubernetes) for the referencing component.

4.3.2.12. Kong

Kong provides a reverse proxy service for Acumos platform functions exposed to users, such as the Portal-Marketplace UI and APIs, and the Onboarding service APIs. The kong proxy service is configured via the Kong Admin API.
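
As an illustration, registering the Onboarding service and its route through the Kong Admin API could look like the sketch below; the host names, ports, and upstream URL are assumptions, and the actual values come from the platform's Kong configuration:

  import requests

  KONG_ADMIN = "http://kong:8001"   # Kong Admin API (illustrative address)

  # Register the upstream Onboarding service with Kong.
  requests.post(f"{KONG_ADMIN}/services",
                json={"name": "onboarding", "url": "http://onboarding-app:8090"}).raise_for_status()

  # Route requests whose path starts with /onboarding-app to that service.
  requests.post(f"{KONG_ADMIN}/services/onboarding/routes",
                json={"paths": ["/onboarding-app"]}).raise_for_status()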

4.4. Core Components

The following sections describe the scope, role, and interaction of the core Acumos platform components and component libraries. The sections are organized per the Acumos project teams that lead development on the components.

4.4.1. Portal and User Experience

4.4.1.1. Portal Frontend

The Portal Frontend is designed to make it easy to discover, explore, and use AI models. It is built with AngularJS and HTML, and uses the Portal Backend APIs to fetch and display data.

4.4.1.2. Portal Backend

The Portal Backend is built on Spring Boot and provides REST endpoints, documented via Swagger, to manage the models.

For more information: Portal Backend Documentation

4.4.1.3. Acumos Hippo CMS

Acumos Hippo CMS is a content management system used to store images and text descriptions for the Acumos instance.

For more information: Acumos Hippo CMS Documentation

4.4.2. Model Onboarding

4.4.2.1. Onboarding App

The Onboarding app provides an ingestion interface for different types of models to enter the Acumos platform. The solution for accommodating a myriad of different model types is to provide a custom wrapping library for each runtime. The client libraries encapsulate the complexity surrounding the serialization and deserialization of models.

The Onboarding App interacts with the following Acumos platform components and supporting services:

For more information: Onboarding Documentation.

4.4.2.2. Java Client

The Acumos Java Client is a Java client library used to on-board H2o.ai and Generic Java models. This library creates artifacts required by Acumos, packages them with the model in a bundle, and pushes the model bundle to the onboarding server.

The Java Client interacts with the Onboarding app.

For more information: Java Client Documentation.

4.4.2.3. Python Client

The Acumos Python Client is a Python client library used to on-board Python models, specifically Scikit-learn, TensorFlow and TensorFlow/Keras models. It creates artifacts required by Acumos, packages them with the model in a bundle, and pushes the model bundle to the onboarding app.

The Python Client interacts with the Onboarding app.

For more information: Python Client Documentation.
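
A minimal sketch of on-boarding a simple function with the acumos library is shown below; the push API URL is deployment-specific and illustrative here, and the Python Client Documentation remains the authoritative reference:

  from acumos.modeling import Model
  from acumos.session import AcumosSession

  # A trivial function exposed as a model method; the type hints drive the
  # protobuf schema that the library generates for the model.
  def add(x: int, y: int) -> int:
      return x + y

  model = Model(add=add)

  # The push API URL below is illustrative; use the one published by your Acumos instance.
  session = AcumosSession(push_api="https://acumos.example.org/onboarding-app/v2/models")
  session.push(model, "my-add-model")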

4.4.2.4. R Client

The R Client is an R package that contains all the necessary functions to create an R model for Acumos. It creates artifacts required by Acumos, packages them with the model in a bundle, and pushes the model bundle to the onboarding app.

The R Client interacts with the Onboarding app.

For more information: R Client Documentation.

4.4.3. Design Studio

The Design Studio component repository includes the following components:

  • Composition Engine
  • TOSCA Model Generator Client
  • Generic Data Mapper Service
  • Data Broker (CSV and SQL)

For more information: Design Studio Documentation

Additional components are in separate repositories.

4.4.3.1. Design Studio Composition Engine

The Acumos Portal UI has a Design Studio that invokes the Composition Engine API to:

  1. Create machine learning applications (composite solutions) out of the basic building blocks – the individual Machine Learning (ML) models contributed by the user community
  2. Validate the composite solutions
  3. Generate the blueprint of the composite solution for deployment on the target cloud

The Design Studio Composition Engine is a Spring Boot backend component that exposes the REST APIs required to carry out CRUD operations on composite solutions.

4.4.3.2. TOSCA Model Generator Client

The TOSCA Model Generator Client is a library used by the Onboarding component to generate artifacts (TGIF.json, Protobuf.json) that are required by the Design Studio UI to perform operations on ML models, such as drag-and-drop, displaying input/output ports, displaying metadata, etc.

4.4.3.3. Generic Data Mapper Service

The Generic Data Mapper Service enables users to connect two ML models ‘A’ and ‘B’ where the number of output fields of model ‘A’ and the number of input fields of model ‘B’ are the same. The user is able to connect each field of model ‘A’ to the required field of model ‘B’, and the Data Mapper performs the data type transformations between Protobuf data types.

4.4.3.4. Data Broker

At a high level, a Data Broker retrieves data and converts it into protobuf format. Data Brokers retrieve data from different types of sources, such as databases, file systems (UNIX, HDFS, etc.), and zip archives; a Router Data Broker is also provided.

The Design Studio provides the following Databrokers:

  1. CSV DataBroker: used when the source data resides in a text file as comma-separated fields.
  2. SQL DataBroker: used when the source data resides in a SQL database; currently the MySQL database is supported.

4.4.3.5. Runtime Orchestrator

The Runtime Orchestrator (also called Blueprint Orchestrator or Model Connector) is used to orchestrate communication between the different models in a Composite AI/ML solution.

For more information: Runtime Orchestrator Documentation.

4.4.3.6. Proto Viewer

This component allows visualization of messages transferred in protobuf format. It is a passive component that shows only the messages explicitly delivered to it; it does not listen to (“sniff”) all network traffic searching for protobuf data. Displaying the contents of a protobuf message requires the corresponding protocol buffer definition (.proto) file, which is fetched from a network server, usually a Nexus registry.

For more information: Proto Viewer Documentation.

4.4.4. Deployment

The deployment components enable users to launch models and solutions (composite models with additional supporting components) in various runtime environments, which are generally specific to the deployment component “client”. These clients are invoked by user actions in the Portal, e.g. selecting a deployment target for a model in the various UI views where deployment is an option.

4.4.4.1. Azure Client

The Azure Client assists the user in deploying models into the Azure cloud service, as described in the Deploy Acumos Model to Azure User Guide. The Azure Client uses Azure APIs to perform actions such as creating a VM where the model will be deployed. The process depends upon a variety of prerequisite configuration steps by the user, as described in the user guide linked above.

Once a VM has been created, the Azure Client executes commands on the VM to download and deploy the various model components. See the Acumos Azure Client Developers Guide for more info.

The Azure Client interacts with the following Acumos platform components and supporting services:

  • the Portal, for which the Azure Client coordinates model deployment upon request by the user
  • the Nexus Client, which retrieves model artifacts from the Nexus maven repo
  • the Common Data Service Client, which retrieves model attributes stored in the CDS
  • the Runtime Orchestrator, which the Azure Client configures with the information needed to route protobuf messages through a set of composite model microservices
  • the Data Broker, which the Azure Client configures with the information needed to ingest model data into the model
  • the Proto Viewer, which the Azure Client configures with the information needed to display model messages on the Proto Viewer web interface
  • the Filebeat service, which collects the log files created by the Azure Client, using a shared volume
  • supporting services
    • the docker-engine, which retrieves docker images from the Acumos platform Nexus docker repo
    • the Acumos project Nexus docker repo, for access to deployment components such as the Runtime Orchestrator, Data Broker, and Proto Viewer

4.4.4.2. Openstack Client

The Openstack Client assists the user in deploying models into an Openstack based public cloud hosted by Rackspace, as described in the Openstack Client Users Guide. The Openstack Client uses OpenStack APIs to perform actions such as creating a VM where the model will be deployed. The process depends upon a variety of prerequisite configuration steps by the user, as described in the user guide linked above.

Once a VM has been created, the Openstack Client executes commands on the VM to download and deploy the various model components. See the Openstack Client Developers Guide for more info.

The Openstack Client interacts with the following Acumos platform components and supporting services:

  • the Portal, for which the OpenStack Client coordinates model deployment upon request by the user
  • the Nexus Client, which retrieves model artifacts from the Nexus maven repo
  • the Common Data Service Client, which retrieves model attributes stored in the CDS
  • the Runtime Orchestrator, which the Openstack Client configures with the information needed to route protobuf messages through a set of composite model microservices
  • the Data Broker, which the Openstack Client configures with the information needed to ingest model data into the model
  • the Proto Viewer, which the Openstack Client configures with the information needed to display model messages on the Proto Viewer web interface
  • the Filebeat service, which collects the log files created by the Openstack Client, using a shared volume
  • supporting services
    • the docker-engine, which retrieves docker images from the Acumos platform Nexus docker repo
    • the Acumos project Nexus docker repo, for access to deployment components such as the Runtime Orchestrator, Data Broker, and Proto Viewer

4.4.4.3. Kubernetes Client

The Kubernetes Client and associated tools assist the user in deploying models into a private kubernetes cloud, as described in Acumos Solution Deployment in Private Kubernetes Cluster.

For a model that the user wants to deploy (via the “deploy to local” option), the Kubernetes Client generates a deployable solution package which, as described in the guide above, is downloaded by the user. After unpacking the solution package (zip file), the user takes further actions on the host where the models will be deployed, using a set of support tools included in the downloaded solution package:

  • optionally installing a private kubernetes cluster (if not already existing)
  • deploying the model using an automated script, and the set of model artifacts included in the solution package

The Kubernetes Client interacts with the following Acumos platform components:

  • the Portal, for which the Kubernetes Client coordinates model deployment upon request by the user
  • the Nexus Client, which retrieves model artifacts from the Nexus maven repo
  • the Common Data Service Client, which retrieves model attributes stored in the CDS
  • the Filebeat service, which collects the log files created by the Kubernetes Client, using a shared volume

The Kubernetes Client deployment support tool “deploy.sh” interacts with the following Acumos platform components and supporting services:

  • the Runtime Orchestrator, which deploy.sh configures with the information needed to route protobuf messages through a set of composite model microservices
  • the Data Broker, which deploy.sh configures with the information needed to ingest model data into the model
  • the Proto Viewer, which deploy.sh configures with the information needed to display model messages on the Proto Viewer web interface
  • supporting services
    • the docker-engine, which retrieves docker images from the Acumos platform Nexus docker repo
    • the kubernetes master (via the kubectl client), to configure, manage, and monitor the model components
    • the Acumos project Nexus docker repo, for access to deployment components such as the Runtime Orchestrator, Data Broker, and Proto Viewer

4.4.4.4. Docker Proxy

As described in Acumos Solution Deployment in Private Kubernetes Cluster, the Docker Proxy provides an authentication proxy for the Acumos platform docker repo. Apart from its use for model deployment into kubernetes, the Docker Proxy addresses a key need of Acumos platform users and creates opportunities to enhance the other deployment clients, namely:

  • the ability to retrieve model microservice docker images from the Acumos platform using the normal process of “docker login” followed by “docker pull”

Using the normal docker protocol for image download will enhance the simplicity, speed, efficiency, and reliability of:

  • user download of a model for local deployment, e.g. for local testing
  • deployment processes using the Azure and OpenStack clients, to be considered as a feature enhancement in the Boreas release
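
A sketch of that flow using the Docker SDK for Python follows; the registry address, credentials, and image name are illustrative, and the same steps can be performed with the docker CLI (docker login, then docker pull):

  import docker

  client = docker.from_env()

  # Authenticate against the Docker Proxy in front of the platform docker repo (illustrative values).
  client.login(username="acumos_user", password="acumos_password",
               registry="https://docker-proxy.example.org:8883")

  # Pull a model microservice image, e.g. for local deployment or testing.
  image = client.images.pull("docker-proxy.example.org:8883/my-model", tag="1.0.0")
  print(image.id)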

The Docker Proxy interacts with the following Acumos platform components and supporting services:

  • the Kubernetes Client deployment support tool “deploy.sh”, for which the Docker Proxy provides docker login and image pull services
  • supporting services
    • The Nexus docker repo, from which the Docker Proxy pulls model microservice images

4.4.5. Catalog, Data Model and Data Management

This project includes the Common Data Service, the Federation Gateway, and the Model Schema subprojects.

4.4.5.1. Common Data Service

The Acumos Common Data Service provides a storage and query layer between Acumos system components and a relational database. The server component is a Java Spring-Boot application that provides REST service to callers and uses Hibernate to manage the persistent store. The client component is a Java library that provides business objects (models) and methods to simplify the use of the REST service.

For more info: Common Data Service

4.4.5.2. Federation Gateway

The Federation Gateway component provides a mechanism to exchange models between two Acumos instances via a secure network channel. The Gateway is implemented as a server that listens for requests on a REST API. It also has a client feature that communicates with remote instances.

For more info: Federation Gateway

4.4.5.3. Model Schema

The Model Schema is the JSON schema used to define and validate the Acumos model metadata generated by client libraries such as the Acumos python client library.

For more info: Model Schema

4.4.6. Common Services

4.4.6.1. Microservice Generation

The Microservice Generation component is in charge of dockerizing the model, creating the microservice, and storing the artifacts in Nexus.

For more information: Microservice Generation.

4.4.6.2. Nexus Client

4.4.6.3. Generic Model Runner

4.4.6.4. Python DCAE Model Runner

4.5. Supplemental Components

The following sections describe the scope, role, and interaction of components that supplement the Acumos platform as deployed components and tools. These components and tools are developed and/or packaged by the Acumos project to provide supplemental support for the platform.

4.5.1. Operations, Admin, and Maintenance (OAM)

The Platform-OAM project maintains the repos providing:

  • Acumos platform deployment support tools
  • Logging and Analytics components based upon the ELK Stack, of which Acumos uses the open source versions

4.5.1.1. System Integration

The System Integration repo contains Acumos platform deployment support tools, e.g.:

  • Docker-compose templates for manual platform installation under docker-ce
  • Kubernetes templates for platform deployment in Azure-kubernetes
  • Oneclick / All-In-One (AIO) platform deployment under docker-ce or kubernetes
    • See One Click Deploy User Guide

4.5.1.2. Filebeat

Filebeat is a support component for the ELK stack. Filebeat monitors persistent volumes in which Acumos components save various log files, and aggregates those files for delivery to the Logstash service.

4.5.1.3. Metricbeat

Metricbeat is a support component for the ELK stack. Metricbeat monitors host and process resources and delivers them to the Logstash service.

4.5.1.4. ELK Stack

The ELK Stack provides the core services that archive, access, and present analytics and logs for operations support dashboards. It includes:

  • Logstash: a server-side data processing pipeline that ingests data from multiple sources, transforms it, and then sends it to ElasticSearch for storage
  • ElasticSearch: a data storage, search, and analytics engine
  • Kibana: a visualization frontend for ElasticSearch based data

See Platform Operations, Administration, and Management (OA&M) User Guide for more info.
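
For example, once component logs are indexed, operations tooling can query ElasticSearch with its standard search API; the host, index pattern, and field names below are assumptions for illustration:

  import requests

  ELASTICSEARCH = "http://elasticsearch:9200"   # illustrative address

  # Find recent ERROR entries (index pattern and field names are illustrative).
  query = {
      "query": {
          "bool": {
              "must": [{"match": {"level": "ERROR"}}],
              "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
          }
      }
  }
  resp = requests.post(f"{ELASTICSEARCH}/logstash-*/_search", json=query)
  resp.raise_for_status()
  for hit in resp.json()["hits"]["hits"]:
      print(hit["_source"].get("message"))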

4.5.2. External Components

The following sections describe the scope, role, and interaction of externally-developed components that are deployed (some, optionally) as part of the Acumos platform or as container runtime environments in which the Acumos platform is deployed.

4.5.2.1. MariaDB

MariaDB is a relational database system. Acumos platform components that directly use MariaDB for database services include:

  • Common Data Service, for storage of platform data in the CDS database
  • Portal-Marketplace, for storage of Hippo CMS data
  • ELK stack, for access to platform user analytics

Acumos platform components access the MariaDB service via a URL and credentials defined in the component template (docker or kubernetes).

4.5.2.2. Nexus

Nexus (Nexus 3) is used as an artifact repository for:

  • artifacts related to simple and composite models
  • model microservice docker images

Acumos platform components that directly use Nexus for repository services include:

  • Design Studio
  • Onboarding
  • Azure Client
  • Microservice Generation
  • Portal-Marketplace
  • Federation

4.5.2.3. Kong

The Kong Community Edition is an optional component used, as needed, as a reverse proxy for web and API requests to the platform. The primary web and API services exposed through the kong proxy are:

  • the Onboarding service APIs (URL paths based upon /onboarding-app)
  • the Portal-Marketplace web frontend and APIs (all other URL paths)

4.5.2.4. Docker-CE

Docker Community Edition is used as a key component in the platform for the following purposes:

  • accessing docker repositories, including the Acumos platform docker repository
  • building docker images
  • launching containers on request of the kubernetes master node

The docker-engine is the main feature of Docker-CE used in Acumos, and is deployed:

  • for Docker-CE based platform deployments, on one of the platform hosts (e.g. VMs or other machines)
  • for kubernetes based platform deployments, as a containerized service using the Docker-in-Docker (docker-dind) variant of the official docker images

4.5.2.5. Kubernetes

Kubernetes provides a container management environment in which the Acumos platform (as a collection of docker image components) and models can be deployed. Kubernetes cluster installation tools are provided by the kubernetes-client repo and can be used to establish a private kubernetes cluster where the Acumos platform and models can be deployed. The Acumos AIO toolkit can deploy the Acumos platform in a private kubernetes cluster. For kubernetes clusters hosted by public cloud providers, e.g. Azure, Acumos provides kubernetes templates for the Acumos platform components in the system-integration repo.