Runtime Orchestrator User Guide

The Runtime Orchestrator (also called the Blueprint Orchestrator or Model Connector) orchestrates the flow of data between the different models in a composite AI/ML solution.

The Blueprint Orchestrator exposes three endpoints: /putDockerInfo, /putBlueprint, and /{operation}. The first two accept PUT requests; the last accepts a POST request.

The deployer invokes the /putBlueprint and /putDockerInfo APIs to push blueprint.json and the docker info JSON to the orchestrator, which starts the execution flow of the composite solution.
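As a rough sketch, the deployer's configuration push amounts to two plain HTTP PUT requests carrying JSON bodies. The host name, port, and payload contents below are placeholders, not values mandated by the orchestrator.

```python
import json
import urllib.request

ORCH_BASE = "http://localhost:8555"  # assumed orchestrator host and default port


def build_put(path: str, payload: dict) -> urllib.request.Request:
    """Build a PUT request carrying a JSON body for the orchestrator."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url=f"{ORCH_BASE}{path}",
        data=data,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )


# Push blueprint.json and the docker info JSON (bodies are placeholders).
blueprint_req = build_put("/putBlueprint", {"nodes": []})
dockerinfo_req = build_put("/putDockerInfo", {"docker_info_list": []})

# Actually sending would be: urllib.request.urlopen(blueprint_req)
```

In a real deployment the bodies would be the blueprint.json and docker info JSON files produced for the composite solution.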

After the deployer pushes the configuration JSON files, the data source invokes /{operation} to pass a binary data stream in Protobuf format. The orchestrator then forwards the data to the first node listed in blueprint.json by invoking the API named in the path variable {operation}. The orchestrator waits for that node's response, which is also a Protobuf data stream, and passes the response on to each subsequent node in blueprint.json until all nodes are exhausted.
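The chaining behaviour described above can be sketched as a simple loop: each node receives the previous node's Protobuf byte stream and returns its own. The node functions below are toy stand-ins for the HTTP calls the orchestrator would make, not real model endpoints.

```python
from typing import Callable, List


def run_pipeline(nodes: List[Callable[[bytes], bytes]], payload: bytes) -> bytes:
    """Pass the payload through each node in blueprint order.

    Each element of `nodes` stands in for one model's API call:
    it takes the previous node's byte stream and returns its own.
    """
    for call_node in nodes:
        payload = call_node(payload)  # wait for this node's response
    return payload


# Toy stand-ins for two model endpoints operating on raw bytes.
upper = lambda b: b.upper()
reverse = lambda b: b[::-1]

result = run_pipeline([upper, reverse], b"abc")  # b"abc" -> b"ABC" -> b"CBA"
```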

If a Data Broker is present, the deployer still invokes /putBlueprint and /putDockerInfo, but the Blueprint Orchestrator pulls the data from the Data Broker itself and passes it to subsequent nodes as defined in blueprint.json.

By default, the Runtime Orchestrator listens on port 8555.

Any application can request predictions from a composite solution managed by the Orchestrator by sending an HTTP POST request to http://{hostname}:8555/{operation_of_the_first_model_in_the_solution}
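A minimal client sketch follows, assuming a hostname of localhost and an operation named classify (both placeholders), with an assumed Protobuf content type header:

```python
import urllib.request


def prediction_request(
    hostname: str, operation: str, protobuf_body: bytes, port: int = 8555
) -> urllib.request.Request:
    """Build the POST request a client would send to the orchestrator."""
    return urllib.request.Request(
        url=f"http://{hostname}:{port}/{operation}",
        data=protobuf_body,
        method="POST",
        headers={"Content-Type": "application/x-protobuf"},  # assumed content type
    )


# "classify" is a hypothetical operation name; the body is a placeholder.
req = prediction_request("localhost", "classify", b"\x08\x01")
# Actually sending would be: urllib.request.urlopen(req).read()
```

The response body would be the final node's Protobuf data stream.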