Managing Docker Models

You can run your own Docker image in Predictive Learning Essentials if you want to provide custom logic, developed with your own frameworks or development language, within the Insights Hub environment. Running Docker containers provides unrestricted access to the Insights Hub APIs, including faster access to stored data and other programmable resources.

Docker images can be executed by programmatically creating jobs or scheduled jobs through the Job Manager APIs, or directly via the UI.

Docker Image Storage in Model Management

To create a new Docker model, complete these steps:

  1. Click the New Version button on the Manage Analytical Models page. The Create New Version dialog opens.
  2. Select Docker Image from the Type dropdown list. The system displays two Docker-specific controls: a Generate Token button and a text field.
  3. Enter the complete Docker image repository path and tag version in the text field.
  4. Click the Generate Token button only after reviewing the instructions below.

In addition to the Python 3.9 environment available in Jupyter Notebook, PrL supports Docker models. Docker models give you the flexibility to run custom code in any programming language and Linux distribution; other model types use the AWS AMI Linux distribution by default. Docker image configurations must follow specific conventions for data ingestion and persistence. Docker images stored in Model Management:

  • Consume data from the /data/input folder
  • Store output data in the /data/output folder

The Job Manager service configures these folders for automated execution: it retrieves the job's input parameter data and stores it in the Docker image's /data/input folder, then collects the data written to /data/output and stores it in the job's output parameters. Job Manager places this data in designated persistence locations such as Data Exchange or the Integrated Data Lake (IDL).
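Following this contract, a container entry script can be sketched as below. This is a minimal, hypothetical example, not PrL-provided code; the only fixed convention is the /data/input and /data/output folders, which are parameters here so the sketch can also run outside a container.

```python
import json
import os
from pathlib import Path

def run(input_dir: str = "/data/input", output_dir: str = "/data/output") -> None:
    """Read the files Job Manager placed in input_dir, apply custom logic,
    and write the result to output_dir for Job Manager to collect."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    results = {}
    for path in sorted(Path(input_dir).glob("*")):
        if path.is_file():
            # Placeholder "logic": record each input file's size in bytes.
            results[path.name] = path.stat().st_size
    # Anything written under output_dir is persisted as the job's output.
    (out / "result.json").write_text(json.dumps(results))

if __name__ == "__main__" and os.path.isdir("/data/input"):
    run()  # inside the container, Job Manager has populated /data/input
```

After the job finishes, Job Manager copies whatever your logic wrote to /data/output into the job's configured output location, such as Data Exchange or IDL.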

Building Docker Images for Job Manager Execution

Creating a custom Docker image to contain your code or model requires a Dockerfile at minimum. Most implementations inherit from public base images that provide essential support for your code or model. Here is an example Dockerfile:

ARG BASE_CONTAINER=python:3.9-slim-bullseye
FROM $BASE_CONTAINER

USER root  

RUN ["mkdir", "/tmp/input"]
RUN ["mkdir", "/tmp/output"]
RUN chmod 777 -R /tmp
RUN ["mkdir", "/data"]
RUN ["mkdir", "/data/input"]
RUN ["mkdir", "/data/output"]
RUN chmod 777 -R /data
RUN ["mkdir", "/iot_data"]
RUN ["mkdir", "/iot_data/input"]
RUN ["mkdir", "/iot_data/output"]
RUN ["mkdir", "/iot_data/datasets"] 
RUN chmod 777 -R /iot_data
RUN ["mkdir", "/prl_storage_data"]
RUN chmod 777 -R /prl_storage_data

RUN pip install awscli
RUN apt-get update
RUN apt-get install wget -y
RUN apt-get install curl -y
RUN apt-get install jq -y

COPY . .

ENTRYPOINT ["python3", "./my_python_script.py"]

The RUN ["mkdir", ...] commands create the folders that Job Manager uses to copy input files in and to retrieve the data your logic creates. These directory creation commands are unnecessary if your container neither receives inputs nor produces outputs during job execution. Install additional libraries in your Docker image using RUN apt-get install ... commands; the exact commands vary by Linux distribution and require adaptation for each base image.

For comprehensive Dockerfile design instructions, refer to Dockerfile reference. For Docker image building instructions, refer to Docker build.

Persisting a Docker Image in Model Management

The complete process for storing a Docker image and making it available in Job Manager consists of three stages:

  • obtain a temporary set of credentials
  • push your Docker image
  • link the Docker repository to the new model

The last stage provides the metadata Job Manager needs to execute the Docker image.

To create a new Docker model, follow these steps:

  1. Click the "New Version" button on the Manage Analytical Models page. The "Create New Version" pop-up window opens.
  2. Select "Docker Image" from the "Type" drop-down list. The system displays two Docker-relevant controls: a "Generate Token" button, and a text field.
  3. Enter a complete Docker image repository path and tag version in the text field.
  4. Do not click the "Generate Token" button yet; first read the information below, then proceed with the remaining steps.

Token Generation Process

Docker images must be pushed to the Predictive Learning (PrL) service repository before they can be associated with a model.

Critical Time Constraints for Docker Image Upload

Note these time constraints before beginning:

  • You must complete the token generation steps within the 2-hour and 24-hour windows described below; otherwise you must restart the entire process. The Docker image can no longer be updated after the two-hour window expires.
  • The Docker image, tag, and repository path must be linked to the model within 24 hours of token generation. The repository is automatically deleted once this timeframe expires.
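If you script the upload, the two windows can be derived from the token generation time. A small illustrative helper (not part of any PrL API):

```python
from datetime import datetime, timedelta

def upload_deadlines(token_generated_at: datetime) -> dict:
    """Return the deadlines implied by the constraints above: push the
    image within 2 hours, link it to a model within 24 hours."""
    return {
        "push_image_by": token_generated_at + timedelta(hours=2),
        "link_to_model_by": token_generated_at + timedelta(hours=24),
    }
```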

Token Generation Steps

Complete these steps within two hours of generating your token:

  1. Click the Generate Token button. The service creates a unique repository.
  2. Create a tag for your Docker image to reference the upload version.
  3. Log in using the temporary credentials and push the tagged Docker image to the repository using the docker push command.
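Steps 2 and 3 can be sketched as the following Docker CLI sequence, composed here in Python for clarity. The image name, repository URL, and tag are placeholders; the real repository URL and temporary credentials come from the Generate Token dialog, and the exact login invocation may differ.

```python
def push_commands(local_image: str, repo_url: str, tag: str) -> list:
    """Compose the Docker CLI calls for tagging and pushing an image to
    the temporary PrL repository (repo_url is a placeholder)."""
    registry = repo_url.split("/")[0]     # host part of the repository URL
    remote = f"{repo_url}:{tag}"
    return [
        f"docker tag {local_image} {remote}",
        f"docker login {registry}",       # enter the temporary credentials
        f"docker push {remote}",
    ]

# Placeholder values for illustration only.
for command in push_commands("my-model:latest",
                             "registry.example.com/prl-tenant-repo", "v1"):
    print(command)
```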

Post-Upload Actions

After completing the steps above, finish the Docker image upload process as follows:

  • Associate the Docker image with your model by referencing the correct repository and tag in Model Management.
  • Close the dialog window after receiving the confirmation that your Docker image has been successfully associated with the new model.

Last update: August 11, 2025

Except where otherwise noted, content on this site is licensed under the Development License Agreement.