
LUME-services


LUME-services provides a set of common services for model workflow orchestration.

HPC interfaces (e.g. Slurm) are not yet implemented but are planned:

  • Abstracted HPC service for integration with scientific computing infrastructure.

These tools are intended to streamline the packaging of modeling/simulation code by providing contextual flexibility with respect to service clusters. The framework uses a configuration provided at runtime to establish connections with all services. This design makes the same code portable from local, to distributed development, to production environments simply by changing environment variables. Model revisions are tracked using git.
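As a loose illustration of runtime configuration, service connections might be described entirely through environment variables. The variable names below are hypothetical placeholders, not LUME-services' actual configuration keys; consult the package documentation for the real ones.

```shell
# Hypothetical configuration keys -- illustrative only.
export LUME_BACKEND=docker                        # local | docker | kubernetes
export LUME_MODEL_DB_HOST=localhost               # model registry database
export LUME_MODEL_DB_PORT=3306
export LUME_RESULTS_DB_URI=mongodb://localhost:27017  # results database
```

Switching from a local development setup to a production cluster would then require only a different set of exported values, with no code changes.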


The microservice interfaces developed for LUME-services are isolated, which allows updates and rollbacks to be abstracted and modularized, and prioritizes scalability, maintainability, and parallelized development and maintenance. Services can be deployed in clusters of containers or on remote resources subject to user constraints. Example cluster configurations:

  • Docker
  • Kubernetes


Alternatively, users can run workflows directly in their own process by configuring a local backend.
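The pattern of selecting an execution backend from the runtime environment can be sketched with standard-library Python. The class and variable names here are illustrative assumptions, not the actual LUME-services API:

```python
import os


class LocalBackend:
    """Runs workflows directly in the current process."""

    def run(self, workflow):
        return workflow()


class DockerBackend:
    """Placeholder: would submit workflows to a containerized service cluster."""

    def run(self, workflow):
        raise NotImplementedError("requires a running service cluster")


# Hypothetical configuration key -- the real keys live in the package docs.
BACKENDS = {"local": LocalBackend, "docker": DockerBackend}


def get_backend():
    """Pick a backend from the environment, defaulting to in-process execution."""
    name = os.environ.get("LUME_BACKEND", "local")
    return BACKENDS[name]()


# With no environment configured, the workflow executes in-process:
result = get_backend().run(lambda: 42)
```

The same call site works unchanged in every environment; only the environment variable determines where the workflow actually runs.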

Features:

  • Standard schema for managing model metadata
  • Differentiated local and remote execution environments
  • Project template for building packaged models
  • Interfaces for model registry database and results database
  • APIs for scheduling workflow runs, registering models, and collecting results from the database.

Requirements

  • The development environment launches containerized services in Docker containers. Use of these tools requires installation of the Docker Engine.

Installation

This package can be installed from GitHub using:

pip install git+https://github.com/slaclab/lume-services.git