Backup MLflow data

This how-to guide shows you how to make a backup of all of MLflow's data, which lives in MySQL and S3-compatible object storage.

Pre-requisites

  1. Access to an S3 storage - only AWS S3 and S3 RadosGW are supported

  2. Admin access to the Kubernetes cluster where Charmed MLflow is deployed

  3. Juju admin access to the mlflow model

  4. rclone installed and configured to connect to the S3 storage from step 1

  5. s3-integrator deployed and configured

    1. https://charmhub.io/mysql-k8s/docs/h-configure-s3-aws

    2. https://charmhub.io/mysql-k8s/docs/h-configure-s3-radosgw

  6. yq binary

Note

This S3 storage will be used for storing all backup data from MLflow.

Throughout this guide we'll use the following environment variables in the commands:

S3_BUCKET=backup-bucket-2024
RCLONE_S3_REMOTE=remote-s3
RCLONE_MINIO_MLFLOW_REMOTE=minio-mlflow
RCLONE_BWIDTH_LIMIT=20M
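For example, you can export these in your shell session before running the commands below (the values shown are examples; substitute your own bucket and remote names):

```shell
# Example values used throughout this guide
export S3_BUCKET=backup-bucket-2024
export RCLONE_S3_REMOTE=remote-s3
export RCLONE_MINIO_MLFLOW_REMOTE=minio-mlflow
export RCLONE_BWIDTH_LIMIT=20M

# The sync destination later in the guide then expands to:
echo "$RCLONE_S3_REMOTE:$S3_BUCKET/mlflow"
```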

Throughout the guide we'll use rclone both to get files from MinIO and to push the backup to an S3 endpoint. An example configuration looks like this:

[minio-mlflow]
type = s3
provider = Minio
access_key_id = minio
secret_access_key = ...
endpoint = http://localhost:9000
acl = private

Note

You can check where this configuration file is located by running rclone config file

Backup MLflow DBs

1. Scale up mlflow-mysql

Warning

In a single-node setup, the Primary database will become unavailable during the backup. It is recommended to have a multi-node setup before backing up the data.

juju scale-application mlflow-mysql 2
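Before proceeding, you can confirm that the new unit has come up and settled (this assumes the application is named mlflow-mysql, as in the command above):

```shell
# Wait until both mlflow-mysql units report active/idle before taking the backup
juju status mlflow-mysql
```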

2. Create a backup of the DB

To make a backup of MLflow's MySQL database, follow the Create a backup guide.

Note

Please replace mysql-k8s with the name of the database application you intend to back up in the commands from that guide, e.g. mlflow-mysql instead of mysql-k8s.
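With that substitution applied, the backup commands from the linked guide look roughly like the following sketch (check the guide itself for the exact action names and which unit to target; in a multi-node setup the backup should run on a non-primary unit):

```shell
# Trigger a backup via the charm's create-backup action (sketch; unit number may differ)
juju run mlflow-mysql/1 create-backup

# List the backups stored in the S3 bucket configured via s3-integrator
juju run mlflow-mysql/leader list-backups
```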

Backup mlflow MinIO bucket

Note

The name of the MLflow MinIO bucket defaults to mlflow; the bucket name can be verified with juju config mlflow-server default_artifact_root.

1. Configure rclone for MinIO

Add a remote for MinIO using the sample rclone configuration shown at the beginning of this guide.

Note that the machine running rclone will need a URL to access MinIO. In this case we'll use kubectl to port-forward the MinIO service:

kubectl port-forward -n kubeflow svc/mlflow-minio 9000:9000
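With the port-forward running, you can verify that rclone can reach MinIO through the remote configured above (assuming it is named minio-mlflow):

```shell
# List the buckets visible through the MinIO remote; the mlflow bucket should appear
rclone lsd $RCLONE_MINIO_MLFLOW_REMOTE:
```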

Note

To find the secret-access-key for MinIO, run the following command:

juju show-unit mlflow-server/0 \
    | yq '.mlflow-server/0.relation-info.[] | select (.related-endpoint == "object-storage") | .application-data.data' \
    | yq '.secret-key'

In the future, the MinIO charm will be extended so that it can send its data directly to the S3 endpoint.

2. Sync buckets from MinIO to S3

rclone --size-only sync \
  --bwlimit $RCLONE_BWIDTH_LIMIT \
  $RCLONE_MINIO_MLFLOW_REMOTE:mlflow \
  $RCLONE_S3_REMOTE:$S3_BUCKET/mlflow
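After the sync completes, you can sanity-check that the destination matches the source:

```shell
# Compare source and destination by size; reports any files that differ or are missing
rclone check --size-only \
  $RCLONE_MINIO_MLFLOW_REMOTE:mlflow \
  $RCLONE_S3_REMOTE:$S3_BUCKET/mlflow
```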

Next Steps