How to manage multiple clusters with the Cluster Manager

This page provides instructions for installing the MicroCloud Cluster Manager and using it to view resource usage and availability across multiple clusters.

For detailed technical information, see Architecture of the MicroCloud Cluster Manager.

Cluster Manager versus the MicroCloud UI

The MicroCloud Cluster Manager is a separate application from the MicroCloud UI, which is used to manage a single MicroCloud cluster or a single LXD deployment.

Requirements

The clusters that you intend to manage with the Cluster Manager must all use MicroCloud 3/stable or higher.

The Cluster Manager itself must be installed on a machine with the following setup:

  • Juju 3.6 or higher

  • A Kubernetes deployment controlled by Juju

    • A single machine is sufficient; if you require high availability and uptime during maintenance, the deployment should span three or more machines

  • Two resolvable domain names (preferred) or static IPs

    • One must be accessible from the MicroCloud clusters

    • One must be accessible by the Cluster Manager user

  • An OIDC client configured (see below)
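
Before installing, it can help to confirm that the Juju client meets the 3.6 minimum. The following is a minimal shell sketch of the version comparison, assuming `juju version` prints a string such as `3.6.1-genericlinux-amd64`; the version string is hard-coded here purely for illustration:

```shell
# Sketch: verify the installed Juju client meets the 3.6 minimum.
# In practice you would take the version from `juju version`; a sample
# version string is hard-coded here so the check itself is illustrated.
version="3.6.1"              # e.g. version=$(juju version | cut -d- -f1)
major=${version%%.*}         # major version component
rest=${version#*.}
minor=${rest%%.*}            # minor version component
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 6 ]; }; then
  echo "Juju version OK: $version"
else
  echo "Juju $version is too old; 3.6 or higher is required" >&2
fi
```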

OIDC client configuration

You need a supported OpenID Connect (OIDC) account with a client application configured.

The LXD documentation on OIDC provides instructions to configure supported OIDC providers for authenticating with the LXD UI and CLI.

Use the same instructions to configure the OIDC client on the provider side for use with the Cluster Manager, with these modifications:

  • When creating the client application, use port 443 instead of 8443 as the listen port for the callback URL. 8443 is the default listen port for the LXD server; 443 is the default for the Cluster Manager.

  • When the instructions show how to obtain values for adding via lxc config, note down the values only. Do not use lxc config to set them. You will need the values that correspond to the following LXD configuration keys:

    • oidc.issuer

    • oidc.client.id

    • oidc.audience (Auth0 only)

    • oidc.client.secret (Keycloak only)

You will use these values later for Juju configuration.

Once you have configured the OIDC client on the provider side and obtained these values, continue with the Cluster Manager installation steps below.
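
As one way of keeping the noted values together for the later Juju configuration step, you might record them as shell variables. All values below are hypothetical placeholders:

```shell
# Hypothetical placeholder values; substitute the ones shown by your
# OIDC provider's dashboard when you created the client application.
OIDC_ISSUER="https://dev-example.us.auth0.com/"
OIDC_CLIENT_ID="6fJ2exampleClientID"
OIDC_AUDIENCE="https://dev-example.us.auth0.com/api/v2/"  # Auth0 only
OIDC_CLIENT_SECRET=""                                     # Keycloak only

# These map to the Cluster Manager options configured later:
# oidc-issuer, oidc-client-id, oidc-audience, oidc-client-secret.
printf '%s\n' "oidc-issuer=${OIDC_ISSUER}" "oidc-client-id=${OIDC_CLIENT_ID}"
```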

Install the Cluster Manager model and set up its charms

First, add a new cluster-manager Juju model:

juju add-model cluster-manager

Deploy the PostgreSQL K8s charm, the Traefik K8s charm, and the MicroCloud Cluster Manager K8s charm:

juju deploy postgresql-k8s --channel 14/stable --trust
juju deploy traefik-k8s --trust
juju deploy microcloud-cluster-manager-k8s --trust

The --trust flag grants the deployed charm permission to access cloud or cluster credentials and perform privileged operations.

A certificate charm must also be deployed to manage TLS/SSL certificates for secure communication. Any charm that implements both the certificates and send-ca-cert interfaces can be used, such as the self-signed-certificates charm.

Deploy your chosen certificate charm:

juju deploy <certificate charm> --trust

Example using the self-signed-certificates charm:

juju deploy self-signed-certificates --trust

Next, use juju integrate to create virtual connections between the charmed applications you deployed, so that they can communicate:

juju integrate postgresql-k8s:database microcloud-cluster-manager-k8s
juju integrate traefik-k8s:traefik-route microcloud-cluster-manager-k8s

Also integrate the certificate charm you deployed with the MicroCloud Cluster Manager K8s charm:

juju integrate <certificate charm>:certificates microcloud-cluster-manager-k8s
juju integrate <certificate charm>:send-ca-cert microcloud-cluster-manager-k8s

For example, if you used the self-signed certificates charm, run:

juju integrate self-signed-certificates:certificates microcloud-cluster-manager-k8s
juju integrate self-signed-certificates:send-ca-cert microcloud-cluster-manager-k8s

Set up OIDC access for the Cluster Manager

From the OIDC client configuration section above, you should have obtained values for the following: the OIDC issuer, client ID, audience (Auth0 only), and client secret (Keycloak only).

Use the following syntax to add them to the Cluster Manager configuration in Juju:

juju config microcloud-cluster-manager-k8s oidc-issuer=<your OIDC issuer>
juju config microcloud-cluster-manager-k8s oidc-client-id=<your OIDC client ID>
juju config microcloud-cluster-manager-k8s oidc-audience=<your OIDC audience> # for Auth0 only
juju config microcloud-cluster-manager-k8s oidc-client-secret=<your OIDC client secret> # for Keycloak only

Configure a domain for the Management API, and another for the Cluster Connector. You can also use IP addresses, but using domains is recommended.

The domain or IP address set for management-api-domain must be accessible by the Cluster Manager user, and the one set for cluster-connector-domain must be accessible from the MicroCloud clusters.

juju config microcloud-cluster-manager-k8s management-api-domain=<domain or IP for the Management API>
juju config microcloud-cluster-manager-k8s cluster-connector-domain=<domain or IP for the Cluster Connector>

Example:

juju config microcloud-cluster-manager-k8s management-api-domain=management-api.example.com
juju config microcloud-cluster-manager-k8s cluster-connector-domain=cluster-connector.example.com

Set the external_hostname configuration option of the traefik-k8s application to the same value as the management-api-domain used above:

juju config traefik-k8s external_hostname=<management-api-domain>

Example:

juju config traefik-k8s external_hostname=management-api.example.com
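
Because external_hostname must match management-api-domain exactly, one way to avoid the two values drifting apart is to derive both commands from a single variable. A sketch using the placeholder domain from the example above (the commands are printed rather than executed):

```shell
# management-api.example.com is a placeholder; use your own domain.
MGMT_DOMAIN="management-api.example.com"

# Print the two juju config commands so they always share one value:
echo "juju config microcloud-cluster-manager-k8s management-api-domain=${MGMT_DOMAIN}"
echo "juju config traefik-k8s external_hostname=${MGMT_DOMAIN}"
```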

Now you can access the Cluster Manager’s web UI in your browser through that address.

The web UI should prompt you to log in. Use the username and password for the OIDC account with which you configured an OIDC client earlier.

Enroll your first cluster

Use the web UI to enroll your first cluster. Alternatively, run the enroll-cluster action to create a join token for your first cluster, providing a cluster name of your choice:

juju run microcloud-cluster-manager-k8s/0 enroll-cluster cluster=<cluster-name>

Example:

juju run microcloud-cluster-manager-k8s/0 enroll-cluster cluster=microcloud-01

Once the cluster is enrolled, you can explore its details in the Cluster Manager.

Extend with observability

You can extend Cluster Manager with the Canonical Observability Stack (COS) for Grafana and Prometheus integration.

Add the cos model and deploy the COS Lite charm:

juju add-model cos
juju deploy cos-lite --trust

The following commands offer application endpoints in the cos model, making them available for cross-model relations.

Run:

juju offer prometheus:receive-remote-write
juju offer grafana:grafana-dashboard grafana-db
juju offer grafana:grafana-metadata

For the prometheus application, we offer its receive-remote-write endpoint so that other models (specifically, the cluster-manager model) can connect and send remote-write metrics.

For the grafana application, we offer its grafana-dashboard endpoint under the offer name grafana-db, so that other models can connect and supply Grafana dashboards. The Cluster Manager uses grafana-dashboard to create a LXD dashboard for each enrolled MicroCloud.

We also offer the grafana-metadata endpoint for sharing metadata with other models. This lets the Cluster Manager deep-link from its cluster details pages to LXD dashboards.

Next, switch to the Cluster Manager controller and model:

juju switch cluster-manager

Integrate the MicroCloud Cluster Manager K8s application with the offers you created in the cos model:

juju integrate microcloud-cluster-manager-k8s:send-remote-write admin/cos.prometheus
juju integrate microcloud-cluster-manager-k8s:grafana-dashboard admin/cos.grafana-db
juju integrate microcloud-cluster-manager-k8s:grafana-metadata admin/cos.grafana

Set a domain or IP address for the COS load balancer:

juju config traefik external_hostname=<grafana-load-balancer-domain> -m cos

Example:

juju config traefik external_hostname=grafana.example.com -m cos

This makes a LXD dashboard available in Grafana, and the Cluster Manager starts forwarding metrics to COS whenever it receives a cluster heartbeat.

To access Grafana, fetch the admin password:

juju run --model cos grafana/leader get-admin-password

Once you have completed this setup, a button on the cluster details page of the Cluster Manager web UI provides a deep-link into the Grafana dashboard.