How to use the LXD CSI driver with Kubernetes¶
Learn how to get the LXD Container Storage Interface (CSI) driver running in your Kubernetes cluster.
Prerequisites¶
The primary requirement is a Kubernetes cluster (of any size) running on LXD instances inside a dedicated LXD project.
This guide assumes you have administrative access to both LXD and the Kubernetes cluster.
Deploy the CSI driver¶
First, create a new Kubernetes namespace named lxd-csi:
kubectl create namespace lxd-csi --save-config
Afterwards, create a Kubernetes secret lxd-csi-secret containing a previously created bearer token:
kubectl create secret generic lxd-csi-secret \
--namespace lxd-csi \
--from-literal=token="${token}"
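The command above expects the previously created bearer token to be available in a token shell variable. A minimal sketch, using a placeholder that you would replace with the token generated in LXD:
export token="<lxd-bearer-token>"   # Placeholder only; substitute the bearer token created for the CSI driver.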
Deploy the CSI driver using a Helm chart¶
You can deploy the LXD CSI driver using the Helm chart:
helm install lxd-csi-driver oci://ghcr.io/canonical/charts/lxd-csi-driver \
--version v0.0.0-latest-edge \
--namespace lxd-csi
Tip
Use the template command instead of install to see the resulting manifests.
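For instance, a sketch that previews the rendered manifests using the same chart reference and namespace as the install command above:
helm template lxd-csi-driver oci://ghcr.io/canonical/charts/lxd-csi-driver \
  --version v0.0.0-latest-edge \
  --namespace lxd-csi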
The chart is configured to work out of the box. It deploys the CSI node server as a DaemonSet, with the CSI controller server as a single replica Deployment, and ensures minimal required access is granted to the CSI driver.
You can tweak the chart to create your desired storage classes, set resource limits, and increase the controller replica count by providing custom chart values. To see the available options, fetch the chart’s default values:
helm show values oci://ghcr.io/canonical/charts/lxd-csi-driver --version v0.0.0-latest-edge > values.yaml
Tip
Use the --values flag with Helm commands to apply your modified values file.
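For example, assuming your modified file is the values.yaml fetched above, re-deploying the chart with those values might look like this:
helm upgrade --install lxd-csi-driver oci://ghcr.io/canonical/charts/lxd-csi-driver \
  --version v0.0.0-latest-edge \
  --namespace lxd-csi \
  --values values.yaml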
Deploy the CSI driver using manifests¶
Alternatively, you can deploy the LXD CSI controller and node servers from manifests that can be found in the deploy directory of the LXD CSI Driver GitHub repository.
git clone https://github.com/canonical/lxd-csi-driver
cd lxd-csi-driver
kubectl apply -f deploy/
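Whichever deployment method you use, you can roughly verify the result by checking that the controller and node Pods are running and that the driver is registered; this assumes the driver was deployed into the lxd-csi namespace created earlier:
kubectl get pods --namespace lxd-csi   # CSI controller and node Pods should be Running.
kubectl get csidrivers                 # Lists registered CSI drivers; the LXD driver is expected to appear here.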
Usage examples¶
This section provides practical examples of configuring StorageClass and PersistentVolumeClaim (PVC) resources when using the LXD CSI driver.
The examples cover:
Creating different types of storage classes,
Defining volume claims that request storage from those classes,
Demonstrating how different Kubernetes resources consume the volumes.
StorageClass configuration¶
The following example demonstrates how to configure a Kubernetes StorageClass that uses the LXD CSI driver for provisioning volumes.
In the StorageClass, the fields provisioner and parameters.storagePool are required.
The first specifies the name of the LXD CSI driver, which defaults to lxd.csi.canonical.com, and the second references a target storage pool where the driver will create volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: lxd-csi-fs
provisioner: lxd.csi.canonical.com # Name of the LXD CSI driver.
parameters:
storagePool: my-lxd-pool # Name of the target LXD storage pool.
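As a usage sketch, assuming the manifest above is saved as storageclass.yaml, you can apply and inspect it with:
kubectl apply -f storageclass.yaml
kubectl get storageclass lxd-csi-fs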
Default StorageClass¶
The default StorageClass is used when storageClassName is not explicitly set in the PVC configuration.
You can mark a Kubernetes StorageClass as the default by setting the storageclass.kubernetes.io/is-default-class: "true" annotation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: lxd-csi-sc
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: lxd.csi.canonical.com
parameters:
storagePool: my-lxd-pool
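To check which StorageClass is currently the default, list the storage classes; the default one is marked with (default) next to its name:
kubectl get storageclass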
Immediate volume binding¶
By default, volume binding is set to WaitForFirstConsumer, which delays volume creation until the Pod is scheduled.
Setting the volume binding mode to Immediate instructs Kubernetes to provision the volume as soon as the PVC is created.
Immediate binding with local storage volumes
When using local storage volumes, the immediate volume binding mode can cause the Pod to be scheduled on a node without access to the volume, leaving the Pod in a Pending state.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: lxd-csi-immediate
provisioner: lxd.csi.canonical.com
volumeBindingMode: Immediate # Default is "WaitForFirstConsumer"
parameters:
storagePool: my-lxd-pool
Prevent volume deletion¶
By default, the volume is deleted when its PVC is removed.
Setting the reclaim policy to Retain prevents the CSI driver from deleting the underlying LXD volume, allowing for manual cleanup or data recovery later.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: lxd-csi-retain
provisioner: lxd.csi.canonical.com
reclaimPolicy: Retain # Default is "Delete"
parameters:
storagePool: my-lxd-pool
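With Retain, deleting the PVC leaves the PersistentVolume in the Released state and the LXD volume in place. A rough cleanup sketch (the LXD volume name is assigned by the provisioner, so look it up first; add --project if your cluster uses a dedicated LXD project):
kubectl get pv                                        # Find the released PersistentVolume.
kubectl delete pv <pv-name>                           # Remove the Kubernetes object; the LXD volume remains.
lxc storage volume list my-lxd-pool                   # Locate the corresponding LXD volume.
lxc storage volume delete my-lxd-pool <volume-name>   # Delete the underlying LXD volume.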
Configure StorageClass using Helm chart¶
The LXD CSI Helm chart allows defining multiple storage classes as part of the deployment.
Each entry in the storageClasses list must include at least name and storagePool.
# values.yaml
storageClasses:
- name: lxd-csi-fs # Name of the StorageClass (required).
storagePool: my-pool # Name of the target LXD storage pool (required).
- name: lxd-csi-fs-retain
storagePool: my-pool
reclaimPolicy: Retain
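Once the chart is installed or upgraded with these values (see the --values tip above), the listed classes should appear in the cluster:
kubectl get storageclass lxd-csi-fs lxd-csi-fs-retain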
PersistentVolumeClaim configuration¶
A PVC requests a storage volume from a StorageClass. Specify the access modes, volume size (capacity), and volume mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: app-data
spec:
accessModes:
- ReadWriteOnce # Allowed storage volume access modes.
storageClassName: lxd-csi-sc # Storage class name.
resources:
requests:
storage: 10Gi # Storage volume size.
volumeMode: Filesystem # Storage volume mode (content type in LXD terminology). Can be "Filesystem" or "Block".
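As a usage sketch, assuming the manifest above is saved as pvc.yaml and that lxd-csi-sc keeps the default WaitForFirstConsumer binding mode:
kubectl apply -f pvc.yaml
kubectl get pvc app-data   # Stays "Pending" until a Pod that uses the claim is scheduled.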
Access modes¶
Access modes define how a volume can be mounted by Pods.

| Access mode | Description |
|---|---|
| ReadWriteOnce | Mounted as read-write by a single node. Multiple Pods on that node can share it. |
| ReadWriteOncePod | Mounted as read-write by a single Pod on a single node. |
| ReadOnlyMany | Mounted as read-only by many Pods across nodes. |
| ReadWriteMany | Mounted as read-write by many Pods across nodes. |
End-to-end examples¶
Referencing PVC in Deployment¶
This pattern is used when multiple Pods share the same persistent volume. The PVC is created first and then referenced by name in the Deployment.
Each replica mounts the same volume, which is only safe when:
the volume’s access mode allows multi-node access (ReadWriteMany, ReadOnlyMany), or
the Deployment has a single replica (replicas: 1), as shown below.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: app-data
spec:
accessModes:
- ReadWriteOnce
storageClassName: lxd-csi-sc
resources:
requests:
storage: 10Gi
volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
replicas: 1 # Use a single replica for non-shared storage volumes.
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: nginx:stable
ports:
- containerPort: 80
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumes:
- name: data
persistentVolumeClaim:
claimName: app-data # References PVC named "app-data".
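A hypothetical way to verify this example, assuming both manifests above are saved together in app.yaml:
kubectl apply -f app.yaml
kubectl rollout status deployment/app   # Waits until the replica is available.
kubectl get pvc app-data                # Should report "Bound" once the Pod is scheduled.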
Referencing PVC in StatefulSet¶
This pattern is used when each Pod requires its own persistent volume.
The volumeClaimTemplates section dynamically creates a PVC per Pod (e.g. data-app-0, data-app-1, data-app-2).
This ensures each Pod retains its volume through restarts and rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: app
spec:
serviceName: app
replicas: 3
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: nginx:stable
ports:
- containerPort: 80
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
# PVC template used for each replica in a stateful set.
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: lxd-csi-sc
resources:
requests:
storage: 5Gi
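To see the per-Pod claims created from the template, list the PVCs after applying the StatefulSet; with three replicas, claims named data-app-0, data-app-1, and data-app-2 are expected:
kubectl get pvc   # Claim names follow the <claim-template>-<statefulset>-<ordinal> pattern.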