LXD 6.6 release notes

This is a feature release and is not recommended for production use.

Release notes content

These release notes cover updates in the core LXD repository and the LXD snap package. For a tour of LXD UI updates, please see the release announcement in our Discourse forum.

Highlights

This section highlights new and improved features in this release.

Instance placement groups

This release adds the concept of placement groups. Placement groups provide declarative control over how instances are distributed across cluster members. They define both a policy (how instances should be distributed) and a rigor (how strictly the policy is enforced). Placement groups are project-scoped resources, which means different projects can have placement groups with the same name without conflict.

Placement cluster member group recorded

When an instance is placed into a cluster member group using the --target=@<group> syntax, the group specified is now recorded into a new volatile.cluster.group configuration key.

This is then used during cluster member evacuation when restoring instances to ensure the instance placement remains within the specified group.
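For example, targeting a cluster member group at launch time records the group, which can then be inspected (the group and instance names below are illustrative):

```shell
# Launch an instance, letting LXD pick a member from the "amd64" cluster group.
lxc launch ubuntu:24.04 c1 --target=@amd64

# The chosen group is recorded for later use (e.g. evacuation restore).
lxc config get c1 volatile.cluster.group
```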

Kubernetes Container Storage Interface (CSI) driver and /dev/lxd volume management

The LXD project now provides a CSI driver that allows Kubernetes to provision and manage volumes for K8s Pods. The driver is an open source implementation of the Container Storage Interface (CSI) that integrates LXD storage backends with Kubernetes. It leverages LXD’s wide range of supported storage drivers, enabling dynamic provisioning of both local and remote volumes. Depending on the storage pool, the CSI supports provisioning of both block and filesystem volumes.

To enable this functionality, the /dev/lxd guest API has been extended to support fine-grained authorization (by way of bearer token authentication) and volume management.
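As a rough sketch, a guest process holding a bearer token could call the /dev/lxd API over its local socket like this (the token variable name and exact endpoint are assumptions based on the standard guest API layout):

```shell
# From inside the instance: query the /dev/lxd guest API over its local
# socket, presenting a bearer token for fine-grained authorization.
curl --unix-socket /dev/lxd/sock \
  -H "Authorization: Bearer ${DEVLXD_TOKEN}" \
  http://lxd/1.0
```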

Custom storage volume recovery improvements

Using the backup_metadata_version improvements added in LXD 6.5, the lxd recover tool now allows more extensive recovery of custom volumes attached to instances: the full custom volume configuration can now be recovered. Additionally, the tool now supports recovery from Dell PowerFlex (powerflex) and Pure Storage (pure) pools; this was previously not supported.

Consistent instance and custom volume snapshots

Consistent snapshots of both an instance and its attached volumes can now be taken together.

The lxc snapshot command has been extended with a --disk-volumes flag that accepts either root or all-exclusive. With root (the default), a snapshot of just the instance's root volume is taken. With all-exclusive, the instance is paused while a snapshot of its root volume and all exclusively attached volumes is taken.

An instance snapshot and its custom volume snapshots can be restored together using lxc restore --disk-volumes=all-exclusive.
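Putting the two commands together (instance and snapshot names are illustrative):

```shell
# Briefly pause the instance and snapshot its root volume plus all
# exclusively attached custom volumes in one consistent operation.
lxc snapshot c1 snap0 --disk-volumes=all-exclusive

# Later, restore the instance and its custom volume snapshots together.
lxc restore c1 snap0 --disk-volumes=all-exclusive
```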

HPE Alletra storage driver

Initial support has been added for HPE Alletra storage appliances using iSCSI or NVMe over TCP. Currently, instance and custom volume recovery is not supported (but it is planned).

Persistent VM PCIe bus allocation

Devices added to VMs now have their PCIe bus number persisted into volatile configuration keys so that the device maintains the same location on the bus when the instance is restarted. Previously, when a device was hot plugged into a running VM, it was possible for the operation to fail due to bus location conflicts or to succeed and then have its bus location change on a subsequent restart of the instance.

This change was also required to make the K8s CSI driver usable because it dynamically adds and removes custom filesystem volumes from running VMs.

Per-project image and backup volumes

It has long been possible to specify that downloaded images and exported backups be stored in a custom volume on a particular storage pool. It is now possible to specify these volumes on a per-project basis, allowing for images and backups to be stored in different custom volumes (and storage pools) for different projects.

Two new configuration keys have been introduced per project: storage.project.{name}.images_volume and storage.project.{name}.backups_volume. These allow a storage volume on an existing pool to be used for storing a project's image and backup artifacts.
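For example, assuming the same pool/volume value format as the existing server-wide storage.images_volume and storage.backups_volume keys (project, pool, and volume names are illustrative):

```shell
# Store images and backups for project "client1" on dedicated custom volumes.
lxc config set storage.project.client1.images_volume=pool1/client1-images
lxc config set storage.project.client1.backups_volume=pool1/client1-backups
```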

OVN internal network forward and load balancers

This release adds support for internal OVN load balancers and network forwards. This approach allows ovn networks to define ports on internal IP addresses that can be forwarded to other internal IPs inside their respective networks. This change removes the previous limitation on ovn networks that load balancers and network forwards could only use external IP addresses to forward to internal IPs.
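As a sketch, a network forward can now listen on an internal address inside an ovn network (network name, addresses, and ports are illustrative):

```shell
# Create a network forward listening on an internal IP of the "ovn0" network.
lxc network forward create ovn0 10.10.10.5

# Forward TCP port 80 on that internal address to an internal backend IP.
lxc network forward port add ovn0 10.10.10.5 tcp 80 10.10.10.20 80
```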

OVN DHCP ranges

This release adds a new configuration key, ipv4.dhcp.ranges, for ovn networks. This key specifies a list of IPv4 ranges reserved for dynamic allocation via DHCP. This is useful when setting up a network forward towards a floating IP inside an ovn network, as that IP must be kept out of the dynamically allocated DHCP ranges.
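For example, restricting dynamic allocation to a sub-range so that addresses outside it stay free for forwards (the network name and range are illustrative; the range syntax shown is an assumption):

```shell
# Reserve 10.10.10.100-10.10.10.199 for dynamic DHCP allocation; addresses
# outside this range are never handed out dynamically and can back forwards.
lxc network set ovn0 ipv4.dhcp.ranges="10.10.10.100-10.10.10.199"
```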

OVN NIC acceleration parent interface option

This release adds support for specifying the OVN NIC acceleration physical function interfaces to allocate virtual functions from.

This avoids the need to add the physical function interfaces to the OVN integration bridge, which had prevented their use for host connectivity.

This change introduces a new configuration key for ovn networks and NICs.

Improved OIDC authentication provider compatibility using sessions

This release adds session support for OIDC authentication. This enables compatibility with identity providers that issue opaque access tokens.

When a session expires, LXD re-verifies the login with the identity provider. The duration of OIDC sessions defaults to one week and can be configured via the oidc.session.expiry configuration key.

Verification of an OIDC session depends on a new, cluster-wide core secret.

A new core.auth_secret_expiry configuration key controls how long a secret remains valid before it expires. This sets the upper bound on OIDC session duration.
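Both durations are configurable server-wide. The values below are illustrative, and the duration syntax is an assumption based on LXD's usual expiry notation:

```shell
# Extend OIDC sessions to two weeks (the default is one week).
lxc config set oidc.session.expiry=2w

# Control how long the cluster-wide auth secret stays valid; this caps
# the maximum OIDC session duration.
lxc config set core.auth_secret_expiry=4w
```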

Create custom filesystem volume from tarball contents

The lxc storage volume import command has gained support for creating a custom filesystem volume from the contents of a tarball.

A new supported value of tar has been added to the --type flag that causes the contents of the tarball to be unpacked into the newly created volume.
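For example (pool, tarball, and volume names are illustrative):

```shell
# Create the custom filesystem volume "web-content" on pool "pool1",
# unpacking the tarball's contents into the new volume.
lxc storage volume import pool1 ./web-content.tar web-content --type=tar
```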

Forced project deletion

It is now possible to forcefully delete a project and all of its entities using the lxc project delete <project> --force command.

Operation requestor information

A new field requestor was added to operations, which contains information about the caller that initiated the operation.

Resources disk used by information

A new field used_by was added to disks in the resources API to indicate a disk's potential use by a virtual parent device, such as bcache.

Bug fixes

The following bug fixes are included in this release.

Backwards-incompatible changes

These changes are not compatible with older versions of LXD or its clients.

Asynchronous storage volume and profile API endpoints

Certain storage and profile endpoints that were previously synchronous now return an operation and behave asynchronously.

The latest LXD Go client detects the presence of this API extension. When it is available, the caller receives an operation object directly from the LXD server. If the extension is not present on the server, then the server response is wrapped in a completed operation, allowing the caller to handle it as an operation while lacking a retrievable operation ID.

Older LXD Go clients are incompatible with servers that include this extension. Instead of the expected successful response, they receive an operation response.

Endpoints converted to asynchronous behavior:

  • POST /1.0/storage-pools/{pool}/volumes/{type} - Create storage volume

  • PUT /1.0/storage-pools/{pool}/volumes/{type}/{vol} - Update storage volume

  • PATCH /1.0/storage-pools/{pool}/volumes/{type}/{vol} - Patch storage volume

  • POST /1.0/storage-pools/{pool}/volumes/{type}/{vol} - Rename storage volume

  • DELETE /1.0/storage-pools/{pool}/volumes/{type}/{vol} - Delete storage volume

  • PUT /1.0/storage-pools/{pool}/volumes/{type}/{vol}/snapshots/{snap} - Update storage volume snapshot

  • PATCH /1.0/storage-pools/{pool}/volumes/{type}/{vol}/snapshots/{snap} - Patch storage volume snapshot

  • PUT /1.0/profiles/{name} - Update profile

  • PATCH /1.0/profiles/{name} - Patch profile

Deprecated features

These features are removed in this release.

Instance placement scriptlet removed

The instance placement scriptlet functionality (and the associated instances_placement_scriptlet API extension) has been removed in favor of the new Placement groups functionality.

If a scriptlet is set in the removed instances.placement.scriptlet configuration option, it is stored in the user.instances.placement.script configuration option when upgrading.

Updated minimum Go version

If you are building LXD from source instead of using a package manager, the minimum version of Go required to build LXD is now 1.25.4.

Snap packaging changes

  • Settings to disable the AppArmor restricted user namespaces are persisted to /run/sysctl.d/zz-lxd.conf

  • Dqlite bumped to v1.18.3

  • LXC bumped to v6.0.5

  • LXCFS bumped to v6.0.5

  • Enable lxcfs.pidfd=true by default

  • LXD-UI bumped to 0.19

  • NVIDIA-container and toolkit bumped to 1.18.0

  • QEMU bumped to 8.2.2+ds-0ubuntu1.10

  • ZFS bumped to zfs-2.3.4

Change log

View the complete list of all changes in this release.

Downloads

The source tarballs and binary clients can be found on our download page.

Binary packages are also available for:

  • Linux: snap install lxd

  • macOS client: brew install lxc

  • Windows client: choco install lxc