LXD 6.6 release notes¶
This is a feature release and is not recommended for production use.
These release notes cover updates in the core LXD repository and the LXD snap package. For a tour of LXD UI updates, please see the release announcement in our Discourse forum.
Highlights¶
This section highlights new and improved features in this release.
Instance placement groups¶
This release adds the concept of placement groups. Placement groups provide declarative control over how instances are distributed across cluster members. They define both a policy (how instances should be distributed) and a rigor (how strictly the policy is enforced). Placement groups are project-scoped resources, which means different projects can have placement groups with the same name without conflict.
Documentation: Placement groups
API extension: instance_placement_groups
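The description above implies a REST resource carrying a policy and a rigor. As a purely hypothetical sketch (the endpoint path, field names, and values below are assumptions based on this description, not taken from the API reference):

```shell
# Hypothetical sketch: the endpoint path, field names, and values are
# assumed from the release-note description and may differ from the real API.
lxc query --wait -X POST -d '{
  "name": "web-spread",
  "policy": "spread",
  "rigor": "strict"
}' "/1.0/placement-groups?project=default"
```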
Placement cluster member group recorded¶
When an instance is placed into a cluster member group using the --target=@<group> syntax, the group specified is now recorded into a new volatile.cluster.group configuration key.
This is then used during cluster member evacuation when restoring instances to ensure the instance placement remains within the specified group.
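For example, launching an instance into a member group and reading back the recorded key (the instance, image, and group names are examples):

```shell
# Target a cluster member group at launch, then read the recorded group.
lxc launch ubuntu:24.04 c1 --target=@amd64
lxc config get c1 volatile.cluster.group
```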
Kubernetes Container Storage Interface (CSI) driver and /dev/lxd volume management¶
The LXD project now provides a CSI driver that allows Kubernetes to provision and manage volumes for K8s Pods. The driver is an open source implementation of the Container Storage Interface (CSI) that integrates LXD storage backends with Kubernetes. It leverages LXD’s wide range of supported storage drivers, enabling dynamic provisioning of both local and remote volumes. Depending on the storage pool, the CSI supports provisioning of both block and filesystem volumes.
To enable this functionality, the /dev/lxd guest API has been extended to support fine-grained authorization (by way of bearer token authentication) and volume management.
Documentation: The LXD CSI driver
Documentation: How to authenticate to the DevLXD API
API extension: auth_bearer_devlxd
API extension: devlxd_volume_management
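From inside a guest, a bearer-authenticated request to the /dev/lxd API might look like this (the token issuance workflow is covered in the linked documentation; here the token is assumed to be available in $TOKEN):

```shell
# Query the guest API over the /dev/lxd socket with a bearer token.
# The "lxd" host name is arbitrary for unix-socket requests.
curl -s --unix-socket /dev/lxd/sock \
  -H "Authorization: Bearer ${TOKEN}" \
  http://lxd/1.0
```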
Custom storage volume recovery improvements¶
Using the backup_metadata_version improvements added in LXD 6.5, the lxd recover tool now allows more extensive recovery of custom volumes attached to instances: the full custom volume configuration can now be recovered. Additionally, the tool now supports recovery from Dell PowerFlex - powerflex and Pure Storage - pure pools, which was previously not possible.
Consistent instance and custom volume snapshots¶
Consistent snapshots of both an instance and its attached volumes can now be taken together.
The lxc snapshot command has been extended with the --disk-volumes flag that accepts either root or all-exclusive values.
When root is specified (the default behavior), a snapshot of just the instance’s root volume is taken.
In all-exclusive mode, the instance is paused while a snapshot of its root volume and all exclusively attached volumes is taken.
An instance snapshot and its custom volume snapshots can be restored together using lxc restore --disk-volumes=all-exclusive.
Documentation: Use snapshots for instance backup
API extension: instance_snapshots_multi_volume
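For example (the instance and snapshot names are examples):

```shell
# Snapshot the root volume and all exclusively attached custom volumes
# together, then restore them together.
lxc snapshot c1 consistent0 --disk-volumes=all-exclusive
lxc restore c1 consistent0 --disk-volumes=all-exclusive
```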
HPE Alletra storage driver¶
Initial support has been added for HPE Alletra storage appliances using iSCSI or NVMe over TCP. Currently, instance and custom volume recovery is not supported (but it is planned).
Documentation: HPE Alletra - alletra
API extension: storage_driver_alletra
Persistent VM PCIe bus allocation¶
Devices added to VMs now have their PCIe bus number persisted into volatile configuration keys so that the device maintains the same location on the bus when the instance is restarted. Previously, when a device was hot plugged into a running VM, it was possible for the operation to fail due to bus location conflicts or to succeed and then have its bus location change on a subsequent restart of the instance.
This change was also required to make the K8s CSI driver usable because it dynamically adds and removes custom filesystem volumes from running VMs.
API extension: vm_persistent_bus
Per-project image and backup volumes¶
It has long been possible to specify that downloaded images and exported backups be stored in a custom volume on a particular storage pool. It is now possible to specify these volumes on a per-project basis, allowing for images and backups to be stored in different custom volumes (and storage pools) for different projects.
Two new configuration keys have been introduced per project, storage.project.{name}.images_volume and storage.project.{name}.backups_volume, allowing a storage volume on an existing pool to be used for storing that project's image and backup artifacts.
API extension: daemon_storage_per_project
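For example, to give a hypothetical project dev its own image and backup volumes (the pool and volume names are examples):

```shell
# Create dedicated custom volumes, then point the per-project keys at them.
lxc storage volume create pool1 dev-images
lxc storage volume create pool1 dev-backups
lxc config set storage.project.dev.images_volume=pool1/dev-images
lxc config set storage.project.dev.backups_volume=pool1/dev-backups
```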
OVN internal network forward and load balancers¶
This release adds support for internal OVN load balancers and network forwards.
This approach allows ovn networks to define ports on internal IP addresses that can be forwarded to other internal IPs inside their respective networks.
This change removes the previous limitation on ovn networks that load balancers and network forwards could only use external IP addresses to forward to internal IPs.
API extension: ovn_internal_load_balancer
OVN DHCP ranges¶
This release adds a new configuration key ipv4.dhcp.ranges for ovn networks.
This key allows specifying a list of IPv4 ranges reserved for dynamic allocation using DHCP.
This is useful when setting up a network forward towards a floating IP inside an ovn network that needs to be prevented from being allocated via DHCP.
API extension: ovn_dhcp_ranges
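For example (the network name and range are examples; the exact range syntax is documented with the key):

```shell
# Restrict dynamic DHCP allocation to one range, leaving other addresses
# in the subnet free for network forwards.
lxc network set ovn1 ipv4.dhcp.ranges="10.42.0.10-10.42.0.200"
```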
OVN NIC acceleration parent interface option¶
This release adds support for specifying the OVN NIC acceleration physical function interfaces to allocate virtual functions from.
This avoids the need to add the physical function interfaces to the OVN integration bridge, which had prevented their use for host connectivity.
This change introduces a new configuration key for ovn networks and NICs:
acceleration.parent - Comma-separated list of physical function (PF) interfaces from which to allocate virtual functions (VFs) for hardware acceleration when acceleration is enabled.
API extension: ovn_nic_acceleration_parent
Improved OIDC authentication provider compatibility using sessions¶
This release adds session support for OIDC authentication. This enables compatibility with identity providers that issue opaque access tokens.
When a session expires, LXD re-verifies the login with the identity provider.
The duration of OIDC sessions defaults to one week and can be configured via the oidc.session.expiry configuration key.
Verification of an OIDC session depends on a new, cluster-wide core secret.
A new core.auth_secret_expiry configuration key controls how long a secret remains valid before it expires.
This sets the upper bound of an OIDC session duration.
API extension: auth_oidc_sessions
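For example, to shorten both windows (the values are examples; see the configuration reference for the accepted expiry syntax):

```shell
# Expire OIDC sessions and the cluster-wide auth secret after one day.
lxc config set oidc.session.expiry=1d
lxc config set core.auth_secret_expiry=1d
```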
Create custom filesystem volume from tarball contents¶
The lxc storage volume import command has gained support for creating a custom filesystem volume from the contents of a tarball.
A new supported value of tar has been added to the --type flag that causes the contents of the tarball to be unpacked into the newly created volume.
API extension: import_custom_volume_tar
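For example (the pool, file, and volume names are examples):

```shell
# Unpack the contents of data.tar into a newly created custom
# filesystem volume.
lxc storage volume import pool1 data.tar vol1 --type=tar
```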
Forced project deletion¶
It is now possible to forcefully delete a project and all of its entities using the lxc project delete <project> --force command.
API extension: projects_force_delete
Operation requestor information¶
A new field requestor was added to operations, which contains information about the caller that initiated the operation.
API extension: operation_requestor
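For example, listing operations with full details shows the new field (the exact shape of the requestor object is described in the API reference):

```shell
# List running operations, including their requestor information.
lxc query "/1.0/operations?recursion=1"
```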
Resources disk used by information¶
A new field used_by was added to disks in the resources API, indicating whether a disk is used by a virtual parent device, such as bcache.
API extension: resources_disk_used_by
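For example (the jq filter assumes disks are listed under storage.disks in the resources response, matching the existing resources API structure):

```shell
# Show each disk with its new used_by information.
lxc query /1.0/resources | jq '.storage.disks[] | {id, used_by}'
```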
Bug fixes¶
The following bug fixes are included in this release.
Local privilege escalation through custom storage volumes (CVE-2025-64507)
lxc init doesn't immediately fail on duplicated instance name if the source image is not cached
Restoring cluster member while evacuating breaks instance relationship with origin
Cluster healing stops network on member that triggers healing
NVIDIA CDI will not work with multiple GPUs when nvidia-persistenced is running
Containers do not start again when the host is not shut down properly
nvidia CDI
Listing instances through fine grained TLS auth is not reliable at scale
Underlying storage uses 4096 bytes sector size when virtual machine images require 512 bytes
Forcibly stopping an instance should not spam logs about leftover sftp server
Concurrent (graphical) console connections to a VM don't close connections
Help does not reflect that there is a difference between lxc shell and lxc exec
The fanotify mechanism does not notice dynamic removal of underlying devices
Removing a member from cluster group that is not in any other group silently ignores request
Backwards-incompatible changes¶
These changes are not compatible with older versions of LXD or its clients.
Asynchronous storage volume and profile API endpoints¶
Certain storage and profile endpoints that were previously synchronous now return an operation and behave asynchronously.
The latest LXD Go client detects the presence of this API extension. When it is available, the caller receives an operation object directly from the LXD server. If the extension is not present on the server, then the server response is wrapped in a completed operation, allowing the caller to handle it as an operation while lacking a retrievable operation ID.
Older LXD Go clients are incompatible with servers that include this extension. Instead of the expected successful response, they receive an operation response.
Endpoints converted to asynchronous behavior:
POST /storage-pools/{pool}/volumes/{type} - Create storage volume
PUT /storage-pools/{pool}/volumes/{type}/{vol} - Update storage volume
PATCH /storage-pools/{pool}/volumes/{type}/{vol} - Patch storage volume
POST /storage-pools/{pool}/volumes/{type}/{vol} - Rename storage volume
DELETE /storage-pools/{pool}/volumes/{type}/{vol} - Delete storage volume
PUT /storage-pools/{pool}/volumes/{type}/{vol}/snapshots/{snap} - Update storage volume snapshot
PATCH /storage-pools/{pool}/volumes/{type}/{vol}/snapshots/{snap} - Patch storage volume snapshot
PUT /1.0/profiles/{name} - Update profile
PATCH /1.0/profiles/{name} - Patch profile
API extension: storage_and_profile_operations
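For example, clients driving the API directly can wait on the returned operation (the pool and volume names are examples):

```shell
# This endpoint now returns an operation; --wait blocks until it completes.
lxc query --wait -X DELETE /1.0/storage-pools/default/volumes/custom/vol1
```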
Deprecated features¶
These features are removed in this release.
Instance placement scriptlet removed¶
The instance placement scriptlet functionality (and the associated instances_placement_scriptlet API extension) has been removed in favor of the new Placement groups functionality.
If a scriptlet was set in the removed instances.placement.scriptlet configuration option, its value is preserved in the user.instances.placement.script configuration option during the upgrade.
Updated minimum Go version¶
If you are building LXD from source instead of using a package manager, the minimum version of Go required to build LXD is now 1.25.4.
Snap packaging changes¶
Settings to disable the AppArmor restricted user namespaces are persisted to /run/sysctl.d/zz-lxd.conf
Dqlite bumped to v1.18.3
LXC bumped to v6.0.5
LXCFS bumped to v6.0.5
Enable lxcfs.pidfd=true by default
LXD-UI bumped to 0.19
NVIDIA-container and toolkit bumped to 1.18.0
QEMU bumped to 8.2.2+ds-0ubuntu1.10
ZFS bumped to zfs-2.3.4
Change log¶
Downloads¶
The source tarballs and binary clients can be found on our download page.
Binary packages are also available for:
Linux: snap install lxd
macOS client: brew install lxc
Windows client: choco install lxc