LXD 6.7 release notes¶
This is a feature release and is not recommended for production use.
These release notes cover updates in the core LXD repository and the LXD snap package. For a tour of LXD UI updates, please see the release announcement in our Discourse forum.
Highlights¶
This section highlights new and improved features in this release.
AMD GPU CDI support¶
LXD now supports AMD GPU passthrough for containers using the AMD CDI container-toolkit bundled in the snap package.
An AMD GPU device can be added to an instance using the command:
lxc config device add <instance_name> <device_name> gpu gputype=physical id=amd.com/gpu=0
Documentation: gputype: physical
API extension: gpu_cdi_amd
Improved VM GPU passthrough support with major new QEMU and EDK2 versions¶
As we approach the next LXD LTS release, the snap package has been updated with QEMU 10.2 and EDK2 firmware 2025.02. These are significant version increases from the previous QEMU 8.2.2 and EDK2 2023.11.
In particular, VM GPU device passthrough now offers increased compatibility because dynamic MMIO window sizing is enabled.
Simplified initial access to the LXD UI¶
The lxd init command now offers the option to generate an initial access link for the UI during initialization.
This initial access URL can be used to directly access the LXD UI as an admin user for 24 hours, after which time the URL stops working.
This initial access link is intended as a quick way to get started with the LXD UI, after which you can set up permanent authentication methods such as permanent UI access using a browser certificate, or OpenID Connect authentication.
Documentation: UI access using initial link
Storage pool database recovery support for clusters¶
As part of the database recovery process it might be necessary to scan existing storage pools previously created by LXD that still exist on the storage device.
Previously this was only possible for standalone LXD servers by using the lxd recover tool.
We have now re-worked the database disaster recovery process to support LXD clusters.
As part of this change, storage pools must be re-created in the LXD database before running the lxd recover tool.
For storage pools that still exist on the storage device a new source.recover option is available that allows creating the storage pool database record without modifying the data on the storage device.
Previously this was only partially possible for some of the drivers (e.g. by using lvm.vg.force_reuse), but not directly supported.
The new pool source.recover configuration key can be set per cluster member to allow reuse of an existing pool source.
The source.recover option does not allow reusing the same source for multiple storage pools, however the LVM storage driver has the specific lvm.vg.force_reuse configuration key for this purpose.
Documentation: How to recover instances in case of disaster
API extension: storage_source_recover
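As an illustrative sketch (the pool name, driver, source, and member names below are hypothetical), re-creating the database record for a surviving ZFS pool in a cluster might look like:

```shell
# Re-create the pool database record on each cluster member without
# modifying the data on the storage device.
lxc storage create default zfs source=tank/lxd source.recover=true --target server1
lxc storage create default zfs source=tank/lxd source.recover=true --target server2

# Finalize the cluster-wide pool record.
lxc storage create default zfs

# Then scan the pool for existing instances and volumes.
lxd recover
```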
Forced instance deletion through API¶
The DELETE /1.0/instances/{name} endpoint now supports a force query parameter. When set, running instances are forcibly stopped before deletion.
The lxc CLI now uses this parameter directly, rather than performing a force-stop API call followed by a delete API call as before.
API extension: instance_force_delete
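As a sketch (the instance name v1 is illustrative), the new behaviour can be exercised either through the CLI or directly against the API:

```shell
# Force-delete a running instance in a single operation.
lxc delete v1 --force

# Equivalent raw API request via the lxc query helper:
# the force parameter stops the instance before deleting it.
lxc query --wait -X DELETE "/1.0/instances/v1?force=true"
```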
Bearer authentication method¶
A new identity type bearer has been added that allows authentication with the LXD API using bearer tokens.
If applicable, the endpoint /1.0/auth/identities/current now also exposes the credential expiration time.
The expires_at field is set when the current identity is trusted and the authentication method is either bearer or tls.
In these cases, it reports the expiration time of the bearer token or the TLS certificate, respectively.
Documentation: Bearer token authentication
API extension: auth_bearer
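Assuming a bearer token has already been issued for a bearer identity (the token and server address below are placeholders), authenticating a request uses a standard Authorization header:

```shell
# Query the current identity using a bearer token; the expires_at field
# in the response reports the token's expiration time.
curl -k -H "Authorization: Bearer <token>" \
  "https://<server>:8443/1.0/auth/identities/current"
```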
VM bus port limits¶
There is now a limits.max_bus_ports configuration key for virtual machines.
This option controls the maximum allowed number of user configurable devices that require a dedicated PCI/PCIe bus port.
This limit includes both the devices attached before the instance start and the devices hotplugged when the instance is running.
When the limit is set higher than the number of bus ports required at VM start time, the remaining ports are available for hot-plugging devices.
This limit was introduced to avoid the previous behaviour, where 8 spare hot-plug ports were added to a VM at start time. That behaviour was non-deterministic: after hot-plugging devices into the spare ports and rebooting, a further 8 spare ports would be added, which could eventually leave the guest OS unable to boot.
This new setting allows control over how many bus ports are added to the VM.
API extension: vm_limits_max_bus_ports
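For example (the instance name and value are illustrative), the limit is set like any other instance configuration key:

```shell
# Reserve 16 PCI/PCIe bus ports; ports not used by devices present at
# start time remain available for hot-plugging.
lxc config set v1 limits.max_bus_ports=16
```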
Optimized instance state field retrieval¶
Added support for selective recursion of state fields to speed up querying for instances in circumstances where not all state information is required.
The API now supports selective state field fetching using semicolon-separated syntax in the recursion parameter:
recursion=2;fields=state.disk - Fetch only disk information
recursion=2;fields=state.network - Fetch only network information
recursion=2;fields=state.disk,state.network - Fetch both disk and network information
recursion=2;fields= - Fetch no expensive state fields (disk and network are skipped)
recursion=2 - Fetch all fields (default behaviour)
The lxc list command now automatically optimizes queries based on requested columns.
API extension: instances_state_selective_recursion
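As a quick sketch using the lxc query helper (note the quoting, since ; is a shell separator):

```shell
# Fetch instances with only the disk state populated.
lxc query "/1.0/instances?recursion=2;fields=state.disk"

# Skip all expensive state fields (disk and network).
lxc query "/1.0/instances?recursion=2;fields="
```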
Container swap reporting on ZFS in /proc/meminfo¶
An updated LXCFS version has been bundled in the snap package that now allows a container’s swap usage on ZFS to be reported in the container’s /proc/meminfo file.
amd64v3 architecture variant support¶
Added support for running images built for the amd64v3 architecture variant. Such optimized images are currently available for the upcoming release of Ubuntu Resolute.
UI updates¶
This release includes significant improvements and new features across networking, instances, clustering, storage, authentication, and overall user experience in the LXD UI.
Placement group management¶
Full management support for placement groups has been added. You can now create, edit, delete, and manage placement groups directly in the UI, improving workload distribution and cluster-aware placement of instances.
Instance console and usability improvements¶
Major improvements were made to the instance console and interaction model:
Clipboard sync between desktop VM console and host OS (including Windows guests)
Allow ALT and CTRL keys in console
Proper numpad key handling
Better graphic console scaling on narrow layouts
Prevent spurious connection close errors when leaving console tabs
Overall, console reliability and UX are significantly improved.
NIC device configuration UX improvements¶
Enhanced NIC device configuration for instances and profiles:
Move NIC device edit mode into side panel
Revamp NIC read mode
Added UI support for static IP management
Support for ACLs and ACL default actions on instance NICs
This provides more reliable and user-friendly network configuration.
Rich chips and rich tooltips¶
The UI now includes expanded rich chips and rich tooltips across multiple entities:
Instances
Profiles
Networks
Projects
Cluster members
Storage pools
This improves discoverability and provides more contextual information.
Cluster improvements¶
Display total memory and CPU limits correctly across clusters
Show memory information for stand-alone servers
Add memory column to cluster member list
Ensure partial network lists are shown when one cluster member is down
These enhancements improve cluster visibility and resilience in degraded scenarios.
Error screens harmonization¶
All “Not found” and error screens were harmonized for consistency, improving UX coherence across the application.
Cloud-init full-screen editor¶
The Cloud-init form now supports a full-screen editor mode, making large configuration editing significantly easier.
Storage improvements¶
Migrate storage volumes between cluster members¶
The UI now supports migrating storage volumes to another cluster member, improving cluster flexibility and maintenance workflows.
Updated storage visuals¶
Storage pools, volumes, and buckets now use updated icons for better clarity.
Force delete and protected instance handling¶
Instance and project deletion flows were improved:
Support for force-stopping and deleting running or frozen instances
Added project force delete, also showing all contained entities that will be deleted
These updates make destructive actions clearer and more robust.
Authentication and identity flow improvements¶
Improved first user access flow
Improved identity creation modal and validation
This strengthens onboarding and identity configuration clarity.
Network and IPAM UI refinements¶
Optimized column widths for IPAM and network leases
Improved retry logic for network API requests
Better handling when editing networks on localhost
Ensure correct link generation for network forwards
These changes improve reliability and layout clarity in networking workflows.
Instance UX refinements¶
Numerous refinements improve instance workflows:
Highlight active configuration sections
Allow ISO attach/detach while powered off
Improved image selection handling
Stable instance sorting during migration
Adjust spacing in detail panel
These refinements create a more predictable and polished instance management experience.
Local Network peering and IPAM improvements¶
The UI now supports management of Local Network peering for OVN networks. Additionally, IPAM and network lease pages now link directly to NIC static IP configuration.
Build and routing improvements¶
Improved handling of relative URLs in deployments with a load balancer or reverse proxy
Ensured correct root path handling across UI links
Updated routing and internal dependency structure
Bug fixes¶
The following bug fixes are included in this release.
Container environment configuration newline injection (CVE-2026-23953 from Incus)
Container image templating arbitrary host file read and write (CVE-2026-23954 from Incus)
Instance POST changing target and project/pool cannot be mixed
zfs.clone_copy=rebase option does not work for copying volumes
Used by list of ACL shows instance multiple times if instance has multiple ACLs
systemd services with credentials fail to start in containers with systemd v259 (Resolute)
Volume snapshots can be attached using source=<volume>/<snapshot> rather than requiring use of the source.snapshot key
Unable to upgrade from 5.21 to 6.6: Assertion 'header.wal_size == 0' failed
Network create leaves stale database record if interrupted (context canceled)
dnsmasq log files are left behind after deleting the associated network
Backwards-incompatible changes¶
These changes are not compatible with older versions of LXD or its clients.
Minimum system requirement changes¶
The minimum supported version of some components has changed:
Kernel 6.8
LXC 5.0.0
QEMU 8.2.2
virt-v2v 2.3.4
ZFS 2.2
VM security.csm and security.secure_boot options combined into boot.mode option¶
The security.csm and security.secure_boot VM options have been combined into the new boot.mode configuration key to control the VM boot firmware mode.
The new setting accepts:
uefi-secureboot (default) - Use UEFI firmware with secure boot enabled
uefi-nosecureboot - Use UEFI firmware with secure boot disabled
bios - Use legacy BIOS firmware (SeaBIOS), x86_64 (amd64) only
API extension: instance_boot_mode
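For example (the VM name is illustrative), switching a VM to legacy BIOS firmware now uses the single key:

```shell
# Replaces the former security.csm=true / security.secure_boot=false pair.
lxc config set vm1 boot.mode=bios
```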
Instance type specific API endpoints and Container specific Go SDK functions removed¶
The /1.0/containers and /1.0/virtual-machines endpoints have been removed along with all the container specific Go SDK functions.
Clients using these endpoints should be updated to use the combined /1.0/instances endpoints and Instance related Go SDK functions.
Documentation: Main API specification
Operation resources URL changes¶
Each operation event has a resources field that contains URLs of LXD entities that the operation depends on.
When an instance, instance backup, or storage volume backup is created, it is not strictly required for the caller to provide the name of the new resource.
In this case, the URL of the expected resource was added to the resources map for clients to inspect and use.
The resources field then contains both a dependency of the operation, and the newly created resource (which may not exist yet).
To improve consistency, an optional entity_url field has been added to operation metadata that contains the URL of the entity that will be created.
The field is only included when a resource is being created asynchronously (operation response), and where it is not required for the entity name to be specified by the client.
For synchronous resource creation, clients should inspect the Location header for the same information.
The resources field will no longer contain this information.
Additionally the URLs presented in the resources field have been reviewed and in several cases updated to reflect the correct existing entities.
API extension: operation_metadata_entity_url
Asynchronous project deletion¶
The forced project deletion API extension added support for forcibly deleting a project and all of its contents.
This can take a long time, but the DELETE /1.0/projects/{name} endpoint was previously returning a synchronous response.
Now this endpoint has been changed to an asynchronous operation response. As with the storage and profile operation extension, this extension is forward compatible only.
API extension: project_delete_operation
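Because the endpoint now returns an operation, API clients should wait on it. A sketch using the lxc query helper (the project name is illustrative):

```shell
# Delete a project and wait for the background operation to finish.
lxc query --wait -X DELETE "/1.0/projects/demo"
```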
Go SDK changes¶
The following backwards-incompatible changes were made to the LXD Go SDK and will require updates to consuming applications. However, these client functions remain backward compatible with older LXD servers.
Deprecated features¶
These features are removed in this release.
VM 9p filesystem support for custom disk devices removed¶
Due to the change to QEMU 10.2 (which removed virtfs-proxy-helper support), LXD no longer supports exporting custom filesystem disk devices to VM guests using the 9p protocol. Custom filesystem disk devices can now only be exported to the VM guest using the virtiofs protocol.
However, the read-only config drive used to bootstrap the lxd-agent inside the guest is still exported via both the 9p and virtiofs protocols for maximum lxd-agent guest OS compatibility.
Updated minimum Go version¶
If you are building LXD from source instead of using a package manager, the minimum version of Go required to build LXD is now 1.25.7.
Snap packaging changes¶
AMD container-toolkit added at v1.2.0
EDK2 bumped to 2025.02-8ubuntu3
Dqlite bumped to v1.18.5
LXCFS bumped to v6.0.6
LXD-UI bumped to 0.20
NVIDIA-container and toolkit bumped to v1.18.2
QEMU bumped to 10.2.1+ds-1ubuntu1
ZFS bumped to zfs-2.4.1, zfs-2.3.6
virtfs-proxy-helper removed (no longer supported by QEMU 10.2)
Change log¶
Downloads¶
The source tarballs and binary clients can be found on our download page.
Binary packages are also available for:
Linux: snap install lxd --channel=6/stable
macOS client: brew install lxc
Windows client: choco install lxc