Pure Storage - `pure`
Pure Storage is a software-defined storage solution that provides redundant block storage, consumed across the network.
LXD supports connecting to Pure Storage clusters through one of two protocols: iSCSI or NVMe/TCP. In addition, Pure Storage offers copy-on-write snapshots, thin provisioning, and other features.
Using Pure Storage with LXD requires a Pure Storage API version of at least 2.21, which corresponds to a minimum Purity//FA version of 6.4.2.
Additionally, ensure that the required kernel modules for the selected protocol are installed on your host system. For iSCSI, the iSCSI CLI named `iscsiadm` needs to be installed in addition to the required kernel modules.
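For example, on an Ubuntu or Debian host, the prerequisites might be installed as follows; the package and module names below are assumptions for those distributions, so check your distribution's documentation:

```
# For iSCSI mode: install the iSCSI CLI (iscsiadm) and its kernel modules
sudo apt install open-iscsi

# For NVMe/TCP mode: install the NVMe CLI and load the NVMe/TCP kernel modules
sudo apt install nvme-cli
sudo modprobe nvme_fabrics
sudo modprobe nvme_tcp
```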
Terminology
Each storage pool created in LXD using a Pure Storage driver represents a Pure Storage pod, which is an abstraction that groups multiple volumes under a specific name. One benefit of using Pure Storage pods is that they can be linked with multiple Pure Storage arrays to provide additional redundancy.
LXD creates volumes within a pod that is identified by the storage pool name.
When the first volume needs to be mapped to a specific LXD host, a corresponding Pure Storage host is created with the name of the LXD host and a suffix of the used protocol.
For example, if the LXD host is `host01` and the mode is `nvme`, the resulting Pure Storage host would be `host01-nvme`.
The Pure Storage host is then connected to the required volumes, so that the volumes can be attached to and accessed from the LXD host. The created Pure Storage host is automatically removed once no volumes remain connected to it.
The `pure` driver in LXD
The `pure` driver in LXD uses Pure Storage volumes for custom storage volumes, instances, and snapshots.
All created volumes are thin-provisioned block volumes. If required (for example, for containers and custom file system volumes), LXD formats the volume with a desired file system.
LXD expects Pure Storage to be pre-configured with a specific service (e.g. iSCSI) on network interfaces whose address is provided during storage pool configuration. Furthermore, LXD assumes that it has full control over the Pure Storage pods it manages. Therefore, you should never maintain any volumes in Pure Storage pods that are not owned by LXD because LXD might disconnect or even delete them.
This driver behaves differently than some of the other drivers in that it provides remote storage. As a result, and depending on the internal network, storage access might be a bit slower compared to local storage. On the other hand, using remote storage has significant advantages in a cluster setup: all cluster members have access to the same storage pools with the exact same contents, without the need to synchronize them.
When creating a new storage pool using the `pure` driver in either `iscsi` or `nvme` mode, LXD automatically discovers the array’s qualified name and target address (portal).
Upon successful discovery, LXD attaches all volumes that are connected to the Pure Storage host that is associated with a specific LXD server.
Pure Storage hosts and volume connections are fully managed by LXD.
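As an illustration, a pool might be created as follows. The pool name, gateway address, and API token are placeholders, and the `pure.gateway` and `pure.api.token` keys are assumptions, as they are not among the options documented below:

```
# Hypothetical example: create a Pure Storage pool in NVMe/TCP mode
lxc storage create pure-pool pure \
    pure.gateway=https://<pure-storage-address> \
    pure.api.token=<api-token> \
    pure.mode=nvme
```

If `pure.mode` is omitted, LXD falls back to the discovered mode, as described under the configuration options below.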
Volume snapshots are also supported by Pure Storage. However, each snapshot is associated with a parent volume and cannot be directly attached to the host. Therefore, when a snapshot is being exported, LXD creates a temporary volume behind the scenes. This volume is attached to the LXD host and removed once the operation is completed. Similarly, when a volume with at least one snapshot is being copied, LXD sequentially copies the snapshots into the destination volume, creating a new snapshot of the destination volume after each copy. Finally, once all snapshots are copied, the source volume itself is copied into the destination volume.
Volume names
Due to a limitation in Pure Storage, volume names cannot exceed 63 characters.
Therefore, the driver uses the volume’s `volatile.uuid` to generate a shorter volume name.
For example, a UUID `5a2504b0-6a6c-4849-8ee7-ddb0b674fd14` is first trimmed of any hyphens (`-`), resulting in the string `5a2504b06a6c48498ee7ddb0b674fd14`.
To distinguish volume types and snapshots, special identifiers are prepended and appended to the volume names, as depicted in the table below:
| Type | Identifier | Example |
|---|---|---|
| Container | `c-` | `c-5a2504b06a6c48498ee7ddb0b674fd14` |
| Virtual machine | `v-` | `v-5a2504b06a6c48498ee7ddb0b674fd14-b` |
| Image (ISO) | `i-` | `i-5a2504b06a6c48498ee7ddb0b674fd14-i` |
| Custom volume | `u-` | `u-5a2504b06a6c48498ee7ddb0b674fd14` |
| Snapshot | `s` | `sc-5a2504b06a6c48498ee7ddb0b674fd14` |
Limitations
The `pure` driver has the following limitations:
- Volume size constraints: The minimum volume size (quota) is `1MiB`, and the size must be a multiple of `512B`.
- Snapshots cannot be mounted: Snapshots cannot be mounted directly to the host. Instead, a temporary volume must be created to access the snapshot’s contents. For internal operations, such as copying instances or exporting snapshots, LXD handles this automatically.
- Sharing the Pure Storage storage pool between installations: Sharing the same Pure Storage storage pool between multiple LXD installations is not supported. If a different LXD installation tries to create a storage pool with a name that already exists, an error is returned.
- Recovering Pure Storage storage pools: Recovery of Pure Storage storage pools using `lxd recover` is currently not supported.
Configuration options
The following configuration options are available for storage pools that use the `pure` driver, as well as for storage volumes in these pools.
Storage pool configuration
Key: | pure.gateway.verify |
Type: | bool |
Default: | true |
Whether to verify the Pure Storage gateway's certificate.
Key: | pure.mode |
Type: | string |
Default: | the discovered mode |
The mode to use to map Pure Storage volumes to the local server.
Supported values are `iscsi` and `nvme`.
Key: | pure.target |
Type: | string |
Default: | the discovered targets |
A comma-separated list of target addresses. If empty, LXD discovers and connects to all available targets. Otherwise, it only connects to the specified addresses.
Key: | volume.size |
Type: | string |
Default: | |
Default Pure Storage volume size rounded to 512B. The minimum size is 1MiB.
Tip
In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
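For instance, assuming a pool named `pure-pool` already exists, pool-level defaults for new volumes could be set like this (the pool name and values are placeholders):

```
# Set a default size and file system for new volumes in this pool
lxc storage set pure-pool volume.size 20GiB
lxc storage set pure-pool volume.block.filesystem ext4
```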
Storage volume configuration
Key: | block.filesystem |
Type: | string |
Default: | same as volume.block.filesystem |
Condition: | block-based volume with content type filesystem |
Valid options are: `btrfs`, `ext4`, `xfs`.
If not set, `ext4` is assumed.
Key: | block.mount_options |
Type: | string |
Default: | same as volume.block.mount_options |
Condition: | block-based volume with content type filesystem |
Mount options for block-backed file system volumes.
Key: | size |
Type: | string |
Default: | same as volume.size |
Size/quota of the storage volume, rounded to 512B. The minimum size is 1MiB.
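As an example, a custom volume with an explicit size and file system might be created as follows (the pool and volume names are placeholders):

```
# Create a custom volume with a 50GiB quota, formatted with XFS
lxc storage volume create pure-pool my-volume size=50GiB block.filesystem=xfs
```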
Key: | snapshots.expiry |
Type: | string |
Default: | same as volume.snapshots.expiry |
Condition: | custom volume |
Scope: | global |
Controls when snapshots are to be deleted. Specify an expression like `1M 2H 3d 4w 5m 6y`.
Key: | snapshots.pattern |
Type: | string |
Default: | same as volume.snapshots.pattern or snap%d |
Condition: | custom volume |
Scope: | global |
You can specify a naming template that is used for scheduled snapshots and unnamed snapshots.
The `snapshots.pattern` option takes a Pongo2 template string to format the snapshot name.
To add a time stamp to the snapshot name, use the Pongo2 context variable `creation_date`.
Make sure to format the date in your template string to avoid forbidden characters in the snapshot name.
For example, set `snapshots.pattern` to `{{ creation_date|date:'2006-01-02_15-04-05' }}` to name the snapshots after their time of creation, down to the precision of a second.
Another way to avoid name collisions is to use the placeholder `%d` in the pattern.
For the first snapshot, the placeholder is replaced with `0`.
For subsequent snapshots, the existing snapshot names are taken into account to find the highest number at the placeholder’s position.
This number is then incremented by one for the new name.
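For instance, a timestamp-based pattern could be applied to a custom volume like this (the pool and volume names are placeholders):

```
# Name snapshots after their creation time, for example 2025-01-31_14-05-09
lxc storage volume set pure-pool my-volume \
    snapshots.pattern "{{ creation_date|date:'2006-01-02_15-04-05' }}"
```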
Key: | snapshots.schedule |
Type: | string |
Default: | same as volume.snapshots.schedule |
Condition: | custom volume |
Scope: | global |
Specify either a cron expression (`<minute> <hour> <dom> <month> <dow>`), a comma-separated list of schedule aliases (`@hourly`, `@daily`, `@midnight`, `@weekly`, `@monthly`, `@annually`, `@yearly`), or leave empty to disable automatic snapshots (the default).
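As an illustration, either an alias or a cron expression can be used (the pool and volume names are placeholders):

```
# Take a snapshot of the custom volume every day at midnight
lxc storage volume set pure-pool my-volume snapshots.schedule "@daily"

# Alternatively, use a cron expression, for example every six hours
lxc storage volume set pure-pool my-volume snapshots.schedule "0 */6 * * *"
```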