# HPE Alletra - `alletra`

HPE Alletra is a storage solution that offers redundant block storage across the network. LXD supports connecting to HPE Alletra storage through either iSCSI or NVMe/TCP. In addition, HPE Alletra offers copy-on-write snapshots, thin provisioning, and other features.

Using HPE Alletra with LXD requires HPE Alletra WSAPI version 1. Additionally, ensure that the required kernel modules for the selected protocol are installed on your host system.
## Terminology
Each storage pool created in LXD using an HPE Alletra driver represents an HPE Alletra volume set, which is an abstraction that groups multiple volumes under a specific name.
LXD creates volumes within a volume set that is identified by the storage pool name.
When the first volume needs to be mapped to a specific LXD host, a corresponding HPE Alletra host entity is created with the name of the LXD host and a suffix of the used protocol. For example, if the LXD host is `host01` and the mode is `nvme`, the resulting HPE Alletra host entity would be `host01-nvme`.
The HPE Alletra host is then connected with the required volumes to allow attaching and accessing volumes from the LXD host. The HPE Alletra host is automatically removed once there are no volumes connected to it.
## The `alletra` driver in LXD

The `alletra` driver in LXD uses HPE Alletra volumes for custom storage volumes, instances, and snapshots.
All created volumes are thin-provisioned block volumes. If required (for example, for containers and custom file system volumes), LXD formats the volume with a desired file system.
LXD expects HPE Alletra to be pre-configured with a specific service (such as iSCSI) on the network interfaces whose addresses you provide during storage pool configuration. Furthermore, LXD assumes that it has full control over the HPE Alletra volume sets it manages. Therefore, do not keep any volumes in HPE Alletra volume sets unless they are owned by LXD, because LXD might disconnect or even delete them.
This driver provides remote storage. As a result, and depending on the internal network, storage access might be a bit slower compared to local storage. On the other hand, using remote storage has significant advantages in a cluster setup: all cluster members have access to the same storage pools with the exact same contents, without the need to synchronize them.
When creating a new storage pool using the `alletra` driver, LXD automatically discovers the array's qualified name and target address.
Upon successful discovery, LXD attaches all volumes that are connected to the HPE Alletra host that is associated with a specific LXD server.
HPE Alletra hosts and volume connections (vLUNs) are fully managed by LXD.
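For example, a pool that uses this driver can be created as follows. This is a minimal sketch: `pool1` is a placeholder name, and it assumes that the WSAPI connection details for your array are supplied through the remaining pool configuration keys required by your environment:

```
# Create a storage pool backed by HPE Alletra, mapped over NVMe/TCP.
lxc storage create pool1 alletra alletra.mode=nvme
```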
Volume snapshots are also supported by HPE Alletra. When a volume with at least one snapshot is copied, LXD sequentially copies snapshots into the destination volume, from which a new snapshot is created. Finally, once all snapshots are copied, the source volume is copied into the destination volume.
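For example, copying a volume together with its snapshots is a single command; LXD performs the sequential snapshot copies described above internally. `pool1`, `vol1`, and `vol2` are placeholder names:

```
# Copy a volume; its snapshots are copied first, then the volume itself.
lxc storage volume copy pool1/vol1 pool1/vol2
```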
## Volume names
As with the Pure Storage driver, the `alletra` driver uses the volume's `volatile.uuid` to generate a volume name.
For example, a UUID `5a2504b0-6a6c-4849-8ee7-ddb0b674fd14` is first trimmed of any hyphens (`-`), resulting in the string `5a2504b06a6c48498ee7ddb0b674fd14`.
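The trimming step is equivalent to the following shell sketch, using the example UUID from above:

```
echo "5a2504b0-6a6c-4849-8ee7-ddb0b674fd14" | tr -d '-'
# Output: 5a2504b06a6c48498ee7ddb0b674fd14
```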
To distinguish volume types and snapshots, special identifiers are prepended and appended to the volume names, as depicted in the table below:

| Type | Identifier | Example |
| --- | --- | --- |
| Container | `c-` | `c-5a2504b06a6c48498ee7ddb0b674fd14` |
| Virtual machine | `v-` | `v-5a2504b06a6c48498ee7ddb0b674fd14-b` (block volume) and `v-5a2504b06a6c48498ee7ddb0b674fd14` (file system volume) |
| Image (ISO) | `i-` | `i-5a2504b06a6c48498ee7ddb0b674fd14-i` |
| Custom volume | `u-` | `u-5a2504b06a6c48498ee7ddb0b674fd14` |
| Snapshot | `s` | `sc-5a2504b06a6c48498ee7ddb0b674fd14` (container snapshot) |
## Limitations

The `alletra` driver has the following limitations:
- Volume size constraints: The minimum volume size (quota) is 256MiB and must be a multiple of 256MiB. If the requested size does not meet these conditions, LXD automatically rounds it up to the nearest valid value (see the example after this list).
- Sharing an HPE Alletra storage pool between multiple LXD installations: Sharing an HPE Alletra array among multiple LXD installations is possible, provided that the installations use distinct storage pool names. Storage pools are implemented as volume sets on the array, and volume set names must be unique.
- Recovering HPE Alletra storage pools: Recovery of HPE Alletra storage pools using `lxd recover` is currently not supported.
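To illustrate the rounding behavior, requesting a 300MiB volume results in an effective size of 512MiB, the next multiple of 256MiB. A minimal sketch, with `pool1` and `vol1` as placeholder names:

```
# LXD rounds 300MiB up to 512MiB, the nearest multiple of 256MiB.
lxc storage volume create pool1 vol1 size=300MiB
```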
## Configuration options

The following configuration options are available for storage pools that use the `alletra` driver, as well as storage volumes in these pools.

### Storage pool configuration
| Key: | `alletra.mode` |
| --- | --- |
| Type: | string |
| Default: | the discovered mode |

The mode to use to map storage volumes to the local server. Supported values are `iscsi` and `nvme`.
| Key: | `alletra.target` |
| --- | --- |
| Type: | string |
| Default: | the discovered targets |

A comma-separated list of target addresses. If empty, LXD discovers and connects to all available targets. Otherwise, it only connects to the specified addresses.
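For example, to restrict the pool to specific target addresses (the pool name and addresses are placeholders):

```
# Connect only to the listed targets instead of discovering all of them.
lxc storage set pool1 alletra.target 10.0.0.10,10.0.0.11
```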
| Key: | `alletra.wsapi.verify` |
| --- | --- |
| Type: | bool |
| Default: | `true` |

Whether to verify the HPE Alletra WSAPI certificate.
| Key: | `rsync.bwlimit` |
| --- | --- |
| Type: | string |
| Default: | `0` (no limit) |
| Scope: | global |

When `rsync` must be used to transfer storage entities, this option specifies the upper limit to be placed on the socket I/O.
| Key: | `rsync.compression` |
| --- | --- |
| Type: | bool |
| Default: | `true` |
| Scope: | global |

Whether to use compression when `rsync` is used to transfer storage entities.
| Key: | `volume.size` |
| --- | --- |
| Type: | string |
| Default: | |

Default storage volume size rounded to 256MiB. The minimum size is 256MiB.
Tip
In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
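For example, to set a default size for all new volumes in a pool (`pool1` is a placeholder name):

```
# New volumes in pool1 default to 10GiB unless a size is specified.
lxc storage set pool1 volume.size 10GiB
```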
### Storage volume configuration
| Key: | `block.filesystem` |
| --- | --- |
| Type: | string |
| Default: | same as `volume.block.filesystem` |
| Condition: | block-based volume with content type `filesystem` |

Valid options are: `btrfs`, `ext4`, `xfs`. If not set, `ext4` is assumed.
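For example, to create a custom volume formatted with XFS instead of the default `ext4` (`pool1` and `myvol` are placeholder names):

```
# Format the new custom volume with XFS.
lxc storage volume create pool1 myvol block.filesystem=xfs
```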
| Key: | `block.mount_options` |
| --- | --- |
| Type: | string |
| Default: | same as `volume.block.mount_options` |
| Condition: | block-based volume with content type `filesystem` |

Mount options for block-backed file system volumes.
| Key: | `security.shifted` |
| --- | --- |
| Type: | bool |
| Default: | same as `volume.security.shifted` or `false` |
| Condition: | custom volume |
| Scope: | global |
Enabling this option allows attaching the volume to multiple isolated instances.
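For example, assuming a custom volume `myvol` in pool `pool1` (placeholder names):

```
# Allow myvol to be attached to multiple isolated instances.
lxc storage volume set pool1 myvol security.shifted true
```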
| Key: | `security.unmapped` |
| --- | --- |
| Type: | bool |
| Default: | same as `volume.security.unmapped` or `false` |
| Condition: | custom volume |
| Scope: | global |

Disable ID mapping for the volume.
| Key: | `size` |
| --- | --- |
| Type: | string |
| Default: | |

Default storage volume size rounded to 256MiB. The minimum size is 256MiB.
| Key: | `snapshots.expiry` |
| --- | --- |
| Type: | string |
| Default: | same as `volume.snapshots.expiry` |
| Condition: | custom volume |
| Scope: | global |

Controls when snapshots are to be deleted. Specify an expression like `1M 2H 3d 4w 5m 6y` (minutes, hours, days, weeks, months, years).
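For example, to delete scheduled snapshots four weeks after creation (`pool1` and `myvol` are placeholder names):

```
# Expire snapshots of myvol after four weeks.
lxc storage volume set pool1 myvol snapshots.expiry 4w
```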
| Key: | `snapshots.pattern` |
| --- | --- |
| Type: | string |
| Default: | same as `volume.snapshots.pattern` or `snap%d` |
| Condition: | custom volume |
| Scope: | global |
You can specify a naming template that is used for scheduled snapshots and unnamed snapshots. The `snapshots.pattern` option takes a Pongo2 template string to format the snapshot name. To add a time stamp to the snapshot name, use the Pongo2 context variable `creation_date`. Make sure to format the date in your template string to avoid forbidden characters in the snapshot name. For example, set `snapshots.pattern` to `{{ creation_date|date:'2006-01-02_15-04-05' }}` to name the snapshots after their time of creation, down to the precision of a second.

Another way to avoid name collisions is to use the placeholder `%d` in the pattern. For the first snapshot, the placeholder is replaced with `0`. For subsequent snapshots, the existing snapshot names are taken into account to find the highest number at the placeholder's position. This number is then incremented by one for the new name.
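For example, to apply the time-stamp pattern shown above to a custom volume (`pool1` and `myvol` are placeholder names):

```
# Name snapshots after their creation time, e.g. 2025-01-31_12-00-00.
lxc storage volume set pool1 myvol snapshots.pattern "{{ creation_date|date:'2006-01-02_15-04-05' }}"
```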
| Key: | `snapshots.schedule` |
| --- | --- |
| Type: | string |
| Default: | same as `volume.snapshots.schedule` |
| Condition: | custom volume |
| Scope: | global |

Specify either a cron expression (`<minute> <hour> <dom> <month> <dow>`), a comma-separated list of schedule aliases (`@hourly`, `@daily`, `@midnight`, `@weekly`, `@monthly`, `@annually`, `@yearly`), or leave empty to disable automatic snapshots (the default).
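For example, to take a daily snapshot of a custom volume (`pool1` and `myvol` are placeholder names):

```
# Snapshot myvol automatically once a day.
lxc storage volume set pool1 myvol snapshots.schedule @daily
```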
| Key: | `volatile.devlxd.owner` |
| --- | --- |
| Type: | string |
| Default: | DevLXD owner identity ID |
| Scope: | global |
| Key: | `volatile.idmap.last` |
| --- | --- |
| Type: | string |
| Condition: | filesystem |

JSON-serialized UID/GID map that has been applied to the volume.

| Key: | `volatile.idmap.next` |
| --- | --- |
| Type: | string |
| Condition: | filesystem |

JSON-serialized UID/GID map that is to be applied to the volume.