HPE Alletra - alletra

HPE Alletra is a storage solution that provides redundant block storage across the network.

LXD supports connecting to HPE Alletra storage through iSCSI or NVMe/TCP. In addition, HPE Alletra offers copy-on-write snapshots, thin provisioning, and other features.

Using HPE Alletra with LXD requires HPE Alletra WSAPI version 1. Additionally, ensure that the required kernel modules for the selected protocol are installed on your host system.

Terminology

Each storage pool created in LXD using an HPE Alletra driver represents an HPE Alletra volume set, which is an abstraction that groups multiple volumes under a specific name.

LXD creates volumes within a volume set that is identified by the storage pool name. When the first volume needs to be mapped to a specific LXD host, a corresponding HPE Alletra host entity is created with the name of the LXD host and a suffix of the used protocol. For example, if the LXD host is host01 and the mode is nvme, the resulting HPE Alletra host entity would be host01-nvme.

The HPE Alletra host is then connected with the required volumes to allow attaching and accessing volumes from the LXD host. The HPE Alletra host is automatically removed once there are no volumes connected to it.

The alletra driver in LXD

The alletra driver in LXD uses HPE Alletra volumes for custom storage volumes, instances, and snapshots. All created volumes are thin-provisioned block volumes. If required (for example, for containers and custom file system volumes), LXD formats the volume with a desired file system.

LXD expects HPE Alletra to be pre-configured with a specific service (such as iSCSI) on the network interfaces whose addresses you provide during storage pool configuration. Furthermore, LXD assumes that it has full control over the HPE Alletra volume sets it manages. Therefore, do not keep any volumes in HPE Alletra volume sets unless they are owned by LXD, because LXD might disconnect or even delete them.

This driver provides remote storage. As a result, and depending on the internal network, storage access might be a bit slower compared to local storage. On the other hand, using remote storage has significant advantages in a cluster setup: all cluster members have access to the same storage pools with the exact same contents, without the need to synchronize them.

When creating a new storage pool using the alletra driver, LXD automatically discovers the array’s qualified name and target address. Upon successful discovery, LXD attaches all volumes that are connected to the HPE Alletra host that is associated with a specific LXD server. HPE Alletra hosts and volume connections (vLUNs) are fully managed by LXD.
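As an illustration, a pool can be created with a single command. This is a minimal sketch: the pool name, WSAPI address, CPG name, and credentials below are placeholders to adapt to your environment.

    # Create a storage pool backed by HPE Alletra, mapping volumes over NVMe/TCP.
    # All values are placeholders; replace them with your array's details.
    lxc storage create my-alletra-pool alletra \
        alletra.wsapi=https://alletra.example.com \
        alletra.user.name=admin \
        alletra.user.password=secret \
        alletra.cpg=my-cpg \
        alletra.mode=nvme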

Volume snapshots are also supported by HPE Alletra. When a volume with at least one snapshot is copied, LXD sequentially copies snapshots into the destination volume, from which a new snapshot is created. Finally, once all snapshots are copied, the source volume is copied into the destination volume.
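For example, copying a custom volume together with its snapshots uses the regular copy command; the pool and volume names below are placeholders.

    # Copy a custom volume within the same pool; its snapshots are copied as described above.
    lxc storage volume copy my-alletra-pool/vol1 my-alletra-pool/vol1-copy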

Volume names

Like the Pure Storage driver, the alletra driver uses the volume’s volatile.uuid to generate a volume name.

For example, a UUID 5a2504b0-6a6c-4849-8ee7-ddb0b674fd14 is first trimmed of any hyphens (-), resulting in the string 5a2504b06a6c48498ee7ddb0b674fd14. To distinguish volume types and snapshots, special identifiers are prepended and appended to the volume names, as depicted in the table below:

Type            | Identifier | Example
Container       | c-         | c-5a2504b06a6c48498ee7ddb0b674fd14
Virtual machine | v-         | v-5a2504b06a6c48498ee7ddb0b674fd14-b (block volume) and v-5a2504b06a6c48498ee7ddb0b674fd14 (file system volume)
Image (ISO)     | i-         | i-5a2504b06a6c48498ee7ddb0b674fd14-i
Custom volume   | u-         | u-5a2504b06a6c48498ee7ddb0b674fd14 (file system volume) and u-5a2504b06a6c48498ee7ddb0b674fd14-b (block volume)
Snapshot        | s          | sc-5a2504b06a6c48498ee7ddb0b674fd14 (container snapshot), sv-5a2504b06a6c48498ee7ddb0b674fd14-b (VM snapshot) and su-5a2504b06a6c48498ee7ddb0b674fd14 (custom volume snapshot)
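The mapping can be reproduced with a small shell sketch (using the example UUID from the table and the container prefix c-):

    # Derive the HPE Alletra volume name for a container from its volatile.uuid.
    uuid="5a2504b0-6a6c-4849-8ee7-ddb0b674fd14"
    echo "c-$(echo "$uuid" | tr -d '-')"   # prints c-5a2504b06a6c48498ee7ddb0b674fd14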

Limitations

The alletra driver has the following limitations:

Volume size constraints

The minimum volume size (quota) is 256MiB and must be a multiple of 256MiB. If the requested size does not meet these conditions, LXD automatically rounds it up to the nearest valid value.
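For instance, requesting a size that is not a multiple of 256MiB is rounded up to the next valid value; the pool and volume names below are placeholders.

    # Request 300MiB; the quota is rounded up to the next 256MiB multiple (512MiB).
    lxc storage volume create my-alletra-pool vol1 size=300MiB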

Sharing an HPE Alletra storage pool between multiple LXD installations

Sharing an HPE Alletra array among multiple LXD installations is possible, provided that the installations use distinct storage pool names. Storage pools are implemented as volume sets on the array, and volume set names must be unique.

Recovering HPE Alletra storage pools

Recovery of HPE Alletra storage pools using lxd recover is currently not supported.

Configuration options

The following configuration options are available for storage pools that use the alletra driver, as well as storage volumes in these pools.

Storage pool configuration

alletra.cpg

HPE Alletra Common Provisioning Group (CPG) name

Key: alletra.cpg
Type: string

alletra.mode

How volumes are mapped to the local server

Key: alletra.mode
Type: string
Default: the discovered mode

The mode to use to map storage volumes to the local server. Supported values are iscsi and nvme.

alletra.target

List of target addresses.

Key: alletra.target
Type: string
Default: the discovered targets

A comma-separated list of target addresses. If empty, LXD discovers and connects to all available targets. Otherwise, it only connects to the specified addresses.
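For example, to pin the pool to specific portal addresses (the addresses below are placeholders; depending on your setup, you might instead provide this key when creating the pool):

    # Connect only to the listed target addresses instead of discovering all of them.
    lxc storage set my-alletra-pool alletra.target 10.10.10.11,10.10.10.12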

alletra.user.name

HPE Alletra storage admin username

Key: alletra.user.name
Type: string

alletra.user.password

HPE Alletra storage admin password

Key: alletra.user.password
Type: string

alletra.wsapi

Address of the HPE Alletra Storage UI/WSAPI

Key: alletra.wsapi
Type: string

alletra.wsapi.verify

Whether to verify the HPE Alletra Storage UI/WSAPI certificate

Key: alletra.wsapi.verify
Type: bool
Default: true

rsync.bwlimit

Upper limit on the socket I/O for rsync

Key: rsync.bwlimit
Type: string
Default: 0 (no limit)
Scope: global

When rsync must be used to transfer storage entities, this option specifies the upper limit to be placed on the socket I/O.

rsync.compression

Whether to use compression while migrating storage pools

Key: rsync.compression
Type: bool
Default: true
Scope: global

volume.size

Size/quota of the storage volume

Key: volume.size
Type: string
Default: 10GiB

Default storage volume size rounded to 256MiB. The minimum size is 256MiB.

Tip

In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
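For example, a pool-level volume.size acts as the default size for new volumes in that pool; the pool name and size below are placeholders.

    # New volumes in this pool default to 20GiB unless a size is given explicitly.
    lxc storage set my-alletra-pool volume.size 20GiB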

Storage volume configuration

block.filesystem

File system of the storage volume

Key: block.filesystem
Type: string
Default: same as volume.block.filesystem
Condition: block-based volume with content type filesystem

Valid options are: btrfs, ext4, xfs. If not set, ext4 is assumed.
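For example, to create a custom volume formatted with xfs instead of the default ext4 (placeholder pool and volume names):

    # Create a custom file system volume formatted with xfs.
    lxc storage volume create my-alletra-pool vol1 block.filesystem=xfs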

block.mount_options

Mount options for block-backed file system volumes

Key: block.mount_options
Type: string
Default: same as volume.block.mount_options
Condition: block-based volume with content type filesystem

security.shared

Enable volume sharing

Key: security.shared
Type: bool
Default: same as volume.security.shared or false
Condition: virtual-machine or custom block volume
Scope: global

Enabling this option allows sharing the volume across multiple instances despite the possibility of data loss.
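For example, to allow a custom block volume to be attached to more than one instance (placeholder names; concurrent writes can corrupt data unless the instances coordinate access):

    # Enable sharing for a custom block volume.
    lxc storage volume set my-alletra-pool vol1 security.shared=true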

security.shifted

Enable ID shifting overlay

Key: security.shifted
Type: bool
Default: same as volume.security.shifted or false
Condition: custom volume
Scope: global

Enabling this option allows attaching the volume to multiple isolated instances.

security.unmapped

Disable ID mapping for the volume

Key: security.unmapped
Type: bool
Default: same as volume.security.unmapped or false
Condition: custom volume
Scope: global

size

Size/quota of the storage volume

Key: size
Type: string
Default: 10GiB

Default storage volume size rounded to 256MiB. The minimum size is 256MiB.

snapshots.expiry

When snapshots are to be deleted

Key: snapshots.expiry
Type: string
Default: same as volume.snapshots.expiry
Condition: custom volume
Scope: global

Specify an expression like 1M 2H 3d 4w 5m 6y.
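For example, to expire snapshots of a custom volume three days after creation (placeholder names):

    # Delete snapshots of this volume automatically after three days.
    lxc storage volume set my-alletra-pool vol1 snapshots.expiry=3d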

snapshots.pattern

Template for the snapshot name

Key: snapshots.pattern
Type: string
Default: same as volume.snapshots.pattern or snap%d
Condition: custom volume
Scope: global

You can specify a naming template that is used for scheduled snapshots and unnamed snapshots.

The snapshots.pattern option takes a Pongo2 template string to format the snapshot name.

To add a time stamp to the snapshot name, use the Pongo2 context variable creation_date. Make sure to format the date in your template string to avoid forbidden characters in the snapshot name. For example, set snapshots.pattern to {{ creation_date|date:'2006-01-02_15-04-05' }} to name the snapshots after their time of creation, down to the precision of a second.

Another way to avoid name collisions is to use the placeholder %d in the pattern. For the first snapshot, the placeholder is replaced with 0. For subsequent snapshots, the existing snapshot names are taken into account to find the highest number at the placeholder’s position. This number is then incremented by one for the new name.
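For example, to apply the timestamp-based pattern mentioned above to a custom volume (placeholder pool and volume names):

    # Name new snapshots after their creation time, down to the second.
    lxc storage volume set my-alletra-pool vol1 \
        snapshots.pattern="{{ creation_date|date:'2006-01-02_15-04-05' }}"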

snapshots.schedule

Schedule for automatic volume snapshots

Key: snapshots.schedule
Type: string
Default: same as volume.snapshots.schedule
Condition: custom volume
Scope: global

Specify either a cron expression (<minute> <hour> <dom> <month> <dow>), a comma-separated list of schedule aliases (@hourly, @daily, @midnight, @weekly, @monthly, @annually, @yearly), or leave empty to disable automatic snapshots (the default).
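For example, to snapshot a custom volume every day at 06:00 (placeholder names):

    # Take a snapshot of this volume daily at 06:00.
    lxc storage volume set my-alletra-pool vol1 snapshots.schedule="0 6 * * *"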

volatile.devlxd.owner

The ID of the DevLXD identity which owns the volume

Key: volatile.devlxd.owner
Type: string
Default: DevLXD owner identity ID
Scope: global

volatile.idmap.last

JSON-serialized UID/GID map that has been applied to the volume

Key: volatile.idmap.last
Type: string
Condition: filesystem

volatile.idmap.next

JSON-serialized UID/GID map that has been applied to the volume

Key: volatile.idmap.next
Type: string
Condition: filesystem

volatile.uuid

The volume’s UUID

Key: volatile.uuid
Type: string
Default: random UUID
Scope: global