Backup and Restore
A volume backup is a copy of an existing volume stored:
- On a distinct cluster, fully decoupled from the one storing the source volume;
- In a different physical location from the one hosting the source cluster.
In the event the source volume is corrupt, lost, or unavailable, existing backups can be used to create new volumes, which can in turn be attached and used by VMs.
Creating volume backups
To create backups, sufficient backup quota (number of backups and total size of backups) must be available for the project. To obtain or increase the backup quota, please file a Quota Change Request.
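To check the backup quota currently granted to the project, one can inspect the volume quotas. A minimal sketch (the exact field names, e.g. 'backups' and 'backup-gigabytes', may vary with the client version):
$ openstack quota show | grep -i backup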
It is possible to create a backup of an existing volume when it is in state 'available' or 'in use' (in the latter case, the --force flag must be passed).
Backups are built on top of snapshots:
- When one triggers the backup creation, a snapshot of the volume is taken. This operation is fast, taking only a few seconds
- The created snapshot is then copied to a target cluster as the actual backup of the volume. The time required is proportional to the amount of data to be copied to the target cluster
- The backup is reported in state 'available' only when the copy to the target cluster is completed. In the meantime, it is possible to continue performing IO against the source volume, as the backup process does not impact its availability or performance
Note: As backups rely on the snapshot functionality, they are point-in-time consistent. For more details on the different consistency levels, please refer to the snapshot documentation.
Creating a volume backup
As an example, let's create the backup for the volume 'myvol10', where:
- --description is a free-text description of the backup
- --force is required as the volume is attached to a VM ('in use')
$ openstack volume backup create myvol10 --name myvol10-initial --description 'Initial backup of myvol10' --force
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 9027e97a-6105-40a2-a6be-4c8c3365b132 |
| name | myvol10-initial |
+-------+--------------------------------------+
One can check the status of a backup with:
$ openstack volume backup show myvol10-initial
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | None |
| container | rbd-backups-513_ec_meta |
| created_at | 2024-02-26T12:20:41.000000 |
| data_timestamp | 2024-02-26T12:20:41.000000 |
| description | Initial backup of myvol10 |
| fail_reason | None |
| has_dependent_backups | False |
| id | 9027e97a-6105-40a2-a6be-4c8c3365b132 |
| is_incremental | False |
| name | myvol10-initial |
| object_count | 0 |
| size | 10 |
| snapshot_id | None |
| status | available |
| updated_at | 2024-02-26T12:20:47.000000 |
| volume_id | 4516da8f-4b3f-4a9b-9a8f-851358ec06ed |
+-----------------------+--------------------------------------+
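For scripting, or for polling until the backup completes, the status field alone can be extracted with the client's output formatting options. A sketch, reusing the backup created above:
$ openstack volume backup show myvol10-initial -c status -f value
available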
Creating incremental backups
It is possible to create incremental backups by passing the --incremental flag. In this case, only the extents that changed since the last backup will be copied over to the target cluster, which is faster than performing a full backup.
If no previous backups exist, the driver will automatically perform a full backup.
$ openstack volume backup create myvol10 --name myvol10-incr1 --description 'First incremental of myvol10' --force --incremental
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 02b87494-1374-4991-98e3-e3888bef06b0 |
| name | myvol10-incr1 |
+-------+--------------------------------------+
When creating incremental backups, the boolean field is_incremental will be set to True:
$ openstack volume backup show myvol10-incr1
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | None |
| container | rbd-backups-513_ec_meta |
| created_at | 2024-02-26T12:29:54.000000 |
| data_timestamp | 2024-02-26T12:29:54.000000 |
| description | First incremental of myvol10 |
| fail_reason | None |
| has_dependent_backups | False |
| id | 02b87494-1374-4991-98e3-e3888bef06b0 |
| is_incremental | True |
| name | myvol10-incr1 |
| object_count | 0 |
| size | 10 |
| snapshot_id | None |
| status | available |
| updated_at | 2024-02-26T12:30:03.000000 |
| volume_id | 4516da8f-4b3f-4a9b-9a8f-851358ec06ed |
+-----------------------+--------------------------------------+
...and the previous backup myvol10-initial will be updated to report that dependent backups exist:
$ openstack volume backup show myvol10-initial
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | None |
| container | rbd-backups-513_ec_meta |
| created_at | 2024-02-26T12:20:41.000000 |
| data_timestamp | 2024-02-26T12:20:41.000000 |
| description | Initial backup of myvol10 |
| fail_reason | None |
| has_dependent_backups | True |
| id | 9027e97a-6105-40a2-a6be-4c8c3365b132 |
| is_incremental | False |
| name | myvol10-initial |
| object_count | 0 |
| size | 10 |
| snapshot_id | None |
| status | available |
| updated_at | 2024-02-26T12:30:03.000000 |
| volume_id | 4516da8f-4b3f-4a9b-9a8f-851358ec06ed |
+-----------------------+--------------------------------------+
Note: It is not possible to delete backups that have dependent backups. It is required to delete all the dependent backups first:
$ openstack volume backup delete myvol10-initial
Failed to delete backup with name or ID 'myvol10-initial': Invalid backup: Incremental backups exist for this backup. (HTTP 400) (Request-ID: req-0172cbb5-6194-4d7d-a67f-6e3be7c8ce84)
1 of 1 backups failed to delete.
Creating volume backups from existing snapshots
If a snapshot of a volume already exists, it is possible to make a backup from it by specifying the --snapshot parameter when creating the backup:
$ openstack volume backup create myvol10 --snapshot myvol10-snapshot1 --name myvol10-fromsnap --description "backup from snapshot 1 of myvol10"
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | cef64b4c-30d3-42c3-a99a-8278ac97e63a |
| name | myvol10-fromsnap |
+-------+--------------------------------------+
$ openstack volume backup show myvol10-fromsnap
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | None |
| container | rbd-backups-513_ec_meta |
| created_at | 2024-02-26T12:39:47.000000 |
| data_timestamp | 2024-02-26T12:39:21.000000 |
| description | backup from snapshot 1 of myvol10 |
| fail_reason | None |
| has_dependent_backups | False |
| id | cef64b4c-30d3-42c3-a99a-8278ac97e63a |
| is_incremental | False |
| name | myvol10-fromsnap |
| object_count | 0 |
| size | 10 |
| snapshot_id | fa8d5506-1db8-48de-b693-07434c65a8ab |
| status | available |
| updated_at | 2024-02-26T12:40:36.000000 |
| volume_id | 4516da8f-4b3f-4a9b-9a8f-851358ec06ed |
+-----------------------+--------------------------------------+
Note: When creating a backup from a snapshot, the backup content will be what was stored in the volume at the time the snapshot was taken (i.e., the backup will match what is stored in the snapshot), and not the current volume content.
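To identify which snapshots already exist for a volume before backing one of them up, they can be listed first. A sketch, assuming the --volume filter is available in your client version:
$ openstack volume snapshot list --volume myvol10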
Restoring from a backup
To identify the backup to restore, one can list available backups with:
$ openstack volume backup list
+--------------------------------------+------------------------+-----------------------------------+-----------+------+
| ID | Name | Description | Status | Size |
+--------------------------------------+------------------------+-----------------------------------+-----------+------+
| cef64b4c-30d3-42c3-a99a-8278ac97e63a | myvol10-fromsnap | backup from snapshot 1 of myvol10 | available | 10 |
| 02b87494-1374-4991-98e3-e3888bef06b0 | myvol10-incr1 | First incremental of myvol10 | available | 10 |
| 9027e97a-6105-40a2-a6be-4c8c3365b132 | myvol10-initial | Initial backup of myvol10 | available | 10 |
+--------------------------------------+------------------------+-----------------------------------+-----------+------+
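When many backups exist across several volumes, the list can be narrowed down to a single volume. A sketch, again assuming the --volume filter is available in your client version:
$ openstack volume backup list --volume myvol10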
Once identified, it is possible to create a new volume from the backup:
$ openstack volume backup restore myvol10-incr1
If no other parameters are passed, a new volume will be created and will:
- Use the default configured volume type, 'standard'
- Have a name matching the name of the source volume (which can be confusing)
- Have a size equal to the backup size
During the restore operation, the new volume will have a temporary name in the form 'restore_backup_' followed by the backup ID, and its status will be 'restoring-backup':
$ openstack volume show restore_backup_02b87494-1374-4991-98e3-e3888bef06b0
+------------------------------+-----------------------------------------------------+
| Field | Value |
+------------------------------+-----------------------------------------------------+
| attachments | [] |
| availability_zone | ceph-geneva-2 |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2024-02-26T16:39:33.000000 |
| description | auto-created_from_restore_from_backup |
| encrypted | False |
| id | 629aa1c8-5b5c-49f2-84af-f25b248f9ae3 |
| multiattach | False |
| name | restore_backup_02b87494-1374-4991-98e3-e3888bef06b0 |
| os-vol-tenant-attr:tenant_id | e6ea4cf0-b857-4864-aeb6-cd91e1a1290e |
| properties | |
| replication_status | None |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | restoring-backup |
| type | standard |
| updated_at | 2024-02-26T16:39:34.000000 |
| user_id | ebocchi |
+------------------------------+-----------------------------------------------------+
The time taken to restore from a backup is proportional to the size of the backup. Once completed, the resulting volume will be available and ready to be attached to a VM:
$ openstack volume list
+--------------------------------------+-----------------------+-----------+------+----------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+-----------------------+-----------+------+----------------------------------+
| 629aa1c8-5b5c-49f2-84af-f25b248f9ae3 | myvol10 | available | 10 | |
| 4516da8f-4b3f-4a9b-9a8f-851358ec06ed | myvol10 | in-use | 10 | Attached to cephdev8 on /dev/vdb |
+--------------------------------------+-----------------------+-----------+------+----------------------------------+
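Since the restored volume ends up with the same name as the source volume, it is safer to refer to it by ID when attaching it to a VM. For example, reusing the VM 'cephdev8' and the volume ID from the listing above:
$ openstack server add volume cephdev8 629aa1c8-5b5c-49f2-84af-f25b248f9ae3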
To restore a backup to a custom volume in terms of name, size, or performance, one must first create a new volume with the desired properties:
$ openstack volume create --description "restored from myvol10-incr1" --size 50 --type io1 myvol-restored
...and specify it as the restore target when calling backup restore, using the --force option:
$ openstack volume backup restore myvol10-incr1 myvol-restored --force
$ openstack volume show myvol-restored
+------------------------------+--------------------------------------+
| Field | Value |
+------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2024-02-26T16:58:45.000000 |
| description | restored from myvol10-incr1 |
| encrypted | False |
| id | e0cf85a6-a225-4296-b91f-114d7b8a48cb |
| multiattach | False |
| name | myvol-restored |
| os-vol-tenant-attr:tenant_id | e6ea4cf0-b857-4864-aeb6-cd91e1a1290e |
| properties | |
| replication_status | None |
| size | 50 |
| snapshot_id | None |
| source_volid | None |
| status | restoring-backup |
| type | io1 |
| updated_at | 2024-02-26T16:58:53.000000 |
| user_id | ebocchi |
+------------------------------+--------------------------------------+
In the example above, a custom volume is used as restore target:
- name is 'myvol-restored'
- type is 'io1' (backup source was standard)
- size is 50 GB (backup source was 10 GB)
Note:
- If a backup is restored to an existing volume, the content of the latter will be overwritten. OpenStack does not have any means to detect whether the volume is already filled with data, hence the --force option is required. Be careful when choosing the volume to restore to.
- When a volume bigger than the backup size is chosen as the restore target, the excess capacity will be zeroed, which is a time-consuming operation.
- When a volume bigger than the backup size is chosen as the restore target, the filesystem will not automatically be grown. Check the documentation in extending volumes for examples on how to grow a filesystem.
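As a quick sketch only (the extending volumes documentation has the full details), an xfs filesystem restored onto a bigger volume and mounted at /restored could be grown to use the extra space with:
$ xfs_growfs /restored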
Deleting backups
One can delete existing backups with:
$ openstack volume backup delete myvol10-fromsnap
The backup will enter state 'deleting' and then disappear from the list of backups.
Note:
- Deletion cannot be reverted!
- It is not possible to delete backups that have dependent backups (e.g., when using incremental backups, only the most recent one can be deleted; see the example below)
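For instance, with the backups created above, the incremental has to be deleted before the initial full backup can be removed:
$ openstack volume backup delete myvol10-incr1
$ openstack volume backup delete myvol10-initial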
Common error messages
Error | When | Cause |
---|---|---|
VolumeBackupSizeExceedsAvailableQuota: Requested backup exceeds allowed Backup gigabytes quota. Requested 10G, quota is 0G and 0G has been consumed. (HTTP 413) | Creating new backups | The backup you are trying to create exceeds the backup size (gigabytes) allowed by the quota. Consider deleting backups that are no longer needed or submitting a quota increase request. |
VolumeBackupLimitExceeded: Maximum number of backups allowed (N) exceeded | Creating new backups | You have exceeded the number of backups allowed by the quota. Consider deleting backups that are no longer needed or submitting a quota increase request. |
VolumeSizeExceedsAvailableQuota (same as when creating volumes) | Restoring from backup | There is not enough quota (in volume gigabytes) to create the volume to be used as the restore target. |
VolumeLimitExceeded (same as when creating volumes) | Restoring from backup | There is not enough quota (in number of volumes) to create the volume to be used as the restore target. |
Restoring an xfs filesystem from backup
When restoring the backup of an xfs filesystem and attaching the resulting volume to the VM that mounts the source volume, mounting the filesystem from the restored backup will fail with:
$ mkdir /restored
$ mount /dev/vdc /restored
mount: /restored: wrong fs type, bad option, bad superblock on /dev/vdc, missing codepage or helper program, or other error.
This is due to xfs using a UUID to identify the filesystem: it is not possible to mount two filesystems with the same UUID, and the UUID is preserved during the backup and restore process:
$ dmesg
[19632.032280] virtio-pci 0000:08:00.0: enabling device (0000 -> 0002)
[19632.062938] virtio_blk virtio6: [vdc] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
[19632.062960] vdc: detected capacity change from 0 to 10737418240
[19644.566244] XFS (vdc): Filesystem has duplicate UUID c4bf4389-2b8a-4ebb-b19b-aab237d91de9 - can't mount
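The duplicate UUID can also be verified from inside the VM, for example with blkid (device names depend on the attachment order); both devices will report the same UUID seen in dmesg above:
$ blkid /dev/vdb /dev/vdc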
To circumvent this problem, it is possible to change the UUID of the filesystem restored from the backup with:
$ xfs_admin -U generate /dev/vdc
Clearing log and setting UUID
writing all SBs
new UUID = e247f880-09cc-4d14-bed6-6bfa95bcff8a
...which will result in a clean mount:
$ mount /dev/vdc /restored
$ ls -l /restored
total 261752
-rw-r--r--. 1 root root 126620400 Feb 23 09:03 linux-5.15.149.tar.xz
-rw-r--r--. 1 root root 141409556 Feb 23 09:59 linux-6.7.6.tar.xz