RAID Configuration

Note

RAID can be configured on a node by the user only if multitenancy is enabled, i.e. the node has a project UUID assigned as its owner in its properties.

Creating a software RAID configuration

To apply a software RAID configuration to a node:

  • the target_raid_config needs to be defined (the example below creates a 100 GB RAID 1 volume and a RAID 0 volume using the remaining space):
    openstack baremetal node set --target-raid-config '{"logical_disks": [{"size_gb": 100, "controller": "software", "raid_level": "1"}, {"size_gb": "MAX", "controller":"software", "raid_level": "0"}]}' <BAREMETAL_NODE>
    
  • the cleaning needs to be initiated (e.g. moving a node from manageable to available):
    openstack baremetal node provide <BAREMETAL_NODE>
    
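The two steps above can be sketched in Python; this is a minimal sketch that builds the payload with the standard json module and prints the CLI calls to run (the node name is a placeholder, and the `openstack` client is assumed to be installed and authenticated):

```python
import json

# target_raid_config from the step above: a 100 GB RAID 1 volume
# plus a RAID 0 volume spanning the remaining space.
target_raid_config = {
    "logical_disks": [
        {"size_gb": 100, "controller": "software", "raid_level": "1"},
        {"size_gb": "MAX", "controller": "software", "raid_level": "0"},
    ]
}

payload = json.dumps(target_raid_config)

# Commands to run (substitute the real node name or UUID):
print(f"openstack baremetal node set --target-raid-config '{payload}' <BAREMETAL_NODE>")
print("openstack baremetal node provide <BAREMETAL_NODE>")
```

Building the payload in code and serializing it with json.dumps guarantees the string passed to --target-raid-config is valid JSON.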

Creating a software RAID with a subset of the disks

Depending on the machine configuration or purpose, we might need to create RAID layouts that don't span all of the available disks.

In our current release, Xena, this can be done in the target RAID configuration using the field physical_disks; see the upstream Ironic RAID documentation for details.

Example 1. RAID1 with 2 specific disks

In this example the underlying hardware comes with disks of different sizes. The disks dedicated to the operating system have a distinct size, so we can use size as the hint criterion when defining the RAID.

The following target_raid_config payload defines a first volume of 100 GB and a second one using the remaining space. The RAID will be configured on two disks selected by the filters in physical_disks:

{
    "logical_disks":
        [
            {
                "size_gb": 100,
                "controller": "software",
                "raid_level": "1",
                "physical_disks": [
                    {"size": "< 3000"},
                    {"size": "< 3000"}
                ]
            },
            {
                "size_gb": "MAX",
                "controller": "software",
                "raid_level": "1",
                "physical_disks": [
                    {"size": "< 3000"},
                    {"size": "< 3000"}
                ]
            }
        ]
}

In one line:

$ openstack baremetal node set --target-raid-config '{"logical_disks": [{"size_gb": 100,"controller": "software","raid_level": "1","physical_disks": [{"size": "< 3000"},{"size": "< 3000"}]}, {"size_gb": "MAX","controller": "software","raid_level": "1","physical_disks": [{"size": "< 3000"}, {"size": "< 3000"}]}]}' <node>
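The same payload can also be generated programmatically, which avoids hand-written JSON pitfalls such as trailing commas. A minimal sketch (the helper name and the 3000 GB cutoff are illustrative, taken from the example above):

```python
import json

def make_mirror(size_gb, max_disk_size_gb):
    """Build one RAID 1 logical disk restricted to two disks below the size cutoff."""
    return {
        "size_gb": size_gb,
        "controller": "software",
        "raid_level": "1",
        "physical_disks": [
            {"size": f"< {max_disk_size_gb}"},
            {"size": f"< {max_disk_size_gb}"},
        ],
    }

# A 100 GB mirror plus a mirror over the remaining space,
# both limited to disks smaller than 3000 GB.
target_raid_config = {
    "logical_disks": [make_mirror(100, 3000), make_mirror("MAX", 3000)],
}

print(json.dumps(target_raid_config, indent=4))
```

The printed JSON can be passed verbatim to --target-raid-config as in the one-liner above.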

Deleting a software RAID configuration

To delete the RAID configuration of a node:

  • the target_raid_config needs to be cleared via:
    openstack baremetal node set --target-raid-config '{}' <BAREMETAL_NODE>
    
  • the cleaning needs to be initiated (e.g. moving a node from manageable to available):
    openstack baremetal node provide <BAREMETAL_NODE>
    

Configuring hardware RAID

Hardware RAID is not integrated with Ironic, so it cannot be configured through the node settings as in the software RAID examples above.

To configure hardware RAID, access the BIOS through the node console and apply the configuration manually, following these steps:

  1. Find the Ironic UUID of the node that needs to be configured:

    $ openstack baremetal node list
    +-------------------+-----------------+---------------+-------------+--------------------+-------------+
    | UUID              | Name            | Instance UUID | Power State | Provisioning State | Maintenance |
    +-------------------+-----------------+---------------+-------------+--------------------+-------------+
    | <node-uuid>       | <node-name>     | None          | power off   | available          | False       |
    (...)
    

    No output?

    Getting an empty list in this step means that you do not own any physical node and are not allowed to perform such operations.

    If you think this is wrong and there is hardware owned by your service, please contact our support line.

  2. Put the node in maintenance mode to prevent Ironic from changing the power state of the machine:

    $ openstack baremetal node maintenance set <node-uuid>
    
  3. Fetch the BMC connection details:

    $ openstack baremetal node console show <node-uuid>
    
  4. Connect to the node using the BMC details, power it on, and use the console to apply the required RAID configuration.

  5. Once the configuration is applied, turn off the server.

  6. Remove the maintenance flag from the node. This step is required before the machine can be instantiated:

    $ openstack baremetal node maintenance unset <node-uuid>
    

Once these steps are done, you should be able to instantiate the machine with the RAID configuration applied.
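The maintenance bracket around the manual BIOS work (steps 2 and 6) can be sketched as a small wrapper around the CLI commands shown above. The helper names are hypothetical; only the `openstack baremetal node maintenance set/unset` calls come from this guide:

```python
import subprocess

def run_cli(cmd):
    # Print and execute one openstack CLI call, failing loudly on error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def with_maintenance(node_uuid, manual_step, run=run_cli):
    """Set the maintenance flag, perform the manual RAID work, then unset it.

    manual_step stands in for steps 3-5: fetching the BMC details,
    applying the BIOS RAID configuration, and powering the server off.
    """
    run(["openstack", "baremetal", "node", "maintenance", "set", node_uuid])
    try:
        manual_step(node_uuid)
    finally:
        # Always unset maintenance, so the node can be instantiated again.
        run(["openstack", "baremetal", "node", "maintenance", "unset", node_uuid])
```

The injectable `run` parameter is only there so the sequencing can be exercised without a live Ironic deployment; in real use the default is sufficient.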