OpenStack Cinder - Configure multiple backends

Following the first post of the series, which discussed how to scale OpenStack Cinder to multiple nodes, with this one I want to cover the configuration and usage of the multibackend feature that landed in Cinder with the Grizzly release.

This feature allows you to configure a single volume node to serve more than one backend driver. The few configuration bits needed are also covered in the OpenStack Block Storage documentation, which makes this post somewhat redundant, but I wanted to keep up with the series and the topic is well worth keeping here too.

As usual, some assumptions before we start:

  • you're familiar with the general OpenStack architecture
  • you already have a Cinder volume node configured and working as expected

Assuming we want our node to be configured with an LVM based backend and an additional NFS based backend, this is what we would need to add to cinder.conf:

enabled_backends=lvm1,nfs1
[lvm1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[nfs1]
nfs_shares_config=${PATH_TO_YOUR_SHARES_FILE}
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=NFS
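
For the NFS backend, the file pointed to by nfs_shares_config is a plain text file listing one export per line. A minimal sketch, using a hypothetical server address and export path:

192.168.0.10:/srv/cinder_nfs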

The enabled_backends value lists the names (comma separated) of the configuration groups; these do not have to match either the driver name or the backend name.

Once the configuration is complete, to use a particular backend when allocating new volumes you'll have to pass a volume_type parameter to the creation command. Such a type has to be created beforehand and have a backend name assigned to it:

# cinder type-create lvm
# cinder type-key lvm set volume_backend_name=LVM_iSCSI
# cinder type-create nfs
# cinder type-key nfs set volume_backend_name=NFS

Finally, to create your volumes:

# cinder create --volume_type lvm --display_name inlvm 1
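
To double check which backend the scheduler picked, an admin can look at the host attribute of the volume; a quick sketch, assuming the volume was created as above (the exact attribute name may vary between releases):

# cinder show inlvm | grep host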

For people using the REST interface, any type-key property, including volume_backend_name, is passed along with the request as extra specs. You can list them to make sure the configuration is working as expected:

# cinder extra-specs-list
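
As a sketch of what setting the extra spec looks like over REST, the following assumes the v2 Block Storage API is available; the endpoint, tenant id, token and volume type id are placeholders:

# curl -X POST http://cinder-api:8776/v2/${TENANT_ID}/types/${TYPE_ID}/extra_specs \
  -H "X-Auth-Token: ${TOKEN}" -H "Content-Type: application/json" \
  -d '{"extra_specs": {"volume_backend_name": "LVM_iSCSI"}}'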

Note that you can have backends of the same type (driver) using different names (say two LVM based backends allocating volumes in different volume groups), or you can even have backends of the same type using the same name! The scheduler is in charge of picking the correct backend at creation time, so here are a few notes on the filter scheduler (enabled by default in Grizzly):

  • first it filters the available backends (AvailabilityZoneFilter, CapacityFilter and CapabilitiesFilter are enabled by default; the backend name is matched against the reported capabilities)
  • then it weighs the previously filtered backends (CapacityWeigher is the only weigher enabled by default)

The CapacityWeigher gives the highest score to the backend with the most available space, so new volumes are allocated to the backend with the most free space among those matching the name given in the request.
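
As an example of the same-name case, here is a minimal sketch of two LVM backends sharing a backend name while allocating from different volume groups (the volume group names are hypothetical); volumes of the matching type will land on whichever of the two has the most free space:

enabled_backends=lvm1,lvm2
[lvm1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
volume_group=cinder-volumes-1
[lvm2]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
volume_group=cinder-volumes-2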

UPDATE (Nov 2013): As reported by Yogev in this bug, misplacing the settings can have dangerous side effects. Any setting that appears below a section header such as [lvm1] belongs to that section of the ini file rather than to [DEFAULT], so make sure the [lvm1] and [nfs1] sections sit at the bottom of the file and that all other settings remain in the [DEFAULT] section.
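
In other words, the overall layout of cinder.conf should look something like this (a sketch, with the pre-existing global settings abbreviated):

[DEFAULT]
# all the pre-existing global settings stay here, above the backend sections
enabled_backends=lvm1,nfs1
[lvm1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[nfs1]
nfs_shares_config=${PATH_TO_YOUR_SHARES_FILE}
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=NFS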
