Ceph in Kolla

The out-of-the-box Ceph deployment requires 3 hosts, each with at least one block device that can be dedicated for sole use by Ceph. However, with a few configuration tweaks you can deploy a healthy cluster on a single host with a single block device.

Requirements

  • A minimum of 3 hosts for a vanilla deployment
  • A minimum of 1 block device per host

Preparation

To prepare a disk for use as a Ceph OSD you must add a special partition label to the disk. This partition label is how Kolla detects the disks to format and bootstrap. Any disk with a matching partition label will be reformatted, so use caution.

To prepare an OSD as a storage drive, execute the following operation:

Warning

ALL DATA ON $DISK will be LOST! Here, $DISK is /dev/sdb or something similar.

parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

The following shows an example of using parted to configure /dev/sdb for use with Kolla.

parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdb print
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name                      Flags
 1      1049kB  10.7GB  10.7GB               KOLLA_CEPH_OSD_BOOTSTRAP

Using an external journal drive

The steps documented above result in a 5 GByte journal partition and a data partition with the remaining storage capacity on the same tagged drive.

It is a common practice to place the journal of an OSD on a separate journal drive. This section documents how to use an external journal drive.

Prepare the storage drive in the same way as documented above:

Warning

ALL DATA ON $DISK will be LOST! Here, $DISK is /dev/sdb or something similar.

parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1

To prepare the external journal drive, execute the following command:

parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1

Note

Use different suffixes (_42, _FOO, _FOO42, ...) to pair different storage drives with different external journal drives. One external journal drive can only be used for one storage drive.

Note

The partition labels KOLLA_CEPH_OSD_BOOTSTRAP and KOLLA_CEPH_OSD_BOOTSTRAP_J do not work when using external journal drives; a suffix (_42, _FOO, _FOO42, ...) is required. Even if you set up only one storage drive with one external journal drive, a suffix is still necessary (see the example below).
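
For instance, assuming /dev/sdb as the storage drive and /dev/sdc as its external journal drive (illustrative device names; substitute your own), label both with the same suffix:

parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
parted /dev/sdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1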

Configuration

Edit the [storage] group in the inventory so that it contains the hostnames of the hosts that have the block devices you prepared as shown above.

[storage]
controller
compute1

Enable Ceph in /etc/kolla/globals.yml:

enable_ceph: "yes"

RadosGW is optional; enable it in /etc/kolla/globals.yml:

enable_ceph_rgw: "yes"

Note

Regarding number of placement groups (PGs)

Kolla sets very conservative values for the number of PGs per pool (ceph_pool_pg_num and ceph_pool_pgp_num) to ensure that the majority of users can deploy Ceph out of the box. It is highly recommended to consult the official Ceph documentation regarding these values before running Ceph in any kind of production scenario.
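
For example, these defaults can be overridden in /etc/kolla/globals.yml. The values below are placeholders only; choose values appropriate to your cluster with the help of the Ceph documentation or a PG calculator:

ceph_pool_pg_num: 128
ceph_pool_pgp_num: 128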

RGW requires a healthy cluster in order to be deployed successfully. On initial start up, RGW creates several pools. The first pool must be in an operational state before the second one is created, and so on. Therefore, in the case of an all-in-one deployment, it is necessary to change the default number of copies for the pools before deployment. Modify the file /etc/kolla/config/ceph.conf and add the following contents:

[global]
osd pool default size = 1
osd pool default min size = 1

To build a high performance and secure Ceph Storage Cluster, the Ceph community recommends the use of two separate networks: a public network and a cluster network. Edit /etc/kolla/globals.yml and configure the cluster_interface:

cluster_interface: "eth2"

For more details, see the Network Configuration Reference in the Ceph documentation.

Deployment