
How to deploy a Ceph storage cluster




By Jack Wallen | October 2, 2016, 4:51 PM PST


Deploying a storage cluster doesn't have to wreck your sanity. See how simple it can be to deploy an object or block storage-capable Ceph cluster.



If you're looking to deploy a local equivalent of Amazon Simple Storage Service (Amazon S3) or Elastic Block Store (EBS), you probably believe it's almost impossible to pull off. It's not; in fact, it can be done with the help of Ceph.

A Ceph cluster can provide object or block storage across a network, which will give you that Amazon-like system. To make this happen we'll use a Ubuntu 16.04 virtual machine and three clones. Our cluster will consist of two storage nodes and one monitoring node, and the first virtual machine will be used to deploy the cluster. This setup is only to illustrate how the cluster is created; in a real-world environment, you'd want at least three monitoring nodes and three storage nodes.

First steps



The first thing you want to do is have a Ubuntu 16.04 virtual machine up and running. Update that machine, and then shut it down. Once it's shut down, click Machine | Clone and walk through the simple process of cloning it. You'll do this three times. Once it's cloned, you'll need to know the IP address of each machine, so you'll want to set static IPs for each.
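
If you're not sure how to pin a static address on Ubuntu 16.04, one approach is the sketch below, which assumes a server-style install that uses ifupdown and an interface named enp0s3 (on a desktop install you'd set this through NetworkManager instead; substitute your own interface name, addresses, and gateway). Edit /etc/network/interfaces on each clone:

auto enp0s3
iface enp0s3 inet static
    address 192.168.1.190
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1

Then restart networking with sudo systemctl restart networking (or simply reboot the clone), giving each machine its own address.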
Configure the Ceph deployer

Once you have your clones ready, it's time to begin with the setup of the Ceph deployer. Fire up that virtual machine, open a terminal window, and issue the following commands (the first two are each a single line):
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt install ceph-deploy

Then you'll need to install NTP and openssh-server on all four virtual machines with the command:

sudo apt-get install ntp openssh-server -y

Next we must create a Ceph user that exists on all of the nodes and is configured identically. Because this user will be set up with password-less SSH access, choose a unique username to make brute-force attacks harder. The following commands will be run on all machines (ceph_user is the actual name of the user):
USERNAME="ceph_user"
sudo useradd -d /home/$USERNAME -m $USERNAME
sudo passwd $USERNAME

Now we'll enable that user to execute sudo commands without requiring a password. To do this, open the file /etc/sudoers for editing (ideally with sudo visudo, which validates your changes) and add the following line (ceph_user is the actual username you created):


ceph_user ALL=(ALL) NOPASSWD:ALL

Make sure to create this Ceph user on every node.
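
If you'd rather not edit /etc/sudoers directly, a drop-in file under /etc/sudoers.d accomplishes the same thing; a minimal sketch, run on each node (again, ceph_user is the username you created):

echo "ceph_user ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph_user
sudo chmod 0440 /etc/sudoers.d/ceph_user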

The next step is to generate a password-less ssh key on the deployment node. Issue the command ssh-keygen and hit Enter three times.

After the key generates, you'll distribute the key to all of your nodes with the following commands:
su ceph_user
ssh-copy-id ceph_user@IP_ADDRESS

Where ceph_user is the name of the Ceph user you created, and IP_ADDRESS is the IP address of each server (issue the command once for each node).
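
To save some typing, you could loop over the node addresses instead; a sketch, using the example addresses from the hosts file below (substitute your own IPs):

for ip in 192.168.1.190 192.168.1.191 192.168.1.192; do
  ssh-copy-id ceph_user@$ip
done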

The last step is to add entries to your deployment node's /etc/hosts file. Because ceph-deploy resolves nodes by hostname rather than by IP address, open /etc/hosts and add an entry for each node like so:

192.168.1.190 ceph1 ceph1.localhost.local
192.168.1.191 ceph2 ceph2.localhost.local
192.168.1.192 ceph3 ceph3.localhost.local

Edit the above to fit your networking scheme and needs, then save and close the hosts file.
Deploying the cluster

Now it's time to deploy the cluster. We have to create a directory on the deployment node from which to work. Go to the deployment node, log in as the Ceph user, open a terminal window, and issue the command mkdir ceph-cluster and then cd ceph-cluster.

Now we'll run the ceph-deploy new command from that directory, naming the node that will run the cluster's monitor. The command is run only once, not once per node; in this small test the monitor lives on ceph1 (which will also hold data):
ceph-deploy new ceph1

The command should only take a moment to complete; it writes the initial ceph.conf, a monitor keyring, and a log file into the working directory.

If the command comes back with errors, chances are the hostname on one or more nodes doesn't match the name you gave that machine in the deployment node's /etc/hosts file. To change a hostname, issue the command hostname HOSTNAME on the node (HOSTNAME is the name that node should carry), then open the /etc/hostname file on that node and edit it to reflect the new name. After changing the hostnames, rerun the above command and it should finish without fail.
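
On Ubuntu 16.04 you can also let hostnamectl handle both steps at once, since it sets the running hostname and updates /etc/hostname for you. For example, on the first node:

sudo hostnamectl set-hostname ceph1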

Once the command has run, it will create a configuration file (ceph.conf) within the working directory. Open that file and add the following line under the [global] section:

osd pool default size = 2

Under normal circumstances, that number would be at least 3, but for this tutorial, we're only using two data storage nodes.
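
Because the freshly generated ceph.conf contains only a [global] section, you can also append the setting from within the working directory with a quick one-liner:

echo "osd pool default size = 2" >> ceph.conf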

It's time to install Ceph on the nodes. This is done with a command like so:

ceph-deploy install ceph1 ceph2 ceph3

The above command will take quite some time to run (it has to download and then install all the necessary software on each node), so don't bother watching it go by.

Now we must add the initial monitor and collect all the keys from the nodes with the command:

ceph-deploy mon create-initial
Setting up the OSDs

The next step is to create a directory on each node for the Ceph OSD (object storage daemon). On each node, issue the command:

sudo mkdir /var/local/osd*

Where * is the number associated with the Ceph node (i.e., osd1 for ceph1, osd2 for ceph2, etc.).
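
For instance, assuming ceph1 and ceph2 serve as the two data-storage nodes in this walk-through:

sudo mkdir /var/local/osd1   # run on ceph1
sudo mkdir /var/local/osd2   # run on ceph2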

We prepare each OSD (from the Ceph deployment machine) with the command:

ceph-deploy osd prepare ceph1:/var/local/osd1

Issue the command for each storage node, and then activate each node with the command:

ceph-deploy osd activate ceph1:/var/local/osd1
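
For the two storage nodes in this example, the full prepare-and-activate sequence from the deployment node would look something like this (ceph-deploy accepts multiple node:directory pairs in a single call):

ceph-deploy osd prepare ceph1:/var/local/osd1 ceph2:/var/local/osd2
ceph-deploy osd activate ceph1:/var/local/osd1 ceph2:/var/local/osd2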

If you receive an error about an inability to create an empty object store in /var/local/osd*, go back to each node and change the permissions like so:

sudo chmod -R ugo+w /var/local/osd1

Run the above command on each of the node machines, and then attempt to activate the osd again.

Now we're going to issue a command that will copy the Ceph configuration file and admin key to each node, so you don't have to specify monitor addresses or keyrings when using the Ceph CLI. Issue the command:

ceph-deploy admin ceph1 ceph2 ceph3

We need to change the permissions of the Ceph keyring file on each node with the command:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

That's it! Your Ceph cluster has been deployed and is ready for use. You can provision and deploy block devices to your cluster and much more.
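
For example, as a quick sanity check of the block storage side, you could log onto any node that now has the admin keyring, confirm the cluster is healthy, and carve out a small test image (the pool and image names below are placeholders, not anything Ceph requires):

ceph -s                                             # overall cluster status
ceph osd pool create testpool 64                    # create a pool with 64 placement groups
rbd create testimage --size 1024 --pool testpool    # a 1GB block device image
rbd ls testpool                                     # confirm the image exists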
