
Deploy a Fully Functional SUSE Enterprise Storage Cluster Test Environment in About 30 Minutes

July 2019
Written by: John S. Tonello, SUSE Global Technical Marketing Manager

www.suse.com | SUSE Guide


Introduction

Whether you’re new to Ceph, consider yourself a pro, or fall somewhere in the middle, nothing gets a heart beating faster

than getting your hands on the technology. That’s particularly true with OpenStack and storage clusters because

production deployments generally require a lot of hardware, which can be difficult to muster even in the best of times.

To help, this SUSE Guide shows you how to get a fully operational SUSE Enterprise Storage cluster up and running on a

single workstation or laptop. It’s based on SUSE Linux Enterprise Server 15, requires little or no experience with Ceph,

and can be deployed in about 30 minutes. You’ll end up with a fully functional storage cluster you can use to test NFS,

iSCSI and other aspects of this popular storage technology.

Remember, this is a test environment, not a production cluster. Shortcuts taken in this guide are done to conserve

resources (like using monitor nodes for data, a sure no-no). Use this cluster for testing purposes only, but have fun.

Target Audience

This SUSE Guide is for anyone considering a Ceph deployment in an on-site datacenter. This proof-of-concept provides a

fully functional storage cluster that will help inform decisions about deploying SUSE Enterprise Storage.

The hardware you’ll need

If you have tons of servers hanging around in your datacenter, you can certainly use those to build a Ceph test

environment with SUSE Enterprise Storage, but the point of this guide is to give you something much smaller and

portable. Something you can bang on right where you sit. That means a single workstation or laptop with the following

minimum hardware:

• At least 8GB of RAM

• At least one quad-core CPU that supports virtualization

• At least 100GB of thin-provisioned storage

Of course, more is better in all categories. A workstation with 16GB or more of RAM and extra CPU horsepower will

speed up this deployment and give the cluster a little more oomph, but these minimums will get you started.

If you’re game, you can mix and match hardware – even use Raspberry Pis – but to keep things simple, find a single,

semi-beefy box on which to do it all.

The software you’ll need

Unlike a SUSE Enterprise Storage cluster running in production, this test environment runs purely in virtual machines.

That means you’ll need a hypervisor of some sort. A base Linux OS (like openSUSE Leap 15) is optimal because you can

easily deploy KVM or Xen and, later, take advantage of built-in Linux capabilities to make use of the Ceph cluster. If Linux

isn’t an option, you can adapt these steps for Windows or Mac environments.

For the hypervisor in this project, I use KVM (virt-manager). It’s flexible, has good networking features and it’s free. If

you’re more comfortable with Xen, VirtualBox, VMware Workstation or ProxMox, those are viable, too. Just be aware that

VirtualBox and VMware Workstation may require more CPU and RAM and change how you manage networking.

Other software you’ll need:

• SUSE Linux Enterprise Server 15.0


• SUSE Enterprise Storage 6

• The SUSE DHCP and DNS Server pattern

• salt-master, salt-minion and deepsea

All this software is free or free to try. You can get a free 60-day trial license for the SUSE software when you download

the .iso files, which you should save on the same workstation or laptop you’re using for this cluster.

If you have previously licensed SUSE software and your own SMT (or RMT) server, that will make this install go even

faster (and enable a plugs-out test environment), but that’s optional. Learn more about installing an SMT server at

suse.com.

Overview of the steps

For this test environment, you’ll focus on three distinct steps. The first sets up your host machine, the second sets up your

virtual machines, and the third installs and configures SUSE Enterprise Storage using Salt and DeepSea automation. The

final step is accessing the Ceph Dashboard itself:

The host machine set-up

As mentioned earlier, I’m using openSUSE as the base operating system on my host machine, a laptop with 16GB of

RAM and 1TB of storage. Setting up KVM is done using YaST and a simple search for “install hypervisor.”

Figure 1 SUSE Enterprise Storage Ceph Dashboard.


After installing and launching virt-manager, create a new QEMU/KVM connection, then set up a default network under

Edit -> Connection Details. Select the Virtual Networks tab and click the + to create a new network.


Figure 2 Install the KVM hypervisor tools using YaST on openSUSE.

Figure 3 Choose the hypervisor you want and any associated tools.

Figure 4 Create a virtual network called example.com that’s NATed to a physical network device.


For this environment, I use 192.168.122.0/24 as the subnet. The default gateway will be 192.168.122.1 and the

DNS server will be the IP address of the admin/salt-master server, which we’ll discuss later. Be sure to set the virtual

network to “Forwarding to physical network” via NAT and create a DNS Domain Name. I use example.com in this guide.

If you use a different subnet, just be sure to write it down so you remember it and use it later.
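If you prefer the command line to virt-manager, the same network can be defined with virsh. This is a minimal sketch under a few assumptions: the bridge name virbr1 and the temporary file path are illustrative, and your virt-manager install may already provide an equivalent "default" 192.168.122.0/24 network you can simply reuse.

$ cat > /tmp/example-com.xml <<'EOF'
<network>
  <name>example.com</name>
  <forward mode='nat'/>                              <!-- NAT to the physical network -->
  <bridge name='virbr1'/>                            <!-- illustrative bridge name -->
  <domain name='example.com'/>                       <!-- DNS domain handed to guests -->
  <ip address='192.168.122.1' netmask='255.255.255.0'/>
</network>
EOF
$ sudo virsh net-define /tmp/example-com.xml
$ sudo virsh net-start example.com
$ sudo virsh net-autostart example.com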

Set up the VM cluster

Create the admin virtual machine

Creating new VMs with KVM is straightforward and won’t be covered in detail here. Use the SLES 15 .iso you downloaded

earlier to create your initial admin server, giving the VM the following:

• 2048 MB of RAM

• 2 CPUs

• 20GB of storage

Storage can be created by choosing either the default location (/var/lib/libvirt/images/) and entering a size, or

by selecting and creating a custom virtual disk. If space is limited in the default location, create storage elsewhere in your

filesystem, preferably someplace where you can store all your .qcow2 images together.
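If you create the virtual disk by hand instead of through the virt-manager dialog, qemu-img works fine; the path below is only an example of a custom location:

$ sudo qemu-img create -f qcow2 /data/vms/ses-admin.qcow2 20G   # qcow2 grows on demand (thin-provisioned)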

Also be sure to select the “default” example.com network you created in the previous step, which in my case is NATed

to br01, a bridge connected to one of the physical network devices on my workstation:

Figure 5 Create the main storage disk for the admin VM.


Start the VM and answer the on-screen questions until you get to the registration screen. Before entering your registration

credentials (or optionally connecting to your own SMT server), set up the network by clicking the link at the top right of the

screen.

It’s important that networking is correct so the admin/salt-master server can communicate properly with the monitor and

data nodes. It’s especially important to set up networking during this step if you’re using a local SMT/RMT virtual machine

that’s ideally on the same “example.com” 192.168.122.0/24 network.

Figure 6 Give the admin (salt-master) node a name and assign the network.

Figure 7 Register the system using your SUSE Customer Center credentials or local SMT server.


Function                   Hostname              IP address
Default gateway            -                     192.168.122.1
Name server                salt.example.com      192.168.122.40
Admin/salt-master node     salt.example.com      192.168.122.40
Monitor                    mon01.example.com     192.168.122.41
Monitor                    mon02.example.com     192.168.122.42
Monitor                    mon03.example.com     192.168.122.43
Data                       data01.example.com    192.168.122.44
Data                       data02.example.com    192.168.122.45
Optional SMT/RMT server    smt.example.com       192.168.122.50

Note: It’s possible to skip setting up DNS, but don’t skip the networking set-up. You can alternatively edit the

/etc/hosts file on each of your VMs so the salt-master and salt-minion nodes resolve to the above hostnames and IP

addresses, but setting up static addresses and fully qualified domain names on each VM is a must.
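If you do go the /etc/hosts route, the entries on each VM (and optionally on your host workstation) would mirror the table above:

192.168.122.40   salt.example.com     salt
192.168.122.41   mon01.example.com    mon01
192.168.122.42   mon02.example.com    mon02
192.168.122.43   mon03.example.com    mon03
192.168.122.44   data01.example.com   data01
192.168.122.45   data02.example.com   data02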

Figure 8 Set the static IP address, subnet mask and fully qualified domain name.


Hopefully, you’ve noticed that the Name server IP address in the image above is the same as that of the admin/salt-master VM.

That’s because you’ll set up the DNS server on that node to save resources. If you’re flush and have time on your hands,

feel free to create a separate SLES 15 virtual machine for DNS.

Back on the registration screen, either enter the credentials for your SUSE license or point to your local SMT/RMT server

and click Next to see the Extension and Module Selection page.

Figure 9 Set the hostname, domain name, name server and domain search.

Figure 10 Register your servers.


Under Available Extensions and Modules, select SUSE Enterprise Storage 6 and, if necessary, enter separate credentials

on the screen that follows.

In the following set-up screens, configure the system to automatically synchronize with an NTP server, disable the firewall

and enable SSH. Finally, click the Software link and scroll down the list to add the SUSE Enterprise Storage DeepSea Admin

Node pattern and the DHCP and DNS Server pattern (if you plan to use this as your DNS server). If you forget or get

click-happy and don’t add these patterns now, don’t worry. You can add them later when you boot the system.

Be sure that AppArmor is disabled (in the Software settings) and that you’ve disabled the firewall and enabled SSH.

Figure 11 Select the SUSE Enterprise Storage module.


Start the install and wait for the VM to deploy. If all goes well, you’ll have your admin/salt-master virtual machine

configured and ready to proceed.

Set up the DNS server

Setting up DNS using the graphical tool available in SLES 15 is straightforward, allows you to set up a network topology

that’s flexible and enables you to easily add or remove nodes later. It also lets you take advantage of your SUSE

Enterprise Storage test environment beyond the single workstation if you use a bridged network that can connect to other

bare metal machines and VMs in your datacenter.

If you added the DHCP and DNS Server pattern during the initial setup of your admin/salt-master server, open YaST,

search for “dns” and open the DNS control panel to finish the configuration. If not, open the YaST software manager and

add the pattern before proceeding.

In the Zone Editor, create a new master zone called “example.com.” and add a name server called

“salt.example.com.” Again, you’re piggybacking your main admin server for this service. If you’re using a separate

VM as a DNS server, use that hostname. Finally, add A records for each of your monitor and data nodes. If you’d like to

access the salt/admin node via a different name than “salt”, add a CNAME pointer such as “admin,” shown in this

example, which also includes an “ns1” CNAME entry for salt that could be used for the name server:
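Behind the scenes, YaST writes these records into a standard BIND zone file (on SLES, typically under /var/lib/named/master/); the result should look roughly like this sketch:

salt     IN A      192.168.122.40
mon01    IN A      192.168.122.41
mon02    IN A      192.168.122.42
mon03    IN A      192.168.122.43
data01   IN A      192.168.122.44
data02   IN A      192.168.122.45
admin    IN CNAME  salt
ns1      IN CNAME  salt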

Figure 12 Disable AppArmor and the firewall, and enable SSH.


Click OK to save these settings and make sure that DNS is set to start at boot time. Later, you’ll be able to use these

names – in the short form or as fully qualified – for the rest of the set-up.

Deploy the first monitor node

Before moving on to finalize the admin/salt-master node, repeat the steps to create your first monitor node, named

mon01.example.com. The steps are identical to the salt-master set-up, with these exceptions:

• Create three disks -- one 20GB for the OS, and two 32GB for data.

• Install the SUSE Enterprise Storage server packages pattern.

• Do not partition or otherwise format the 32GB data disks.

• Do not install or configure the DHCP and DNS Server pattern.

Set the hostname and IP address according to the table above (mon01.example.com = 192.168.122.41, etc.).

When the monitor node install is complete, boot it and make sure networking works properly by opening a terminal and

pinging the gateway, the salt-master and any public server: 192.168.122.1, salt.example.com and

google.com respectively.

Before shutting down the node, install the salt-minion:

$ sudo zypper in salt-minion

Figure 13 Create A and CNAME records in the Zone Editor.


Do not enable salt-minion and do not start it. You’ll do that in a later step after you’ve cloned this VM. Doing it now

will add more work later because starting it creates some configuration files that use the VM’s hostname.

Clone the monitor node

Instead of manually creating four more nodes – two more monitor nodes and two data nodes – just clone the mon01 node

you just created. It has the two disks and other features you need. Simply right-click on the stopped VM in the main Virtual

Machine Manager window and choose “Clone...”. Be sure to give each cloned VM a new name, clone the disks, and

share the same SLES 15 .iso:

Repeat the cloning step at least three times so you have new VMs mon02, mon03 and data01, plus data02 if resources allow.

Since each of these is a duplicate of the original mon01, you’ll need to power on each VM and change its network

settings. Do this with the YaST Network Settings tool. Refer to the table above to configure the proper hostnames and IP

addresses. To keep things tidy, you may want to start one VM at a time and make the changes, and run ping tests to

salt.example.com and google.com before moving on to the next.

data01:~ $ ping salt.example.com

data01:~ $ ping google.com

The final step in configuring each monitor and data node is to enable and start the salt-minion service, which is

required for the automated install from the admin/salt-master node (salt.example.com). It’s important to do this step

after each node has its final fully qualified hostname because the salt-minion service uses that name to create keys

for communicating with the salt-master. If you started the salt-minion service earlier, you’ll need to edit the

/etc/salt/minion_id file on each node to ensure it matches its hostname and restart the salt-minion service.
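For example, if a clone kept the original mon01 identity, the fix on that node (here mon02, as a hypothetical case) would be:

# echo mon02.example.com > /etc/salt/minion_id
# systemctl restart salt-minion.service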

Rather than log into each node for this step, simply log into the admin/salt-master (salt.example.com) node, open a

terminal and enter the following. This will enable and start the salt-minion service on each node:

$ for i in mon01 mon02 mon03 data01 data02; do \
> ssh root@$i systemctl enable salt-minion.service; \
> ssh root@$i systemctl start salt-minion.service; \
> done

Figure 14 Clone the mon01.example.com node two or three times.

You’ll be prompted to log into each host, a manual step you can avoid by copying SSH keys from salt.example.com to

each node, but since you’re not going to need to manually shell into the nodes again, you can skip that if you want.
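If you do want passwordless access, copying the keys is one quick loop (this assumes you already have a key pair in ~/.ssh; run ssh-keygen first if not):

$ for i in mon01 mon02 mon03 data01 data02; do ssh-copy-id root@$i; done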

If resources are limited, it’s safe to now reduce the resources used by the data nodes to 1 CPU and 1024 MB of RAM.

Simply edit each node’s settings in the Virtual Machine Manager, and power off and restart each VM if prompted.

Deploy the Ceph cluster

Finalize the admin node

With all your virtual machines up and running, properly configured and able to communicate, you’re ready to finalize the

admin/salt-master node. First, install, enable and start the salt-master and salt-minion services:

$ sudo zypper in salt-master salt-minion # Should already be installed by the DeepSea pattern

$ sudo systemctl enable salt-master salt-minion

$ sudo systemctl start salt-master salt-minion

Note that only this admin node has salt-master installed. Be sure not to install it on any other node.

In order to properly authenticate with each other, the master and minion nodes must exchange credentials. On the

admin/salt-master, execute the following commands to view the available nodes and then accept them all:

# salt-key -F

# salt-key --accept-all

Figure 15 View and accept the salt keys from each node, including the admin/salt-master.


Be sure all your nodes – including the admin/salt-master (salt.example.com) – are listed. Duplicates or missing nodes

suggest an errant hostname, misconfigured network settings or a stopped salt-minion service.

After accepting the keys, run “salt-key -F” again to see that all the entries are now green and accepted.

The SUSE Enterprise Storage installation is automated using DeepSea, which uses Salt to execute all the commands

needed to deploy the necessary configurations. Done manually, this is a major stumbling block to many a successful

Ceph deployment. By automating it, SUSE has removed the guesswork and dramatically reduced the opportunity to

introduce human error.

If you didn’t install the DeepSea pattern during installation of your admin/master node, install DeepSea now:

$ sudo zypper in deepsea

Use salt to ping your nodes and deploy “salt grains.” Setting grains isn’t strictly required to make your test environment

work, but it’s a best practice and enables you to use other default settings. First, use “salt '*' test.ping” to affirm

your master can communicate with the minions.

Next, deploy the grains:

# salt '*' grains.append deepsea default

Figure 16 Use the salt command to ping your nodes from the master.


Confirm that the salt-master can connect to the minions using the grain:

# salt -C 'G@deepsea:*' test.ping

You’re now ready to run the first of five DeepSea steps that will deploy your Ceph cluster. The first step is Preparation:

# deepsea stage run ceph.stage.0

Figure 17 Use the salt command to apply the grains to each minion node.

Figure 18 A second ping with salt, this time confirming the grains are present on each node.


This will take a little time. Be patient and let it complete the steps. If there are errors, re-run the stage. Next, run the

Discovery stage:

# deepsea stage run ceph.stage.1

This stage discovers your resource nodes and prepares the environment for the Configuration stage, which requires that

you create a policy.cfg file that maps elements of your Ceph environment to specific nodes. This file must be created before running ceph.stage.2.

Example policy.cfg files are found in /usr/share/doc/packages/deepsea/examples/ and you can use the

role-based example as a place to start. Copy the sample to your Pillar configuration directory:

Figure 19 The first of five DeepSea stages.

Figure 20 Run the Discovery stage.


# cp /usr/share/doc/packages/deepsea/examples/policy.cfg-rolebased \
    /srv/pillar/ceph/proposals/policy.cfg

This file includes a number of roles, which must be assigned to nodes in your newly built VM cluster. The format is

flexible; you can use fully qualified domain names or wildcards to identify each node. Edit the

/srv/pillar/ceph/proposals/policy.cfg as necessary or use the example below, which matches the

environment you just built and provides everything you need:

cluster-ceph/cluster/*.sls

role-master/cluster/salt*.sls

role-admin/cluster/salt*.sls

role-prometheus/cluster/salt*.sls

role-grafana/cluster/salt*.sls

role-mon/cluster/mon*.sls

role-mgr/cluster/mon*.sls

role-mds/cluster/mon*.sls

role-igw/cluster/mon*.sls

role-rgw/cluster/mon*.sls

role-ganesha/cluster/mon*.sls

config/stack/default/global.yml

config/stack/default/ceph/cluster.yml

role-storage/cluster/*[1-3]*.sls

Note that the role-master, role-admin, role-prometheus and role-grafana entries all point to the

salt.example.com VM with the “salt*.sls” wildcard. If you set up a CNAME entry in DNS, this could alternately be

“admin*.sls”. Also note that role-storage points to your mon01, mon02, mon03, data01 and data02 nodes

with a wildcard. If you add data nodes later, you can set up a salt-key, place a salt grain and re-run these steps to add

them – with only minor changes to the role-storage entry in this policy.cfg file.
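As a purely hypothetical example, folding in a new data03 node later would follow the same pattern you used above, once SLES and salt-minion are installed and running on it:

# salt-key -a data03.example.com
# salt 'data03*' grains.append deepsea default
# deepsea stage run ceph.stage.0
# deepsea stage run ceph.stage.1

After that, adjust the role-storage line in policy.cfg if the new hostname doesn’t match the existing wildcard, then re-run stages 2 and 3.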

To be sure, there’s a lot going on under the covers here and I’m glossing over more in-depth explanations of the Ceph

stack configuration as a trade-off to get you up and running quickly. Detailed SUSE Enterprise Storage documentation

explains this and other DeepSea steps in much more detail and can help you troubleshoot any problems.

Finalize the master node

With the policy.cfg edited and saved, you’re ready to run the DeepSea Configuration, Deploy and Services stages:

# deepsea stage run ceph.stage.2


If there’s an issue with your policy.cfg, it’ll show up in this Configuration stage. If you get errors indicating insufficient

storage resources, ensure that your secondary monitor and data node disks are unformatted (you can run wipefs -af

/dev/sdx on each node if necessary), and re-run this stage.
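You can issue the wipe from the salt-master instead of logging in to each node. The device names here are assumptions; with the default virtio bus, the two data disks usually appear as /dev/vdb and /dev/vdc, so confirm with lsblk first:

# salt 'mon*' cmd.run 'lsblk'
# salt 'mon*' cmd.run 'wipefs -af /dev/vdb /dev/vdc'
# salt 'data*' cmd.run 'wipefs -af /dev/vdb /dev/vdc'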

Next, run the Deployment stage:

# deepsea stage run ceph.stage.3

Figure 21 Run the DeepSea Configuration stage after setting up your policy.cfg file.

Figure 22 Run the DeepSea Deployment stage.


This Deployment stage has more than 50 automated steps and took about five minutes to complete in my test

environment. Be patient and make sure the stage completes successfully before proceeding.

Finally, execute the Services stage:

# deepsea stage run ceph.stage.4

The Services stage also has more than 50 steps, so be patient. It will take about 10 minutes, but on-screen feedback will

show you the progress.
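Once the stage finishes, a quick sanity check from the admin node (which should have the admin keyring thanks to the role-admin entry in policy.cfg) will show whether the cluster is healthy or still converging:

# ceph -s          # overall status: health, monitors, managers, OSDs, pools
# ceph osd tree    # confirm every OSD is up and in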

When the Services stage is done, your Ceph cluster is ready. You can now log into the Ceph Dashboard. On your host

workstation (or on the admin/salt-master node itself), navigate to the IP address of any ceph-monitor node VM (or add

entries to your host machine’s /etc/hosts file so the example.com hostnames resolve):

http://mon01.example.com

Figure 23 Run the DeepSea Services stage.


You should see the main login screen. The default credentials are admin/admin; alternatively, fetch the admin password from your

master node by running:

$ salt -I roles:master grains.get dashboard_creds

You’ll be redirected to the Ceph Dashboard, where you can see the health of your cluster, capacity and other metrics.

You’ll also be able to immediately begin experimenting with RBDs, pools, and NFS and iSCSI gateways.
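If you’d rather poke at the cluster from a shell than from the Dashboard, a first pool and RBD image might look like this; the names are arbitrary and 64 placement groups is simply a reasonable number for a tiny test cluster:

# ceph osd pool create rbd 64 64
# ceph osd pool application enable rbd rbd
# rbd create rbd/test01 --size 1G
# rbd ls rbd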

Figure 24 Navigate to the IP address or domain name of any ceph-manager node.


Conclusion

This SUSE Guide provides a rapid way to deploy a fully operational SUSE Enterprise Storage cluster for anyone looking

to experiment with Ceph. By automating the deployment of Ceph components, SUSE has dramatically accelerated and

simplified the ability to get up and running. Remember, though, that any truly enterprise-ready software-defined storage

environment takes planning. Always keep in mind how you plan to use your storage cluster, your priorities and storage

needs over time. Use this proof-of-concept environment to feel your way around, create storage pools and experiment

with different gateways as you think about a larger production deployment.

To learn more, read documentation, watch webinars and review success stories, visit

www.suse.com/products/suse-enterprise-storage/.

Resources

• Download SUSE Enterprise Storage

• SUSE Enterprise Storage deployment guide

• SUSE Linux Enterprise Server 15 documentation

• SUSE Virtualization Guide

Figure 25 The SUSE Enterprise Storage Ceph Dashboard is now live and ready to use.