SUSE Enterprise Storage 5 and iSCSI
www.suse.com Whitepaper Storage
Table of Contents
Introduction
The Ceph Storage System
iSCSI Block Storage
lrbd
Deployment Considerations
Installing and Configuring SUSE Enterprise Storage
Creating RBD Images
Connecting to Your iSCSI Targets
Conclusion
Executive Summary
SUSE Enterprise Storage is a distributed storage management solution that enables you to readily build and deploy a distributed, highly available, open source Ceph storage cluster.
Among the strengths of SUSE Enterprise Storage is its ability to make file, object and block storage available to all your
new and legacy workloads running on Linux, Microsoft Windows, VMware and pretty much any platform you can name. Its
flexibility and resiliency are rooted in Ceph, the popular open-source storage technology, which makes it a good choice for
anyone looking for a robust storage solution that can scale without breaking the bank. Traditional SAN solutions deliver
solid block storage, but they are often expensive and difficult to scale—especially if you are locked into a single vendor.
With SUSE Enterprise Storage, iSCSI and openATTIC, you get an open-source solution that can deliver reliable, resilient,
highly available block storage volumes for just about everything in your data center—from your OpenStack and
Kubernetes deployments to your database servers—and save you a bundle while you are at it.
Introduction
This whitepaper provides an overview of the following technologies:
The Ceph distributed storage system
The iSCSI network block-storage protocol and its Linux kernel implementation
The SUSE Enterprise Storage openATTIC Ceph management tool
lrbd, a program used to create iSCSI targets and make Ceph RBD images available across your network
The Ceph Storage System
SUSE Enterprise Storage is based on Ceph, the distributed, highly available, open-source software originally conceived at
the University of California, Santa Cruz, in 2003 that has become the de facto standard for cloud storage. It is versatile
software-defined storage that relies on commodity hardware, open protocols, a robust API and an active development
community to provide highly scalable storage to clients running workloads anywhere in your data center. Applications can
use Ceph storage directly or through intermediate abstraction layers that provide a distributed POSIX file system, a RESTful
object store akin to OpenStack Swift and Amazon S3, and more. Like Ceph, SUSE Enterprise Storage runs on Linux
and offers storage to clients running nearly all modern operating systems. SUSE has long been a key contributor to the
Ceph community. In 2018, it became one of the founding members of the Ceph Foundation, which aims to galvanize rapid
adoption, training and collaboration across the Ceph ecosystem.
Reliable Distributed Autonomic Storage (RADOS)
At the heart of Ceph and SUSE Enterprise Storage is the Reliable Distributed Autonomic Object Store (RADOS), which
provides highly distributed, scalable, self-healing, auto-balancing and highly available storage. Any chunk of data in
RADOS is stored as an object, and the Controlled Replication Under Scalable Hashing (CRUSH) algorithm determines how
to store and retrieve data from physical storage devices. Instead of a centralized data look-up service, which is often a
single point of failure that prevents scaling in other systems, RADOS uses a hierarchy organized by physical nodes,
racks, aisles, rooms, data centers or any factor you want.
RADOS can be broken down into logical subdivisions, known as “pools,” that you can define to suit your needs. You can
then define replication and availability policies by pool and define storage rules tuned to different risks and needs. Data
replicas can be confined to a single rack or distributed data centers—whatever you need.
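The per-pool policies described above map to a handful of ceph commands. A minimal sketch, assuming a running cluster and an admin keyring (the pool name, placement-group count and replica counts here are illustrative choices, not values from this paper):

```shell
# Sketch: create a replicated pool and set its replication policy.
# Pool name and pg count are illustrative assumptions.
POOL="demo-pool"
PG_NUM=128      # placement groups; size this to your OSD count
REPLICAS=3      # copies of each object, placed per your CRUSH rules

if command -v ceph >/dev/null 2>&1; then
    ceph osd pool create "$POOL" "$PG_NUM"
    ceph osd pool set "$POOL" size "$REPLICAS"
    ceph osd pool set "$POOL" min_size 2   # keep serving I/O with 2 of 3 replicas
else
    echo "ceph CLI not found; run this on a cluster or admin node"
fi
```

Where the replicas land (single rack, multiple data centers) is then governed by the CRUSH rule assigned to the pool.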
RADOS Block Devices
Although SUSE Enterprise Storage (and Ceph) can be configured to serve up storage as file and object stores, this
whitepaper focuses on block devices, which can be made available via iSCSI when necessary—the same iSCSI protocol
used by traditional SANs.
Block devices are a simple and ubiquitous storage abstraction. These devices have a defined length (the maximum
capacity they can store) and a sector size (the smallest increment that can be used to read and write data). As a result,
the input/output (I/O) interface on a block device is simple. Either a memory buffer of a given length is being written onto
the device at a given location or data at a given location is being read into a memory buffer.
RADOS Block Devices (RBDs) provide a versatile client-side abstraction layer on top of RADOS that enables users to
interact with the storage cluster as if it were a conventional block storage device. With RBDs, users define images
(volumes) and use the RBD client libraries available for Linux, Z/Architecture and scores of other systems to interact with
these devices as if they were traditional attached block devices. Every block read-and-write operation automatically
translates into a RADOS object operation that is transparent to the user and allows RBDs to seamlessly “inherit” RADOS’
distribution, replication, scalability and high availability.
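Outside of iSCSI, the same abstraction is visible directly to a Linux Ceph client. A hedged sketch of mapping an image as a native kernel block device, assuming a reachable cluster and CephX credentials (the pool and image names follow the examples used later in this paper):

```shell
# Sketch: map an RBD image as a local kernel block device using the
# native RBD client (no iSCSI involved). Pool/image names are the
# paper's running example.
POOL="iscsi"
IMAGE="testvol"
if command -v rbd >/dev/null 2>&1; then
    rbd map "$POOL/$IMAGE"     # prints the new device, e.g. /dev/rbd0
    rbd showmapped             # list mapped images and their devices
    rbd unmap "$POOL/$IMAGE"   # detach when done
else
    echo "rbd CLI not found; run this on a Ceph client node"
fi
```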
iSCSI Block Storage
iSCSI is an implementation of the Small Computer System Interface (SCSI) command set over the well-established
TCP/IP protocol suite. iSCSI connections enable a client (the initiator) to talk to a server (the target) via a session on TCP port
3260. An iSCSI target’s IP address and port are called an “iSCSI portal.” A target can be exposed through one or more
portals, providing built-in redundancy right out of the gate. The combination of a target and one or more portals is called
a "target portal group."
The Linux Kernel iSCSI Target
The Linux kernel iSCSI target was originally named “LIO” for linux-iscsi.org, the project’s original domain and website. For
some time, no fewer than four competing iSCSI target implementations were available for the Linux platform, but LIO
ultimately prevailed as the single iSCSI reference target. The mainline kernel code for LIO uses the simple but somewhat
ambiguous name “target,” distinguishing between “target core” and a variety of front-end and back-end target modules.
The most commonly used front-end module is arguably iSCSI. However, LIO also supports Fibre Channel (FC), Fibre
Channel over Ethernet (FCoE) and several other protocols. At this time, only the iSCSI protocol has been validated with
SUSE Enterprise Storage.
iSCSI Initiators
LINUX
The standard initiator on the Linux platform is open-iscsi, an open source package available for most distributions,
including SUSE Linux Enterprise Server. It launches a daemon, iscsid, which allows you to discover iSCSI targets on
any given portal, log in to those targets and map iSCSI volumes. iscsid communicates with the iSCSI mid-layer to
create in-kernel block devices that the kernel can then treat like any other SCSI block device on the system, such as
/dev/sda or /dev/sdb on a Linux system. The open-iscsi initiator can be deployed in conjunction with the Device
Mapper Multipath (dm-multipath) facility to provide a highly available iSCSI block device.
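A minimal /etc/multipath.conf sketch for such a setup might look like the following. The vendor and product strings match the SUSE RBD LUs shown later in this paper, but this is an illustrative fragment, not a tested configuration; verify the device section against your multipath documentation before use:

```
# /etc/multipath.conf -- illustrative sketch, not a tested configuration
devices {
    device {
        vendor "SUSE"
        product "RBD"
        path_grouping_policy "failover"   # or "multibus" for active/active
        path_checker "tur"
    }
}
```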
MICROSOFT WINDOWS AND HYPER-V
The default iSCSI initiator for Windows is the Microsoft iSCSI initiator. The iSCSI service can be configured via a graphical
interface and supports multipath I/O for high availability.
VMWARE
The default iSCSI initiator for VMware vSphere and ESX is the VMware iSCSI initiator, vmkiscsi. Once enabled, it can
be configured from the vSphere client console or with the vmkiscsi-tool command. You can then format connected
storage volumes with VMFS and use them like any other VM storage device. The VMware initiator also supports multipath
I/O for high availability.
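On current ESXi releases, the same setup can also be scripted from the ESXi shell with esxcli. A hedged sketch, in which the adapter name and gateway address are illustrative assumptions:

```shell
# Sketch: enable the software iSCSI initiator and add a discovery
# address via esxcli. Adapter name and IP are illustrative; list your
# adapters first with "esxcli iscsi adapter list".
ADAPTER="vmhba64"
GATEWAY="192.168.124.104"
if command -v esxcli >/dev/null 2>&1; then
    esxcli iscsi software set --enabled=true
    esxcli iscsi adapter discovery sendtarget add \
        --adapter "$ADAPTER" --address "$GATEWAY:3260"
    esxcli iscsi adapter discovery rediscover --adapter "$ADAPTER"
else
    echo "esxcli not found; run this in the ESXi shell"
fi
```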
We will discuss how to attach volumes using all three of these platforms.
lrbd
The lrbd utility simplifies iSCSI management of RBD images by centrally storing configurations in Ceph objects and
executing the necessary rbd and targetcli commands. It enables any application to make use of the Ceph block
storage—even if it doesn’t “speak” any Ceph client protocol. In SUSE Enterprise Storage, lrbd is abstracted by the
openATTIC management console.
Figure 1: The iSCSI Gateway enables a wide variety of systems to communicate with the Ceph datastore.
lrbd is highly available by design and supports multipath operations, which means it can provide multiple network routes to
the data objects. When communicating with an iSCSI target configured with more than one gateway, initiators can load-
balance iSCSI requests on the client. So if a gateway fails, I/O will transparently continue via another node.
Figure 2: With lrbd, storage targets become highly available with different OSD nodes, providing simultaneous connections.
Deployment Considerations
A minimum SUSE Enterprise Storage configuration consists of:
A cluster with at least four physical servers hosting at least eight object storage daemons (OSDs) each. In such a
configuration, three OSD nodes double as monitor (MON) nodes.
An iSCSI target server running the LIO iSCSI target, configured via openATTIC or lrbd.
An iSCSI initiator host, such as open-iscsi on Linux, the Microsoft iSCSI Initiator on Windows or the VMware
iSCSI Software initiator in vSphere.
A recommended production SUSE Enterprise Storage configuration consists of:
A cluster with 10 or more physical servers, each typically running 10 to 12 object storage daemons (OSDs), with
no fewer than three dedicated monitor (MON) nodes. All should be running the same version of SUSE Linux
Enterprise Server.
Several iSCSI target servers running the LIO iSCSI target, configured via openATTIC or lrbd. For iSCSI failover
and load-balancing, these servers must run a kernel supporting the target_core_rbd module. This also
requires the latest available update of the kernel-default package. (Update packages are available from the
SUSE Linux Enterprise Server maintenance channel.)
An iSCSI initiator host, such as open-iscsi on Linux, the Microsoft iSCSI Initiator on Windows or the VMware
iSCSI Software initiator in vSphere.
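A quick way to check the kernel requirement on each gateway node is to query the module directly; a sketch:

```shell
# Sketch: verify the running kernel provides target_core_rbd, which
# iSCSI failover and load-balancing require on gateway nodes.
STATUS=$(modinfo target_core_rbd >/dev/null 2>&1 && echo present || echo missing)
echo "target_core_rbd: $STATUS"
# If missing, update kernel-default from the SUSE Linux Enterprise
# Server maintenance channel and reboot into the new kernel.
```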
Installing and Configuring SUSE Enterprise Storage
Adding the SUSE Enterprise Storage Installation Media
Use YaST® to configure SUSE Enterprise Storage as an add-on product to SUSE Linux Enterprise Server. You can use a
local ISO, a SUSE Customer Center license or a local Subscription Management Tool server (SMT). Using SMT is
recommended for speed and security, and it is a particularly good choice if you are deploying behind a firewall.
Information on how to add the SMT pattern to a new or existing SUSE Linux Enterprise Server installation is available at
https://www.suse.com/documentation/sles-12/book_smt/data/smt_installation.html.
Deploying a Ceph Cluster
The SUSE Enterprise Storage deployment is managed by Salt and DeepSea, which automate the installation steps,
including setting up the policies used to assign monitor, management, services and data nodes. Even a basic DeepSea
deployment can automate the installation of everything you need to use and manage the RADOS gateway, the iSCSI
gateway, NFS Ganesha (if you plan to also deploy NFS) and openATTIC (SUSE's complete Ceph management application).
Complete details are available at
https://www.suse.com/documentation/suse-enterprise-storage-5/singlehtml/book_storage_deployment/book_storage_deployment.html.
Figure 3: SUSE Enterprise Storage features openATTIC, which makes it easy to set up and deploy RBDs, pools and gateways.
Creating RBD Images
RBD images—your storage volumes—are created in the Ceph object store and subsequently exported to iSCSI. We
recommend that you use a dedicated RADOS pool for your RBD images. OpenATTIC makes this easy, but you can also
create a volume by logging into any host that can connect to your storage cluster and has the Ceph rbd command-line
utility installed. This requires the client to have at least a minimal ceph.conf configuration file and appropriate CephX
authentication credentials, all of which are taken care of for you when deploying with Salt and DeepSea.
We recommend using openATTIC, first to create a pool and then an RBD image. Start by clicking on the Pool menu and
clicking “Add”:
Figure 4: Create a new pool in the openATTIC management console.
Next, create a “testvol” RBD image. Simply click on the RBD menu item and click “Add”:
Figure 5: Create an RBD image using openATTIC.
Alternatively, you can manually create a new volume and a pool in which to store it. For example, use the ceph osd
subcommand to create a pool named "iscsi" with 128 placement groups:
# ceph osd pool create iscsi 128
Then use the rbd create command to create the volume, specifying the volume size in megabytes. For example, in
order to create a 100GB volume named “testvol” in the newly created pool named “iscsi,” run:
# rbd --pool iscsi create --size=102400 testvol
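Since --size is given in megabytes, a quick sanity check of the arithmetic behind the 100GB example:

```shell
# --size is in MB: 100 GB * 1024 MB/GB = 102400 MB
GB=100
SIZE_MB=$((GB * 1024))
echo "$SIZE_MB"   # prints 102400, matching --size=102400
```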
This will create an RBD “format 1” volume. In order to enable layered snapshots and cloning, you might want to create a
“format 2” volume instead:
# rbd --pool iscsi create --image-format 2 --size=102400 testvol
To enable "format 2" by default on all newly created images, add the following configuration option to
/etc/ceph/ceph.conf:
[client]
rbd default image format = 2
Exporting RBD Images via iSCSI
To export RBD images via iSCSI, you can again use openATTIC or the lrbd utility. Either approach enables you to
create, review and modify the iSCSI target configuration. OpenATTIC creates the necessary JSON for you and
automatically adds it to /srv/salt/ceph/igw/cache/lrbd.conf on your SUSE Enterprise Storage admin server.
Using openATTIC, click the “iSCSI” menu item and click “Add.” This will auto-generate a Target IQN name and enable
you to choose portals (nodes available in your cluster) and a target image (iscsi:testvol, in this example):
Figure 6: Export an RBD image to an ISCSI gateway using openATTIC.
In order to manually edit the configuration using lrbd, use lrbd -e or lrbd --edit. This will invoke your system’s
default editor and enable you to enter raw JSON content. We strongly recommend that you avoid manually editing the
JSON file when using openATTIC, but if you are curious, below is an lrbd JSON example configuration featuring the
following:
1. Two iSCSI gateway hosts named iscsi1.example.com and iscsi2.example.com
2. A single iSCSI target with an iSCSI Qualified Name (IQN) of iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol
3. A single iSCSI Logical Unit (LU)
4. An RBD image named “testvol” in the RADOS pool “rbd”
5. The target exported via two portals named "east" and "west"
{
    "auth": [
        {
            "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
            "authentication": "none"
        }
    ],
    "targets": [
        {
            "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
            "hosts": [
                { "host": "iscsi1.example.com", "portal": "east" },
                { "host": "iscsi2.example.com", "portal": "west" }
            ]
        }
    ],
    "portals": [
        { "name": "east", "addresses": [ "192.168.124.104" ] },
        { "name": "west", "addresses": [ "192.168.124.105" ] }
    ],
    "pools": [
        {
            "pool": "rbd",
            "gateways": [
                {
                    "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
                    "tpg": [
                        { "image": "testvol" }
                    ]
                }
            ]
        }
    ]
}
Note that whenever you refer to a hostname in the configuration, that hostname must match the iSCSI gateway's
uname -n output.
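A quick sketch of checking this on a gateway node, where the expected name is the example host from the JSON above:

```shell
# Sketch: lrbd matches "host" entries against `uname -n`; verify a
# gateway's hostname before activating the configuration.
EXPECTED="iscsi1.example.com"   # the "host" value from the JSON example
ACTUAL=$(uname -n)
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "hostname matches configuration"
else
    echo "mismatch: this node is $ACTUAL"
fi
```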
This JSON object is available to the gateway hosts where the JSON is edited (in this case, on the SUSE Enterprise
Storage admin host in /srv/salt/ceph/igw/cache/lrbd.conf) and to all gateway hosts connected to the same
Ceph cluster. Any additional gateways created using openATTIC can be viewed in the dashboard. All openATTIC and any
manually created lrbd entries are stored in the same lrbd.conf file, which is why we strongly recommend using
openATTIC and not switching back and forth between manual lrbd.conf edits. Manual edits can introduce errors that
can cause problems.
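Because manual edits are error-prone, it can help to syntax-check the JSON before activating it. A sketch using Python's standard-library json module; the path is the admin-node cache location named above:

```shell
# Sketch: syntax-check lrbd.conf before activating it. Any JSON
# validator works; python3 -m json.tool is usually available.
CONF="/srv/salt/ceph/igw/cache/lrbd.conf"
if [ -f "$CONF" ]; then
    RESULT=$(python3 -m json.tool "$CONF" >/dev/null 2>&1 && echo valid || echo invalid)
else
    RESULT="absent"   # not an admin node, or no configuration written yet
fi
echo "lrbd.conf: $RESULT"
```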
Once the configuration is created and saved via openATTIC, it’s automatically activated. To manually activate the
configuration using lrbd, run the following from the command line:
# lrbd # Run lrbd without additional options
# systemctl restart lrbd.service
If you plan to run lrbd manually—and if you haven’t installed openATTIC to manage it for you—you should also enable
lrbd to auto-configure on system start-up:
# systemctl enable lrbd.service
The iSCSI gateway sample above is basic, but you can find an extensive set of configuration examples in
/usr/share/doc/packages/lrbd/samples. The samples also are available in GitHub at
https://github.com/SUSE/lrbd/tree/master/samples.
Connecting to Your iSCSI Targets
Linux (open-iscsi)
Once you have created iSCSI targets, remotely accessing them from Linux hosts is a two-step process. First, the remote
host, known as the client, must discover the iSCSI targets available on the gateway host and then map the available
logical units (LUs).
Both steps require the open-iscsi daemon to be running on the Linux client host (that is, the host to which you want to
attach the storage). Starting and enabling the open-iscsi daemon is straightforward:
# systemctl start open-iscsi
To enable it on system start, additionally run:
# systemctl enable open-iscsi
If your initiator host runs SUSE Linux Enterprise Server, refer to
www.suse.com/documentation/sles-12/stor_admin/data/sec_iscsi_initiator.html for details on how to connect to an iSCSI target using YaST.
For any other Linux distribution supporting open-iscsi, you can discover your iSCSI gateway targets with a simple
command using the open-iscsi tool iscsiadm. This example uses iscsi1.example.com as the portal address and
returns information about all the available target IQNs.
# iscsiadm -m discovery -t sendtargets -p iscsi1.example.com
192.168.124.104:3260,1 iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol
192.168.124.105:3260,1 iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol
The final step is to log in to the portal. You have a couple of options. The first example will log you in to all available iSCSI
targets:
# iscsiadm -m node -p iscsi1.example.com --login
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] successful.
If you want to log into a single specific iSCSI target, you can use the target name and its IP address or domain name:
# iscsiadm -m node --login -T iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol -p iscsi1.example.com
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260]
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] successful.
Once logged in, the block storage will immediately become available and visible on the system, highlighted in bold here as
/dev/sdb:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 32G 0 disk
├─sda1 8:1 0 8M 0 part
├─sda2 8:2 0 30G 0 part /var/lib/docker/btrfs
└─sda3 8:3 0 2G 0 part [SWAP]
sdb 8:16 0 49G 0 disk
sr0 11:0 1 3.7G 0 rom
If your system has the lsscsi utility installed, use it to see the iSCSI device(s) on your system:
# lsscsi
[0:0:0:0] disk QEMU HARDDISK 2.5+ /dev/sda
[2:0:0:0] cd/dvd QEMU DVD-ROM 2.5+ /dev/sr0
[3:0:0:0] disk SUSE RBD 4.0 /dev/sdb
In a multipath configuration (where two connected iSCSI devices represent one and the same LU), you can also examine
the device state with the multipath utility:
# multipath -ll
360014050cf9dcfcb2603933ac3298dca dm-9 SUSE,RBD
size=49G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 2:0:0:0 sdb 8:64 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 3:0:0:0 sdf 8:80 active ready running
You can now use this iSCSI device as you would any other physically attached block device. For example, you can use it
as a physical volume for Linux Logical Volume Management (LVM), or you can simply create a file system on it. The
example below demonstrates how to create an XFS file system on the newly connected multipath iSCSI volume:
# mkfs.xfs /dev/mapper/360014050cf9dcfcb2603933ac3298dca
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/mapper/360014050cf9dcfcb2603933ac3298dca isize=256 agcount=17, agsize=799744 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=12800000, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=6256, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Note that because XFS is a non-clustered file system, you can only connect it to a single iSCSI initiator node at any given
time. Remember that iSCSI volumes are block storage devices, not file storage devices. Connecting the same target to
separate systems shares just the raw block volume, not the files. Double-mounting will lead to data loss!
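Mounting the new file system then works as with any local disk. A sketch using the multipath device from the example above; the mount point is an illustrative choice:

```shell
# Sketch: mount the freshly formatted multipath iSCSI volume on a
# single initiator node. The mapper name is the example WWID above.
DEV="/dev/mapper/360014050cf9dcfcb2603933ac3298dca"
MNT="/mnt/testvol"
if [ -b "$DEV" ]; then
    mkdir -p "$MNT"
    mount "$DEV" "$MNT"
    df -h "$MNT"
else
    echo "device not present on this host"
fi
```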
If you want to stop using the iSCSI LUs associated with a target, simply log out by running the following command. Be
sure to unmount the device first to avoid data corruption:
# iscsiadm -m node -p iscsi1.example.com --logout
Logging out of session [sid: 18, iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol,
portal:192.168.124.104,3260]
Logout of [sid: 18, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal:
192.168.124.104,3260]
successful.
Microsoft Windows (Microsoft iSCSI Initiator)
Taking advantage of RBD-backed storage via iSCSI is not limited to Linux hosts. Ceph volumes can be readily attached to
Windows machines using the Microsoft iSCSI Initiator, available in Windows Server Manager under Tools → iSCSI
Initiator or simply by searching for "iSCSI Initiator." When it opens, select the Discovery tab:
In the Discover Target Portal dialog, enter the target’s hostname or IP address in the Target field and click OK:
Repeat this process for all other gateway hostnames or IP addresses. Once complete, review the Target Portals list:
Next, switch to the Targets tab and review your discovered target(s):
Click “Connect” in the Targets tab. When the Connect To Target dialog appears, select the “Enable multi-path” checkbox
to enable multipath I/O (MPIO); then click OK:
Once the Connect to Target dialog closes, select Properties to review the target’s properties:
Select “Devices”:
Next, click on “MPIO” to review the multipath I/O configuration:
The default Load Balance policy is Round Robin With Subset. If you prefer a pure fail-over configuration, change it to Fail
Over Only.
That is it! The iSCSI volumes are now available just like any other local disks and can be initialized for use as volumes
and drives. Click “OK” to close the iSCSI Initiator Properties dialog and proceed with the File and Storage Services role
from the Server Manager dashboard:
If the new volume does not appear immediately, select Rescan Storage from the Tasks drop-down to rescan the iSCSI
bus. Right-click on the iSCSI volume and select New Volume from the context menu:
The New Volume wizard appears. Click “Next” to begin:
Highlight the newly connected iSCSI volume and click “Next:”
Initially, the device is empty and does not contain a partition table. When prompted, confirm the dialog indicating that the
volume will be initialized with a GPT partition table:
Next, select the volume size. Typically, you would use the device’s full capacity:
Then, assign a drive letter or folder name where you want the newly created volume to become available:
Select a file system to create on the new volume:
Finally, confirm your selection and click the “Create” button to finish creating the volume:
When the process finishes, review the results and click “Close” to complete the drive initialization:
Once initialization is complete, the volume (and its NTFS file system) is available just like any newly initialized local drive.
VMware
To connect iSCSI volumes to VMs running on vSphere, you need a configured iSCSI software adapter. If no such adapter
is available in your vSphere configuration, create one by selecting Configuration → Storage Adapters → Add → iSCSI
Software initiator.
Once available, select the adapter’s properties by right-clicking on it and selecting Properties from the context menu:
In the iSCSI Software Initiator dialog, click the “Configure” button:
Go to the Dynamic Discovery tab and select “Add:”
Enter the IP address or hostname of your iSCSI gateway. If you run multiple iSCSI gateways in a failover configuration,
repeat this step for as many gateways as you operate:
Once you have entered all iSCSI gateways, click “Yes” in the dialog box to initiate a rescan of the iSCSI adapter:
Once the rescan is complete, the new iSCSI device appears below the Storage Adapters list in the Details pane:
For multipath devices, you can now right-click on the adapter and select Manage Paths from the context menu:
You should now see all paths with a green light under Status. One of your paths should be marked “Active (I/O)” and all
others simply “Active”:
You can now switch from Storage Adapters to the item labeled simply “Storage:”
Select "Add Storage" in the top-right corner of the pane to bring up the Add Storage dialog. Then select "Disk/LUN" and
click "Next:"
The newly added iSCSI device appears in the Select Disk/LUN list. Select it and then click “Next” to proceed:
Click “Next” to accept the default disk layout:
In the Properties pane, assign a name to the new datastore. Then click “Next:”
Accept the default setting to use the volume’s entire space for the datastore or select “Custom Space Setting” for a
smaller datastore:
Finally, click “Finish” to complete the datastore creation:
The new datastore now appears in the datastore list and you can select it to retrieve details. You can now use the iSCSI
volume like any other vSphere datastore:
Conclusion
The ability to rapidly deploy iSCSI gateway targets is a key component of SUSE Enterprise Storage that enables ready
access to distributed, highly available block storage on any server or client capable of speaking the iSCSI protocol. That
gives you all the advantages of Ceph—redundancy, high availability, ready load balancing—across your data center. You
can quickly scale up or down to suit your needs with resilient, self-healing enterprise storage technology that does not lock
you into any one vendor, while enabling you to employ commodity hardware and an entirely open source platform.
Disclaimer: The findings detailed in this whitepaper are developed using SUSE’s long-term technical expertise and to the
best of SUSE’s knowledge and abilities, but on an “as is” basis and without warranty of any kind, to the extent permitted
by applicable law.
262-002523-004 | 02/19 | © 2019 SUSE LLC. All rights reserved. SUSE, the SUSE logo and YaST are registered trademarks, and SUSE Enterprise Storage
is a trademark of SUSE LLC in the United States and other countries. All third-party trademarks are the property of their respective owners.