WHITE PAPER
DELL EMC VXBLOCK SYSTEM 540 ORACLE, SQL, SAP BEST PRACTICES
Best Practices for Oracle, SQL Server, and SAP
November 2017
Abstract
This white paper provides an overview of best practices for Oracle, Microsoft SQL
Server, and SAP on the Dell EMC VxBlock® System 540.
H16811
This document is not intended for audiences in China, Hong Kong, Taiwan, and
Macao.
Copyright
2 Dell EMC VxBlock® System 540 Oracle SQL SAP Best Practices Best Practices for Oracle, SQL Server, and SAP White Paper
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Other trademarks may be the property of their respective owners. Published in the USA 11/17 White Paper H16811.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
Contents
Chapter 1 Introduction 5
Executive summary ............................................................................................... 6
Solution overview .................................................................................................. 6
Benefits of VxBlock System 540............................................................................ 7
Audience ............................................................................................................... 8
We value your feedback ........................................................................................ 8
Chapter 2 Technology Overview 9
Introduction ......................................................................................................... 10
VxBlock System 540 ........................................................................................... 10
Chapter 3 Cross Application Design Guidelines 14
Overview ............................................................................................................. 15
Flash fundamentals ............................................................................................. 15
Multipathing......................................................................................................... 17
NUMA ................................................................................................................. 18
Disk provisioning ................................................................................................. 18
Paravirtualized SCSI (PVSCSI) adapters ............................................................ 18
Full bandwidth testing ......................................................................................... 19
Sizing and capacity tools ..................................................................................... 19
XtremIO Sizing Tool ............................................................................................ 20
XtremIO Data Reduction Estimator ..................................................................... 23
Chapter 4 Deployment Best Practices for Oracle 25
Overview ............................................................................................................. 26
Design considerations ......................................................................................... 26
Data performance analysis .................................................................................. 27
Benefits ............................................................................................................... 29
Self-service monitoring with Oracle Enterprise Manager 12c Plug-in ................. 30
VMware ............................................................................................................... 35
Storage virtualization ........................................................................................... 35
Virtualizing compute (vCPUs) ............................................................................. 36
VMware memory configuration guidelines ........................................................... 37
Database types ................................................................................................... 37
Decision Support Systems (DSS) ........................................................................ 38
Chapter 5 Deployment Best Practices for Microsoft SQL Server 39
Overview ............................................................................................................. 40
Design considerations ......................................................................................... 40
VMware ............................................................................................................... 44
Database types ................................................................................................... 45
Data warehouse/OLAP ....................................................................................... 47
EMC Storage Integrator for Windows Suite (ESI) ................................................ 48
Chapter 6 Deployment Best Practices for SAP 50
Overview ............................................................................................................. 51
Design considerations ......................................................................................... 51
VMware recommendations .................................................................................. 52
Application workload ........................................................................................... 53
EMC Storage Integrator (ESI) for SAP Landscape Virtualization Management ... 55
Chapter 7 Conclusion 57
Overview ............................................................................................................. 58
Chapter 8 References 59
Dell EMC documentation..................................................................................... 60
VMware documentation ...................................................................................... 60
Other documentation ........................................................................................... 60
Chapter 1 Introduction
This chapter presents the following topics:
Executive summary ............................................................................................ 6
Solution overview ............................................................................................... 6
Benefits of VxBlock System 540 ........................................................................ 7
Audience .............................................................................................................. 8
We value your feedback ..................................................................................... 8
Executive summary
Best practices and guidelines for critical workloads such as Oracle, Microsoft SQL Server,
and SAP change over time as software features and hardware infrastructure undergo
continuous updates. VMware virtualization caused a major shift in the datacenter,
enabling more agility, flexibility, and consolidation for all applications. Today many IT
organizations have virtualized their mission-critical databases following proven best
practices and have realized the benefits without impacting performance or protection. The
key to success is planning, testing, and following guidelines designed to make your
databases perform well on a virtualized infrastructure.
All-flash arrays have caused a similar shift in best practices and have enhanced the
consolidation of critical workloads. Traditionally, database administrators (DBAs) designed
dedicated database architectures, as they provide predictable performance. For example,
compute, networking, and storage were isolated in a silo for a production database. The
Dell EMC VxBlock® System 540[1] with XtremIO® storage has transformed how critical
databases perform, delivering enough IOPS at sub-millisecond latency to consolidate
multiple production applications or non-production environments. The
complexities of isolating workloads for performance management are now eliminated. The IT
organization can consolidate more applications and environments to a VxBlock System
540, driving greater cost savings and simplifying management.
Combining compute, networking, all-flash storage, and virtualization has led to converged
infrastructures. The VxBlock System 540 is a converged infrastructure designed for
datacenter consolidation, performance, and protection. The IT organization can bypass
the complexities of building a similar infrastructure that can take time and require multiple
support organizations. With the VxBlock System 540, IT teams can immediately start
migrating, provisioning, and monitoring applications. In this paper, we provide an overview
of best practices for Oracle, Microsoft SQL Server, and SAP on the VxBlock System 540.
Our goal is to provide the guidelines to make your mixed application workloads a success.
Solution overview
This best practices paper is a companion to the Dell EMC Solutions for Enterprise Mixed
Workloads on VxBlock System 540 Solution Guide. The solution guide contains VxBlock
System 540 test results for running mixed applications such as Oracle, Microsoft SQL
Server, and SAP on mixed workloads like Online Transaction Processing (OLTP) and
Online Analytical Processing (OLAP). To successfully deploy mixed applications and
workloads, best practices were employed to optimize performance. In this paper, we have
captured many of the best practices that explain the benefits and provide supporting
detail.
This best practices paper takes a new approach in that Oracle, Microsoft, and SAP
guidelines are all included within one document. IT organizations often have to manage
mixed applications. Referring to multiple, separate documents can be complex.

1 For purposes of this paper, VxBlock Systems is used as the solution term. Dell EMC is now focused exclusively on positioning VxBlock Systems, as they can be configured similarly to a Vblock System, and can add additional levels of flexibility.

To streamline the deployment of best practices, we have structured the paper at a high level like this:
Introduction: Provides an overview to this guide.
Technology Overview: Provides basic definitions of the components that make
up this solution.
Cross Application Design: Provides guidelines that apply to all three applications
and have been consolidated into one common section. In this section, you will find
universal concepts and guidelines.
Deployment Best Practices: Provides each application's best
practices with detailed guidelines for deploying that workload. Note that each
section is structured differently to address key guidelines.
Oracle Database 12c: Provides guidelines and design considerations for
Oracle.
Microsoft SQL Server: Provides deployment best practices and design
considerations for SQL Server.
SAP: Provides guidelines and design considerations for SAP applications
based on the NetWeaver Platform.
Now an IT organization can use one paper for the deployment of multiple applications.
The scope of this paper is to capture key best practices; it does not include all possible
guidelines. It is common to tailor best practices to the business and the application. This
paper contains overviews of tools to use to size and performance tune applications. For
more advanced assistance, we recommend contacting a Dell EMC application specialist.
Benefits of VxBlock System 540
Dell EMC VxBlock System 540 All-Flash Array provides IT organizations with a single,
high-performance platform to standardize and consolidate applications. This converged
platform includes all the enterprise features for databases: replication for disaster
recovery, XtremIO Virtual Copies (XVC) for lifecycle management, and Dell EMC Vision
for system monitoring.
Inline Compression—Each application is automatically compressed inline on the
VxBlock System 540. In this paper, we explore the variations in compression ratio
for each of the three applications.
Deduplication—Data copies initially use no additional capacity. For example,
creating a copy of a 10 TB database uses no additional flash space. Only unique
data will consume flash space. DBAs can provision copies of databases without
consuming flash capacity until the source copy is modified.
XtremIO Data Protection (XDP)—Consumes only 8 percent of flash capacity,
provides protection in the case of double Solid State Disk (SSD) failure, and has
fast rebuild times.
Dell EMC Vision Operations Software—This health and lifecycle management
software is embedded in EMC converged and hyper-converged systems. Dell EMC
Vision increases efficiency of monitoring, automates updates and upgrades, and
assists with identifying security gaps and protecting the system. For more
information, refer to Dell EMC Vision Operations Software.
VMware vSphere Management and Integration—Consolidates the virtual
infrastructure, standardizes on vSphere, and integrates with the Virtual Storage
Integrator (VSI). vSphere Management and Integration simplifies all aspects of
management, saving time and decreasing the complexity of consolidation.
There are many benefits of the VxBlock System 540. Overall, using the VxBlock System
540 enables you to consolidate more and manage less. DBAs value the sub-millisecond
latencies that make applications very fast. The wealth of IOPS means many copies of
databases can be supported without sacrificing performance. This best practices paper
provides guidelines to assist with a smooth transformation to the VxBlock System 540.
Audience
This best practices paper is for datacenter architects, database administrators, vSphere
administrators, and storage administrators interested in guidelines for deploying Microsoft
SQL Server, Oracle, and SAP.
We value your feedback
Dell EMC and the authors of this document welcome your feedback on the solution and
the solution documentation. Contact [email protected] with your
comments.
Authors: Sam Lucido, Phil Hummel, Dave Simmons, Indranil Chakrabarti, Jyoti Tripathi
Chapter 2 Technology Overview
This chapter presents the following topics:
Introduction ....................................................................................................... 10
VxBlock System 540 ......................................................................................... 10
Introduction
The Dell EMC VxBlock System 540 is an ideal platform for enterprise software updates,
Big Data analytics, and end-user computing, providing less than 1 ms response times, as
well as inline data reduction and compression, thin provisioning, and a 99.999 percent
availability experience. Dell EMC XtremIO™ All-Flash Array and Cisco Unified Computing
System (UCS) deliver scale-out performance at ultralow latency for applications, such as
Oracle Database 12c, Microsoft SQL Server, and SAP.
VxBlock System 540
VxBlock System 540 is a modular platform with defined scale points that meet the higher
performance and availability requirements of an enterprise's business-critical applications.
VxBlock 540 is designed for deployments involving large numbers of virtual machines and
users.
The computing power within a Dell EMC VxBlock System 540 utilizes Cisco UCS B-Series
Blades installed in the Cisco UCS chassis. Fabric Extenders (FEX) within the Cisco UCS
chassis connect to Cisco fabric interconnects over converged Ethernet. Up to eight 10
Gigabit Ethernet ports on each Cisco UCS Fabric Extender connect northbound to the
fabric interconnects, regardless of the number of blades in the chassis. These
connections carry IP and storage traffic.
Dell EMC VxBlock System 540 powered by Cisco UCS offers the following features:
Built-in redundancy for high availability
Hot-swappable components for serviceability, upgrade, or expansion
Fewer physical components than in a comparable system built piece by piece
Reduced cabling
Improved energy efficiency over a traditional blade server chassis
The VxBlock System 540 uses multiple ports for each fabric interconnect for 8 Gb Fibre
Channel (FC). These ports connect to Cisco MDS storage switches and the connections
carry FC traffic between the compute layer and the storage layer. These connections also
enable SAN booting of the Cisco UCS blades.
Cisco Trusted Platform Module (TPM) provides authentication and attestation services that
enable safer computing in all environments. Cisco TPM is a computer chip that securely
stores artifacts, such as passwords, certificates, or encryption keys, that authenticate the
Dell EMC System.
Cisco TPM is provided by default on the Dell EMC System as a component of the Cisco
UCS B-Series M3 Blade Servers and Cisco UCS B-Series M4 Blade Servers.
XtremIO provides the ideal platform to support mixed workloads and mixed applications
with these advantages:
Consistent and predictable performance—Provides sub-millisecond response times
to meet strict SLA thresholds.
Consolidation without compromise—In-line data reduction capabilities enable
copies of production databases to be created quickly, with no initial physical
capacity consumed.
Faster time to value for applications—Accelerated deployment of applications
eliminates performance concerns and discussions between storage and application
owners.
DBA controlled database protection—Copies of databases can be configured to
expire or refresh on-demand using AppSync software integration. Production
databases can be protected individually, or as part of a consistency group.
VxBlock System 540 supports a variety of XtremIO X-Brick storage options as described
in the table below. A data migration professional services engagement is required if
additional X-Bricks are added to clusters after initial deployment. Dell EMC recommends
planning for future growth during the initial purchase.
Table 1. VxBlock System 540 and XtremIO X-Brick storage options (all X-Bricks encryption capable)

Storage option    Configuration       Raw capacity    Usable capacity
10 TB X-Brick     One X-Brick         10 TB           7.6 TB
                  Two X-Bricks        20 TB           15.2 TB
                  Four X-Bricks       40 TB           30.3 TB
20 TB X-Brick     One X-Brick         20 TB           15.2 TB
                  Two X-Bricks        40 TB           30.3 TB
                  Four X-Bricks       80 TB           60.8 TB
                  Six X-Bricks        120 TB          91 TB
                  Eight X-Bricks      160 TB          121.3 TB
40 TB X-Brick     One X-Brick         40 TB           30.6 TB
                  Two X-Bricks        80 TB           61.1 TB
                  Four X-Bricks       160 TB          122.2 TB
                  Six X-Bricks        240 TB          183.3 TB
                  Eight X-Bricks      320 TB          244.4 TB
The VxBlock System 540 utilizes both a TCP/IP LAN layer and a FC SAN layer to provide
network services for the Dell EMC system. Each Dell EMC system includes two Cisco
Nexus 9396PX Switches and either two Cisco Nexus 5548UP or 5596UP Switches.
Each VxBlock System 540 requires a pair of Cisco Nexus 3064-T Switches for all device
management connectivity and management traffic within the Dell EMC system. Each
Cisco Nexus 3064-T Switch provides 48 ports of 100 Mb/1 Gb/10 Gb twisted-pair connectivity
and four QSFP+ ports.
The Cisco Nexus 5548UP Switch, Cisco Nexus 5596UP Switch, and Cisco Nexus
9396PX Switch in the network layer provide 10 Gb connectivity using SFP+ modules for
all system production traffic.
The VxBlock System 540 contains two Cisco MDS switches to provide FC connectivity
between the compute and storage layer components. These switches are configured for
two separate fabrics. Connections from the storage components provide 8 Gb of
bandwidth. Cisco UCS fabric interconnects provide a FC port channel of four 8 Gb
connections (32 Gb bandwidth) to each fabric. This can be increased to eight connections
for 64 Gb bandwidth or sixteen connections for 128 Gb bandwidth per fabric. These
connections also facilitate SAN booting of the blades in the compute layer.
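The aggregate bandwidth figures above follow directly from multiplying the number of 8 Gb port-channel member links per fabric. A minimal sketch of that arithmetic (the function name is illustrative, not from any Cisco tooling):

```python
# Aggregate FC port-channel bandwidth per fabric, per the description above.
# Each member link runs at 8 Gb/s; the port channel bonds them together.
def port_channel_bandwidth_gb(members, link_speed_gb=8):
    return members * link_speed_gb

for members in (4, 8, 16):
    print(f"{members} x 8 Gb FC = {port_channel_bandwidth_gb(members)} Gb per fabric")
```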
Two Cisco MDS 9148S Multilayer Fabric Switches provide:
Fibre channel connectivity between the compute layer components and the storage
layer components
Connectivity for backup and business continuity requirements when configured
VMware vSphere ESXi
VMware ESXi is a bare metal embedded hypervisor, which means it runs directly on
server hardware and does not require the installation of an additional underlying operating
system. This virtualization software creates and runs its own kernel, which is run after a
Linux kernel bootstraps the hardware.
The VMware vSphere ESXi hypervisor runs in the management servers and in Dell EMC
systems using VMware vSphere Server Enterprise Plus. The lightweight hypervisor
requires very little space to run and has minimal management overhead.
VMware vSphere ESXi hosts and their resources are pooled together into clusters. These
clusters contain the CPU, memory, network, and storage resources available for allocation
to virtual machines (VMs). Clusters can scale up to a maximum of 32 hosts for VMware
vSphere 5.1/5.5, and 64 hosts for VMware vSphere 6.0. Clusters can support thousands
of virtual machines.
Dell EMC VxBlock System 540 supports a mixture of data store types: block-level storage
using VMware Virtual Machine File System (VMFS), or file-level storage using Network
File System (NFS). The maximum size per VMFS volume is 64 TB.
Virtual networking in the Advanced Management Platform uses the VMware Virtual
Standard Switch (VSS). The Advanced Management Platform (AMP-2) is a management
system that includes the hardware and software to run Core Management and Dell EMC
Optional workloads. The Core Management Workload is the minimum set of software
required to install, operate, and support a VxBlock System, including hypervisor
management, element managers, virtual networking components (Cisco Nexus 1000V
Switch or the Virtual Distributed Switch (VDS)), and Dell EMC Vision Intelligent Operations
Software. The Cisco Nexus 1000V Series Switch ensures consistent, policy-based
network capabilities by allowing policies to move with a virtual machine during live
migration. This provides persistent network, security, and storage compliance.
VMware vCenter Server
VMware vCenter Server provides centralized management of virtualized hosts and virtual
machines from a single console. It gives administrators visibility into the configuration of
the critical components of a virtual infrastructure—all from one place. With vCenter
Server, virtual environments are easier to manage: a single administrator can manage
hundreds of workloads, more than doubling typical productivity compared with managing
physical infrastructure. Problem resolution times are cut dramatically. IT administrators
can ensure security and availability, simplify day-to-day tasks, and reduce the complexity
of managing virtual infrastructure.
VMware vCenter is installed on a 64-bit Windows Server. VMware Update Manager is
installed on a 64-bit Windows Server and runs as a service to assist with host patch
management. VMware vCenter Server provides the following functionality:
Cloning of VMs
Template creation
VMware vMotion and VMware Storage vMotion migration
Initial configuration of VMware Distributed Resource Scheduler (DRS) and
VMware vSphere high-availability clusters
VMware vCenter Server also provides monitoring and alerting capabilities for hosts and
VMs. System administrators can create and apply alarms to all managed objects in
VMware vCenter Server, including:
Datacenter, cluster, and host health, inventory, and performance
Data store health and capacity
Virtual machine usage, performance, and health
The virtualization layer also includes the following VMware technologies and features:
VMware vSphere Web Client
VMware vSphere Distributed Switch (VDS)
VMware vSphere High Availability
VMware DRS
VMware Fault Tolerance
VMware vMotion
VMware Storage vMotion
Raw Device Maps
Resource Pools
Storage DRS (capacity only)
Storage-driven profiles (user-defined only)
Distributed power management (up to 50 percent of VMware vSphere ESXi
hosts/blades)
VMware Syslog Service
VMware Core Dump Collector
VMware vCenter Web Services
Chapter 3 Cross Application Design Guidelines
This chapter presents the following topics:
Overview ............................................................................................................ 15
Flash fundamentals .......................................................................................... 15
Multipathing ....................................................................................................... 17
NUMA ................................................................................................................. 18
Disk provisioning .............................................................................................. 18
Paravirtualized SCSI (PVSCSI) adapters ......................................................... 18
Full bandwidth testing ...................................................................................... 19
Sizing and capacity tools ................................................................................. 19
XtremIO Sizing Tool .......................................................................................... 20
XtremIO Data Reduction Estimator .................................................................. 23
Overview
This section includes universal guidelines that apply to Oracle, Microsoft SQL Server, and
SAP. For example, features like XtremIO Data Protection (XDP) and inline deduplication
apply to any application on the VxBlock System 540. Implementation guidelines like
multipathing and Non-uniform Memory Access (NUMA) are also VMware related concepts
that can be covered by universal best practices. Recommendations that are specific to the
enterprise applications in this paper are included in their respective sections below.
Flash fundamentals
Flash memory is a storage medium designed to electronically store binary information.
Originally developed in the 1980s, the nickname "flash" refers to the memory
erasure process, which to the eye looks like the flash of a camera. The medium is designed
to be electronically erased and reprogrammed.
Flash storage is the use of flash memory as a storage medium, primarily in main
memory, memory cards, USB flash drives, and solid-state drives (SSDs).
Flash storage refers to any device that can function as a storage repository, from a
simple USB device to a fully integrated all-flash storage array. SSDs
are integrated devices that use flash memory and are designed to replace hard disk drives
(HDDs) that use spinning media. SSDs have been developed for many uses, from
consumer devices like laptops to enterprise-grade drives like those used in mission-critical
storage arrays with reliability requirements of 99.999 percent or higher.
Flash technology has introduced a transformational shift in computing and storage. Flash
provides orders of magnitude faster access to persistent data than traditional magnetic
storage by eliminating rotational delay and seek time. High-performance flash storage can
now enable workloads that were previously not possible. Although flash storage devices
currently have higher $/GB cost than magnetic products, this is something that can often
be mitigated with data reduction technologies, such as deduplication and compression.
Also, flash storage provides a dramatically lower cost per operation on a $/IO basis. The
lower space, power, and cooling costs of flash storage also improve the economics
compared to traditional HDD solutions.
Each storage controller maintains a table that manages the location of each data block on
SSD. The table has two parts:
The first part of the table maps the host Logical Block Address (LBA) to its content
fingerprint.
The second part of the table maps the content fingerprint to its location on SSD.
Using the second part of the table provides XtremIO with the unique capability to distribute
the data evenly across the array and place each block in the most suitable location on
SSD. It also enables the system to skip a non-responding drive or to select where to write
new blocks when the array is almost full and there are no empty stripes to which to write.
In a typical write operation, the incoming data stream reaches any one of the active-active
storage controllers and is broken into data blocks. For every data block, the array
fingerprints the data with a unique identifier. The array maintains a table with this
fingerprint to determine if incoming writes already exist within the array. The fingerprint is
used also to determine the storage location of the data. The LBA to content fingerprint
mapping is recorded in the metadata, within the storage controller memory. The system
checks if the fingerprint and the corresponding data block have been stored previously.
If the fingerprint is new, the system:
Compresses the data
Chooses a location on the array where the block will go (based on the fingerprint,
and not the LBA)
Creates the "fingerprint to physical location" mapping
Increments the reference count for the fingerprint by one
Encrypts the data
Performs the write
In case of a "duplicate" data block, the system records the new LBA to fingerprint
mapping, and increments the reference count on this specific fingerprint. Since the data
already exists in the array, it is neither necessary to change the fingerprint to physical
location mapping nor to write anything to SSD. All metadata changes occur within the
memory. The deduplication operation is carried out faster than the first unique block write.
The actual write of the data block to SSD is carried out asynchronously.
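The write path described above can be summarized in a short, illustrative model. The following Python sketch is a conceptual approximation only; it is not XtremIO code, SHA-1 stands in for the array's actual fingerprinting scheme, and compression, encryption, and overwrite reclamation are omitted:

```python
import hashlib

class ContentStore:
    """Toy model of a content-addressed block store with inline deduplication."""
    def __init__(self):
        self.lba_to_fp = {}   # table part 1: LBA -> content fingerprint
        self.fp_to_loc = {}   # table part 2: fingerprint -> physical location
        self.refcount = {}    # fingerprint -> number of LBAs referencing it
        self.ssd = {}         # physical location -> stored block
        self.next_loc = 0

    def write(self, lba, block):
        fp = hashlib.sha1(block).hexdigest()  # fingerprint the incoming block
        if fp in self.fp_to_loc:
            # Duplicate: record the LBA-to-fingerprint mapping and bump the
            # reference count; nothing is written to SSD.
            self.refcount[fp] += 1
        else:
            # New content: choose a location based on the fingerprint, not the LBA.
            loc = self.next_loc
            self.next_loc += 1
            self.fp_to_loc[fp] = loc
            self.refcount[fp] = 1
            self.ssd[loc] = block   # compression/encryption omitted in this sketch
        # NOTE: decrementing the old fingerprint's refcount on LBA overwrite is omitted.
        self.lba_to_fp[lba] = fp

    def read(self, lba):
        return self.ssd[self.fp_to_loc[self.lba_to_fp[lba]]]
```

The key point the model captures is that a second write of identical content updates only in-memory metadata: the reference count increases, but nothing new lands on SSD.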
XDP

The XtremIO array with XDP works in a fundamentally different way than most other
storage. The XtremIO array stores blocks based on individual content fingerprinting
instead of logical block addresses. Traditional arrays update logical block addresses that
use a fixed physical location on disk (causing the high I/O overhead of a stripe update).
Every update to the data at a specific logical block address on XtremIO is written to a new
location on the disk, based on the content fingerprint. If the content already exists in the
array, the block is deduplicated.
As with traditional RAID, XDP tries to do as many full stripe writes as possible by bundling
new and changed blocks and writing them to empty stripes available in the array.
However, with XDP the unavailability of a full stripe does not cause the high levels of
partial stripe update overhead found in traditional RAID, because XtremIO does not
update data in place. Rather, the array always places data in the stripe with the most
free space available. The net result is that XtremIO almost never incurs the full
RAID 6 I/O overhead of a stripe update. XtremIO average update performance is nearly
40 percent better than that of RAID 10, the RAID level with the highest performance.
A standard XtremIO X-Brick contains 25 SSDs, with 23 for data and two for parity. When
one of the 25 SSDs in an X-Brick fails, XDP quickly rebuilds the failed drive, while
dynamically reconfiguring incoming new writes into a 22+2 stripe size to maintain N+2
double failure protection for all new data written to the array. When the rebuild completes
and the failed drive is replaced, incoming writes are again written with the standard 23+2
stripe.
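As a back-of-envelope illustration of the capacity efficiency of these stripe geometries, the parity overhead can be computed directly. This is illustrative arithmetic, not an XtremIO specification:

```python
def parity_overhead(data_ssds, parity_ssds=2):
    """Fraction of raw capacity consumed by parity in an N+2 stripe."""
    return parity_ssds / (data_ssds + parity_ssds)

# Normal XDP stripe on a 25-SSD X-Brick: 23 data + 2 parity
normal = parity_overhead(23)

# After a drive failure, new writes use a 22+2 stripe on the 24 remaining drives
degraded = parity_overhead(22)

# Traditional RAID 10 mirrors every block, for comparison
raid10 = 1 / 2
```

With two parity drives per 25, XDP's overhead is 8 percent, versus the 50 percent capacity cost of RAID 10 mirroring; even the degraded 22+2 stripe stays near 8.3 percent.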
Deduplication

XtremIO data reduction services calculate a unique fingerprint for every application data
block entering the array based on its payload contents. This unique ID is used to uniformly
distribute and balance data throughout every flash module in the array. The deduplication
engine identifies duplicate fingerprints in real-time as data blocks pass through the
controllers. The system inherently spreads the data across the array, using all SSDs
evenly and providing perfect wear leveling. Duplicate objects never translate into physical
data writes and are replaced with in-memory metadata pointers in the metadata fabric.
Data compression is applied inline and in real time, after deduplication, against all data
blocks, resulting in a highly optimized flash footprint.
Random/sequential I/O

When accessing a block of data on a spinning disk, the disk actuator arm has to move the
head to the correct track (the seek time), then the disk platter has to rotate to locate the
correct sector (the rotational latency). This mechanical action takes time; the amount of
time depends on where the head was previously located, and the location of the next
sector on the platter. If the next piece of information is directly under the head, you do not
need to wait, but if it has just passed the head, you incur the penalties of seek time and
rotational latency again. This type of operation is random I/O. But if the next block
is located directly after the previous one on the same track, the disk head would
encounter it immediately afterwards, incurring no wait time (no latency). This is a
sequential I/O.
The idea of sequential I/O does not exist with flash memory because there is no physical
concept of blocks being adjacent or contiguous. Two blocks may have consecutive block
addresses, but this has no bearing on where the actual information is electronically stored.
Therefore, all-flash I/Os have the same latency whether the application accesses
sequential or random logical block addresses of a file.
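The mechanical penalty can be quantified with the standard rotational-latency formula: on average, the platter must turn half a revolution. This is an illustrative calculation, and the seek time used below is an assumed example value, not a measured figure:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

def hdd_random_access_ms(avg_seek_ms, rpm):
    """Approximate random-access time: average seek plus average rotational latency."""
    return avg_seek_ms + avg_rotational_latency_ms(rpm)
```

A 15K RPM disk averages 2 ms of rotational latency, so with an assumed 3.5 ms average seek, a random access costs about 5.5 ms before any data transfer begins; flash has no equivalent mechanical term.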
Multipathing
Multipathing allows the use of more than one physical path that transfers data between
the host and an external storage device. In case of a failure of any element in the SAN
network, such as an adapter, switch, or cable, I/O streams can switch to another physical
path, which does not depend on the failed component. This process of path switching to
avoid failed components is known as path failover. In addition to path failover,
multipathing provides load balancing. Load balancing is the process of distributing I/O
loads across multiple physical paths. Load balancing reduces the potential for incurring
single path bottlenecks.
XtremIO supports VMware vSphere Native Multipathing (NMP) technology. For best
performance, Dell EMC recommends automatically configuring native vSphere
multipathing for XtremIO volumes with ESI, or manually as follows:
1. Set the native round-robin path selection policy on XtremIO volumes that are
presented to the ESXi host.
2. Use the ESXi command line interface (CLI) to set the vSphere NMP round-robin
path switching frequency for XtremIO volumes from the default value (1,000 I/O
packets) to one.
These settings ensure optimal distribution and availability of load between I/O paths to
XtremIO storage. Dell EMC PowerPath®/VE for ESXi manages XtremIO devices as
generic devices. You must enable generic loadable array module (LAM) support for PowerPath/VE
to recognize and manage XtremIO devices. You can also use EMC VSI for XtremIO for
the NMP round robin configuration.
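The effect of lowering the round-robin switching frequency from the default of 1,000 I/Os to one can be illustrated with a toy path-selection model. This is a conceptual sketch, not the NMP implementation:

```python
class RoundRobinPaths:
    """Toy model of round-robin path selection with a configurable
    I/O-operations-per-path switching limit (vSphere default: 1000)."""
    def __init__(self, paths, iops_limit=1000):
        self.paths = paths
        self.iops_limit = iops_limit
        self.idx = 0      # currently selected path
        self.count = 0    # I/Os issued on the current path

    def next_path(self):
        path = self.paths[self.idx]
        self.count += 1
        if self.count >= self.iops_limit:   # switch after iops_limit I/Os
            self.count = 0
            self.idx = (self.idx + 1) % len(self.paths)
        return path
```

With `iops_limit=1`, consecutive I/Os alternate across paths immediately, spreading load across all available paths instead of sending 1,000 I/Os down one path before switching.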
NUMA
Non-Uniform Memory Access (NUMA) is an architecture that links several small,
cost-effective nodes using a high-performance connection. Each node contains processors and
memory, much like a small SMP system. However, an advanced memory controller allows
a node to use memory on all other nodes, creating a single system image. When a
processor accesses memory that does not lie within its own node (remote memory), the
data must be transferred over the NUMA connection, which is slower than accessing local
memory. Memory access times are not uniform and depend on the location of the memory
and the node from which it is accessed.
Virtual NUMA (vNUMA) exposes NUMA topology to the guest operating system, allowing
NUMA-aware guest operating systems and applications to make the most efficient use of
the underlying hardware NUMA architecture. By default, vNUMA is enabled only for virtual
machines with more than eight vCPUs. This feature can be enabled for smaller virtual
machines, however, while still allowing ESXi to automatically manage the vNUMA
topology.
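As a hedged example, the per-virtual-machine advanced setting that controls this threshold is numa.vcpu.min (its default of 9 produces the "more than eight vCPUs" behavior). Lowering it in the virtual machine's advanced configuration exposes vNUMA to smaller virtual machines; verify the setting name and behavior against the documentation for your vSphere version:

```
numa.vcpu.min = "4"
```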
Disk provisioning
All volumes in the XtremIO storage array are thin provisioned, meaning that the system
consumes capacity only when it needs to perform a unique write. XtremIO determines
where to place the unique data blocks physically inside the cluster after it calculates their
fingerprint IDs. The array never pre-allocates or thick-provisions storage space before
writing. As a result, blocks can be stored at any location in the system and the data is
written only when unique blocks are received. Unlike thin provisioning with many disk-
oriented architectures, XtremIO has no space creeping or garbage collection. Volume
fragmentation over time is not applicable to XtremIO (as the blocks are scattered equally
over the random-access array space), so no defragmentation utilities are needed. XtremIO's
inherent thin provisioning enables consistent performance and data management across
the entire life cycle of the volumes, regardless of the system capacity utilization or the
write patterns to the system.
Use eager-zeroed thick format for all virtual disks. Although eager-zeroed thick disk has
all space allocated and zeroed out at the time of creation, XtremIO is zero-block aware,
and therefore there is no physical capacity allocated. The combination of eager-zeroed
thick format and XtremIO provides the best combination of performance and space
efficiency.
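The interaction between eager-zeroed thick disks and zero-block awareness can be modeled in a few lines of Python. This is a conceptual sketch, not array code; deduplication and compression are omitted:

```python
ZERO_BLOCK = bytes(4096)

class ThinVolume:
    """Toy model of a thin-provisioned, zero-aware volume: logical capacity is
    promised up front, but physical capacity is consumed only by non-zero writes."""
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # capacity presented to the host
        self.written = {}                     # LBA -> block (non-zero writes only)

    def write(self, lba, block):
        if block == ZERO_BLOCK:
            self.written.pop(lba, None)       # zero blocks are never stored
        else:
            self.written[lba] = block

    def physical_blocks_used(self):
        return len(self.written)
```

Zeroing an entire eager-zeroed thick disk consumes no physical capacity in this model, which is why the format costs nothing on XtremIO while still presenting fully pre-allocated behavior to the guest.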
Paravirtualized SCSI (PVSCSI) adapters
The Paravirtual SCSI adapter uses paravirtualization, enabling the OS kernel to
communicate directly with the virtualization layer, in this case the ESXi hypervisor.
Therefore, the first important step in creating a virtual machine is to select the operating
system, as the PVSCSI adapter only works for OSs like Windows Server, Red Hat
Enterprise Linux, and some others. For a list of the operating systems that can support the
PVSCSI adapters, refer to VMware KB 1010398, Configuring disks to use VMware
Paravirtual SCSI (PVSCSI) adapters. The benefits of paravirtualization are greater
performance and lower CPU utilization. Some of the guidelines for using PVSCSI
adapters are:
Use the PVSCSI adapter for medium to large workloads that require better
throughput, lower latency, and lower CPU cost.
If the virtual machine performs a low number of IOPS, there is no need to
change from the default LSI Logic or BusLogic Parallel adapter.
This recommendation applies only to vSphere versions 5.1.x and 5.5.x. According
to VMware Knowledge Base article 2053145, large-scale workloads with intensive
I/O patterns might require queue depths significantly greater than Paravirtual SCSI
default values.
Full bandwidth testing
This recommendation applies to new converged platforms. On receiving the new VxBlock
System 540 for mixed application workload testing, we conducted a bandwidth test. We
used the following command, which writes a random file to disk:

dd if=/dev/urandom of=<file_name> bs=<block_size> count=<block_count>

Writing random (incompressible) data is important because we want to test bandwidth
without any of the benefits of inline compression and deduplication. This test assists with
validating maximum bandwidth and quickly identifies bottlenecks.
Sizing and capacity tools
Dell EMC sizing and performance analysis tools are designed to help our customers
resolve performance challenges and size new storage systems. For example, our tools
can assist with performance, capacity, and protection in moving to a new converged
platform like the VxBlock System 540. The two most common requests from customers
include:
Capacity planning—Before the emergence of flash, DBAs and storage
administrators had to carefully plan capacity for maintaining full copies of
databases. Flash turns this equation around completely: converged platforms like
the VxBlock System 540 with XtremIO have inline deduplication, compression, and
thin provisioning which offer big space savings.
Managing database copies—DBAs frequently need to manage a process that
iteratively creates copies of a running production database. These copies are used
for functions such as backup, data warehouse staging, ETL, monthly close, batch
processing, test/dev and so on. However, the copy process always contends with
the production database in terms of resources, and thus the Oracle DBA must
attempt to avoid any impact of the copy process on production database
performance. In a legacy mechanical disk-based array context, this typically took
the form of things like BCVs (which wasted capacity) or snapshots (which often had
some cost in terms of performance). In an enterprise flash context, as XtremIO
includes inline deduplication, the capacity cost of making database copies is vastly
reduced. Also, unlike most legacy arrays, the performance cost of XtremIO
snapshots is nil. However, given the high cost of flash, having visibility into the
capacity cost of managing database copies is critical.
Database storage-related performance issues are now largely alleviated by the intelligent
application of flash. When a database is on flash, the DBA’s focus shifts from performance
to capacity. That is because flash provides an enormous pipe in terms of performance, but
is tight on capacity. EMC has created a set of tools that address these concerns:
XtremIO Sizer (XS) —XS enables the design of a configuration that includes
XtremIO at the storage layer. This configuration runs a given workload, based on a
set of customer-supplied metrics, which includes a growth factor over time. This
eliminates the uncertainty and risk associated with making the transition from a
legacy array to an XtremIO all-flash array.
XtremIO Data Reduction Estimator (XDRE) —This tool can be used to determine
the deduplication ratio for specific volumes. Deduplication of primary, original
Oracle database data will typically be minimal. This is because Oracle defeats
block-based deduplication algorithms by including unique data in each database
block. However, making copies of production Oracle data produces deduplication,
and is a common DBA task. Thus XDRE allows the DBA to determine beforehand
the capacity cost of making a given set of database copies (for example, in the form
of snapshots) on an EMC XtremIO array.
We look at each of these tools, with regard to descriptions and benefits, in the following
sections.
XtremIO Sizing Tool
The XtremIO Sizing Tool can size an XtremIO array configuration based on the
customer’s requirements. The tool supports the following applications:
Databases
Custom applications
End-user computing (one of the following use cases)
VMware Horizon View
Citrix XenDesktop and XenApp PVS
Citrix XenDesktop and XenApp MCS
The tool recommends an X-Brick size and cluster based on the user’s inputs. An X-Brick
can be sized as a standalone cluster or as part of a VxBlock configuration. After the
completion of a sizing calculation, the resulting presentation includes the user’s inputs and
the complete sizing configuration.
The operation of the Sizing Tool is divided into the following sections:
Setup section
XtremIO sizing analysis configuration section
Inputs
Results
After setting up the Sizing Tool, the following input values can be inserted into the tool as
shown here:
Figure 1. XtremIO Sizing Tool – sample input section
Based on the input values, the following chart is displayed for capacity planning and
forecasting:
Figure 2. XtremIO Sizing Tool – sample database calculations
Figure 3. XtremIO Sizing Tool – sample results section
These are the sample recommendations:

Recommended X-Brick: 10 TB X-Brick
XtremIO Cluster: 6 X-Brick Cluster
Total number of X-Bricks: 6

Recommendations:
The total number of X-Bricks required based on capacity is: 1
The recommended X-Brick size based on capacity only is: 20 TB X-Brick
The total number of X-Bricks required based on IOPS is: 6
Based on IOPS, a larger X-Brick cluster is required; the recommended X-Brick size is: 10 TB X-Brick
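The shape of this recommendation can be illustrated with a simple sizing rule: the cluster must satisfy the larger of the capacity-driven and IOPS-driven X-Brick counts. This is an illustrative Python sketch, not the Sizing Tool's algorithm, and the per-X-Brick ratings passed in are hypothetical values:

```python
import math

def bricks_needed(required_tb, required_iops, brick_tb, brick_iops):
    """Illustrative sizing rule: take the worst case of capacity and IOPS.
    brick_tb and brick_iops are hypothetical per-X-Brick ratings, not
    published specifications."""
    by_capacity = math.ceil(required_tb / brick_tb)
    by_iops = math.ceil(required_iops / brick_iops)
    return max(by_capacity, by_iops)
```

In the sample above, capacity alone needs one X-Brick but IOPS requires six, so the IOPS constraint dictates a six X-Brick cluster.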
The benefits of the XtremIO Sizing Tool can be described as follows:
Useful for capacity planning and forecasting.
Gives an understanding of the size of the hardware to procure.
Helps in forecasting the IOPS requirement as the database grows in size.
Helps in proactive budgeting of infrastructure and database costs.
Provides holistic sizing solutions to the customer.
XtremIO Data Reduction Estimator
The XtremIO Data Reduction Estimator is a stand-alone tool that can be used to
determine the exact deduplication ratio for specific existing volumes on another storage
device. This tool supports multiple platforms and environments, and can be run against
the following targets:
A drive or mount point
A raw device
A folder
The Data Reduction Estimator performs a read-only scan, then analyzes the content in a
similar fashion to an XtremIO storage cluster, and reports the exact deduplication savings,
as if the data were written to an XtremIO X-Brick. The tool can also scan multiple targets
in parallel, and reports the deduplication rate per target as well as the global deduplication
across all targets scanned. The data deduplication ratios are directly proportional to the
volume of redundant data on the target. For example, if there is more repetitive data in the
target, a higher ratio of data deduplication can be expected. If there are multiple operating
systems, lower data deduplication ratios are expected.
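The scan logic can be approximated in a few lines of Python. This is a conceptual sketch of the analysis, not the Estimator's implementation, and SHA-1 stands in for the fingerprinting scheme:

```python
import hashlib

def estimate_reduction(data, block_size=8192):
    """Conceptual sketch of a data-reduction scan: zero blocks count as
    thin-provisioning savings, and the deduplication ratio is non-zero
    blocks divided by unique non-zero blocks."""
    zero = bytes(block_size)
    total = zeros = 0
    unique = set()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        total += 1
        if block == zero:
            zeros += 1                     # never written, thin-provisioning savings
        else:
            unique.add(hashlib.sha1(block).hexdigest())
    nonzero = total - zeros
    dedup_ratio = nonzero / len(unique) if unique else 1.0
    return {"blocks": total, "zero_blocks": zeros, "dedup_ratio": dedup_ratio}
```

Zero blocks are reported separately and excluded from the deduplication ratio, mirroring the way the tool distinguishes thin-provisioning savings from deduplication savings.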
Figure 4. Data Reduction Estimator screen
This tool can be used with the XtremIO storage to get the following information:
Table 2. Benefit of the Data Reduction Estimator tool

inDevice: The device currently being scanned.
Size: The size of the device currently being scanned.
Thin provisioning savings: The amount of scanned data consisting of zero blocks; these are not written to the XtremIO storage array and do not count toward the deduplication ratio.
Deduplication ratio: The deduplication rate, expressed as <rate>:1.
Overall savings: How much space can be saved, taking into account both the deduplication rate and thin provisioning.
Size on XtremIO array: How much space the data would consume if copied to an XtremIO array; this is the size divided by the deduplication rate.
Block size: The block size currently being sampled.
Progress: Progress of the current scan.
Speed: The current scan speed. Hover the mouse pointer over it to display the average speed.
Temp space: The tool stores working data in files in the %TEMP% folder; this is the sum of all space currently used.
Elapsed/Remaining: Amount of scan time elapsed and remaining.
Chapter 4 Deployment Best Practices for Oracle
This chapter presents the following topics:
Overview
Design considerations
Data performance analysis
Benefits
Self-service monitoring with Oracle Enterprise Manager 12c Plug-in
VMware
Storage virtualization
Virtualizing compute (vCPUs)
VMware memory configuration guidelines
Database types
Decision Support Systems (DSS)
Overview
Technology is transforming how Oracle Database is positioned on infrastructure. IT
organizations are modernizing their datacenters with converged and hyper-converged
platforms that accelerate databases with all-flash storage. Key benefits for using a
converged infrastructure include:
Server consolidation—VMware virtualization enables consolidation of
heterogeneous applications on to fewer servers.
Storage consolidation—EMC all-flash arrays have enabled application consolidation
in which multiple applications share the same flash array.
Ease of management—Converged and hyper-converged platforms like VxBlock
provide IT organizations with the ability to manage everything with a few tools, thus
simplifying administration.
Converged/integrated—All components have been tested and integrated into a
converged platform and there is only one call to make for support. Using a
converged platform means less complexity and faster time to value for the IT
organization.
Converged platforms empower the IT organization to support mixed application workloads
and maximize savings while lowering administration overhead. Using converged and
hyper-converged platforms for Oracle databases and others can mean changing how we
architect our infrastructure. Traditionally, DBAs would dedicate CPU, networking, and
storage to ensure deterministic performance. A VxBlock with all-flash storage provides
consistent predictable performance at all layers without the need for dedicated and
complex infrastructure design.
In this section, we review best practices for Oracle Database 12c on a VxBlock System
540 with XtremIO storage.
Design considerations
Traditionally, the storage design for Oracle and other databases was complex. DBAs
worked closely with storage administrators to determine the type of RAID, number of
disks, number of LUNs, and capacity requirements. With the introduction of all-flash
arrays and converged platforms, storage design has been simplified.
Performance is a key design consideration for Oracle databases. Dell EMC has Oracle
specialists that work with DBAs to collect database information needed to estimate
performance requirements. Oracle Automatic Workload Repository (AWR) reports are
used with the Dell EMC Workload Profile Assessment tool. The Oracle specialist using
these tools can review the performance profile of the database and work with the DBA to
properly size the converged platform. In addition to database performance analysis, the
Oracle specialists can analyze storage performance if the database is on a Dell EMC
array. For example, analysis might include front-end port read and write response times,
IOPS, queuing, and many other storage metrics. We explore the tools available for sizing
Oracle databases in more detail later in this paper.
For capacity sizing, DBAs and storage administrators consider the initial database
placement size and estimate future growth. Planning capacity has become easier as the
VxBlock System 540 with XtremIO has inline deduplication, inline compression, and thin
provisioning.
Inline deduplication means the same data is written only once to the flash array.
Metadata content addressing is used to manage duplicate data so that only unique
data is written to the array.
Inline compression can significantly compress Oracle databases, with space
savings of up to 2:1 relative to the logical size of the database. For example, a
1,000 GB database without compression will take approximately 500 GB of
physical space with inline compression.
Thin provisioning is a virtualization technology that logically allocates more space
than the application is physically using. For example, a storage administrator can
provision 1 TB of space to a database and the database administrator sees the full
terabyte, but if the database is not using the entire space the storage array can use
it for other applications.
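A back-of-envelope way to combine these effects when planning capacity is shown below. The ratios used are illustrative and workload-dependent, not guaranteed figures:

```python
def effective_logical_capacity(physical_tb, dedup_ratio, compression_ratio):
    """Back-of-envelope estimate of the logical capacity an all-flash array can
    present once inline deduplication and compression are applied.
    Both ratios are assumed, workload-dependent inputs."""
    return physical_tb * dedup_ratio * compression_ratio
```

For example, 10 TB of physical flash with an assumed 3:1 deduplication and 2:1 compression could present roughly 60 TB of logical capacity.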
Oracle specialists can assist with capacity sizing on the VxBlock System 540. For
example, most production systems have copies for test and development purposes. An
Oracle specialist can capture the entire database ecosystem and develop a plan for
capacity and performance.
Data performance analysis
The Oracle sizing and performance analysis tool, MiTrend Performance Analyzer (MPA),
is a comprehensive performance analysis tool that runs off Oracle AWR reports. MPA
makes specific recommendations regarding performance-tuning changes. In a storage
context, these recommendations typically include configuring enterprise flash storage in
some way.
Dell EMC employees and partners help with database performance analysis and sizing
new systems using the Workload Profile Assessment tool for Oracle. MiTrend uses
AWR/StatsPack reports to supply performance statistics, for example, IOPS and MB/s.
AWR and StatsPack are the Oracle performance gathering and reporting tools. Originally
the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle8i
introduced the StatsPack functionality that Oracle9i extended. Starting in Oracle 10g
StatsPack has evolved into the Automatic Workload Repository (AWR).
The Workload Assessment tool accepts AWR files, performance data from Dell EMC and
many non-Dell EMC arrays, and data from many operating systems and some
applications. With a personalized approach, the Workload Assessment tool provides
comprehensive performance analysis of your database and supporting infrastructure. The
tool profiles performance, identifies bottlenecks and trends, and shows how performance
improves with all-flash arrays.
As part of the analysis, a Dell EMC Oracle specialist or partner reviews the findings,
assisting with resolving performance issues, driving better performance, or sizing for
performance or capacity. Working with a Dell EMC Oracle specialist means the
assessment is free, and the analysis is personalized and delivered in a presentation.
Figure 5. MiTrend Instruction Screen for Oracle Performance
For each database ZIP file uploaded, the user receives the following items:
An Excel spreadsheet with the raw data.
A PowerPoint deck for customer presentations:
The first 12 to 16 slides cover customer presentation basics with another 25 or
more including detailed information. Two sample reports are attached below for
Real Application Clusters (RAC) and non-RAC.
The reports contain performance-based estimates and are similar to the AWR
or Statspack reports supplied.
The slides focus on IOPS, MB/s, estimated drive counts to support the IOPS,
IO-latencies, block-read sizes, read/write ratios, and so on.
Based on the estimates published in the PowerPoint slides, we can also derive capacity
growth requirements and requirements for disaster recovery, clones and snapshots, and
backups to disk. A typical presentation of the Oracle MiTrend WPA breaks into three
areas:
Storage optimization statistics
I/O latencies
I/O sizes and I/O latencies
Figure 6. IOPS and Drive Estimates report from MiTrend presentation
The system summary slide captures an overview of the storage performance statistics.
The IOPS and Drive Estimates slide shows the RAID-adjusted IOPS at peak and at the
95th percentile, along with the number of 15K or flash drives needed to support the
estimated IOPS in either a RAID 10 or RAID 5 configuration. For Oracle, we typically
model storage solutions using either the peak or 95th percentile values.
Benefits
The MiTrend Data Performance Analysis tool helps users and customers in the following
ways:
Demonstrates Dell EMC's Oracle application awareness and expertise.
Helps to bridge the gap between storage teams and DBA teams.
Makes Oracle database performance problems easy to understand.
Enables insight into the right fit for different Dell EMC technologies.
Allows the infrastructure staff to open a broader conversation on Oracle DR,
backup, dev/test refreshes, and new Oracle projects.
Helps non-DBAs to understand the practical details of Oracle performance tuning
without needing to learn how to work with the Oracle Data Dictionary, sqltrace, or
explain plans.
Given a sufficient number of AWR or Statspack reports for a database or its instances,
users can capture the read and write IOPS over time. With the read and write IOPS, they
can RAID-adjust the values for RAID 10 and RAID 5 and, with some assumptions about
drive type and per-drive IOPS ratings, estimate the number of drives of a particular type
needed to support the estimated RAID-adjusted IOPS. This enables correct sizing of a
configuration for a particular workload.
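That back-end calculation can be sketched as follows. The write penalties (2 for RAID 10, 4 for RAID 5) are the standard figures; the per-drive IOPS rating is an assumed input, not a vendor specification:

```python
import math

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

def drives_required(read_iops, write_iops, raid, drive_iops_rating):
    """RAID-adjust front-end IOPS to back-end disk IOPS, then size the drive count.
    drive_iops_rating is an assumed per-drive figure (e.g. ~180 for a 15K disk)."""
    backend = read_iops + write_iops * RAID_WRITE_PENALTY[raid]
    return math.ceil(backend / drive_iops_rating)
```

For example, 8,000 reads and 2,000 writes per second become 16,000 back-end IOPS under RAID 5 but only 12,000 under RAID 10, which is why the RAID choice changes the drive count for the same front-end workload.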
Self-service monitoring with Oracle Enterprise Manager 12c Plug-in
Oracle Enterprise Manager (EM) 12c is one of the most popular DBA tools as it integrates
with databases to enable management of database environments. The challenge most
DBAs have is using database statistics to analyze storage performance. For example,
looking at database I/O wait events such as db file sequential read, db file parallel read,
and others, can give a DBA an indication about storage response times, but not the
complete picture. In a Unisphere Research study on the drive to innovation, 80 percent
of data managers agreed that it is important to improve DBA-to-storage administrator
communications.
The Dell EMC VxBlock plug-in for Enterprise Manager solves the problem by providing the
DBA read-only access to storage information while assuring storage administrators that
they retain configuration control. This free plug-in can be downloaded from the Oracle
Extensibility Exchange website; search for XtremIO to find it. After initial
collaboration between database and storage administrators, the DBA has access to
features like those shown in the image below.
Figure 7. Monitoring the VxBlock with Enterprise Manager 12c Plug-in
Oracle states that using Enterprise Manager will improve staff productivity by up to 75
percent and we believe adding VxBlock storage monitoring will further increase
productivity. The Dell EMC Storage Plug-in for OEM 12c consists of several components
that work together to collect configuration and performance data from both database
servers and Dell EMC storage systems.
The Dell EMC Home page shows the Storage dashboard, which shows reads, writes and
throughput for selected databases. The DBA can choose to view these storage metrics
over the Last Day, Last Week, or Last Month. The DBA can quickly analyze storage
performance and correlate it to database performance. The DBA can determine in a
matter of minutes if performance issues are, or are not, related to storage.
Figure 8. Storage pane for EM 12c Plug-in
In the next figure, a part of the Database Storage dashboard has been recreated for
readability. Selecting databases in the Database Storage pane displays the metrics in the
Storage page as shown below. Multiple database storage targets can be graphed
simultaneously. Using the table, DBAs can quickly see which storage array the databases
are on and the reads per second, writes per second, reads MB per second, and writes MB
per second.
Figure 9. Storage dashboard for EM 12c Plug-in
In the Array pane example shown here, you can see the arrays being monitored and the
incidents and problems for each array. For example, two XtremIO arrays appear to be out
of compliance—one for one day, and the other for fourteen days. DBAs find the Array
pane useful in checking if the arrays they want to monitor have been registered in
Enterprise Manager and to quickly investigate any incidents.
Figure 10. Array pane for EM 12c Plug-in
The Database storage page displays information about the Oracle database and storage
associated with the database. The Hierarchy pane, shown next, is at the top of the page
and displays the technology stack that is running the database. Clicking any of the images
in the hierarchy allows the DBA to drill down into the supporting storage performance
metrics.
Figure 11. Hierarchy pane
The Storage pane figure below displays response time, throughput, and IOPS for a
selectable period of time, that is, last day, last week, or last month. The Storage pane is at
the top half of the figure and helps DBAs view overall storage performance. At the bottom
of the page is the Database pane, which has the same performance metrics and time
selections as the Storage pane. DBAs can correlate the Storage and Database panes to
see overall storage performance relative to database performance.
Figure 12. Storage pane (top) and Database pane (bottom)
The XtremIO Storage dashboard, as shown below, displays detailed information about
XtremIO arrays. Using this view, you can click array components and drill down into
secondary pages for more information on specific components. The Initiator Group pane
shows read and write throughput, read IOPS, and write IOPS. This is information to which
DBAs do not normally have access. Using this view, DBAs can see the performance
metrics from the array.
Figure 13. Array pane with selectable times
Collaboration between the DBA and storage administrators is strengthened because both
share the same view of performance metrics. Using the EM plug-in frees up time for the
DBA and streamlines activities like daily monitoring, historical reporting, performance
tuning, and storage capacity planning. Additionally, for mission-critical systems the DBA
can set alerts for notification when a performance threshold has been breached. For
example, setting an alert for when read-response time exceeds two milliseconds will
enable the DBA to quickly remediate the problem.
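The alerting guidance above can be sketched as a simple threshold check. The function and metric format below are illustrative assumptions, not part of the Enterprise Manager plug-in's actual API:

```python
# Hypothetical sketch of the alerting rule described above: flag any
# sampled read-response time that breaches the two-millisecond limit
# so the DBA can investigate. Names and values are illustrative only.

READ_RESPONSE_THRESHOLD_MS = 2.0

def breached_samples(samples_ms, threshold_ms=READ_RESPONSE_THRESHOLD_MS):
    """Return the latency samples (in milliseconds) that exceed the alert threshold."""
    return [s for s in samples_ms if s > threshold_ms]
```

In practice the threshold and notification mechanics are configured in Enterprise Manager itself; the sketch only shows the comparison being described.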
Using the VxBlock EM plug-in, the DBA can intelligently manage the entire Oracle stack
from the database down to storage. The capabilities of XtremIO storage become visible to
the DBA, enabling a much more comprehensive approach to enterprise monitoring.
I/O devices and parallelization

Host Bus Adapter (HBA) and queue depth settings are important for optimizing the
performance of Oracle databases. A Host Bus Adapter is a card installed in a server that
provides input/output (I/O) processing and physical connectivity from the host to the
storage system. Queue depth is a configurable HBA parameter that defines the number of
I/O requests that can be queued at one time. Common misconfiguration problems include
leaving the queue depth at its default value or maximizing it, both of which can degrade
performance.
The recommendation is to set the queue depth to the vendor-recommended value. For
example, if one host is connected, the queue depth settings are:
256 for QLogic HBA
128 for Emulex HBA
Note that the queue depths may change after this paper is published, so we encourage
the operating system and storage administrators to validate the vendor recommendations.
In the paper Oracle Best Practices with XtremIO on Linux 6.x, it is recommended that as
the number of hosts increases, queue depth settings should decrease. For example, when
connecting two hosts, the best practice is to reduce the setting to half of the maximum
value:
128 for QLogic HBA
64 for Emulex HBA
Queue depth settings will also require tuning depending upon the VxBlock configuration,
number of applications, and storage utilization. We recommend engaging Oracle
specialists to assist with reviewing HBA settings if you have any concerns or questions.
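The scaling guidance above, halving the per-host queue depth as hosts are added, can be expressed as a small helper. The maximum values are the single-host settings quoted earlier; the linear-scaling rule is a simplification to validate against current vendor documentation:

```python
# Sketch of the guidance above: start from the single-host maximum queue
# depth (256 for QLogic, 128 for Emulex) and divide by the number of
# hosts sharing the array. A simplification of the vendor guidance, not
# a substitute for checking current vendor recommendations.

SINGLE_HOST_MAX = {"qlogic": 256, "emulex": 128}

def recommended_queue_depth(hba_vendor, num_hosts):
    """Per-host queue depth: single-host maximum divided by host count."""
    if num_hosts < 1:
        raise ValueError("need at least one host")
    return max(SINGLE_HOST_MAX[hba_vendor.lower()] // num_hosts, 1)
```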
Until this point, HBA queue depth settings were discussed at the operating system level. If
the Oracle database(s) have been virtualized with VMware, setting the ESX Host
recommended settings is another optimization to check. Using the Virtual Storage
Integrator (VSI) from Dell EMC enables VMware vCenter administrators to provision,
monitor, and manage vSphere data stores on Dell EMC storage arrays. The VSI plug-in
can be downloaded for free, and once installed can greatly simplify storage management
in vSphere.
A YouTube video, EMC Virtual Storage Integrator (VSI) 6.6.3 & XtremIO, shows how easy
it is to implement the ESX Host recommendations. The vSphere administrator right-clicks
the cluster and selects All EMC VSI Plugin Actions > ESX Host Settings. The following
table shows the ESX Host Settings from the video:
Table 3. ESX host settings

Disk Settings
SchedNumReqOutstanding: 256
SchedQuantum: 64
DiskMaxIOSize: 4096

Native Multipathing (NMP) Settings
Path selection policy: Round Robin (RR)
Round Robin path switching frequency: 1 I/O packet
HBA queue depth: 256
Optimize settings for cloning to XtremIO volumes: Enabled
In the ITZIKR’S BLOG post XtremIO Host Configuration for VMware, the author reviews
the background for several of the parameters in the table above. For example, the
SchedNumReqOutstanding setting defines the maximum number of active storage
commands (I/Os) allowed at any given time at the VMkernel. Generally, you should use
the ESX Host Settings from the VSI plug-in.
Compression

XtremIO automatically compresses data after deduplication. This feature is always on and
is applied only to unique data blocks. Data compression is
performed in real time and not as a post-processing operation. When initially installing or
migrating a database to a VxBlock System 540, inline compression offers an immediate
physical space savings. The degree of compression is unique to each
application/database and dependent on the uniqueness of the data. Our tests for Oracle
databases show the range of compression is 1.5 to 2.5 times with a median of 2.0 times.
The table below shows a range of physical space savings based on the uniqueness of
data as it applies to inline compression on XtremIO.
Table 4. Inline compression on XtremIO (physical space consumed after compression)

Compression   100 GB database   1000 GB database   3000 GB database
1.5X          66 GB             666 GB             2000 GB
2.0X          50 GB             500 GB             1500 GB
2.5X          40 GB             400 GB             1200 GB
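The table values follow from dividing the logical database size by the compression ratio, truncated to whole gigabytes. A quick check (the function name is illustrative, not a Dell EMC tool):

```python
# The Table 4 values follow from dividing the logical database size by
# the compression ratio (results truncated to whole gigabytes, matching
# the table). Illustrative helper, not a Dell EMC sizing tool.

def physical_size_gb(logical_gb, compression_ratio):
    """Physical flash consumed after inline compression."""
    return int(logical_gb / compression_ratio)
```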
A good overview of inline compression with and without Oracle Advanced Compression
Option is provided in the white paper, Oracle 11g and 12c Database Consolidation and
Workload Scalability with EMC XtremIO 4.0. A key finding in the paper is that Oracle
Advanced Compression Option and XtremIO compression are compatible. The
advantages to using storage-based compression over Oracle Advanced Compression
include:
No additional costs as inline compression is part of the VxBlock System 540.
Storage compression does not affect host compute. Database compression
requires significant compute overhead.
Everything on storage is compressed, not just the database.
No additional patching and related administration tasks.
No compounded compression when using XtremIO inline compression with Oracle
Advanced Compression. Forgoing Oracle Advanced Compression with XtremIO can
be a license saving for the business.
We recommend using inline compression with or without Oracle Advanced Compression
Option to achieve a space savings on storage.
VMware
Oracle databases are ideal candidates for VMware virtualization as the DBA gains more
agility, automation, and the capability to consolidate databases.
Storage virtualization
Virtual Machine File System (VMFS) is VMware’s implementation of a high-performance
clustered file system optimized for virtual machines. By clustered, VMware means that
multiple VMs can read and write to the same VMFS data store, making storage
consolidation and management very easy. VMFS works on any SCSI-based protocol
including Fibre Channel, Fibre Channel over Ethernet, and iSCSI. Choosing VMFS gives
the Oracle DBAs access to features like Distributed Resource Scheduler (DRS), High
Availability (HA), vMotion, and Storage vMotion.
To determine if VMFS is right for virtualizing your Oracle databases, consider the
following:
Storage consolidation—VMFS volumes can host one or many virtual machines;
however, Oracle databases tend to be among the most demanding of storage when
compared to other applications. And the application users also expect high service
levels so storage consolidation is less of a benefit with regard to databases. Oracle
DBAs should collaborate closely with the storage and VMware administrators to
validate that the VMFS storage layout has been architected to deliver expected
performance and application SLAs for the business. Generally, production should
have a dedicated VMFS data store for predictable performance and less critical
databases, like those in development, are candidates for greater consolidation.
Ease of administration—Generally, VMware administrators find managing VMFS
datastores easier than RDMs. For example, adding a virtual machine to a VMFS
datastore is an easy administrative task that can be completed quickly.
Support for disabling simultaneous write protection—VMware KB article 1034165
entitled, Disabling simultaneous write protection provided by VMFS using the multi-
writer flag has a good technical overview of when to use this feature and is
recommended reading, particularly when implementing Oracle Real Application
Clusters (RAC). By default, multiple VMs in the same data store cannot write to the
same vmdk file, as this could cause data corruption. For clustering solutions, like
Oracle RAC that maintain write consistency, the recommendation is to disable
simultaneous write protection. There is a maximum of eight physical servers
supported for disabling simultaneous write protection. There are some caveats that
come with disabling simultaneous write protection as the following VMware features
are unsupported:
Virtual backup solutions that leverage snapshots through vStorage APIs
Cloning a virtual machine with one or more disks configured with the multi-writer flag
Storage vMotion
Change Block Tracking (CBT)
Suspending a virtual machine
Hot-extending a virtual disk
Note: This is not a complete list of unsupported features.
Oracle RAC Node Live Migration: Using VMFS means the DBA can use vMotion to non-
disruptively migrate the virtual machine from one server to another. A recent paper by
Principled Technologies entitled, Demonstrating vMotion Capabilities with Oracle RAC on
VMware vSphere is recommended reading for DBAs interested in third-party validation
proving that vMotion of heavily utilized RAC nodes will not result in data loss and
completes quickly. In the study, migrating all three heavily utilized RAC nodes took only
180 seconds to complete, with minor impact to database performance.
Virtualizing compute (vCPUs)
VMware vSphere 6.0 increased the maximum number of virtual CPUs per virtual machine
to 128, up from 64 in vSphere 5.5. For IT organizations, this means larger, more
compute-intensive databases can be virtualized with vSphere 6.0.
We reference the white paper VMware vSphere 6 and Oracle Database Scalability Study
to discuss recommendations based on physical cores and hyper-threading in virtualizing
Oracle database with vSphere 6.0. Here are some useful definitions:
CPU—In this paper, refers to the physical die that plugs into the server motherboard
Processor core—An independent execution unit; today’s CPUs have multiple
processor cores
Hyper-threading—Allows two threads to run concurrently on one physical CPU core
In the VMware vSphere 6 and Oracle Database Scalability Study, a Dell PowerEdge R920
with four Intel Xeon E7-4890 v2 (Ivy Bridge-EX) processors had the following compute configuration:
Table 5. CPUs, processor cores, threads (Intel Xeon E7-4890 v2)

CPU      Number of processor cores   Number of logical cores
1        15                          30
1        15                          30
1        15                          30
1        15                          30
Totals: 4 CPUs   60 cores   120 logical cores
As the table above shows, the total number of physical CPUs is 4, the total processor core
count is 60, and with hyper-threading enabled in BIOS the number of logical cores is 120.
At the time of publication, there is no Oracle licensing impact from enabling
hyper-threading. In the paper, testing found that hyper-threading provided a 24 percent
performance uplift with an Oracle transactional workload.
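The arithmetic behind Table 5 is straightforward: physical cores are CPUs times cores per CPU, and hyper-threading doubles the logical core count. The helper name below is illustrative:

```python
# The Table 5 arithmetic: physical cores = CPUs x cores per CPU, and
# enabling hyper-threading in BIOS doubles the logical core count
# (two hardware threads per core). Illustrative helper only.

def logical_cores(num_cpus, cores_per_cpu, hyper_threading=True):
    """Logical cores visible to the OS, doubled with hyper-threading."""
    physical = num_cpus * cores_per_cpu
    return physical * 2 if hyper_threading else physical
```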
Recommendations

Enable hyper-threading in BIOS, as the performance benefit can range from 10 to
30 percent.
For production databases, dedicate CPU resources so there is no contention with
other databases or applications.
VMware memory configuration guidelines
VMware memory guidelines for virtualized Oracle databases are as follows:
Note: The paper Oracle Real Application Cluster on VMware Virtual SAN was used to reference
these guidelines.
Production Oracle databases can be memory intensive. Therefore, set a memory
reservation equal to, or slightly greater than, the aggregate size of the system global
area (SGA), program global area (PGA), and the operating system background processes.
Do not overcommit memory on production database servers.
Do not disable the balloon driver.
Follow the guidelines for swap or page files in the same way as if you were doing a
physical installation of the guest operating system.
Configure HugePages in the Linux OS to improve the performance of Oracle
databases on vSphere. Disable Automatic Memory Management (AMM) with the
Oracle database if using HugePages, as they are not compatible.
Database types
Understanding I/O patterns and characteristics is critical for designing and deploying
databases. A properly configured I/O subsystem can optimize and deliver consistent
database performance.
We will review two of the most common types of database workloads: Online Transaction
Processing (OLTP) and data warehouse/Online Analytical Processing (OLAP). Most
enterprise applications have a mixture of OLTP and batch workloads. For example, Oracle
E-Business suite generates mostly OLTP random workload, but also has reporting
workloads that are closer to data warehouse I/O.
OLTP

OLTP workloads can be characterized as having concurrent transactions with small
random I/O reads and writes. Enterprise applications such as order entry, inventory
management, and financial transactions are examples of OLTP applications. For OLTP
applications to perform well, storage latencies should be as low as possible to ensure fast
response times.
Type of I/O   Size of I/O   Read profile             Write profile   Performance requirement
Random        Small         Weighted towards reads   Fewer writes    Lowest possible latency
The larger the OLTP application, the more short transactions that involve database reads,
inserts, updates, and deletes. The primary performance focus is on accelerating any
reads or writes from the storage system. In the past, latencies of 2 to 5 milliseconds
were considered fast for short reads and writes. The
VxBlock System 540 with All-Flash Array now delivers sub-millisecond performance,
which has become the gold standard for OLTP performance.
Decision Support Systems (DSS)
A DSS stores and scans data drawn from other systems. For example, a DSS database
can be connected to an OLTP system, external data feeds, and other data sources. DSS
systems are characterized by fewer users generating fewer, but much larger, queries.
Although there are fewer users, the large queries generate significant sequential I/O and
can be very demanding of the storage subsystem.
The storage array should be optimized for throughput as it will have to support large data
reads. Storage throughput is measured in megabytes per second (MB/s) or gigabytes per
second (GB/s) and refers to the data transfer rate. The greater the throughput, the more
data that can be transferred
per second.
Type of I/O   Size of I/O   Read profile   Write profile          Performance requirement
Sequential    Large         Large volume   Low volume of writes   Throughput
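Because DSS performance is throughput-bound, a rough scan-time estimate divides the data scanned by the sustained read throughput. The figures below are illustrative, not VxBlock System 540 specifications:

```python
# Rough scan-time model for a throughput-bound DSS workload: time equals
# data scanned divided by sustained read throughput. The figures are
# illustrative, not VxBlock System 540 specifications.

def scan_time_seconds(data_gb, throughput_gb_per_s):
    """Estimated time to read data_gb sequentially at the given rate."""
    return data_gb / throughput_gb_per_s
```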
Chapter 5 Deployment Best Practices for Microsoft SQL Server
This chapter presents the following topics:
Overview ............................................................................................................ 40
Design considerations ...................................................................................... 40
VMware .............................................................................................................. 44
Database types .................................................................................................. 45
Data warehouse/OLAP ...................................................................................... 47
EMC Storage Integrator for Windows Suite (ESI) ........................................... 48
Overview
Microsoft SQL Server is a platform that consists of multiple components and tools. This
section is focused entirely on the relational database engine. The SQL Server workload
that is explained in the Dell EMC Solutions for Enterprise Mixed Workload on VxBlock
System 540 Solution Guide includes representative best practices for Online Transaction
Processing (OLTP) and Online Analytical Processing (OLAP) for typical ecommerce and
data warehouse applications. Both of these use cases are implemented using the SQL
Server relational database engine exclusively.
Design considerations
Sizing tools

To properly design a robust SQL Server application, you must be able to estimate both
the required capacity and the transaction throughput success criteria. Capacity is typically
specified as an initial “go live” size (GB), as well as forecast capacity growth in GB/time
period. Throughput for a database application is typically more difficult to specify for
design purposes, especially for new or “green field” applications. Most designs start with
an estimate of the number of primary business transactions per period of time. These
transactions are typically specified in business terms, such as orders per hour, inventory
restocks per week, or timecard changes per day. Each application will have a set of
relatively easy to specify operations together with a group of ancillary functions, such as
ad hoc reports, data extracts, and data protection jobs, which will also need to be
considered in the design of the infrastructure requirements for an application.
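The business-level estimates described above eventually reduce to a peak transactions-per-second target for infrastructure sizing. A minimal sketch, in which the peak factor is an illustrative assumption:

```python
# Sketch of turning a business estimate (e.g., orders per hour) into a
# peak transactions-per-second design target. The peak factor, which
# accounts for load not being spread evenly across the hour, is an
# illustrative assumption.

def peak_tps(transactions_per_hour, peak_factor=3.0):
    """Peak TPS target derived from an hourly business transaction rate."""
    return (transactions_per_hour / 3600.0) * peak_factor
```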
By itself, the application design will not result in any useful sizing data for the infrastructure team to
work with. The business-defined transactions identified during the design will need to be
coded and tested to produce useful metrics in terms of CPU, memory, I/O, and network
requirements. Many development tools include provisions for unit testing application code
that accesses a database. Visual Studio includes a wide range of testing tools.
Understanding these capabilities can help you plan a testing strategy for any set of
development tools that you choose.
When you have the basic capability to run unit tests and measure the resulting CPU,
network, I/O, and memory requirements for individual components, you will need a way to
combine unit tests into more complete scenarios that will match as closely as possible
how users will use the application. For applications that are intended to scale to 10,000
users and higher, doing full system scale load testing is often prohibitively expensive. Two
approaches are typically employed:
Test a fraction of the expected users and then apply a scaling factor for high user
counts.
Implement a scale-out everywhere design and then learn and scale as you go.
Ideally, you can spend some of the resources that you save from choosing not to do
detailed load testing to build application level telemetry and analysis in order to detect
resource constraints and bring new capacity online before users are impacted.
If you choose to do some level of scale testing, you will face a build or buy decision.
Industry benchmarking tools that primarily implement various versions of the Transaction
Processing Performance Council (TPC) standard database scale tests have limited use. They are useful for
ensuring that new infrastructure is configured properly and that there is a good balance of
memory, CPU, I/O, and networking. Tools such as Dell’s Benchmark Factory for
Databases include the capability to run industry standard tests including data generation,
as well as to develop custom tests based on your unique applications.
I/O devices and parallelization

The performance of a storage system used by SQL Server is dependent on various
configuration parameters that are applied at the partition, disk, controller, SAN, RAID, and
device driver levels. For detailed information on consideration of these settings for
XtremIO, refer to Best Practices for Running SQL Server on EMC XtremIO.
The SQL Server storage engine is dependent on the Windows operating system to make
available disks and volumes for the placement of data and log files. Windows manages all
I/O operations to those files and reports the results back to the SQL Server process. You
may find I/O errors or warnings in the SQL Server error log based on the information that
is exchanged between SQL Server and the operating system.
The order of the I/O operations associated with SQL Server data and log files must be
maintained. The storage system must maintain write ordering or it breaks the write ahead
logging protocol of SQL Server that guarantees transactional consistency at all times.
Any I/O subsystem that supports SQL Server must provide stable media capabilities for
the database log and data files. A system whose cache is neither battery-backed nor
mirrored is not safe for SQL Server installations.
Each block storage logical unit (LUN) that is presented to a Windows server will be
associated with a Windows SCSI device and managed by a device driver. SCSI device
drivers have a configurable parameter called the queue depth that determines the
maximum number of outstanding SCSI commands or I/O requests that will be held for a
given LUN. There is a single queue for each LUN regardless of the number of network
paths that are configured between the server and the storage device. A typical queue
depth value for a modern Fibre Channel host bus adapter (HBA) is 32 outstanding
requests. The maximum configurable queue depth is typically 256. For shared storage
devices with relatively limited throughput and many attached servers, an infrastructure
architect should limit the amount of I/O that all the attached servers can consume by
setting the HBA queue depths to an appropriate value to ensure that no small subset of
the servers could flood the shared device with requests and prevent other servers from
being serviced.
All-flash arrays

In the case of All-flash Arrays (AFA) like XtremIO, the number of available low latency
IOPS is typically much larger than a few servers can consume. In this case, infrastructure
architects tend to use maximum queue depth settings on servers attached to AFAs. Due
to the extremely low latency of I/O operations and the absence of any latency penalty for
mixing sequential and random I/O on an AFA, most SQL Server workloads can be served
by a single device or LUN. For those applications that require more I/O than can be
handled by a single device queue on a Windows server, the storage design will need to
incorporate multiple LUNs and corresponding decisions on how to map objects to those
devices to achieve good I/O balance.
The first option that most DBAs will consider is separating the data file(s) from the
transaction log file. If the application is extremely read intensive, then you could still have
a device queue bottleneck on a single LUN for all data. Most DBAs agree that it is best to
avoid overly complex file group and file designs that attempt to explicitly spread I/O
intensive table and index objects over multiple devices. The preferred design pattern is to
use one or a few file groups with 4-8 equally sized files per file group for high I/O
applications. The SQL Server proportional fill algorithm spreads new page allocations
across multiple files so that read and write requests are spread over all allocated devices.
To minimize complexity, the DBA can still use multiple data files and a single LUN for new
applications where there is significant uncertainty about the future growth and demands of
the application. This approach permits expanding the number of LUNs used for the
application by moving file objects to new devices. If your design starts with a single file
and later needs to be expanded, the work required to reallocate objects to a file group with
multiple files is more disruptive to operations.
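SQL Server's proportional fill algorithm allocates new pages in proportion to the free space in each file, which is why equally sized files spread I/O evenly. A simplified model of that behavior, not Microsoft's actual implementation:

```python
# Simplified model of SQL Server's proportional fill behavior: new page
# allocations go to each file in proportion to its free space, so equally
# sized (and equally full) files receive an even spread of I/O. This is
# an illustration, not Microsoft's actual engine algorithm.

def proportional_fill(free_pages_per_file, new_pages):
    """Distribute new_pages across files in proportion to free space."""
    total_free = sum(free_pages_per_file)
    return [new_pages * free // total_free for free in free_pages_per_file]
```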
Storage access

When storage is accessed through a Fibre Channel or TCP/IP network, the architect must
plan for high availability by using multiple redundant paths between the host and the
storage. Specialized multi-path I/O (MPIO) software must be employed when there are
redundant links between the server and storage device or LUN. XtremIO supports:
The native MPIO feature in Windows Server 2008 and above
VMware vSphere Native Multipathing (NMP) technology
EMC PowerPath MPIO software
MPIO software incurs overhead for each path to a storage device that needs to be
managed. Since all XtremIO controllers in a cluster are active, the number of active paths
to a single device should be limited, especially for implementations of large clusters with
four or more nodes. A general rule of thumb is to configure four paths per device. This
should provide a good balance between fault tolerance, parallel I/O, and acceptable path
management overhead.
Compression

Microsoft first introduced compression features in SQL Server 2005 SP2. A new
vardecimal storage format was introduced that allowed decimal and numeric data types to
be stored as a variable-length column. SQL Server 2008 extended the variable length
storage format and other optimizations to include char, int, float, datetime, and money
data types. By reducing storage allocations and I/O throughput associated with fixed
column padding, query latency will be reduced, along with buffer cache and disk
utilization. These optimizations are used to implement the row compression feature of
SQL Server including the SQL Server 2016 version. For a more detailed description see
Row Compression Implementation on MSDN.
The implementation of page compression in SQL Server is significantly more complex
than row compression. Page compression uses row compression in its implementation.
The MSDN article that describes SQL Server Page Compression Implementation can be
referred to for more details.
Compression in SQL Server is enabled using the ALTER TABLE or ALTER INDEX
commands and is therefore implemented for selected objects only. There is no database-
wide compression feature. Microsoft recommends using the system level stored
procedure sp_estimate_data_compression_savings before implementing
compression on any user objects. Compression is not supported on system tables. Since
the SQL Server engine uses knowledge of the underlying page and metadata layout to
implement compression, it can be very efficient. However, there is still a CPU cost to pay
for implementing and maintaining compression as data changes. Data architects should
be aware of the current level of CPU headroom, as well as projections of how CPU
utilization is expected to change prior to implementing compression. It is typically easier to
expand storage allocations than CPU, especially in scale-up designs.
XtremIO compresses all data sent to the array before writing it to the flash drives. The
storage controllers are designed to handle the work of inline compression in addition to
other data services, such as encryption and content address-based deduplication. The
additional CPU and memory resources required by an XtremIO storage controller to
implement inline compression are more than offset by the improved space utilization and
increased drive lifetime that is derived by writing less data to the flash drives. XtremIO
never needs to read data already written to the drives to perform any data services since
they are all performed in memory before persisting the data on disk.
Since all data, including any managed by SQL Server, is 100 percent inline compressed
on XtremIO, the choice to implement SQL Server object compression is optional. It is
unlikely that row compression will provide much additional savings when implemented on
storage managed by XtremIO since the array is very good at compressing “white space”.
Page compression may provide useful rates of compression that XtremIO would not be
able to realize, since SQL Server is able to do optimizations across all the pages of large
objects based on knowledge of the page structure that XtremIO would not include in its
compression algorithms. Our recommendation is to use Microsoft data compression
estimation tools before implementing page compression on any SQL Server objects and
consider the CPU impact of both initial and ongoing compression overhead.
Encryption

SQL Server provides methods for encrypting data columns, as well as database-wide
encryption using Transparent Database Encryption (TDE). There are also features for
extensible key management depending on the version and edition that you are using. For
more information on support by SQL Server versions, see the Extensible Key
Management (EKM) topic on Microsoft TechNet.
Since XtremIO both compresses and encrypts all data written to the disks, it is important
to understand the order in which these data services are applied. XtremIO compresses
data first and then encrypts the result. Because encryption effectively masks any
patterns in the data, it is typically not possible to compress data once it has been
encrypted. The same applies to data that arrives at an XtremIO array already encrypted:
enabling TDE for a database stored on XtremIO negates any space savings, and therefore
any flash storage cost savings, that would otherwise be gained from the inline
compression native to the array storage controllers.
If the business requirement is data-at-rest encryption, the best choice is to let the
XtremIO data services compress and encrypt all data on the array. The encryption keys
of the array are not available to the array administrator and therefore cannot be managed
by a third-party key management service. If there is a need to implement Extensible Key
Management (EKM), the best practice is to use SQL Server data column encryption
with EKM enabled for any data that must be protected by compliance rules. This
allows XtremIO to compress all non-sensitive data and realize the best possible flash
storage cost savings.
Chapter 5: Deployment Best Practices for Microsoft SQL Server
44 Dell EMC VxBlock® System 540 Oracle SQL SAP Best Practices Best Practices for Oracle, SQL Server, and SAP White Paper
VMware
When considering SQL Server instances as candidates for virtualization, collect the same
requirements that we recommend for physical implementations, including CPU, memory,
disk and network I/O, user connections, transaction throughput, query execution
efficiency/latencies, and database size. You need a clear understanding of the business
and technical requirements for all databases hosted on each candidate instance, as well
as requirements for operational considerations, including availability, performance,
scalability, growth and headroom, patching, and backups.
CPU

Hyper-threading is an Intel technology that exposes two virtual threads to the operating
system from a single physical CPU core. Hyper-threading generally improves the overall
host throughput anywhere from 10 to 30 percent by keeping the processor pipeline busier.
VMware recommends enabling hyper-threading in the BIOS so that ESXi can take
advantage of this technology.
When designing for high-performance virtualized SQL Server instances, VMware
recommends not oversubscribing the number of physical CPU cores on the ESXi host
machine. For initial sizing, the total number of vCPUs assigned to all the virtual machines
should be no more than the total number of physical cores.
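This initial-sizing rule can be expressed as a small sanity-check helper. This is an illustrative sketch only; the function name and the example host configuration are invented here, not part of any VMware tooling.

```python
def vcpu_oversubscription(physical_cores, vm_vcpus):
    """Return the vCPU-to-physical-core ratio for an ESXi host.

    A ratio above 1.0 means the host is oversubscribed, which the
    guideline above recommends avoiding for the initial sizing of
    high-performance SQL Server VMs.
    """
    total_vcpus = sum(vm_vcpus)
    return total_vcpus / physical_cores

# Example: a 24-core host running three VMs sized 8, 8, and 4 vCPUs
ratio = vcpu_oversubscription(24, [8, 8, 4])
print(f"vCPU:pCore ratio = {ratio:.2f}")  # 0.83, within the guideline
```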
Memory

Avoid over-committing memory at the ESXi host level when designing for performance, to
prevent memory contention between virtual machines. Also consider setting the memory
reservation equal to the provisioned memory, which prevents the hypervisor from
swapping memory between competing VMs. Configuring a memory reservation also
guarantees that the virtual machine is backed entirely by physical memory.
Memory hot plug enables a virtual machine administrator to add memory to the virtual
machine with no down time. VMware recommends using memory hot plug only in cases
where memory consumption patterns cannot be easily and accurately predicted and only
with vSphere 6 and later. After memory has been added to the virtual machine, increase
the SQL Server max server memory setting, if one has been set.
NUMA

A NUMA system consists of multiple nodes made up of one or more CPUs and a bank of
local memory. vSphere NUMA scheduling and memory placement policies eliminate the
need for administrators to manually configure virtual machine mapping to NUMA nodes.
VMware best practice is to assign a SQL Server VM a number of vCPUs equal to or less
than the number of cores available on a single NUMA node; if the workload requires more,
assign the virtual machine more than nine cores, or more than are available on a single
node, whichever is higher, so that vSphere will create the virtual machine across two
NUMA nodes. VMs that use resources from more than one NUMA node are called "wide,"
and the vSphere virtual NUMA topology (vNUMA) will be exposed to the guest OS and
SQL Server so they can take advantage of memory locality.
CPU hot plug is not compatible with vNUMA. Therefore, the VMware recommendation is
to not enable CPU hot plug for virtual machines that require vNUMA.
Networking

Network traffic types should be separated to keep like traffic contained to designated
networks for all virtualized workloads including SQL Server. vSphere can use separate
interfaces for virtual machine traffic, management, vSphere vMotion, and network-based
storage traffic. Virtual machines should have different interfaces for each type of traffic
required. The separation scheme should be carried through to port groups on virtual
switches and dedicated physical interfaces to physically separate traffic.
VMware recommends enabling jumbo frames on the virtual switches where you have
enabled vSphere vMotion traffic or iSCSI traffic. You must ensure that jumbo frames are
also enabled on your physical network infrastructure before making this configuration on
the virtual switches.
Enable Receive Side Scaling (RSS) within Windows to distribute the kernel-mode network
processing load across multiple CPUs. You must enable RSS both in the Windows kernel,
by running the netsh interface tcp set global rss=enabled command in an
elevated command prompt, and on the VMXNET3 network adapter driver.
Storage

According to VMware, most SQL Server performance issues in virtual environments can
be traced to improper storage configuration. SQL Server workloads are generally I/O
heavy, and a misconfigured storage subsystem can increase I/O latency and significantly
degrade performance of SQL Server.
vSphere provides several options for storage configuration. The most widely used is a
VMFS-formatted datastore on a central storage system. The other options are vSAN,
Virtual Volumes on supported hardware, and raw device mappings (RDMs). All-flash storage is
gaining increasing popularity in corporate datacenters, typically because of performance.
The ability to maintain consistent sub-millisecond latency under high load and to scale
linearly in a shared environment drives more and more interest in all-flash arrays. VMware
recommends that customers consult their array vendors for additional considerations for
optimally designing the storage layout for a mission-critical SQL Server application on an
all-flash array. We will discuss these in the I/O patterns and Storage Design sections for
both OLTP and OLAP.
Database types
There are two common design patterns that represent many SQL Server database
applications: Online Transaction Processing (OLTP) and Online Analytical Processing
(OLAP). Other applications may include elements of both OLTP and OLAP, or have
characteristics that are distinct from either. The only way to determine the type of
access patterns and the resulting resource demands is to analyze the database under a
typical load in real time.
OLTP

OLTP applications typically implement a large number of procedures involving
transactions that touch small amounts of data and require sub-second response times. It
is also common for OLTP systems to have high concurrency requirements with minimal
blocking between different users. Read/write ratios commonly range from 60/40 up to
98/2.
Database design for OLTP often attempts to conform to third normal form (3NF) wherever
possible. If this leads to the need to join between too many tables for some frequently
used procedures, the architect may selectively deviate from 3NF. Since many reads are
highly selective in OLTP systems, indexing is an important aspect of database design.
I/O patterns

Data files for OLTP applications are typically accessed at the page (8 KB) or extent (64
KB) level for both reads and writes. Data is read from disk when it is needed and not
already cached in the buffer pool. Data pages are written to disk when a system
checkpoint is issued, or when a low buffer cache free-space condition triggers writes to
the data file based on a least recently used algorithm. SQL Server has a mechanism for
bundling writes of multiple contiguous dirty pages to improve throughput. The
combination of this gather-write algorithm and the details of how data changes affects
the average bytes per write in ways that are sometimes difficult to predict.
Checkpoints in SQL Server improve the efficiency of writing to storage by allowing
multiple writes to a single page to be coalesced in memory before writing the changes to
disk. Checkpoints also result in large bursts of I/O to storage periodically, followed by
periods of very low I/O activity. Most shared storage systems perform best when read and
write activity is spread uniformly over time instead of arriving in large bursts of requests.
This is an area that needs to be understood by both the data and infrastructure architects
in order to use the features of all parts of the system in a coordinated design.
The frequency of checkpoints affects both the recovery time for a database if the SQL
Server service is restarted and the amount of data that is burst to the storage system
during each checkpoint. Longer recovery interval settings result in fewer total I/Os but
larger bursts. Shorter recovery interval settings result in more frequent checkpoints of
less data, but more overall I/O per period of time. The default setting for SQL Server is
to use automatic checkpoints with a target recovery interval of 1 minute.
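The trade-off between recovery interval and burst size can be illustrated with a toy model. The steady dirty-page rate is an assumption made for this sketch; it is not a SQL Server parameter, and real checkpoint behavior is more adaptive than this arithmetic.

```python
def checkpoint_profile(dirty_mb_per_sec, recovery_interval_sec):
    """Toy model of the checkpoint trade-off described above.

    Assumes a steady rate of dirtied pages: the burst written at each
    checkpoint grows with the recovery interval, while the number of
    checkpoints per hour shrinks.
    """
    burst_mb = dirty_mb_per_sec * recovery_interval_sec
    checkpoints_per_hour = 3600 / recovery_interval_sec
    return burst_mb, checkpoints_per_hour

# 20 MB/s of dirtied pages: a 60 s target interval gives 1.2 GB bursts
# 60 times per hour; a 15 s interval gives 300 MB bursts, 240 per hour.
print(checkpoint_profile(20.0, 60))  # (1200.0, 60.0)
print(checkpoint_profile(20.0, 15))  # (300.0, 240.0)
```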
The recommended best practice from Microsoft is to use the default automatic checkpoint
settings for most applications. Since XtremIO has exceptionally low latency compared to
most other shared storage devices, it is very well suited to handle large I/O bursts from
SQL Server checkpoints, even for very large scale OLTP systems. The introduction of
indirect checkpoints in SQL Server 2012 has given data and storage architects better
control of the checkpoint process in situations where more frequent checkpoints would be
beneficial to the overall system. A discussion of database checkpoints including the use of
indirect checkpoints can be found on MSDN.
The SQL Server transaction log file is a write-ahead record of all data modifications made
to a database. Read activity is not logged. Writes to the log file are buffered up to a limit of
60 KB. The log buffer is flushed any time a transaction commits or aborts, or when the
buffer is full. Microsoft recommends maintaining log write latency below 5 ms. XtremIO AFAs
typically provide <1 ms latency for small block writes consistently until the cluster reaches
the maximum designed throughput of 150 K small block I/Os per brick, or until the storage
controllers reach maximum streaming bandwidth.
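The two flush triggers described above can be sketched in a short simulation. This is illustrative only: the real engine has additional triggers and outstanding-I/O limits, and the function and record format here are invented for the sketch.

```python
LOG_BUFFER_LIMIT = 60 * 1024  # the 60 KB buffering limit described above

def log_flushes(records):
    """Count log flushes for a sequence of (bytes, is_commit) records.

    A flush is triggered when buffered bytes would exceed the 60 KB
    limit, or when a transaction commits.
    """
    flushes, buffered = 0, 0
    for size, is_commit in records:
        if buffered + size > LOG_BUFFER_LIMIT:
            flushes += 1          # buffer full: flush before appending
            buffered = 0
        buffered += size
        if is_commit:
            flushes += 1          # commit forces a flush
            buffered = 0
    return flushes

# 50 KB of uncommitted records followed by one commit: a single flush
print(log_flushes([(1024, False)] * 50 + [(1024, True)]))  # 1
```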
Applications may still experience log write waits even when using very low latency
storage. The SQL Server engine has several built-in parameters related to the amount of
uncommitted log data outstanding for which the Log Manager has issued a write and not
yet received an acknowledgement that the write has completed. Once these limits are
reached, the Log Manager will have to wait for some of the outstanding I/Os to be
acknowledged before issuing any more I/O to the log. These are hard limits and cannot be
adjusted. The limits imposed by the log manager are based on conscious design
decisions to address the balance between data integrity and performance. These limits
have been changing over time so you are more likely to see issues with SQL 2005 or
earlier versions. For a complete discussion of the Log Manager limits, see Diagnosing
Transaction Log Performance Issues and Limits of the Log Manager on MSDN.
The third I/O pattern that is important to consider when designing SQL Server
infrastructure is TempDB. TempDB is a global resource that is shared by all databases
within an SQL Server instance. It is a system-managed work space designed to hold
short-lived objects, created either by users or by the SQL engine. TempDB is recreated
each time an SQL Server instance starts. In typical OLTP environments, TempDB
generally experiences a small number of semi-random IOPS, while the TempDB log file
incurs minimal small sequential writes.
Storage design

The goal of any design is to implement the least complex configuration that meets the
business requirements. With XtremIO:

- No RAID design or configuration is needed. A flash-optimized RAID (XDP) is
engineered into the system and used for all disks and devices.
- There are no performance optimizations to be gained from separating files with
random versus sequential I/O patterns. There are no moving disk platters or
read/write head placement considerations, so any I/O to any logical block address
will have the same latency.
- Despite the consistent low latency of the AFA, SQL Server performance may still
demand multiple files for user databases and/or TempDB. New page allocations
into a single file may result in page I/O latch waits under high load. To mitigate
this problem, create multiple data files for any user databases or TempDB if this is
a potential concern based on lab testing or experience with similar applications.
- For most databases, a single LUN will suffice for the combined I/O requirements of
both the data and log files of a user database. You can start with a single LUN
even if you create multiple data files, which makes it easy to increase the number
of LUNs later for unanticipated I/O workload growth.
- For operational and monitoring efficiency, we recommend separating TempDB files
from user database files.
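The file-count guidance in the bullets above can be captured in a simple starting-point heuristic. This is hypothetical: the one-file-per-core rule capped at eight is a common community starting point (often applied to TempDB), not a rule from this paper, and any number chosen this way should be validated in lab testing.

```python
def suggested_data_files(logical_cores, heavy_allocation):
    """Hypothetical starting point for the file-count guidance above.

    If allocation-heavy load risks page latch contention, start with
    multiple data files (one per core, floor of 2, capped at 8);
    otherwise a single file per database is fine.
    """
    if not heavy_allocation:
        return 1
    return min(8, max(2, logical_cores))

print(suggested_data_files(16, heavy_allocation=True))   # 8
print(suggested_data_files(4, heavy_allocation=False))   # 1
```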
Data warehouse/OLAP
OLAP is the term frequently used for any data warehousing, decision support system
(DSS), or business intelligence (BI) application. An OLAP database is a repository of an
organization's data, designed to facilitate complex analytical queries that access very
large data sets for reporting and analysis. OLAP databases are typically de-normalized,
consisting of one or more fact tables with keys that relate to dimension tables. Fact tables
hold the numeric facts and keys. Dimension tables have one or more key values and the
labels used in reporting summary data computed from the fact table(s).
Data in the data warehouse system is usually loaded in batches. The data is largely static
once loaded. Queries tend to read large ranges of data and therefore I/O bandwidth is
usually more important than the number of IOPS. Data may be loaded into staging tables
before performing cleaning and adding keys, further increasing the need for low latency,
high bandwidth storage.
I/O patterns

OLAP databases generate varied I/O read and write sizes, but they are almost always
larger than a single 8 KB page. The SQL Server read-ahead mechanism can request any
multiple of 8 KB up to 512 KB. Bulk-load write operations generate any multiple of 8 KB
up to 128 KB.
TempDB usage tends to be significant for OLAP databases due to a desire to keep the
number of indexes on fact tables low and the complexity of analytic queries that make
heavy use of grouping and aggregate functions. The SQL Server engine frequently writes
large ranges of rows to TempDB for sorting and aggregation operations.
Storage design

The major difference in the recommendations between OLTP and OLAP database
storage design is the number of LUNs and file placement. All OLAP databases should be
configured with multiple data files placed on multiple LUNs before attempting to do any
bulk data loads. The number of files and LUNs will usually range between four and eight
for databases that will be in the 1-10 TB range. If you are planning an OLAP
implementation that is expected to exceed 25 TB, it is best to consult with a professional
services organization that has experience with large-scale data warehouses on an AFA.
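One way to read the four-to-eight-file guidance is as a function of database size. The interpolation below is an illustrative assumption made for this sketch, not a published sizing rule.

```python
def olap_file_count(db_size_tb):
    """Map database size to a file/LUN count per the guidance above.

    Returns a (files, note) pair; sizes beyond 25 TB are flagged for a
    professional-services engagement rather than a formula.
    """
    if db_size_tb > 25:
        return None, "consult a professional services organization"
    if db_size_tb < 1:
        return 4, "small warehouse: start at the low end of the range"
    # Illustrative interpolation within the 4-8 range for 1-10 TB
    files = min(8, 4 + int(db_size_tb // 2))
    return files, "place each file on its own LUN before bulk loading"

print(olap_file_count(6)[0])  # 7
```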
EMC Storage Integrator for Windows Suite (ESI)
Dell EMC Storage Integrator for Windows Suite (ESI) is a set of software tools that
provides the following capabilities, useful to both storage administrators and business
application owners:

- Provision storage to Windows hosts using application-aware intelligence.
- Monitor storage health through Microsoft System Center Operations Manager
(SCOM) integration.
- Automate repeatable storage management actions with a rich PowerShell
command library.
ESI for Windows enables you to view, provision, and manage block storage for Microsoft
Windows, SQL Server, SharePoint sites and Linux hosts. Storage and replication
hardware support in ESI includes Dell EMC XtremIO series, Dell EMC VMAX® family, Dell
EMC VNX® series, Dell EMC VNXe® series, and Dell EMC RecoverPoint®. ESI also
includes automation and integration with Dell EMC AppSync® for service-level agreement
(SLA)-driven, self-service data protection management.
In addition to physical environments, ESI also supports storage provisioning and
discovery for Windows virtual machines running on Microsoft Hyper-V and VMware
vSphere. The Hyper-V Adapter and the VMware vSphere Adapter are installed by default
as part of the ESI installation. These adapters require no additional installation or setup.
- For Hyper-V virtual machines, you can create virtual hard disks (VHD and VHDX
files) and pass-through SCSI disks. You can also create host disks and cluster
shared volumes.
- For VMware vSphere virtual machines, you can create virtual hard disks (VMDK
files) and raw device mapping (RDM) disks. You can also create SCSI disks and
view datastores. Provisioning SCSI disks requires the use of existing SCSI
controllers.
ESI integration with SQL Server is implemented through an application software adapter
installed on the ESI controller host. Use of the adapter requires that the ESI host and the
SQL Server instances you want to connect to are members of the same Active Directory
domain, and that you have system administrator credentials for those SQL Server
instances.
The ESI SQL Server Adapter enables you to view local and remote SQL Server instances
and databases and to map the databases to EMC storage. ESI is SQL Server Always On
aware. You can view an AG primary replica and up to four secondary replicas. T-SQL
scripts can be executed from an ESI host including creation and configuration of SQL
Server databases.
The ESI Windows Suite includes both an ESI Service package and the ESI SCOM
Management Packs that work with Microsoft System Center Operations Manager for
centralized discovery and monitoring of all supported EMC storage and replication
systems. SCOM integration allows datacenter managers to monitor storage health with an
in-depth storage topology view for discovering detailed component health state. ESI
SCOM management packs surface storage health state, alerts and events with
configurable thresholds in a single console with all Windows hosts and SQL Server
instances managed through SCOM. The in-depth storage view enables quick
identification of infrastructure problems and an on-point remediation plan.
For documentation, release notes, software updates, or information about ESI, refer to the
EMC Storage Integrator for Windows Support Page. ESI is provided by EMC as a free
download. Online Support is available through EMC technical support services for
customers with a valid support agreement. Contact your EMC sales representative to get
information about support agreements or for other questions about ESI.
Chapter 6 Deployment Best Practices for SAP
This chapter presents the following topics:
Overview ............................................................................................................ 51
Design considerations ...................................................................................... 51
VMware recommendations ............................................................................... 52
Application workload ........................................................................................ 53
EMC Storage Integrator (ESI) for SAP Landscape Virtualization Management ............................................................................................... 55
Overview
SAP Business Suite is a bundle of interconnected and interdependent business
applications that provide integration of information and processes, collaboration, industry-
specific functionality, and scalability. SAP Business Suite includes SAP ERP, CRM, SRM,
SCM, and PLM.
SAP Power Benchmark (PBM), based on the standard SD benchmark, is a collection of
Perl scripts and SAP configuration transports that simulate a large number of SAP user
logins and perform order-to-cash transactions, including create sales order (VA01),
create delivery order (VL01N), display sales order (VA03), post goods issue (VL02N),
create invoice (VF01), and list orders (VA05).
In this section, we discuss best practices as they apply to SAP ERP 6.0 based on the
NetWeaver technology platform with Oracle Database 11g Release 2. PBM with 2,000
simulated users was used in the test environment to validate the best practices.
Design considerations
Greenfield compared to brownfield
For new implementation projects, SAP provides Quick Sizer, a web-based tool that
calculates hardware requirements based on functional parameters, such as the number of
users working with the different SAP Business Suite components, throughput, and other
inputs, and presents the results in SAPS, a hardware- and database-independent
measurement unit. Hardware vendors, including Dell EMC, publish the SAPS for a
particular server configuration by running SAP benchmark tests and posting the results
on the SAP website. By comparing the output of an SAP Quick Sizer project with a server
vendor's SAPS report, you can choose the appropriate servers. For more information, go
to http://service.sap.com/sizing. SAP Marketplace access is required to reach this site.
For existing SAP systems migrating to a new hardware platform such as a Dell EMC
converged infrastructure, collecting and analyzing performance metrics from the running
systems provides the closest approximation of the real requirements. You can use the
following formula to arrive at the target SAPS for a given SAP system:
Target SAPS = SAPS from the current hardware – unused capacity +
projected headroom (+ overhead)
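The formula above translates directly into code; all arguments are in SAPS, and the function name is invented for this sketch.

```python
def target_saps(current_saps, unused_capacity, projected_headroom,
                overhead=0):
    """Direct translation of the target-SAPS sizing formula above.

    `overhead` is the optional term in parentheses (for example, a
    virtualization allowance).
    """
    return current_saps - unused_capacity + projected_headroom + overhead

# Example: 40,000 SAPS of current hardware, 12,000 SAPS of it unused,
# 8,000 SAPS of projected headroom, no extra overhead
print(target_saps(40000, 12000, 8000))  # 36000
```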
Single system compared to system landscape
Sizing for SAP is always done against a system landscape rather than a single system. A
basic SAP system landscape comprises at least three systems—production, quality
assurance, and development—and in many cases customers have five or seven systems
per module, which can easily add up to 40 to 50 systems in a landscape. Based on our
testing results, a throttled non-production workload has minimal impact on the production
system, so the downstream non-production systems (such as quality assurance) can
simply be derived from XtremIO Virtual Copies (XVC) of the production system. The
sizing implication then shifts from 100 percent additional physical capacity for each
additional system to near-zero additional overhead (from the system rename process)
plus the projected change rate.
Keep in mind that the more systems are based on XVC, the greater the consolidation
benefit.
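The capacity shift described above can be illustrated with simple arithmetic; the change rate and copy count below are made-up example values, not measured figures.

```python
def landscape_capacity_tb(prod_capacity_tb, n_copies, change_rate):
    """Sketch of the sizing shift described above.

    Without XVC, each additional system would need its own full copy of
    the production footprint; with XVC, each copy consumes only its
    projected change rate in new physical capacity.
    """
    full_copies = prod_capacity_tb * (1 + n_copies)
    xvc_based = prod_capacity_tb * (1 + n_copies * change_rate)
    return full_copies, xvc_based

# A 10 TB production system with 4 copies at a 25 percent change rate
print(landscape_capacity_tb(10.0, 4, 0.25))  # (50.0, 20.0)
```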
Virtualization overhead
VMware has published a direct comparison between virtualized and bare-metal
deployments with the same hardware configuration and the same SAP benchmark
workload; the measured overhead is less than 6 percent. Refer to the SAP Solutions on
VMware Best Practices Guide for more details. When sizing a virtualized SAP system or
landscape with VMware vSphere, a 10 percent overhead allowance is considered very
conservative. Hyper-threading can also add a performance boost (up to 25 percent) when
carefully applied to truly multithreaded workloads.
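Applying the conservative allowance discussed above to a bare-metal SAPS requirement is a one-line adjustment; the 10 percent default mirrors the paper's conservative figure, and the function name is invented here.

```python
def virtualized_saps_requirement(bare_metal_saps, overhead_pct=10.0):
    """Apply the conservative virtualization allowance discussed above.

    VMware's measured overhead was under 6 percent, so a 10 percent
    allowance on the bare-metal SAPS requirement leaves margin.
    """
    return bare_metal_saps * (1 + overhead_pct / 100)

print(round(virtualized_saps_requirement(36000)))  # 39600
```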
Storage
Based on the testing results, a single SAP system does not derive much benefit from
compression or deduplication; the typical observation is a 1.2:1 data reduction ratio
(compression and deduplication combined). For existing systems, the EarlyWatch (EW)
report/alert can provide a good proxy for estimating database growth over time when
projecting capacity.
VMware recommendations
Here is a non-exhaustive list of recommendations for running SAP on VMware that we
validated during testing. Refer to the SAP Solutions on VMware Best Practices Guide for
other recommendations.
- Follow the VMware sizing rules and considerations.
- Consider multithreading.
- Be aware of NUMA nodes and size VMs accordingly.
- Install VMware Tools and configure the vmxnet3 network adapter.
- Spread database data files across multiple datastores to avoid file system
contention at the VMFS layer.
- Use VMFS whenever possible to increase operational management efficiency.
- Separate logs from data in respective virtual disks.
- Use Paravirtual SCSI (PVSCSI) controllers for database data and log virtual disks
to achieve the best performance.
- Spread the data file virtual disks across all virtual SCSI controllers.
- Use eager-zeroed thick format for all virtual disks. Although an eager-zeroed thick
disk has all space allocated and zeroed out at creation time, XtremIO is zero-block
aware, so no physical capacity is consumed. The combination of eager-zeroed
thick format and XtremIO provides the best combination of performance and
space efficiency.
- Use VMware HA to provide out-of-the-box high availability for all SAP instances.
- Use VMware FT to protect the ASCS instance from losing the enqueue table
(which resides in RAM) and the connections to additional application servers
(AAS).
Application workload
System architecture

SAP ERP is one of the most important transactional systems in a typical enterprise IT
environment; therefore, the system architecture should take both performance and
availability into consideration. Among the available options, a distributed system
architecture is highly recommended. There are three main components, and each
component resides on its own virtual machine:
- ABAP central services (ASCS) instance. ASCS comprises a message server and
an enqueue server, both of which are single points of failure (SPOFs). Separating
ASCS out from an application server instance, as in a central system architecture,
minimizes the impact from other work processes. The small failure surface also
allows the highest level of protection by using VMware FT. The SAP shared file
systems /sapmnt/<SID> and /usr/sap/<SID> can be stored on this instance and
shared with all other SAP instances within the same system.
- Database instance. A dedicated database instance has full command of its virtual
machine's resources and is isolated from any other possible threat to the stability
of the database. Because the network traffic between the database instance and
the application server instances is usually high, and the memory (RAM) state
within the virtual machine changes frequently, avoid using VMware FT to protect
the database instance. Instead, use OS/DB-specific tools, such as Oracle RAC, to
provide a higher level of protection.
- Additional application server (AAS) instances. The application server tier is a
scale-out architecture that performs most of the computational work when
executing transactions, as well as background jobs. AAS instances can be added
at any time for additional performance and availability. Access is usually managed
by logon groups (T-Code: SMLG) to provide flexibility and increase availability. If
one application server instance fails, connected users lose their connections and
reconnect to other available application server instances, and in-flight transactions
are rolled back. Standard VMware HA is good enough to provide a quick restart
after ESXi or OS failures.
Storage design and recommendation
Below is a list of recommendations for the storage design of an SAP NetWeaver platform
on XtremIO.

With the unique XtremIO multi-controller, scale-out architecture, XDP, and thin
provisioning, the storage design for SAP shifts its focus from optimizing for performance
to simplifying management. We recommend using fewer, larger LUNs to reduce the
impact of:

- The overhead of constant storage resizing and expansion (the same logic applies
to logical volume management at the operating system level)
- The VMFS limit per ESXi server, when the consolidation level and availability
requirements are high
Below is an example of the storage design for a single SAP ERP system, using Oracle as
its database.
Volume name    Volume size  Description
SAP_ASCS       2 TB         Volume for the ASCS instance; can be shared across multiple systems that require federated consistency
SAP_DB_BIN     2 TB         Volume for database instance binaries, client, stage, and so on; can be shared across multiple systems that require federated consistency
SAP_DB_DATA    2 TB (* n)   Volume(s) for database instance data files
SAP_DB_LOG     500 GB       Volume for database instance log files
SAP_DB_ARCH    2 TB         Volume for database instance archive logs; can be shared across multiple systems
SAP_APPS       2 TB         Volume for application server instances; shared across multiple systems that require federated consistency
Using ASM for SAP on Oracle

ASM is supported for SAP on Oracle 11.2.0.2 and higher. Full ASM support in BR*Tools
requires 7.20 Patch 18 or later, and Software Provisioning Manager (SWPM) 1.0 SP1
with SAP versions 7.0x and 7.3x. (Note: SWPM support for ASM on Windows is not
available; a migration is required.) For more information about ASM support for SAP on
Oracle, refer to SAP on Oracle Development Update and SAP on Oracle ASM. The
storage best practices from the earlier Oracle section are generally applicable to an SAP
on Oracle deployment.
Use XVC and consistency groups. XVC has near-zero performance impact on the
production system, and a consistency group ensures write-dependent consistency
across multiple volumes. Putting both technologies to work together yields the following
benefits for an SAP solution landscape:

- Granular operational protection and faster operational recovery for a single SAP
system or an SAP solution landscape (multiple systems)
- Repurpose/refresh with federated consistency across an SAP solution landscape
- Improved developer/analyst productivity through massive solution landscape
provisioning/decommissioning at minimal performance and footprint penalty
using XVC
Use tags. Tagging improves manageability for SAP systems:
• Tag volumes with system roles (such as “production,” “QAS,” and “test”), the
SAP system SID, the solution landscape (consistency groups), and snapshot sets
• Apply multiple tags to each volume to enable dimensional analysis and root
cause analysis
• Integrate with vRealize Operations so that SAP administrators can use tags to
correlate XtremIO and SAP performance metrics in a single dashboard
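To make the idea of dimensional analysis concrete, the sketch below models volumes carrying multiple tags and filters them along any combination of dimensions. This is a local illustration only, not an XtremIO API call; the volume names and tag values are hypothetical.

```python
# Illustrative sketch (not an XtremIO API call): multiple tags per volume
# allow slicing the estate along any dimension -- role, SID, or landscape.
volume_tags = {
    "SAP_DB_DATA_01": {"role": "production", "sid": "PRD", "landscape": "ERP"},
    "SAP_DB_DATA_02": {"role": "QAS",        "sid": "QAS", "landscape": "ERP"},
    "SAP_DB_LOG_01":  {"role": "production", "sid": "PRD", "landscape": "ERP"},
}

def volumes_where(**criteria):
    """Return the volumes whose tags match every given key/value pair."""
    return sorted(
        name for name, tags in volume_tags.items()
        if all(tags.get(k) == v for k, v in criteria.items())
    )

# All production volumes of the PRD system, regardless of volume type:
print(volumes_where(role="production", sid="PRD"))
```

The same multi-tag model is what lets a vRealize Operations dashboard correlate storage metrics with a specific SAP system or landscape rather than with anonymous volumes.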
EMC Storage Integrator (ESI) for SAP Landscape Virtualization Management
SAP Landscape Virtualization Management (LVM) software is a management tool that
enables the SAP NetWeaver Technology Consultant to simplify and automate SAP
system management and operations, including:
• Centralized SAP landscape operations
• End-to-end automation for provisioning SAP systems
• Landscape-wide visibility and control
• Automation of repetitive basic administration tasks
• Faster response to changing landscape demands
ESI for SAP LVM is a storage adapter that integrates with SAP LVM software and the
SAP systems managed by SAP LVM. ESI for SAP LVM supports physical, virtual, and
mixed environments. ESI has the following components:
• ESI for SAP LVM Storage Manager Adapter (Java plug-in)
• EMC HLS Administration Console (EHAC)
• EMC Solutions Enabler
• EMC SMI-S Provider
The ESI for SAP LVM Storage Manager Adapter (ESI Adapter) is distributed as a Java
Enterprise Archive (EAR) file that complies with SAP LVM specifications. The ESI Adapter
is deployed in the SAP NetWeaver Java Application Server and integrates with Dell EMC
file and block storage systems, including XtremIO. The following figure shows the SAP
LVM architecture combined with ESI, which provides storage-based system clone, copy,
and refresh operations.
Figure 14. SAP LVM architecture combined with ESI for SAP LVM plug-in
Integrating XtremIO into SAP LVM enables customers to take full advantage of
XtremIO Virtual Copies, which provide both instantaneous copies of data and space
efficiency. For more information, refer to Design Guide - EMC Storage
Integration with SAP Landscape Virtualization Management Software.
ESI for SAP LVM also supports Dell EMC VxBlock Systems. For more information about
VxBlock with SAP LVM and ESI, refer to the Dell EMC for SAP LVM White Paper and the
Dell EMC Solutions for SAP webpage.
Chapter 7 Conclusion
This chapter presents the following topics:
Overview ............................................................................................................ 58
Overview
EMC’s goal is to partner with you, the customer, to enable your success through solution
guidelines and best practices. IT decision makers evaluating a platform on which to
standardize and consolidate applications want to know how the system performs and how
to deploy it. Together, these two documents give IT decision makers insight into
deploying mixed application workloads on the VxBlock System 540:
Table 6. Topics covered in the solution guide and the best practices paper

Solution Guide = Dell EMC Solutions for Enterprise Mixed Workload on VxBlock System 540 Solution Guide
Best Practices = Dell EMC Solutions for Enterprise Mixed Workload on VxBlock System 540 Best Practices

Topic                                     Solution Guide   Best Practices
Architecture Overview                           √
Design Consolidations                           √
Solution Validation                             √
Test Results                                    √
Cross Application Design                        √
Deployment Best Practices for Microsoft                          √
Deployment Best Practices for Oracle                             √
Deployment Best Practices for SAP                                √
The VxBlock System 540 has thousands of hours of integration and validation testing,
making it the number one choice for IT organizations to consolidate mission-critical
workloads. Support for the entire system is just one call away, which accelerates time to
resolution. Engage Dell EMC to help you size the VxBlock System 540 for all your
performance and consolidation requirements. Our complete package includes converged
and hyper-converged infrastructure, software, services, and training. For more
information, please visit www.DellEMC.com.
Chapter 8 References
This chapter presents the following topics:
Dell EMC documentation .................................................................................. 60
VMware documentation .................................................................................... 60
Other documentation ........................................................................................ 60
Dell EMC documentation
The following documentation on EMC.com or EMC Online Support provides additional
and relevant information. Access to these documents depends on your login credentials. If
you do not have access to a document, contact your Dell EMC representative.
• EMC XtremIO All-Flash Solution for SAP
• EMC XtremIO Advanced Data Service for SAP Business Suite
VMware documentation
The following documentation on the VMware website provides additional and relevant
information:
• SAP on VMware Best Practices
Other documentation
The following documentation on the SAP website provides additional and relevant
information:
• SAP Quick Sizer