© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 54
White Paper
SAP NetWeaver Using Microsoft SQL Server AlwaysOn Database on Cisco UCS
Contents
Executive Summary
  Purpose of This Document
  Benefits of the Configuration
Solution Overview
  SAP Applications with SQL Server Database on Cisco UCS
Technology Overview
  Cisco Unified Computing System
  Cisco Unified Fabric
  Cisco UCS 5108 Blade Server Chassis
  Cisco UCS 2204XP Fabric Extenders
  Cisco UCS Blade Servers
  Cisco UCS Virtual Interface Card 1340
  Cisco UCS Manager
  Cisco Nexus 5548UP Switch
  SAP NetWeaver 7.4
  Microsoft Windows 2012 R2
  Microsoft SQL Server 2014
  SAN Storage
Design Topology
  Hardware and Software Used in This Solution
  Cisco UCS Networking for LAN and SAN
Cisco UCS Configuration
  High-Level Steps for Cisco UCS Configuration
  Configure Fabric Interconnects for Blade Discovery
  Configure and Enable Ethernet LAN Uplink Ports
  Configure and Enable FCoE Uplink Ports
  Configure VLANs
  Configure VSANs
  Set Jumbo Frames in Both Cisco UCS Fabrics
  Configure Ethernet Uplink Port Channels
  Create Local Disk Configuration Policy (Optional)
  Create Fibre Channel and SAN Boot Policies
  Create and Associate Service Profile with Cisco UCS Blades
  Enable Jumbo Frames and Create vPCs on Cisco Nexus 5548UP
  Configure Cisco UCS Servers and Stateless Computing Using Fibre Channel Boot
Microsoft Windows Server 2012 R2 for SAP NetWeaver 7.4 Configuration
Microsoft SQL Server 2014 Installation
  Install the Microsoft SQL Server 2014 Failover Cluster Instance on the Primary Database Server
  Install the Microsoft SQL Server 2014 Failover Cluster Instance on the Secondary Database Server
  Verify the Cluster
SAP NetWeaver 7.4 Installation
Destructive and Hardware Failover Tests
Conclusion
For More Information
Executive Summary
This Cisco® reference architecture describes how the Cisco Unified Computing System™ (Cisco UCS®) can be
used in conjunction with SAN storage to implement SAP applications—in particular, SAP NetWeaver—on the Microsoft
SQL Server 2014 AlwaysOn database. Cisco UCS provides the computing, networking, and storage access
components of the cluster, deployed as a single cohesive system. The result is an implementation that addresses
many of the challenges that system administrators and their IT departments face today, including needs for
simplified deployment and operation models, high performance for SAP applications on SQL Server databases,
and lower total cost of ownership (TCO). This document introduces a solution consisting of Cisco UCS,
NetWeaver, and the SQL Server database and provides instructions for implementing it.
Cisco UCS provides a new model for data center efficiency and agility. Cisco UCS has been designed with the
performance and reliability to power memory-intensive, mission-critical applications and virtualized workloads.
Historically, enterprise SAP systems have run on costly symmetric multiprocessing servers that use a vertical
scaling (or scale-up) model. However, as the cost of 1-to-4-socket x86-architecture servers continues to drop while
their processing power increases, a new model has emerged. NetWeaver uses a horizontal scaling, or scale-out,
model, in which multiple active-active application servers each contribute processing power
to the SAP application, increasing performance, scalability, and availability. The active-active SAP application
servers balance the workload across the servers and can provide continuous availability in the event of a failure.
One approach used by storage, system, and application administrators to meet the I/O performance needs of
applications is to deploy high-performance drives with faster CPUs. This solution may be appropriate in
environments with a relatively small number of application users and little movement of hot data sets. However, as
the number of application users grows, frequently accessed data sets change constantly, and more computing
power is needed. Systems become increasingly challenged to identify data based on access frequency and
redistribute it to the correct storage media.
As global technology leaders, Cisco and SAP are uniquely positioned to provide high-quality, innovative products
to customers. Together, Cisco and SAP offer differentiated, scalable, highly secure end-to-end solutions. With SAP
Applications on Cisco UCS, you can reduce deployment risks, complexity, and TCO—and transform the way that
people connect, communicate, and collaborate.
Purpose of This Document
This document provides design guidance for implementing SAP Applications on Cisco UCS—in this case, SAP
NetWeaver—Cisco Nexus® Family switches, and external SAN storage. This guidance is designed to help field
engineers and customers make informed decisions when creating an SAP application implementation on a SQL Server
database using Cisco UCS. The guidance includes VLAN, virtual network interface card (vNIC), VSAN, virtual host
bus adapter (vHBA), port-channel, and quality-of-service (QoS) requirements and configuration to help ensure
designs with the stability, performance, and resiliency demanded by mission-critical data center deployments.
Benefits of the Configuration
The history of enterprise computing has been marked by compromises between scale and simplicity. As systems
increased in scale, they also increased in complexity. And as complexity increased, so did the expense of
deployment and ongoing management.
Today, more than 70 percent of the IT budget is spent simply maintaining and managing existing infrastructure. IT
organizations must continually increase resources to maintain a growing, complex, and inflexible infrastructure
instead of using those resources to rapidly and effectively respond to business needs.
IT organizations are working with their business counterparts to identify ways to substantially decrease the cost of
ownership while increasing IT business value. Cisco UCS helps address these challenges by streamlining data
center resources, scaling service delivery, and radically reducing the number of devices requiring setup,
management, power and cooling, and cabling.
Cisco UCS can deliver these benefits through:
● Reducing TCO at the platform, site, and organizational levels
● Increasing IT staff productivity and business agility through just-in-time provisioning and mobility support for
both virtualized and nonvirtualized environments
● Enabling scalability through a design for up to 320 discrete servers and thousands of virtual machines in a
single highly available management domain
● Using industry standards supported by a partner ecosystem of innovative, trusted industry leaders
Following are the benefits of using SAP applications with a SQL Server database on Cisco UCS:
● High availability for the SAP application stack
● High availability for the SQL Server database using AlwaysOn Availability Groups
● Stateless computing and easy deployment
● Prioritization of network bandwidth using QoS policy
● Dedicated HBA for data files, log files, and boot from SAN
Solution Overview
SAP Applications with SQL Server Database on Cisco UCS
The solution described here provides a high-level architecture using Cisco UCS, SAP applications, and Microsoft
technologies. It demonstrates the implementation of SAP applications with a Microsoft SQL Server database on
Cisco UCS using block storage.
This solution includes the following infrastructure and software components:
● Cisco Unified Computing System*
● Cisco Nexus 5548UP Switches
● Block storage components
● SAP NetWeaver
● Microsoft SQL Server database
● Microsoft Windows operating system
*Cisco UCS includes all the hardware and software components required for this deployment solution.
Figure 1 shows the architecture and the connectivity layout for this deployment model. The next section describes
the individual components.
Figure 1. Solution Architecture
Technology Overview
This section describes the Cisco UCS components, SAP software, and Microsoft technologies used to validate the
solution.
Cisco Unified Computing System
Cisco UCS is a next-generation solution for blade and rack server computing (Figure 2). The system integrates a
low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The
system is an integrated, scalable, multichassis platform in which all resources participate in a unified management
domain. Cisco UCS accelerates the delivery of new services simply, reliably, and securely through end-to-end
provisioning and migration support for both virtualized and nonvirtualized systems.
Figure 2. Cisco UCS Components
Cisco UCS consists of the following main components:
● Computing: The system is based on an entirely new class of computing system that incorporates rack-mount
and blade servers based on the Intel® Xeon® processor E5-2600 v2 series.
◦ Cisco UCS B-Series Blade Servers (http://www.cisco.com/c/en/us/products/servers-unified-
computing/ucs-b-series-blade-servers/index.html) increase performance, efficiency, versatility, and
productivity.
◦ Cisco UCS C-Series Rack Servers (http://www.cisco.com/c/en/us/products/servers-unified-
computing/ucs-c-series-rack-servers/index.html) deliver unified computing in an industry-standard form
factor to reduce TCO and increase agility.
● Networking: The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This
network foundation consolidates LANs, SANs, and high-performance computing networks, which are
separate networks today. The unified fabric lowers costs by reducing the number of network adapters,
switches, and cables, and by decreasing the power and cooling requirements.
◦ Cisco UCS adapters (http://www.cisco.com/c/en/us/products/interfaces-modules/unified-computing-
system-adapters/index.html), with their wire-once architecture, offer a range of options to converge the
fabric, optimize virtualization, and simplify management.
● Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability,
performance, and operational control of virtual environments. Cisco security, policy enforcement, and
diagnostic features are now extended into virtualized environments to better support changing business and
IT requirements.
● Storage access: The system provides consolidated access to both SAN storage and network-attached
storage (NAS) over the unified fabric. By unifying storage access, Cisco UCS can access storage over
Ethernet (Server Message Block [SMB] 3.0 or Small Computer System Interface over IP [iSCSI]), Fibre
Channel, and Fibre Channel over Ethernet (FCoE). This unified access provides customers with storage
choices and investment protection. In addition, server administrators can preassign storage-access policies
to storage resources, providing simplified storage connectivity and management and leading to increased
productivity.
● Management: The system uniquely integrates all system components to enable the entire solution to be
managed as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive GUI, a command-
line interface (CLI), and a powerful scripting library module for Microsoft PowerShell built on a robust API to
manage all system configuration and operations.
◦ Cisco UCS Manager (http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-
manager/index.html) provides unified, embedded management of all software and hardware components of
Cisco UCS.
Cisco UCS fuses access-layer networking and servers. This high-performance, next-generation server system
provides a data center with a high degree of workload agility and scalability.
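Beyond the GUI and CLI, Cisco UCS Manager exposes its XML API for automation. The short Python sketch below composes the documented aaaLogin request and extracts the session cookie from a response; the endpoint host name, credentials, and canned response are placeholders for illustration, not values from this solution.

```python
# Sketch of a Cisco UCS Manager XML API login exchange.
# The endpoint URL, credentials, and sample response below are illustrative.
import xml.etree.ElementTree as ET

UCSM_ENDPOINT = "https://ucsm.example.com/nuova"  # hypothetical UCS Manager address

def build_login_request(username: str, password: str) -> str:
    """Compose the aaaLogin request body that the XML API expects."""
    req = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(req, encoding="unicode")

def parse_login_response(body: str) -> str:
    """Extract the session cookie (outCookie) from an aaaLogin response."""
    resp = ET.fromstring(body)
    if resp.get("response") != "yes" or "outCookie" not in resp.attrib:
        raise RuntimeError("login failed: " + resp.get("errorDescr", "unknown error"))
    return resp.attrib["outCookie"]

# The request would be POSTed to UCSM_ENDPOINT over HTTPS; here a canned
# response stands in for a live system to show the message shapes.
request_body = build_login_request("admin", "password")
sample_response = '<aaaLogin cookie="" response="yes" outCookie="1415800000/abcd1234" outRefreshPeriod="600"/>'
session_cookie = parse_login_response(sample_response)
```

The cookie returned by aaaLogin is then attached to subsequent query and configuration requests for the life of the session, which is how scripted tools authenticate against the same management plane the GUI uses.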
Cisco Unified Fabric
The fabric interconnects provide a single point for connectivity and management for the entire system. Typically
deployed as an active-active pair, the system’s fabric interconnects integrate all components into a single, highly
available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O
operations efficiently and securely at a single point, resulting in deterministic I/O latency regardless of the
topological location of the server or virtual machine in the system.
Cisco UCS 6200 Series Fabric Interconnects (http://www.cisco.com/c/en/us/products/servers-unified-
computing/ucs-6200-series-fabric-interconnects/index.html) are line-rate, low-latency, lossless, 10-Gbps Ethernet
and FCoE interconnect switches that provide the management and communication backbone for Cisco UCS. They
support the system’s 80-Gbps unified fabric with low-latency, lossless, cut-through switching that supports IP,
storage, and management traffic using a single set of cables. The fabric interconnects provide virtual interfaces
that terminate both physical and virtual connections equivalently, establishing a virtualization-aware environment in
which blade servers, rack servers, and virtual machines are interconnected using the same mechanisms.
The Cisco UCS 6248UP 48-Port Fabric Interconnect is a 1-rack-unit (1RU) interconnect with up to 48 universal
ports that can support 1 and 10 Gigabit Ethernet, FCoE, and native Fibre Channel connectivity (Figure 3).
Figure 3. Cisco UCS 6248UP Fabric Interconnect
Figure 4 provides an overview of the Cisco UCS fabric interconnects.
Figure 4. UCS 6000 Series Fabric Interconnects
Cisco UCS 5108 Blade Server Chassis
The Cisco UCS 5100 Series Blade Server Chassis (Figure 5) is a crucial building block of Cisco UCS, delivering a
scalable and flexible blade server chassis.
The Cisco UCS 5108 chassis (http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-5100-series-
blade-server-chassis/index.html) is 6RU high and can mount in an industry-standard 19-inch rack. A single chassis
can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and
full-width blade form factors.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power
supplies are 92 percent efficient and can be configured to support nonredundant, N+1 redundant, and grid-redundant
configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one
per power supply), and two I/O bays for Cisco UCS 2204XP or 2208XP Fabric Extenders.
A passive midplane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth
for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.
Figure 5. Cisco UCS Blade Server Front View and Rear View
Cisco UCS 2204XP Fabric Extenders
The Cisco UCS 2204XP Fabric Extender (Figure 6) has four 10 Gigabit Ethernet, FCoE-capable, Enhanced
Small Form-Factor Pluggable (SFP+) ports that connect the blade chassis to the fabric interconnect. Each Cisco
UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the
chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the
chassis.
Figure 6. Cisco UCS 2204XP Fabric Extender
Cisco UCS Blade Servers
Cisco UCS offers a variety of x86-based computing options to address the needs of today’s workloads. Based on
the Intel Xeon processor E7 and E5 product families, Cisco UCS B-Series Blade Servers work with virtualized and
nonvirtualized applications to increase:
● Performance
● Energy efficiency
● Flexibility
● Administrator productivity
This SAP application validation used enterprise-class Cisco UCS B200 M4 Blade Servers. Figure 7 provides a
summary of the Cisco UCS computing portfolio with these fourth-generation blade servers.
Figure 7. Cisco UCS Compute Portfolio
Cisco UCS Virtual Interface Card 1340
The Cisco UCS Virtual Interface Card (VIC) 1340 (Figure 8) is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps
Ethernet, FCoE-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the M4 generation of
Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC
1340 is enabled for two ports of 40-Gbps Ethernet.
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present up to 256
PCI Express (PCIe) standards-compliant interfaces to the host, which can be dynamically configured as either
NICs or HBAs. In addition, the VIC supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX)
technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server
virtualization deployment and management.
Figure 8. Cisco Virtual Interface Card
Stateless and Agile Infrastructure
The capability to define, create, and use interfaces on demand provides a stateless and agile server infrastructure.
The personality of the card is determined dynamically at boot time using the service profile associated with the
server. The service profile is used to determine the number of PCIe interfaces, their type (vNIC or vHBA), identity
(MAC address) and World Wide Name (WWN), failover policy, bandwidth, and QoS.
Next-Generation Data Center Features
The hardware classification engine provides support for advanced data center requirements. These include:
● Stateless network offloads for Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic
Routing Encapsulation (NVGRE)
● Low-latency features of the Cisco user-space NIC (usNIC)
● High-bandwidth remote direct memory access (RDMA) over Converged Ethernet (RoCE)
● Performance-optimization applications such as Virtual Machine Queue (VMQ), Intel Data Plane
Development Kit (DPDK), and Cisco NetFlow
Cisco Wire-Once Model
Cisco UCS is designed with a "wire once, walk away" model (Figure 9) with these features:
● Cabling and network infrastructure support a unified network fabric in which features such as FCoE can be
enabled through Cisco UCS Manager as needed.
● Every element in the hierarchy is programmable and managed by Cisco UCS Manager using a just-in-time
resource provisioning model.
● The manager can configure identity information, including the universally unique identifier (UUID) of servers,
MAC addresses, and WWNs of network adapters.
● It can install consistent sets of firmware throughout the system hierarchy, including each blade's baseboard
management controller (BMC), RAID controller, network adapter firmware, and fabric extender firmware.
● It can configure the operational characteristics of every component in the hierarchy, from the hardware
RAID level of onboard disk drives to uplink port configurations on the Cisco UCS 6200 Series Fabric
Interconnects, and everything in between.
● It can configure the types of I/O interfaces on Cisco UCS VIC adapters. The importance of this capability
cannot be overstated: when a server resource is configured with this mezzanine card, the number, type
(HBA or NIC), and identities (WWNs and MAC addresses) of I/O interfaces can be programmed using just-in-time
provisioning. This approach allows a server resource to support a traditional OS and application
software stack with a pair of Ethernet NICs and Fibre Channel HBAs at one moment and then be rebooted
to run a virtualized environment with a combination of up to 128 NICs and HBAs, with NICs connected
directly to virtual machines through hypervisor pass-through technology.
Figure 9. Cisco Wire-Once Model
Cisco UCS Manager
Cisco UCS Manager provides unified, centralized, embedded management of all Cisco UCS software and
hardware components across multiple chassis and thousands of virtual machines. Administrators use the software
to manage the entire Cisco UCS as a single logical entity through an intuitive GUI, a CLI, or an XML API.
The manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a clustered, active-standby
configuration for high availability. The software gives administrators a single interface for performing server
provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and
statistics collection. Cisco UCS Manager service profiles and templates support versatile role- and policy-based
management, and system configuration information can be exported to configuration management databases
(CMDBs) to facilitate processes based on IT Infrastructure Library (ITIL) concepts. Service profiles benefit both
virtualized and nonvirtualized environments and increase the mobility of nonvirtualized servers: for instance, when
moving workloads from server to server or taking a server offline for service or upgrade. Profiles can also be used
in conjunction with virtualization clusters to bring new resources online easily, complementing existing virtual
machine mobility.
The elements managed by Cisco UCS Manager include:
● Cisco UCS Integrated Management Controller (IMC) firmware
● RAID controller firmware and settings
● BIOS firmware and settings, including server UUID and boot order
● Converged network adapter (CNA) firmware and settings, including MAC addresses and WWNs and SAN
boot settings
● Virtual port groups used by virtual machines, using VM-FEX technology
● Interconnect configuration, including uplink and downlink definitions, MAC address and WWN pinning,
VLANs, VSANs, QoS, bandwidth allocations, VM-FEX settings, and EtherChannels to upstream LAN
switches
For more information about Cisco UCS Manager, visit http://www.cisco.com/en/US/products/ps10281/index.html.
Cisco UCS Service Profiles
A server’s identity is made up of many properties, such as UUID, boot order, IPMI settings, BIOS firmware version,
BIOS settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP
addresses, number of HBAs, HBA WWNs, HBA firmware, Fibre Channel fabric assignments, QoS settings, VLAN
assignments, and remote keyboard, video, and mouse (KVM) configuration. All these settings need to be
configured to give this server its identity and make it unique, distinguishing it from every other server in your data
center. Some of these parameters are maintained in the hardware of the server itself (BIOS firmware version, BIOS
settings, boot order, Fibre Channel boot settings, etc.); other settings are kept on your network and storage
switches (VLAN assignments, Fibre Channel fabric assignments, QoS settings, ACLs, etc.). This complexity
creates server deployment challenges (Figure 10):
● Long deployment cycles
◦ Coordination among server, storage, and network teams required for every deployment
◦ Need to help ensure correct firmware and settings for hardware components
◦ Need for appropriate LAN and SAN connectivity
● Slow response time to address business needs
◦ Tedious deployment process
◦ Manual, error-prone processes that are difficult to automate
◦ High operating expenses (OpEx), including outages caused by human errors
● Limited OS and application mobility
◦ Storage and network settings tied to physical ports and adapter identities
◦ Static infrastructure that leads to overprovisioning and increases OpEx
Figure 10. UCS Service Profile
Cisco UCS uniquely addresses these challenges with the introduction of service profiles that enable integrated,
policy-based infrastructure management. Cisco UCS service profiles contain the settings for nearly all configurable
parameters required to set up a physical server. A set of user-defined policies (rules) allow quick, consistent,
repeatable, and secure deployment of Cisco UCS servers (Figure 11).
Figure 11. Service Profile Infrastructure
Cisco UCS service profiles contain values for a server's property settings, including vNIC settings, MAC addresses,
boot policies, firmware policies, fabric connectivity, external management, and high-availability information. Cisco
UCS abstracts these settings from the physical server to a service profile, and that service profile can then be
deployed to any physical computing hardware within the Cisco UCS domain. Furthermore, service profiles can, at
any time, be migrated from one physical server to another. This logical abstraction of the server personality
eliminates dependency on the hardware type or model and is a result of Cisco’s unified fabric model (rather than
an overlay of software tools on top of the solution).
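The abstraction described above can be illustrated in miniature. The Python sketch below is a conceptual model only (the class and field names are invented for this example and are not Cisco UCS Manager object names), but it captures the central idea: the server's identity lives in the profile, so re-associating the profile moves that identity intact to new hardware.

```python
# Conceptual model of a service profile; names are illustrative, not the
# actual Cisco UCS Manager object model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VnicDef:
    name: str
    mac: str          # identity drawn from a MAC pool, not burned into hardware
    vlan: str
    qos_policy: str

@dataclass
class VhbaDef:
    name: str
    wwpn: str         # identity drawn from a WWN pool
    vsan: str

@dataclass
class ServiceProfile:
    name: str
    uuid: str
    boot_policy: str
    vnics: List[VnicDef] = field(default_factory=list)
    vhbas: List[VhbaDef] = field(default_factory=list)
    associated_blade: Optional[str] = None  # the physical server is just an attribute

def migrate(profile: ServiceProfile, new_blade: str) -> None:
    """Re-associate the profile: only the hardware binding changes; every
    identity (UUID, MACs, WWNs, policies) travels with the profile."""
    profile.associated_blade = new_blade

sp = ServiceProfile(
    name="SAP-DB-01",
    uuid="c0de0000-0000-0000-0000-000000000001",
    boot_policy="SAN-Boot",
    vnics=[VnicDef("vNIC-A", "00:25:B5:0A:00:01", "VLAN-SAP", "gold")],
    vhbas=[VhbaDef("vHBA-A", "20:00:00:25:B5:0A:00:01", "VSAN-10")],
    associated_blade="chassis-1/blade-1",
)
migrate(sp, "chassis-1/blade-5")  # e.g., after a blade failure
```

Because the MAC, WWPN, and UUID stay with the profile rather than the blade, SAN zoning, LUN masking, and DHCP reservations keyed to those identities remain valid after the move.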
This innovation is still unique in the industry, despite competitors’ claims to offer similar functions. In most cases,
these vendors rely on several different methods and interfaces to configure the server settings. Furthermore, Cisco
is the only hardware provider to offer a truly unified management platform, with the service profiles and hardware
abstraction capabilities extending to both blade and rack servers.
The main features and benefits of UCS service profiles include the following:
● Service profiles and templates: Service profile templates are stored in the Cisco UCS 6200 Series Fabric
Interconnects for reuse by server, network, and storage administrators. Service profile templates consist of
server requirements and the associated LAN and SAN connectivity. Service profile templates allow different
classes of resources to be defined and applied to a number of resources, each with its own unique identities
assigned from predetermined pools.
Cisco UCS Manager can deploy the service profile on any physical server at any time. When a service
profile is deployed to a server, the manager automatically configures the server, adapters, fabric extenders,
and fabric interconnects to match the configuration specified in the service profile. A service profile template
parameterizes the UIDs that differentiate server instances. This automation of device configuration reduces
the number of manual steps required to configure servers, NICs, HBAs, and LAN and SAN switches.
● Programmatic deployment of server resources: Cisco UCS Manager provides centralized management
capabilities, creates a unified management domain, and serves as the central nervous system of Cisco
UCS. The manager is embedded device management software that manages the system from end to end
as a single logical entity through an intuitive GUI, CLI, or XML API. The manager implements role- and
policy-based management using service profiles and templates. This construct improves IT productivity and
business agility. Now infrastructure can be provisioned in minutes instead of days, shifting IT’s focus from
maintenance to strategic initiatives.
● Dynamic provisioning: Cisco UCS resources are abstract in the sense that their identity, I/O configuration,
MAC addresses and WWNs, firmware versions, BIOS boot order, and network attributes (including QoS
settings, ACLs, pin groups, and threshold policies) all can be programmed using a just-in-time deployment
model. A service profile can be applied to any blade server to provision it with the characteristics required to
support a specific software stack. A service profile allows server and network definitions to move within the
management domain, enabling flexibility in the use of system resources.
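Service profiles and templates are managed objects that can also be driven programmatically through the UCS Manager XML API mentioned above. The sketch below shows how such API requests can be composed with the Python standard library; the method and attribute names (aaaLogin, configResolveDn) follow UCS XML API conventions, while the credentials and the service profile distinguished name are hypothetical placeholders.

```python
# Sketch: composing Cisco UCS Manager XML API requests with the Python
# standard library. The credentials and service profile DN below are
# hypothetical placeholders.
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> bytes:
    """Build the aaaLogin request that opens a UCS Manager API session."""
    login = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(login)

def build_service_profile_query(cookie: str, dn: str) -> bytes:
    """Build a configResolveDn request that fetches one managed object,
    such as a service profile, by its distinguished name (DN)."""
    query = ET.Element("configResolveDn", cookie=cookie, dn=dn,
                       inHierarchical="false")
    return ET.tostring(query)

# Both documents would be POSTed to the UCS Manager HTTPS endpoint
# on the fabric interconnect cluster address.
login_xml = build_login_request("admin", "password")
query_xml = build_service_profile_query("example-cookie",
                                        "org-root/ls-SAP-DB-Node1")
```

In practice the session cookie returned by aaaLogin is carried in every subsequent request, which is what lets scripts deploy or move service profiles without touching the GUI.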
Cisco Nexus 5548UP Switch
The Cisco Nexus 5548UP (Figure 12) is a 1RU 1 and 10 Gigabit Ethernet switch offering up to 960 Gbps of
throughput and scaling up to 48 ports. It offers 32 fixed unified SFP+ ports supporting 1/10 Gigabit Ethernet and FCoE or 1/2/4/8-Gbps native Fibre Channel, plus one expansion slot that can provide a combination of additional Ethernet, FCoE, and native Fibre Channel ports.
Figure 12. Cisco Nexus 5548UP Switch
The Cisco Nexus 5548UP delivers innovative architectural flexibility, infrastructure simplicity, and business agility,
with support for networking standards. For traditional, virtualized, unified, and high-performance computing (HPC)
environments, it offers numerous IT and business advantages, including:
● Architectural flexibility
◦ Provides unified ports that support traditional Ethernet, Fibre Channel, and FCoE
◦ Synchronizes system clocks with accuracy to less than one microsecond, based on IEEE 1588
◦ Supports secure encryption and authentication between two network devices based on Cisco TrustSec®
security and IEEE 802.1AE
◦ Offers converged fabric extensibility based on emerging standard IEEE 802.1BR, with the Cisco Fabric
Extender Technology (FEX Technology) portfolio, including:
− Cisco Nexus 2000 Series FEX
− Cisco Adapter FEX
− Cisco Data Center VM-FEX
● Infrastructure simplicity
◦ Provides common high-density, high-performance, data center–class, fixed-form-factor platform
◦ Consolidates LAN and SAN storage
◦ Supports any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic
◦ Supports storage traffic, including iSCSI, NAS, Fibre Channel, RDMA over Ethernet (RoE), and
InfiniBand over Ethernet (IBoE)
◦ Reduces the number of management points with FEX Technology
● Business agility
◦ Meets the needs of diverse data center deployments on one platform
◦ Provides rapid migration and transition for traditional and evolving technologies
◦ Offers performance and scalability to meet growing business needs
The Cisco Nexus 5548UP specifications are as follows:
● 1RU 1 and 10 Gigabit Ethernet switch
● 32 fixed unified ports on the base chassis and one expansion slot, for a total of 48 ports
● Expansion-slot support for any of these three modules: unified ports, 1/2/4/8 native Fibre Channel, and
Ethernet or FCoE
● Throughput of up to 960 Gbps
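The 960-Gbps figure follows from counting all 48 ports at 10-Gbps line rate in full duplex (transmit plus receive); a quick arithmetic check:

```python
# Sanity check of the Nexus 5548UP throughput figure: 48 ports at
# 10 Gbps each, counted full duplex (transmit plus receive).
ports = 48
line_rate_gbps = 10
full_duplex = 2
throughput_gbps = ports * line_rate_gbps * full_duplex
assert throughput_gbps == 960
```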
SAP NetWeaver 7.4
NetWeaver is the primary technology computing platform of the software company SAP AG, and the technical
foundation for many SAP applications. It is a solution stack of SAP's technology products. The SAP Web
Application Server (WebAS) is the runtime environment for the SAP applications, and all of the mySAP Business
Suite solutions (SAP Supplier Relationship Management [SRM], Customer Relationship Management [CRM],
Supply Chain Management [SCM], Product Lifecycle Management [PLM], and Enterprise Resource Planning
[ERP]) run on WebAS.
The NetWeaver technology platform provides the shared technology foundation for SAP business applications. The
foundation components that are part of NetWeaver provide infrastructure support for the creation, extension,
deployment, and management of SAP applications across the development lifecycle. The components also enable
the extension of SAP applications into new solution areas through a large partner ecosystem of experienced
developers.
● Reliable and scalable application server infrastructure for industry-leading business applications
● Proven robustness through approximately 100,000 productive installations worldwide
● Design support for team development in Advanced Business Application Programming (ABAP) and Java
programming languages based on open standards
● Lifecycle management and operations functions to reduce the cost of ownership
● Vast ecosystem providing software enhancements and support for best practices
NetWeaver Application Server ABAP is one of the major foundation components of SAP’s technology platform,
powering the vast majority of SAP business applications.
The product is marketed as a service-oriented application and integration platform. It can be used for custom
development and integration with other applications and systems and is built primarily using the ABAP
programming language, but it also uses the C, C++, and Java Platform Enterprise Edition (Java EE) programming
languages. It can also be extended with, and interoperate with, technologies such as Microsoft .NET, Java EE, and
IBM WebSphere.
Microsoft Windows 2012 R2
Microsoft offers a broad range of solutions, spanning consumer and business technologies and extending from
the data center to the desktop to a variety of other devices and into the cloud. Microsoft gives organizations the
flexibility to use the technology that is right for the business and delivers optimal technology to help people
innovate, compete, and grow.
Windows Server 2012 includes many performance, reliability, and scalability enhancements and is now generally
available for productive use with most SAP NetWeaver releases. The SAP Product Availability Matrix (PAM)
contains the latest information. SAP installations, SAP version upgrades, OS and database migrations, and system
copies of most NetWeaver ABAP and Java 7.0 and 7.3 and later products are generally available and supported on
Windows Server 2012.
Windows Server 2012 includes many new features and benefits:
● Built-in Microsoft Windows Hyper-V virtualization: Windows Server 2012 supports virtual machines with
up to 1 terabyte (TB) of RAM, 64 virtual CPUs (vCPUs), live migration, virtual nonuniform memory access
(NUMA), single-root I/O virtualization (SR-IOV) support, vHBA support, Hyper-V Replica, Hyper-V Virtual
Hard Disk (VHDX) format, and many other enhancements.
● Built-in network teaming: Performance and stability are greatly improved compared to previous vendor-provided teaming solutions.
● Increased scalability: Windows Server 2012 supports up to 640 logical processors and 4 TB of RAM—
enough for any SAP customer.
● Server Core role for hypervisor hosts: This feature eliminates the need for almost all patching on hypervisor hosts.
● SMB 3.0 transparent application failover: Previously, a failover reset the file handles. SMB 3.0 preserves
the state of the application even during a failover.
● Microsoft Windows Server Failover Clustering (WSFC) enhancement: Up to 64 cluster nodes and
many improvements in WSFC, such as dynamic quorum, allow more complex high-availability and disaster
recovery configurations, including sophisticated geoclustering.
Windows Server 2012 provides a platform for consolidating SAP and non-SAP systems to create a single highly
reliable solution. Hyper-V 3.0 removes the limits on the workloads that can be run in a virtual machine and provides
a path to move applications to cloud platforms.
For more than 20 years, SAP and Microsoft have had a strong partnership with one unified goal: create customer
value. This close collaboration has led to the development of joint solutions, such as Duet Enterprise, and the
continued optimization of Microsoft Windows and SQL Server platforms for SAP. Today, thousands of companies
around the world depend on these SAP and Microsoft solutions to effectively run their businesses. Microsoft has
become the platform of choice for new SAP customers, with about 65,000 SAP installations on Windows Server
and 35,000 SAP installations on SQL Server. This platform choice is part of a long-term trend, with organizations
choosing the Microsoft platform for its reliability in running their workloads.
Microsoft and SAP have partnered to help ensure that the Microsoft platform and SAP solutions work well together
for enterprise applications. The SAP certification of Windows Server and SQL Server is the technical stamp of
approval of the joint effort from Microsoft and SAP. Choosing a Microsoft platform for SAP fulfills a series of
requirements:
● It helps reduce the number of vendors needed to deploy an SAP system.
● It fits smoothly into a homogeneous Microsoft system landscape.
● It reduces skill requirements because the heterogeneous infrastructure can also be managed using a single
management system such as Microsoft System Center.
Microsoft SQL Server 2014
Enterprises that run SAP applications need full-time availability and performance. SQL Server 2014 with AlwaysOn
Availability Groups is an optimal database for mission-critical environments, offering availability and performance at
a low TCO for SAP installations of all sizes. According to the SAP Sales and Distribution (SD) Standard Application
benchmark results for a two-tier Internet configuration, a single server running Windows and SQL Server can
support 25,160 concurrent users with 137,470 SAP Application Performance Standard (SAPS) performance. The
SAP SD three-tier Internet configuration demonstrated that SQL Server can support 93,000 concurrent users.
Microsoft AlwaysOn
AlwaysOn technology combines the high-availability and disaster-recovery functions of SQL Server, providing greater flexibility in managing SAP configuration and architecture. In SQL Server 2014, an AlwaysOn Availability Group supports one primary replica and up to eight secondary replicas, with up to two secondary replicas synchronized with the primary. This approach can dramatically improve high availability because the system has no single point of failure, and maintenance can be performed on secondary replicas during normal work hours without affecting production SAP instances.
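The shape of the T-SQL that defines such an availability group can be sketched as below. The group, database, and server names are hypothetical placeholders, and real deployments also configure endpoints, seeding, and listeners per landscape; the sketch only illustrates the synchronous/asynchronous replica layout.

```python
# Sketch: generating the T-SQL that defines an AlwaysOn Availability
# Group with synchronous and asynchronous replicas. All names below
# (SAP_AG, PRD, sqlnode1..3) are hypothetical placeholders.
def availability_group_sql(group, database, replicas):
    """replicas: list of (server_name, availability_mode) tuples."""
    replica_clauses = ",\n".join(
        f"    N'{server}' WITH (\n"
        f"        ENDPOINT_URL = N'TCP://{server}.example.local:5022',\n"
        f"        AVAILABILITY_MODE = {mode},\n"
        f"        FAILOVER_MODE = "
        f"{'AUTOMATIC' if mode == 'SYNCHRONOUS_COMMIT' else 'MANUAL'})"
        for server, mode in replicas)
    return (f"CREATE AVAILABILITY GROUP [{group}]\n"
            f"FOR DATABASE [{database}]\n"
            f"REPLICA ON\n{replica_clauses};")

sql = availability_group_sql(
    "SAP_AG", "PRD",
    [("sqlnode1", "SYNCHRONOUS_COMMIT"),
     ("sqlnode2", "SYNCHRONOUS_COMMIT"),
     ("sqlnode3", "ASYNCHRONOUS_COMMIT")])
```

Only synchronous replicas are eligible for automatic failover, which is why the sketch pairs SYNCHRONOUS_COMMIT with AUTOMATIC and ASYNCHRONOUS_COMMIT with MANUAL.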
Microsoft SQL Server In-Memory Optimized Columnstore
One of the most significant features that SQL Server 2014 provides to the NetWeaver application is the
introduction of a modifiable version of SQL Server’s in-memory-optimized columnstore. The integration of this new
function into SAP Business Warehouse vastly extends Business Warehouse workload scenarios, which benefit
from SQL Server’s in-memory optimized columnstore capabilities. Among the most impressive results provided by
SQL Server in-memory optimized columnstore are more efficient storage of data and greater throughput in
scanning massive amounts of data. These advantages are apparent in scenarios in which customers accelerate
existing Business Warehouse workloads dramatically without additional infrastructure investment, or in which they replace older acceleration technology with a simpler and faster configuration of Business Warehouse.
The new modifiable in-memory optimized SQL Server columnstore technology also enables organizations to use a
new Business Warehouse cube format. It is actually the cube format that SAP pioneered with Business Warehouse
running on SAP HANA. Experience with this new cube format demonstrates an additional significant reduction in
query response time compared to the time using the traditional Business Warehouse cube format. The new
Business Warehouse cube format is available with SAP Business Warehouse 7.40 SP8 in conjunction with
Microsoft SQL Server 2014.
Backup Encryption
Especially for public cloud scenarios, customers want backup encryption included as a SQL Server native
capability. SQL Server 2014 introduces this feature. It can be combined with SQL Server’s backup compression.
Backup compression still provides the full compression ratio, and backup encryption encrypts the compressed
backup data.
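The T-SQL shape of a compressed, encrypted backup can be sketched as follows. The database name, backup path, and certificate name are hypothetical; the server certificate must already exist in the master database before the backup is taken.

```python
# Sketch: the T-SQL for a compressed, encrypted backup in SQL Server
# 2014. Database, path, and certificate names below are hypothetical.
def encrypted_backup_sql(database, target_path, certificate):
    return (f"BACKUP DATABASE [{database}]\n"
            f"TO DISK = N'{target_path}'\n"
            f"WITH COMPRESSION,\n"
            f"     ENCRYPTION (ALGORITHM = AES_256,\n"
            f"                 SERVER CERTIFICATE = [{certificate}]);")

sql = encrypted_backup_sql("PRD", r"R:\backup\PRD_full.bak", "BackupCert")
```

Because compression is applied before encryption, the full compression ratio is preserved, as described above.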
Backup to URL
SQL Server 2014 allows organizations to perform database, differential, and transaction log backups directly to
Microsoft Azure storage. This feature is most in demand by customers who are running SQL Server in Azure
infrastructure-as-a-service (IaaS) scenarios. Instead of addressing mounted drives on the server or virtual machine
as the backup destination, you can define the destination as an Azure storage account URL.
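A backup-to-URL statement can be sketched in the same way. The storage account, container, and credential names are hypothetical; in SQL Server 2014 the credential holding the storage account key must be created beforehand with CREATE CREDENTIAL.

```python
# Sketch: the T-SQL shape of a SQL Server 2014 backup written directly
# to Azure Blob storage. The storage account, container, and credential
# names below are hypothetical placeholders.
def backup_to_url_sql(database, storage_account, container, credential):
    url = (f"https://{storage_account}.blob.core.windows.net/"
           f"{container}/{database}_full.bak")
    return (f"BACKUP DATABASE [{database}]\n"
            f"TO URL = N'{url}'\n"
            f"WITH CREDENTIAL = N'{credential}';")

sql = backup_to_url_sql("PRD", "sapstorage", "backups", "AzureCred")
```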
New Cardinality Estimation
SQL Server was designed in a very modular way nearly two decades ago. Taking advantage of this modular
design, SQL Server 2014 introduces a new cardinality estimation module. Cardinality estimation plays an important
role in creating a plan to process a query. One of the first, very important steps in creating the plan to process a
query is to estimate the number of rows that the query and different branches within the query will return. This
estimation is based on the index and column statistics. The estimation is important because it can determine the
join order of a query or the join type to be chosen, etc. The function that performs this estimation is called
cardinality estimation. The new cardinality estimation algorithms were introduced to close a few gaps in the old
algorithms.
Deploying SAP software on SQL Server 2014 or performing the SAP post-upgrade step after upgrading to SQL
Server 2014 will set a trace flag that will disable the new cardinality estimation logic and enable the old logic again.
This capability is provided because the new cardinality estimation results in different plans than the old logic did,
which introduces the chance that queries using the new logic will not always end up with a more efficient plan than
queries generated with the old logic. Therefore, the step to test the move to SQL Server 2014 is separate from the
step to test the new cardinality estimation. SAP supports both the new and the old logic to estimate the
cardinality. You, as the customer, can test and choose the approach that is right for your particular workload.
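The switch between the two estimators is controlled by trace flags. The flag numbers below (9481 for the legacy estimator, 2312 for the new one) follow Microsoft's SQL Server 2014 documentation; verify them against your release before relying on them.

```python
# Sketch: the trace-flag statements that switch between the two
# cardinality-estimation models in SQL Server 2014. Per Microsoft's
# documentation, 9481 forces the pre-2014 estimator and 2312 forces
# the new one; verify the flag numbers for your release.
LEGACY_CE, NEW_CE = 9481, 2312

def trace_flag_sql(flag, instance_wide=True):
    scope = ", -1" if instance_wide else ""  # -1 applies to all sessions
    return f"DBCC TRACEON ({flag}{scope});"

statement = trace_flag_sql(LEGACY_CE)
```

The SAP post-upgrade step described above effectively issues the instance-wide form of the legacy-estimator flag, so that testing the SQL Server 2014 move and testing the new estimator remain separate steps.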
Lock Priority
Lock priority is a new function that is intended to improve the availability of the SQL Server instance during online maintenance operations. Lock priority can be used when issuing online data definition language (DDL) statements, such as CREATE INDEX, against tables. Despite being mostly online, these tasks can lead to queuing of modifications to a table, especially in the way that SAP integrates with SQL Server. Lock priority helps avoid such queuing of modifications when online DDL operations are performed against tables.
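Lock priority surfaces in T-SQL as the WAIT_AT_LOW_PRIORITY option on online DDL. The sketch below shows the shape of an online index rebuild using it; the table name and durations are hypothetical.

```python
# Sketch: an online index rebuild using the SQL Server 2014
# WAIT_AT_LOW_PRIORITY option, which keeps the DDL operation from
# blocking queued modifications. Table name and durations below are
# hypothetical placeholders.
def online_rebuild_sql(table, max_wait_minutes=5, abort_after="SELF"):
    # ABORT_AFTER_WAIT can be NONE, SELF, or BLOCKERS.
    return (f"ALTER INDEX ALL ON [{table}] REBUILD\n"
            f"WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (\n"
            f"    MAX_DURATION = {max_wait_minutes} MINUTES,\n"
            f"    ABORT_AFTER_WAIT = {abort_after})));")

sql = online_rebuild_sql("VBDATA")
```

With ABORT_AFTER_WAIT = SELF, the DDL operation itself gives up if it cannot acquire its lock within the wait window, rather than stalling the queued modifications behind it.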
Microsoft Azure Integration
Besides being able to direct SQL Server 2014 backups directly against Azure Blob storage, SQL Server 2014
offers the option to place SQL Server data and log files directly onto Azure Blob storage without the definition of
virtual hard disks (VHDs). This method of placing SQL Server data files directly in Azure Blob storage is applicable
when you run SQL Server in an Azure virtual machine (IaaS) and want to overcome limitations of the Azure virtual machine size that you chose (limits on the number of VHDs or on I/O operations per second
[IOPS]). In SAP scenarios, the function is not meant to run the SQL Server instance in the on-premises data center
and have the data files of databases deployed in Azure Blob storage in an Azure data center.
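The T-SQL shape of creating a database with its files directly in Azure Blob storage can be sketched as below. The storage account and container names are hypothetical, and a credential named after the container URL (holding a shared access signature) must be created first.

```python
# Sketch: placing SQL Server 2014 data and log files directly in Azure
# Blob storage, without intermediate VHDs. The storage account and
# container names below are hypothetical; a matching credential with a
# shared access signature must exist before this statement runs.
def create_db_on_blob_sql(database, account, container):
    base = f"https://{account}.blob.core.windows.net/{container}"
    return (f"CREATE DATABASE [{database}]\n"
            f"ON (NAME = {database}_dat,\n"
            f"    FILENAME = '{base}/{database}.mdf')\n"
            f"LOG ON (NAME = {database}_log,\n"
            f"        FILENAME = '{base}/{database}.ldf');")

sql = create_db_on_blob_sql("PRD", "sapstorage", "datafiles")
```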
AlwaysOn was changed so that you can run a secondary replica of an AlwaysOn Availability Group in Azure. This
function makes it easier to create a disaster recovery site for SAP landscapes: in this case, especially the database
part. For example, you can configure a primary replica plus a secondary replica on-premises, with these two replicas synchronizing data in a synchronous manner, and then create a third replica in Azure and add it to the AlwaysOn Availability Group of the two on-premises replicas. Note, though,
that in most cases the data synchronization between the on-premises replicas and the replica in Azure needs to be
asynchronous.
SAN Storage
The SAN is a high-speed network of storage devices that also connects those storage devices to servers. It
provides block-level storage that can be accessed by the applications running on any networked servers. SAN
storage devices can include tape libraries and disk-based devices such as RAID hardware.
Several different industry groups have developed standards related to SAN technology. One of the most prominent
is the Storage Networking Industry Association (SNIA), which promotes the Storage Management Initiative
Specification (SMI-S) as well as related standards. The Fibre Channel Industry Association (FCIA) also promotes
standards related to SAN and administers the SANmark Qualified Program. Fibre Channel is currently the most
widely used communication protocol for SANs, but it is by no means the only one. Some SANs rely on iSCSI
communication, a mapping of SCSI protocol over TCP/IP. SANs can also use ATA over Ethernet (AoE), Fibre
Channel over Ethernet (FCoE), ESCON over Fibre Channel, HyperSCSI, and some other protocols.
To set up a simple SAN, you need only three major components: a SAN switch, a storage device, and a server.
You also need cables to connect the various elements together and SAN management software. In most real-world
settings, a SAN will include many different switches, storage devices, and servers, and it will likely also include
routers, bridges, and gateways to extend the SAN over large areas and to connect to other parts of the data center
network. The SAN's topology will depend on its size and the needs of the organization.
The process of deploying a SAN requires several steps. First, you need to design your SAN, taking into account
your current needs and future scalability requirements. Second, you need to select a vendor or vendors to provide
the hardware and software you want as well as any related services. Next, you need to install the necessary
hardware and then install and configure the software for managing your SAN. Deploying a SAN is a complicated
process that often requires specialized knowledge and a great deal of planning, particularly if your SAN is very
large.
Design Topology
This section presents physical and logical high-level design considerations for Cisco UCS networking and
computing with SAN storage for SAP NetWeaver with Microsoft SQL Server 2014 deployments.
Hardware and Software Used in This Solution
Table 1 lists the software and hardware used for the NetWeaver with SQL Server database deployment.
Table 1. Software and Hardware Used for SAP NetWeaver with Microsoft SQL Server Database Deployment
Vendor Name Version or Model Description
Cisco Cisco UCS 6248UP 48-Port Fabric Interconnect
Cisco UCS Manager 2.2(5b) Fabric interconnect
Cisco Cisco UCS 5108 Blade Server Chassis Cisco UCS 5108 Chassis
Cisco Cisco UCS 2204XP Fabric Extender Cisco UCS 2204XP I/O module
Cisco Cisco Nexus 5548UP Switch Cisco NX-OS Software Unified-port switch
Cisco Cisco UCS B200 M4 Blade Server Cisco UCS B200 M4 Half-width blade server (database server)
Cisco Cisco UCS VIC 1340 Cisco UCS VIC 1340 mLOM virtual interface card
SAP SAP NetWeaver SAP NetWeaver 7.4 SAP ERP applications
Microsoft Microsoft Windows Microsoft Windows 2012 R2 Operating system
Microsoft Microsoft SQL Server Microsoft SQL Server 2014 AlwaysOn Database
Cisco UCS Networking for LAN and SAN
This section explains Cisco UCS networking and computing design considerations when deploying SAP
NetWeaver with a Microsoft SQL Server database in a SAN storage deployment. In this design, the Ethernet traffic
for the Microsoft Windows cluster is isolated from the regular management traffic, and the Fibre Channel traffic for
boot-from-SAN operations is isolated from the traffic for the application data network using the same Cisco UCS
infrastructure. This design is achieved by defining logical VSANs and VLANs with QoS and by tagging the vNICs with the appropriate class of service (CoS) to provide better data security.
Table 2 shows the LAN configuration, Table 3 shows the Fibre Channel configuration, and Table 4 shows the
configuration of the virtual port channel (vPC).
Table 2. LAN Configuration
Description Details
VLANs 3 VLANs
1. Public VLAN VLAN 760
2. Private VLAN VLAN 192
3. Cluster VLAN VLAN 191
Table 3. Fibre Channel Configuration
Description Details
VSANs 2 VSANs
1. FI-A VSAN 101
2. FI-B VSAN 102
Table 4. vPC Details
Network vPC ID VLAN ID
Fab-A vPC 33 VLANs 760, 192, and 191
Fab-B vPC 34 VLANs 760, 192, and 191
A pair of Cisco UCS 6248UP fabric interconnects carries both Fibre Channel storage traffic and Ethernet network
traffic from the blades with the help of the Cisco Nexus 5548UP Switch. The 10 Gigabit Ethernet network traffic
leaves the Cisco UCS fabrics through the Cisco Nexus 5548UP Switches, and the 8-Gbps Fibre Channel traffic
leaves the Cisco UCS fabric through the Cisco Nexus 5548UP Switches to the SAN storage. Larger enterprises
that adopt virtualization have much higher I/O requirements; Fibre Channel boot is well suited to handling these requirements effectively.
The fabric interconnect and the Cisco Nexus 5548UP Switch are clustered with the peer link between them to
provide high availability. Two vPCs are configured to provide public network, private network, and cluster network
paths for the blades to northbound switches. Each vPC has VLANs created for application network data, cluster
network data, and management data paths. For more information about vPC configuration on the Cisco Nexus
5548UP, see http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-
543563.html.
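The vPC layout in Table 4 can be expressed as NX-OS configuration. The sketch below generates the configuration lines for one side of the pair (vPC 33 on Fabric A carrying VLANs 760, 192, and 191); the member interface names are hypothetical and depend on your cabling.

```python
# Sketch: generating NX-OS configuration lines for one vPC-attached
# port channel, matching vPC 33 and the VLANs in Table 4. The member
# interface names below are hypothetical.
def vpc_port_channel_config(po_id, vlans, members):
    lines = [f"interface port-channel{po_id}",
             "  switchport mode trunk",
             f"  switchport trunk allowed vlan {','.join(map(str, vlans))}",
             f"  vpc {po_id}"]
    for intf in members:
        lines += [f"interface {intf}",
                  f"  channel-group {po_id} mode active"]
    return "\n".join(lines)

config = vpc_port_channel_config(33, [760, 192, 191],
                                 ["Ethernet1/31", "Ethernet1/32"])
```

The same function with vPC ID 34 would produce the Fabric B side, keeping both sides of the pair consistent.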
Eight links (four per chassis) go to Fabric Interconnect A (ports 1 through 8). Similarly, eight links go to Fabric
Interconnect B. Fabric Interconnect A links are used for the public network and cluster network traffic, and Fabric
Interconnect B links are used for the private network and cluster network traffic.
The advantage of the Cisco VIC over other network adapters is its powerful QoS capabilities. The VIC can provide
fair sharing of bandwidth across all the virtual adapters, and the policies that define how bandwidth is shared on the VIC are conveniently defined centrally in the systemwide Cisco UCS Manager.
Table 5 shows the vNICs and their requirements for this configuration. Note, though, that the number of vNICs is
not limited. You can add more vNICs according to the throughput requirements and the isolation requirements of
the network traffic.
Table 5. vNICs and VLANs
vNIC VLAN Purpose
vNIC1 760 Public network
vNIC2 192 Private network
vNIC3 191 Cluster network
Table 6 shows the vHBAs and their requirements for this configuration.
Table 6. vHBAs and VSANs
vHBA VSAN Purpose
vHBA0 101 Fibre Channel or SAN boot for Fabric A
vHBA1 102 Fibre Channel or SAN boot for Fabric B
vHBA2 101 Database logical unit number (LUN) or cluster shared storage
vHBA3 102 Database LUN or cluster shared storage
Cisco UCS Configuration
This document provides only high-level configuration details. For detailed configuration guidance, follow a Cisco
Validated Design.
High-Level Steps for Cisco UCS Configuration
Here are high-level steps for Cisco UCS configuration:
1. Configure fabric interconnects for chassis and blade discovery
a. Configure global policies
b. Configure server ports
2. Configure LAN and SAN on Cisco UCS Manager
a. Configure and enable Ethernet LAN uplink ports
b. Configure and enable Fibre Channel SAN uplink ports
c. Configure VLANs
d. Configure VSANs
3. Configure the UUID, MAC address, Node World Wide Name (NWWN) pool, and Port World Wide Name
(PWWN) pool
a. Create UUID pool
b. Create IP address pool and MAC address pool
c. Create NWWN pool and PWWN pool
4. Configure vNIC and vHBA template
a. Create vNIC templates
b. Create public vNIC template
c. Create private vNIC template
d. Create storage vNIC template
e. Create HBA templates
5. Configure Ethernet uplink port channels
6. Create server boot policy for SAN boot
Configure Fabric Interconnects for Blade Discovery
Cisco UCS 6248UP Fabric Interconnects are configured for redundancy, providing resiliency in the event of failures. The first step is to establish connectivity between the blades and the fabric interconnects.
1. Choose Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports and select
the ports that are connected to the chassis I/O module.
2. Right-click and choose Configure as Server Port.
3. Repeat the same steps for Fabric Interconnect B.
For the configuration described in this document, ports 1 through 8 on Fabric Interconnect A were selected and
configured as server ports. Ports 1 through 8 were also selected on Fabric Interconnect B and configured as server
ports.
Configure and Enable Ethernet LAN Uplink Ports
1. Choose Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports and select
the desired number of ports.
2. Right-click and choose Configure as Uplink Port.
3. Repeat the same steps for Fabric Interconnect B.
For this configuration, ports 31 and 32 on Fabric Interconnect A were selected and configured as Ethernet uplink
ports. Ports 31 and 32 were also selected on Fabric Interconnect B and configured as Ethernet uplink ports. You
will use these ports to create port channels later in this document.
Configure and Enable FCoE Uplink Ports
1. Choose Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports and select
the desired number of ports.
2. Right-click and choose Configure as FCoE Uplink Port.
3. Repeat the same steps on Fabric Interconnect B.
For this configuration, ports 33, 34, 35, and 36 on Fabric Interconnect A were selected and configured as FCoE uplink ports. Ports 33, 34, 35, and 36 were also selected on Fabric Interconnect B and configured as FCoE uplink ports.
Configure VLANs
1. In Cisco UCS Manager, choose LAN > LAN Cloud > VLAN.
2. Right-click and choose Create VLANs.
In this solution, you need to create three VLANs: one private network (VLAN 192), one public network (VLAN 760),
and one Windows cluster network (VLAN 191).
Figure 13 shows VLAN 760 created for the public network.
Note: Be sure to create all VLANs as global across both fabric interconnects. This way, VLAN identity is
maintained across the fabric interconnects in the event of vNIC failover.
Figure 13. Creating a VLAN for the Public Network
3. Create VLANs for public, private, and Windows cluster networks as follows:
● VLAN ID 760 for public network interfaces
● VLAN ID 192 for private network interfaces
● VLAN ID 191 for Windows cluster network interfaces
Configure VSANs
1. In Cisco UCS Manager, choose SAN > SAN Cloud > VSANs.
2. Right-click and choose Create VSAN (Figure 14).
In this configuration, VSANs 101 and 102 were created for SAN boot.
Figure 14. Configuring VSANs in Cisco UCS Manager
In this configuration, VSANs were created on both fabrics. For Fabric A, the VSAN ID is 101. For Fabric B, the
VSAN ID is 102.
Set Jumbo Frames in Both Cisco UCS Fabrics
To configure jumbo frames and enable quality of service in the Cisco UCS Fabric, follow these steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Choose LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. In the Best Effort row, enter 9216 in the box in the MTU column (Figure 15).
5. Click Save Changes.
6. Click OK.
Figure 15. Setting Jumbo Frames
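The same jumbo-frame change can be expressed as a UCS Manager XML API configuration request, which is how the GUI steps above would be scripted. The managed-object class and DN below follow UCS Manager conventions for the Best Effort QoS class but should be verified against your system; the session cookie is a placeholder.

```python
# Sketch: setting the Best Effort QoS class MTU to 9216 through the
# UCS Manager XML API. The class name (qosclassEthBE), its DN, and the
# cookie below are assumptions to verify against your UCS release.
import xml.etree.ElementTree as ET

def set_best_effort_mtu(cookie: str, mtu: int = 9216) -> bytes:
    req = ET.Element("configConfMo", cookie=cookie,
                     dn="fabric/lan/classes/class-best-effort")
    in_config = ET.SubElement(req, "inConfig")
    ET.SubElement(in_config, "qosclassEthBE",
                  dn="fabric/lan/classes/class-best-effort",
                  mtu=str(mtu))
    return ET.tostring(req)

request_xml = set_best_effort_mtu("example-cookie")
```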
Configure Ethernet Uplink Port Channels
This configuration uses two uplink ports from each fabric interconnect to the Cisco Nexus 5000 Series Switch.
However, you can have more than two uplink ports, depending on your bandwidth requirements. The
recommended approach is to configure one port channel on each fabric interconnect for throughput sharing unless
you have a business reason to create multiple port channels on each fabric interconnect.
1. Choose LAN > LAN Cloud > Fabric A > Port Channels.
2. Right-click and choose Create Port-Channel.
3. Select the desired Ethernet uplink ports configured earlier.
4. Repeat the same steps to create a port channel on Fabric B.
This configuration sets up ports 31 and 32 on Fabric A as port channel 33 and ports 31 and 32 on Fabric B as port channel 34 (Figures 16, 17, and 18).
Figure 16. Configuring Port Channels
Figure 17. Fabric A Ethernet Port-Channel Details
Figure 18. Port Channels on Fabric A and Fabric B
Create Local Disk Configuration Policy (Optional)
If the servers in the Cisco UCS environment do not have a local disk, you need a local disk configuration policy that specifies no local storage.
To create a local disk configuration policy, follow these steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Policies > root.
3. Right-click Local Disk Config Policies.
4. Choose Create Local Disk Configuration Policy.
5. Enter SAN-Boot as the local disk configuration policy name.
6. Change the mode to No Local Storage.
7. Click OK to create the local disk configuration policy.
8. Click OK.
Create Fibre Channel and SAN Boot Policies
This procedure applies to a Cisco UCS environment in which the storage FCoE ports are configured in the
following ways:
● Fibre Channel ports 0 and 1 on SAN storage controllers 1 and 2 are connected to Cisco Nexus 5548UP
Switch A.
● Fibre Channel ports 2 and 3 on SAN storage controllers 1 and 2 are connected to Cisco Nexus 5548UP
Switch B.
To create boot policies for the Cisco UCS environment, follow these steps:
9. In Cisco UCS Manager, click the Servers tab in the navigation pane.
10. Choose Policies > root.
11. Right-click Boot Policies.
12. Choose Create Boot Policy.
13. Enter Boot-FC as the name of the boot policy.
14. (Optional) Enter a description for the boot policy.
15. Keep the Reboot on Boot Order Change check box unchecked.
16. Expand the Local Devices drop-down menu and choose Add CD-ROM.
17. Expand the vHBAs drop-down menu and choose Add SAN Boot.
18. In the Add SAN Boot dialog box, enter vHBA0 in the vHBA field.
19. Make sure that the Primary radio button is selected as the SAN boot type.
20. Click OK to add the SAN boot initiator.
21. From the vHBA drop-down menu, choose Add SAN Boot Target.
22. Keep 0 as the value for Boot Target LUN.
23. Enter the PWWN for Fibre Channel port 0 on SAN storage controller 1.
24. Keep the Primary radio button selected as the SAN boot target type.
25. Click OK to add the SAN boot target.
26. From the vHBA drop-down menu, choose Add SAN Boot Target.
27. Keep 0 as the value for Boot Target LUN.
28. Enter the PWWN for Fibre Channel port 0 on SAN storage controller 2.
29. Click OK to add the SAN boot target.
30. From the vHBA drop-down menu, choose Add SAN Boot.
31. In the Add SAN Boot dialog box, enter vHBA1 in the vHBA box.
32. Verify that the SAN boot type is set to Secondary, and the Type option is unavailable.
33. Click OK to add the SAN boot initiator.
34. From the vHBA drop-down menu, choose Add SAN Boot Target.
35. Keep 0 as the value for Boot Target LUN.
36. Enter the PWWN for Fibre Channel port 1 on SAN storage controller 1.
37. Keep Primary as the SAN boot target type.
38. Click OK to add the SAN boot target.
39. From the vHBA drop-down menu, choose Add SAN Boot Target.
40. Keep 0 as the value for Boot Target LUN.
41. Enter the PWWN for Fibre Channel port 1 on SAN storage controller 2.
42. Click OK to add the SAN boot target.
43. Click OK, and then click OK again to create the boot policy.
After you create the Fibre Channel boot policies, you can view the boot order in the Cisco UCS Manager GUI. To
view the boot order, navigate to Servers > Policies > Boot Policies. Click Boot Policy Boot-FC to view the boot
order.
Create and Associate Service Profile with Cisco UCS Blades
Service profile templates enable policy-based server management that helps ensure consistent server resource
provisioning suitable to meet predefined workload needs.
Create Service Profile Template
1. In Cisco UCS Manager, choose Servers > Service Profile Templates > root.
2. Right-click root and choose Create Service Profile Template.
3. Enter the template name and select the UUID pool that you created earlier.
4. Select the Update Template radio button. Then click Next to continue.
5. On the Networking page, create one vNIC on each fabric and associate the vNICs with the VLAN policies
created earlier. Select Expert Mode and click Add to add one or more vNICs that the server should use to
connect to the LAN.
6. Create all the required vNICs for Fabrics A and B with appropriate VLANs and adapter policy.
7. After the vNICs are created, you need to create vHBAs. On the Storage page, select Expert Mode, choose the
WWNN pool created earlier, and click Add to create the vHBAs.
In this configuration, four vHBAs were created—vHBA0, vHBA1, vHBA2, and vHBA3—and associated with
appropriate VSANs.
8. This configuration uses the Cisco Nexus 5548UP for zoning, so skip the Zoning section and use the default
vNIC and vHBA placement.
Create Server Boot Policy
1. On the Server Boot Order page, choose the boot policy that you created in the previous section for SAN boot.
2. Click Next.
The remaining maintenance and assignment policies were left at their default settings in this configuration. However,
these settings can vary from site to site, depending on your workloads, best practices, and policies.
Create Service Profiles from Service Profile Templates
1. In Cisco UCS Manager, choose Servers > Service Profile Templates.
2. Right-click the service profile template and choose Create Service Profiles from Template.
For this configuration, four service profiles were created:
● SAP-APPS-1
● SAP-APPS-2
● SAP-SQL-DB-1
● SAP-SQL-DB-2
Two of the service profiles are used for SAP application servers, and two are used for database servers.
Enable Jumbo Frames and Create vPCs on Cisco Nexus 5548UP
You need to set global configurations and jumbo frames in QoS on both Cisco Nexus 5548UP Switch A and Cisco
Nexus 5548UP Switch B. Follow these steps on both switches:
1. Log in as the admin user.
2. Run the commands shown here.
conf t
spanning-tree port type network default
spanning-tree port type edge bpduguard default
port-channel load-balance ethernet source-dest-port
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
exit
class type network-qos class-fcoe
pause no-drop
mtu 2158
exit
exit
system qos
service-policy type network-qos jumbo
exit
copy run start
3. Configure the Cisco Nexus 5548UP for VLANs, VSANs, and vPCs.
When configuring the Cisco Nexus 5548UP with vPCs, verify that the status of all vPCs on the connected
Ethernet ports is "up" by running the commands shown in Figure 19 from the CLI on each Cisco Nexus
5548UP Switch.
Figure 19. Port-Channel Status on Cisco Nexus 5548UP
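The status check in Figure 19 can be reproduced from the CLI. As a sketch only, the vPC configuration and verification on each Cisco Nexus 5548UP might look like the following; the vPC domain ID, peer-keepalive addresses, and port-channel numbers are assumptions for illustration, not values from this deployment:

```
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.29.150.12 source 10.29.150.11
interface port-channel 10
  description vPC peer link
  switchport mode trunk
  vpc peer-link
interface port-channel 11
  description Uplink to Cisco UCS fabric interconnect A
  switchport mode trunk
  vpc 11
```

Running `show vpc brief` afterward should list the peer link and each vPC with a status of "up".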
4. Configure and create Fibre Channel zoning on the Cisco Nexus 5548UP Switches.
This configuration uses three separate zones on each switch, for the SAN-boot, data-file, and log-file LUNs,
isolating these three types of storage traffic from one another. Figure 20 shows an example zone configuration
for the boot LUN on Cisco Nexus 5548UP Switch A.
Figure 20. Fibre Channel Zone on Cisco Nexus 5548UP
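The zone configuration in Figure 20 follows standard Cisco NX-OS zoning syntax. The following is a sketch of a boot zone on Switch A; the VSAN number and PWWNs are placeholders, not values from this deployment:

```
vsan database
  vsan 101 name SAP-Fabric-A
zone name SAP-SQL-DB-1-Boot vsan 101
  member pwwn 20:00:00:25:b5:0a:00:01
  member pwwn 50:01:43:80:12:0a:00:01
  member pwwn 50:01:43:80:12:0a:00:03
zoneset name SAP-ZoneSet-A vsan 101
  member SAP-SQL-DB-1-Boot
zoneset activate name SAP-ZoneSet-A vsan 101
```

The first member is the vHBA0 initiator PWWN from the Cisco UCS WWPN pool; the other two are the storage controller target ports. Running `show zoneset active vsan 101` confirms the activation.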
Configure Cisco UCS Servers and Stateless Computing Using Fibre Channel Boot
Booting from the Fibre Channel SAN helps organizations move toward stateless computing, in which there is no
static binding between a physical server and the OS and applications that it is tasked to run. The OS is installed on
a SAN LUN, and boot–from–Fibre Channel policy is applied to the service profile template or the service profile. If
the service profile were to be moved to another server, the PWWNs of the HBAs and the boot-from-SAN policy
would move along with it. The new server then takes on exactly the same identity as the old server, demonstrating
the truly stateless nature of Cisco UCS blade servers.
Benefits of Boot from Fibre Channel
The main benefits of booting from the network are:
● Reduce server footprint: Boot from Fibre Channel SAN alleviates the need for each server to have its own
direct-attached disk, eliminating internal disks as potential points of failure. Thin diskless servers also take
up less facility space, require less power, and are generally less expensive because they have fewer
hardware components.
● Accelerate disaster and server-failure recovery: All the boot information and production data stored on a
local SAN can be replicated on a SAN at a remote disaster-recovery site. If a disaster destroys server
functions at the primary site, the remote site can take over with little downtime.
Recovery from server failures is simplified in a SAN environment. With the help of snapshots, mirrors of a
failed server can be recovered quickly by booting from the original copy of its image. As a result, boot from
SAN can greatly reduce the time required for server recovery.
● Increase availability: A typical data center is highly redundant, with redundant paths, redundant disks, and
redundant storage controllers. Storing OS images on disks in the SAN supports high availability and
eliminates the potential for mechanical failure of a local disk.
● Accelerate redeployment: Businesses that experience temporary high production workloads can take
advantage of SAN technologies to clone the boot image and distribute the image to multiple servers for
rapid deployment. Such servers may need to be in production for only hours or days and can readily be
removed when the production need has been met. Highly efficient deployment of boot images makes
temporary server use a cost-effective endeavor.
With boot from SAN, the image resides on a SAN LUN, and the server communicates with the SAN through an
HBA. The BIOS of the HBA contains the instructions that enable the server to find the boot disk. All Fibre Channel–
capable converged network adapter (CNA) cards supported on Cisco UCS B-Series Blade Servers support boot
from SAN.
After the power-on self-test (POST), the server firmware fetches the boot device designated in the BIOS settings.
After the hardware detects the boot device, it follows the regular boot process.
Summary of Boot-from-SAN Configuration
At this point, you have completed the following steps that are essential for configuring boot from SAN:
● SAN zoning is configured on the Cisco Nexus 5548UP Switches.
● The SAN storage array is configured with the boot LUNs.
● Cisco UCS boot-from-SAN policy is configured in the service profile.
You are now ready to install the OS. This document does not present the steps to install the OS in a Fibre Channel
boot configuration.
Microsoft Windows Server 2012 R2 for SAP NetWeaver 7.4 Configuration
For this solution, four servers are configured: two servers for the SQL Server database (one primary database and
one high-availability database) and two servers for the SAP application. The four Cisco UCS B200 M4 servers boot
from SAN to enable stateless computing, so that a server can be replaced or swapped using the unique
capabilities of Cisco UCS service profiles should the need arise. The OS boots from the Fibre Channel SAN, and
the SQL Server database and NetWeaver components are configured to use the Fibre Channel protocol on SAN storage.
Windows Server 2012 R2 is installed on each server.
Table 7 summarizes the hardware and software configuration details.
Table 7. Host Configuration
Component Details Description
Server 4 Cisco UCS B200 M4 Blade Servers 2 sockets with 15 cores each
Memory 256 GB Physical memory
Static vNIC1 Public access Management and public access; maximum transmission unit (MTU) size 1500
Static vNIC2 Private SAP application; MTU size 9000
Static vNIC3 Windows cluster Windows cluster communication; MTU size 9000
vHBA0 Boot LUN Boot from SAN storage
vHBA1 Boot LUN Boot from SAN storage
vHBA2 Data LUN, log LUN, and shared LUN SAN storage access for data file and log file
vHBA3 Data LUN, log LUN, and shared LUN SAN storage access for data file and log file
Note: The installation of Windows Server 2012 R2 is not covered in this document. This document assumes at
this point that Windows Server 2012 R2 is installed on all four Cisco UCS B200 M4 servers.
1. Configure the IP address for the management network, cluster network, and private network.
2. Configure the virtual IP address for the SAP application using the management network port.
The following example shows the IP address configuration used for the SAP NetWeaver installation.
3. Configure two Windows Servers for the SAP application with Windows cluster (AppCluster), and configure two
Windows Servers for the SQL Server database with Windows cluster (DBCluster).
Table 8 shows the Windows cluster configuration details.
Table 8. Microsoft Windows Cluster Configuration
Cisco UCS Chassis Blade Server Microsoft Windows Cluster Purpose
Chassis 1 Cisco UCS B200 M4 and Windows 2012 R2 AppCluster Application servers
Chassis 2
Chassis 1 Cisco UCS B200 M4 and Windows 2012 R2 DBCluster Database servers
Chassis 2
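The cluster creation in step 3 can be scripted with the Windows Server FailoverClusters PowerShell module. The following sketch shows one way to validate and build DBCluster; the node names and static cluster IP address are assumptions for illustration:

```powershell
# Validate the prospective cluster nodes before creating the cluster
Test-Cluster -Node SAP-SQL-DB-1, SAP-SQL-DB-2

# Create the database cluster with a static cluster IP address
New-Cluster -Name DBCluster -Node SAP-SQL-DB-1, SAP-SQL-DB-2 -StaticAddress 192.168.10.50
```

The same pattern, run with the application server nodes, creates AppCluster.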
4. During the installation of the SAP system, the main directories for the system are created automatically.
However, during the installation procedure, SAPinst prompts you to enter drive letters for the main
components of the system. Distribute the components across the disks in the system according to your requirements.
Table 9 provides an overview of the main SAP system components and directories and their purposes.
Table 9. SAP System Components and Directory Requirements
Components Directory Structure Description
SAP system \usr\sap and \usr\sap\trans SAP kernel and related files; SAP transport directory
Database management system (DBMS) \Program Files\Microsoft SQL Server SQL Server program files, including the master, msdb, and model database files
SAP database data files \<SAPSID>DATA0 through \<SAPSID>DATA<n> Database data files 0 to n
SAP database transaction log \<SAPSID>log<n> Database transaction log files
Temporary database (tempdb) files \Tempdb Tempdb data files
For the installation of the SAP system with Microsoft SQL Server, the central components of the database are the
SQL Server program files, tempdb files, SAP database data files, and SAP database transactional log files.
Table 10 shows the SAN storage array and the LUN requirements for the SAP system.
Table 10. SAN Storage Array and LUN Requirements
SAN Storage and RAID Group RAID Type LUN Purpose
Array 1, RAID group1 RAID 1 Boot1, Boot2, Boot3, and Boot4 Operating system, program files, Microsoft SQL Server, and tempdb
Array 1, RAID group2 RAID 1 sapmnt Shared storage for SAP installation
Array 1, RAID group 3 RAID 1 sapcd SAP binaries
Array 2, RAID group 4 RAID 5 DATALUN1 SAP database data files (primary database server)
Array 2, RAID group 5 RAID 1 LOGLUN1 SAP database log files (primary database server)
Array 3, RAID group 6 RAID 5 DATALUN2 SAP database data files (secondary database server)
Array 3, RAID group 7 RAID 1 LOGLUN2 SAP database log files (secondary database server)
Array 1, RAID group 8 RAID 1 DBClusterLUN Windows Server cluster for databases
Array 1, RAID group 9 RAID 1 APPClusterLUN Windows Server cluster for SAP application servers
Figures 21 and 22 show the Windows cluster configuration for SAP application servers and Microsoft SQL Server
database servers.
Figure 21. Microsoft Windows Cluster for SAP Application Servers
Figure 22. Microsoft Windows Cluster for SQL Server Database
Microsoft SQL Server 2014 Installation
For the configuration validation described here, SQL Server 2014 was installed on both database servers as a new
SQL Server failover cluster installation.
Install the Microsoft SQL Server 2014 Failover Cluster Instance on the Primary Database Server
1. From the mounted SQL Server installation DVD, launch the SQL Server installation wizard.
2. On the Installation page, click the “New SQL Server Failover cluster installation” link to launch the installation
of the SQL Server 2014 failover cluster (Figure 23).
Figure 23. Microsoft Windows SQL Server Installation: Select Failover Cluster Installation
The Install a SQL Server Failover Cluster wizard starts.
3. On the Product Key page, enter the product key and click Next. The configuration here uses the Evaluation
edition.
4. On the License Terms page, read and accept the license terms to install SQL Server. Then click Next.
5. On the Global Rules page, the setup procedure will automatically advance to the next window if no rule errors
are found.
6. The Microsoft Update page will appear next if the Microsoft Update check box is not checked in Control Panel
> All Control Panel Items > Windows Update > Change Settings. Select this check box on the Microsoft
Update page if you want to change the computer settings so that the latest updates are included when you
scan for Windows updates.
7. On the Product Updates page, the latest available SQL Server product updates are displayed. If no product
updates are discovered, SQL Server Setup does not display this page and instead auto-advances to the Install
Setup Files page.
8. On the Install Setup Files page, you can see the progress of the downloading, extraction, and installation of
the setup files. If an update for SQL Server Setup is found and is specified to be included, that update will also
be installed.
9. The Install Failover Cluster Rules page runs the rules that are essential for a successful SQL Server cluster
creation. Confirm that this step displays no errors and verify the warnings. Click Next.
10. On the Setup Role page, select the SQL Server Feature Installation radio button to install SQL Server engine
components. Then click Next.
11. On the Feature Selection page, choose the database engine services and the management tools (Figure 24).
Then click Next.
Figure 24. Microsoft Windows SQL Server Installation: Selecting Features to Install
12. The Feature Rules page displays the rule processing and automatically advances if all the rules succeed.
13. On the Instance Configuration page, specify the SQL Server network name and the instance ID. Then click
Next.
14. On the Cluster Resource Group page, choose the SQL Server cluster resource group name from the list or
create a new resource group. Then click Next.
15. On the Cluster Disk Selection page, choose the shared cluster disks from the list. Select Add Disks to the
Cluster to make these disks part of the cluster. Then click Next.
16. On the Cluster Network Configuration page, provide the public connectivity IP address details for the SQL
Server failover cluster. Then click Next.
17. On the Server Configuration page, specify the service accounts and collation configuration details. Then click
Next.
18. On the Database Engine Configuration page, specify the database engine authentication security mode,
administrators, and data directory details. On the Data Directory tab, make sure that the root directory and the
temporary database directory are set appropriately. Then click Next.
19. The Feature Configuration Rules page automatically runs the feature configuration rules. Verify the output and
click Next.
20. On the Ready to Install page, verify the installation options. Then click Install to start the SQL Server failover
cluster installation (Figure 25).
Figure 25. Microsoft Windows SQL Server Installation: Verifying Selections and Starting Installation
The installation of SQL Server 2014 on the primary database server is now complete.
Install the Microsoft SQL Server 2014 Failover Cluster Instance on the Secondary Database Server
1. Start the installation of SQL Server 2014 on the second node of the database cluster. Select the option Add
Node to SQL Server Failover Cluster.
2. Follow the steps in the wizard to complete the installation of SQL Server 2014 on the second node.
Verify the Cluster
You can verify the SQL Server failover cluster instance created in the preceding steps by choosing Failover
Cluster Manager > Cluster > Roles on either of the two database servers hosting the SQL Server instance
(Figure 26).
Figure 26. Verifying the Failover Cluster Instance
AlwaysOn Availability Groups are a central high-availability feature of SQL Server 2014. An availability group supports a failover
environment for a discrete set of user databases, known as availability databases, that fail over together. An
availability group supports a set of primary databases and one to eight sets of corresponding secondary
databases. Secondary databases are not backups. Continue to back up your databases and their transaction logs
on a regular basis.
The primary replica makes the primary databases available for read-write connections from clients. Also, a process
known as data synchronization occurs at the database level. The primary replica sends transaction log records for
each primary database to every secondary database. Every secondary replica caches the transaction log records
(hardens the log) and then applies them to its corresponding secondary database. Data synchronization occurs
between the primary database and each connected secondary database, independently of the other databases.
Therefore, a secondary database can be suspended or can fail without affecting other secondary databases, and a
primary database can be suspended or can fail without affecting other primary databases.
Optionally, you can configure one or more secondary replicas to support read-only access to secondary databases,
and you can configure any secondary replica to permit backups on secondary databases.
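As an illustration of the concepts above, an availability group with one synchronous-commit secondary replica can be created with Transact-SQL similar to the following sketch; the group name, database name, server names, and endpoint URLs are hypothetical and must match your environment:

```sql
CREATE AVAILABILITY GROUP [SAP_AG]
FOR DATABASE [SID]
REPLICA ON
    N'SAP-SQL-DB-1' WITH (
        ENDPOINT_URL = N'TCP://sap-sql-db-1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SAP-SQL-DB-2' WITH (
        ENDPOINT_URL = N'TCP://sap-sql-db-2.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);

-- On the secondary replica, join the availability group
-- (a replica hosted on a failover cluster instance supports only manual failover)
ALTER AVAILABILITY GROUP [SAP_AG] JOIN;
```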
Deploying AlwaysOn Availability Groups requires a Windows Server Failover Clustering, or WSFC, cluster. Each
availability replica of a given availability group must reside on a different node of the same WSFC cluster. The only
exception is that while being migrated to another WSFC cluster, an availability group can temporarily straddle two
clusters.
A WSFC resource group is created for every availability group that you create. The WSFC cluster monitors this
resource group to evaluate the health of the primary replica. The quorum for AlwaysOn Availability Groups is based
on all nodes in the WSFC cluster regardless of whether a given cluster node hosts any availability replicas. Unlike
with database mirroring, AlwaysOn Availability Groups do not have a witness role.
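The quorum and node membership described above can be inspected from PowerShell on any cluster node, for example:

```powershell
# Show the current quorum configuration of the WSFC cluster
Get-ClusterQuorum

# List all cluster nodes and their current state
Get-ClusterNode
```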
Figure 27 shows an availability group that contains one primary replica and one secondary replica. Up to eight
secondary replicas are supported, and up to two of them can run in synchronous-commit mode together with the
primary replica.
Figure 27. Availability Group with One Secondary Replica
After you install the SQL Server 2014 failover cluster, you can verify the active primary instance and secondary
instance by logging into the SQL Server Management Studio.
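In addition to SQL Server Management Studio, the current role of each replica can be checked with a query against the AlwaysOn dynamic management views, for example:

```sql
-- Show each availability replica and its current role and state
SELECT r.replica_server_name,
       rs.role_desc,
       rs.operational_state_desc,
       rs.connected_state_desc
FROM sys.availability_replicas AS r
JOIN sys.dm_hadr_availability_replica_states AS rs
    ON r.replica_id = rs.replica_id;
```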
Figure 28 shows the status of the SQL Server failover cluster instance.
Figure 28. Microsoft SQL Server 2014 Failover Cluster Instance Status
SAP NetWeaver 7.4 Installation
To validate this configuration, a distributed system was selected for the SAP system installation. In a distributed
installation, the SAP instances can reside on different hosts.
The following SAP instance components are required in an SAP distributed-system installation:
● Central services instance
● Database instance
● Primary application server instance
● Additional application server instance
This configuration used one server for the central services instance and primary application server instance, two
servers for the database instance (one for the active database and one for the standby database), and one server
for an additional application server instance.
Table 11 shows the use of each server for this validation.
Table 11. Servers and Purpose
Blade Server Type Operating System Purpose for SAP System
Cisco UCS B200 M4 Windows 2012 R2 ● SAP central services instance
● Primary application server instance
Cisco UCS B200 M4 Windows 2012 R2 ● Database instance
Cisco UCS B200 M4 Windows 2012 R2 ● Standby database instance
Cisco UCS B200 M4 Windows 2012 R2 ● Additional application server instance
1. Start the installation of NetWeaver 7.4 on the primary application server.
Follow the SAP guidelines for the NetWeaver installation for the ABAP stack on Windows Server 2012 R2.
The following steps give a high-level overview of the process of installing the SAP application for the ABAP
stack on Windows Server 2012 R2.
a. Download the NetWeaver software from the SAP marketplace and extract the downloaded files using
sapcar. (This configuration used sapcd to store all extracted binaries, such as software provisioning, export,
kernel, and SQL Server database files.)
b. Start a PowerShell command window and change the directory to \sapcd\sw_provisioning on the SAP
primary application server.
c. Start the installation by entering commands like the following example as a user with administrative
privileges on SAP application server 1:
# cd c:\ClusterStorage\sapcd\sw_provisioning
# ./sapinst SAPINST_USE_HOSTNAME=<hostname of application server>
2. Check the prerequisites for the NetWeaver installation on Windows Server 2012 R2 (Figure 29).
Figure 29. Prerequisites Check for SAP NetWeaver Installation
3. Check the prerequisites for the central services instance for ABAP on the primary application server (Figure
30). Then check the prerequisites for all other servers with the respective SAP application services before
performing the actual installation.
Figure 30. Prerequisite Check for Central Services Instance
4. After verifying that Windows Server 2012 R2 meets the prerequisites, start the installation of the ABAP central
services instance (Figure 31).
Figure 31. Installing the Central Services Instance
5. Click Next to continue the installation.
6. Select Custom mode for the installation. This mode allows you to enter all the parameters, and you can still
change any of them on the parameter summary screen at the end.
7. After the installation of the central services instance completes successfully, start the installation of the
database instance on the primary database server (Figure 32).
Figure 32. Installing the Database Instance
8. During the database installation, you are prompted to provide the database service name and instance
name used to populate the database schema for NetWeaver. For this validation, a virtual name was created for
SQL Server 2014 after the SQL Server 2014 installation and during the creation of the failover group (Figure 33).
Figure 33. Microsoft SQL Server 2014 Database Instance
During the database instance installation, the ABAP schema will be created and the database will be
populated.
9. After the database instance is successfully installed on the primary database server, start the installation of the
primary application server instance on the primary application server (Figure 34).
Figure 34. Installing the Primary Application Server Instance
10. After you successfully install the primary application server instance, start the installation of the additional
application server instance (Figure 35).
Figure 35. Installing the Additional Application Server Instance
This step completes the installation of NetWeaver in a distributed system. This configuration provides high
availability for the application server instance and the database server instance.
11. After you successfully install NetWeaver 7.4, use the SAP management console (sapmmc) to verify the
services running on the application server and database server. Figure 36 shows the processes running on the
SAP application server.
Figure 36. Verifying SAP Services Using the SAP Management Console
This validation verified the SQL Server 2014 failover cluster. Performance on the application server was not
affected during failover and failback of the primary database server instance.
Destructive and Hardware Failover Tests
The goal of destructive and hardware failover tests is to verify that the reference architecture can withstand
common failures that occur as a result of unexpected crashes, hardware failures, and human errors. Testing
consisted of many hardware-, software- (process kills), and OS-specific failures that simulate real-world scenarios
under stress conditions. The destructive testing also demonstrated the unique failover capabilities of the Cisco VIC
1340 adapter. Table 12 presents some of these test cases.
Table 12. Destructive and Hardware Failover Testing
Scenario Test Status
Test 1: Chassis 1 and I/O Module 2 Link Failure test
Run the system on full workload.
Disconnect I/O Module 2 from the first chassis and reconnect the links after 5 minutes.
Network traffic from I/O Module 2 failed over to I/O Module 1 without any disruption.
Test 2: Cisco UCS 6248 Fabric A and Fabric B Failure test
Run the system on full load as in Test 1.
Reboot Fabric B, let it rejoin the cluster, and then reboot Fabric A.
Fabric failover did not cause any disruption in network and storage traffic.
Test 3: Cisco Nexus 5548UP Switch A and Switch B Failure test
Run the system on full workload.
Reboot Cisco Nexus 5548UP Switch A, wait for 5 minutes, reconnect the switch, and repeat for Cisco Nexus 5548UP Switch B.
Network and storage traffic was not disrupted.
Conclusion
Cisco UCS is built on leading computing, networking, and infrastructure software components and supports access
to storage. With a Cisco UCS solution, customers can build a secure, integrated, and optimized stack that includes
computing, networking, and storage resources that are sized, configured, and deployed as a fully tested unit
running industry-standard applications such as SAP NetWeaver on a Microsoft SQL Server 2014 database using
block storage.
The following features make the combination of Cisco UCS with block storage so powerful for SAP NetWeaver
running on a SQL Server 2014 database:
● Cisco UCS stateless computing architecture, provided by the Cisco UCS service profiles, allows fast,
nondisruptive workload changes to be implemented simply and transparently across the integrated Cisco
UCS infrastructure and Cisco x86 servers.
● Cisco UCS in combination with a highly scalable external SAN storage platform provides an excellent,
scalable, and highly available foundation for NetWeaver running on a SQL Server 2014 database.
● Cisco Unified Fabric, with its focus on secure IP networks as the standard interconnect for server and data
management solutions, provides the underlying support for these features.
● Cisco UCS Manager provides centralized, simplified management of infrastructure resources, including
end-to-end automation.
● The solution prioritizes network throughput and reduces network latency through the use of the unique Cisco
UCS QoS and CoS features.
Customers can achieve dramatic cost savings when using Ethernet-based products and can deploy any application
on a scalable, shared IT infrastructure built on Cisco UCS technologies. The solution provides a flexible
infrastructure platform composed of presized computing, networking, and server components. It is designed to
ease your IT transformation and operational challenges with high efficiency and little risk.
For More Information
● Cisco UCS: http://www.cisco.com/en/US/netsol/ns944/index.html
● Cisco Nexus: http://www.cisco.com/en/US/products/ps9441/Products_Sub_Category_Home.html
● Cisco Nexus 5000 Series Switches and NX-OS Software configuration guide:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurati
onGuide.html
● SAP NetWeaver installation guide: http://help.sap.com/nw74/
● SAP NetWeaver software: http://help.sap.com/nw_platform
● Microsoft SQL Server 2014 for SAP NetWeaver:
http://blogs.msdn.com/b/saponsqlserver/archive/2015/03/11/sql-server-2014-now-released-by-sap-for-sap-
netweaver-7-xx.aspx
● Microsoft SQL Server AlwaysOn: https://msdn.microsoft.com/en-us/library/ff877884.aspx
● SAP NetWeaver configuration parameters for Microsoft SQL Server 2014: https://websmp130.sap-
ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D303031
2669765F7361706E6F7465735F6E756D6265723D3139383637373526
● SAP NetWeaver on Microsoft Windows Server:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/405f3bee-0c69-2a10-7b84-
af7c1cb43ce6?QuickLink=index&overridelayout=true&19825569040023
Printed in USA C11-736212-00 11/15