© 2010 Cisco and/or its affiliates. All rights reserved.
CIN Technology Workshop: Collaborate Anywhere, Anytime, on Any Device with Cisco UC on UCS – Overview
May 2012
Device Diversity is here to stay
User Wants
• Consistent experience on multiple devices
• Seamless transitions between devices
• Separation of work and personal data
• Keep up with tech and social trends
IT Wants
• Proactive adoption of consumer/mobile devices
• Embrace BYOD without sacrificing security, management, and business standards
• Lower organizational costs
• Improved agility
Evolution of Cisco UC platforms:
• 1990s – Legacy Voice
• ~2000 – VoIP, ICM (Network Services)
• 2005 – CUCM 3.x/4.x (special-purpose Enhancement Server)
• 2010 – Cisco UC 5.0+ (Appliance)
• Future – Cisco UC 8.0(2)+ (Virtualization)
• Business Agility: increasing architectural flexibility while decreasing barriers to rapidly deploy/tailor
• Footprint (Space, Energy, Cabling): increasing "miniaturization," consolidation & avoidance while increasing efficiency
• Investment Leverage: no forklifts – network convergence, commodity servers/storage, virtualization
• Business Continuity: increasing security, resiliency, and options for high availability / disaster recovery
• Management Simplification: increasing familiarity, centralization, scale, and efficiency
Data Center 1.0 with Traditional Communications (Mainframe + PBX):
• Excessively centralized, with duplicate networks for voice, video, mobility, and data – too many networks
• Inflexible
• High TCO
Data Center 2.0 with Unified Communications (servers and appliances on a converged network):
• Excessively distributed, with duplicate networks – too many fabrics
• Sprawl
• Medium to high TCO
Data Center 3.0 with Virtualized Communications (virtualized compute/storage on a unified fabric & network – "The Network"):
• Flexible operations on a single network
• Agility + governance
• Low TCO
Same TCO drivers throughout: technology, facilities, management burden.
Deployment options:
• Private cloud – e.g., Business Edition 6000
• Public, provider, or hybrid cloud – e.g., Hosted Collaboration Solution
MCS Appliance (configuration-based, highly prescribed):
• Only 1 app per server
• IBM x3650-M2 or x3250-M3
• Only one CPU model supported (single 4-core)
• Fixed RAM
• Exact match at part-number level for adapters
• 1 storage option only
• Cisco owns app performance
UC on UCS Tested Reference Configuration (TRC) (configuration-based, prescribed):
• Typical 4-20 VMs per server
• UCS B200/B230/B440 M2, C210/C200 M2
• E5640, E5506, E7-2870/4870 (4-core or 10-core)
• Fixed RAM
• Blades: you pick the adapter; rack: exact match at part-number level for adapters
• Pick from a few storage options
• Cisco owns app performance
UC on UCS Specs-based (a few restrictions):
• Typical 4-20 VMs per server
• Any UCS that satisfies policy
• 56xx/75xx CPUs at 2.53+ GHz, E7-xxxx at 2.4+ GHz
• RAM depends on VMs
• Regardless of server, you pick the adapters
• Any storage – but you design it
• Customer owns app performance
Blade:
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
Rack-mount:
• C200 M2: 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
• Target Customer Profiles
  - "Ready, willing, able" to support servers, VMware, storage
  - Ready to move off appliance-oriented operations
  - UCS B-series for centralized, medium to high server count
  - UCS C-series for low to medium server count, or highly distributed
  - 3rd-party server options for investment leverage
• Platform Support
  - Virtual machine templates – defined by each UC app
  - Application VM co-residency – "mix & match UC with UC" for most UC apps
  - Requires VMware vSphere 4.x/5.x – ESXi only; feature support depends on app; vCenter is also required for specs-based
  - UCS, HP, and IBM server hardware options
  - DAS, SAN, NAS, and diskless storage options
  - Various NIC, HBA, CNA, and Cisco VIC network options (1Gb through 10Gb)
UC on UCS Solution Components (Cisco Unified Communications 8.6(1))
"UC on UCS" C-Series:
• UCS C-Series general-purpose rack-mount servers
• LAN, PSTN
• Optional shared storage (SAN)
"UC on UCS" B-Series:
• UCS B-Series blade servers in UCS 5100 blade server chassis with UCS 2100 fabric extenders
• UCS 6100 fabric interconnect switches
• LAN, PSTN
• Required shared storage (SAN)
Cisco Hosted Collaboration Solution – combining virtualization, management & architecture elements into a comprehensive platform:
• Unified Communication System 8.0: voice, video, presence, mobility, customer care; available in flexible deployment models; delivers an unparalleled user experience
• HCS Management System: zero-touch fulfillment & provisioning with self-service; service assurance for enabling high quality of service; coordinated management and integration across domains
• Optimized Virtualization Platform (UC on UCS B-series): resource-optimized for reduced hardware capex; installation & upgrade automation; provides flexibility, customization & additional redundancy
• Scalable System Architecture: customer aggregation & SIP trunking; SLA enablement, security, scalability; cloud-based SaaS integration
Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence, and contact center.
Infrastructure Solutions: Data Center "building blocks" – Vblock, FlexPod (UC on FlexPod planned, not committed).
Supported Hardware: UC on UCS (Tested Reference Configuration and Specs-based)
Blade:
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
Rack-mount:
• C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
• C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
(Note: UCS Express not supported.)
Specs-based policy (UCS B/C, UCS Express, other 3rd parties):
• Allowed server vendors: must be on the VMware HCL; server model and I/O devices on www.vmware.com/go/hcl; all parts must be supported by the server vendor; no hardware oversubscription allowed for UC; VMware vCenter is REQUIRED
• Processor: Intel Xeon 56xx/75xx at 2.53+ GHz or E7-xxxx at 2.4+ GHz; CPU support varies by UC app; required physical core count = Σ (UC VMs' vCPU) (+1 if Unity Connection)
• Memory: capacity = Σ (UC VMs' vRAM) + 2 GB for VMware; follow the server vendor for module density/configuration
• Storage: SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID; storage capacity = Σ (UC VMs' vDisk) + VMware/RAID overhead; storage performance = Σ (UC VM IOPS)
• Adapters (e.g., LAN/storage access): must be on the VMware HCL and supported by the server vendor; e.g., 1GbE/10GbE NIC, ≥2Gb FC HBA, 10Gb CNA, or VIC
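The specs-based formulas above lend themselves to a quick calculator. A minimal sketch (the VM figures below are hypothetical, not taken from the OVA tables):

```python
def specs_based_sizing(vms, has_unity_connection=False):
    """Apply the specs-based sizing formulas from the policy above.

    Each VM is a dict with vcpu, vram_gb, vdisk_gb, iops.
    Returns (physical cores, RAM GB, disk GB before overhead, IOPS).
    """
    cores = sum(vm["vcpu"] for vm in vms) + (1 if has_unity_connection else 0)
    ram_gb = sum(vm["vram_gb"] for vm in vms) + 2   # +2 GB for VMware
    disk_gb = sum(vm["vdisk_gb"] for vm in vms)     # before VMware/RAID overhead
    iops = sum(vm["iops"] for vm in vms)
    return cores, ram_gb, disk_gb, iops

# Hypothetical cluster: 2 CUCM subscribers plus 1 Unity Connection VM
vms = [
    {"vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},
    {"vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},
    {"vcpu": 2, "vram_gb": 4, "vdisk_gb": 200, "iops": 150},
]
print(specs_based_sizing(vms, has_unity_connection=True))  # (7, 18, 520, 550)
```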
CPU generations and UC on UCS certifications:
• Nehalem-EP (55xx; 4 cores; 2.0-2.9 GHz). Example UCS models: B200/250 M1, C210 M1, C250 M1/M2, C200 M2. Certifications: TRCs for B200 M1 and C210 M1 (E5540); TRC for C200 M2 (E5506)
• Nehalem-EX (65xx, 75xx; 4/6/8 cores; 1.7-2.7 GHz). Example models: B230 M1, B440 M1, C460 M1. Certifications: specs-based (75xx at 2.53+ GHz)
• Westmere-EP (56xx; 4/6 cores; 1.9-3.33 GHz). Example models: B200/250 M2, C210 M2, C250 M2. Certifications: TRCs for B200 M2 and C210 M2 (E5640); specs-based (56xx at 2.53+ GHz)
• Westmere-EX (E7-28xx/48xx/88xx; 6/8/10 cores; 1.7-2.7 GHz). Example models: B230 M2, B440 M2, C260 M2, C460 M2. Certifications: TRCs for B230 M2 and B440 M2 (E7-2870/4870); specs-based (E7 at 2.4+ GHz)
• Romley-EP (E5-26xx; 4/6/8 cores; 1-3 GHz). Example models: B200 M3, C220 M3, C240 M3. Not currently supported by UC
Consolidation example: 19 UC app copies (a mix of "small," "medium," and "large" VMs) can run as:
• 19 MCS appliances, or
• 19 UC VMs with 40 total vCPUs on:
  - 5 virtualized servers (dual 4-core B200 M2 TRC), or
  - 4 virtualized servers (dual 6-core B200 M2 specs-based), or
  - 2 virtualized servers (dual 10-core B230 M2 TRC)
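The consolidation above is ceiling division of total vCPUs by physical cores per host (the UC policy maps one vCPU to one physical core). A sketch:

```python
import math

def servers_needed(total_vcpus, sockets, cores_per_socket):
    """Hosts required for a pool of UC vCPUs at one vCPU per physical core."""
    return math.ceil(total_vcpus / (sockets * cores_per_socket))

total_vcpus = 40  # the 19 UC VMs from the example above
print(servers_needed(total_vcpus, 2, 4))   # dual 4-core B200 M2 TRC  -> 5
print(servers_needed(total_vcpus, 2, 6))   # dual 6-core specs-based  -> 4
print(servers_needed(total_vcpus, 2, 10))  # dual 10-core B230 M2 TRC -> 2
```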
UC on UCS products (TRC and specs-based support):
• Supported on both TRC and specs-based: Unified Communications Manager, Unity Connection, Unified Presence, Unified Contact Center Express, Cisco Emergency Responder, Session Manager Edition, InterCompany Media Engine, Unified Attendant Consoles, Unity, Unified Workforce Optimization (WFO), Unified Intelligence Center, Unified Contact Center Mgmt Portal, SocialMiner, Unified Email/Web Interaction Mgr, Prime UCMS (OM/PM/SM/SSM)
• Business Edition 6000: C200 M2 TRC only; not supported specs-based
• Planned: Unified Contact Center Enterprise, Unified Customer Voice Portal, MediaSense, Finesse
• Planned on both TRC and specs-based: WebEx premise, Unified MeetingPlace, TMS/CTMS, VCS
Why virtualize your UC?
• Lower TCO: reduce servers/storage; reduced power, cooling, cabling, space, weight; investment leverage & easy server repurposing
• Business agility: efficient app expansion; accelerated UC rollouts; better business continuity; portable/mobile VMs
Why virtualize on UCS?
• Additional savings and increased agility: infrastructure simplification (cables, adapters, switching); converge communications and DC networks – "wire once"; consolidated system mgmt; easier service provisioning
• End-to-end solution with single support: Tested Reference Configurations and Cisco options; Vblocks and VCE Vblock options
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% fewer)
• Reduced maintenance/support costs (~20%)
Example: 5000 users; dial tone, voicemail, and presence; 10% are contact center agents; 11 non-virtualized rack servers required for UC, more for other business apps.
CAPEX/OPEX
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-series
Other benefits
• Lower initial investment
• Simple entry/migration to virtualized UC – data center expertise not required unless using the SAN option
Example: 5000 users; dial tone, voicemail, and presence; 10% are contact center agents; 11 non-virtualized rack servers required for UC, more for other business apps.
TCO comparison (chart): CAPEX and OPEX in $K (y-axis $0-$3,000) vs. appliance or VM count (2, 4, 8, 10, 12, 20, 50, 100) for the MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC, and UCS B230 M2 TRC in a dual-site scenario (B230 M2 vs. B200 M2; C210 M2 vs. MCS 7845; SAN/LAN and PSTN at each site).
Assumptions:
• UC only; no other business applications included; "spare" or "hot standby" hosts not included
• "Server" is either an MCS appliance or a 2-vcpu-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure – redundant sites, switching, blade chassis, rack/blade servers
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1, and VMware Enterprise Plus Edition
Current Offers Technical Overview
E.g., 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX) – or 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all UC app instances (VMs for Unified CM Pub, Unified CM Sub, Unity Connection, and Unified CCX).
Tested Reference Configurations (server model / TRC / CPU / RAM / storage / adapters):
• UCS B200 M2 Blade Server – TRC 1: dual E5640 (8 physical cores total), 48 GB, DAS (RAID1) for VMware + FC SAN for UC apps, Cisco VIC. TRC 2: dual E5640, 48 GB, diskless, Cisco VIC
• UCS B230 M2 Blade Server – TRC 1: dual E7-2870 (20 physical cores total), 128 GB, diskless, Cisco VIC
• UCS B440 M2 Blade Server – TRC 1: dual E7-4870 (40 physical cores total), 256 GB, diskless, Cisco VIC
• UCS C260 M2 Rack-Mount Server – TRC 1: dual E7-2870 (20 physical cores total), 128 GB, DAS (2x RAID5), 1GbE NIC
• UCS C210 M2 General-Purpose Rack-Mount Server – TRC 1: dual E5640 (8 physical cores total), 48 GB, DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps, 1GbE NIC. TRC 2: dual E5640, 48 GB, DAS (2 disks RAID1) for VMware + FC SAN for UC apps, 1GbE NIC and 4G FC HBA. TRC 3: dual E5640, 48 GB, diskless, 1GbE NIC and 4G FC HBA
• UCS C200 M2 General-Purpose Rack-Mount Server – TRC 1: dual E5506 (8 physical cores total), 24 GB, DAS (4 disks RAID10) for VMware + UC apps, 1GbE NIC
VM requirements by UC app and scale ("users"*); each vCPU usually requires a 2.53+ GHz core:
Unified CM:
• 1000 users: 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk – UCS C200 or BE6K only
• 2500 users: 1 vCPU, 2.25 GB vRAM, 1 x 80 GB vDisk – not for use with C200/BE6K
• 7500 users: 2 vCPU, 6 GB vRAM, 2 x 80 GB vDisk
• 10000 users: 4 vCPU, 6 GB vRAM, 2 x 80 GB vDisk
Unity Connection:
• 500 users: 1 vCPU, 2 GB, 1 x 160 GB
• 1000 users: 1 vCPU, 4 GB, 1 x 160 GB
• 5000 users: 2 vCPU, 4 GB, 1 x 200 GB
• 10000 users: 4 vCPU, 4 GB, 2 x 146 GB – not for use with C200/BE6K
• 20000 users: 7 vCPU, 8 GB, 2 x 300 GB
Unified Presence:
• 1000 users: 1 vCPU, 2 GB, 1 x 80 GB
• 2500 users: 2 vCPU, 4 GB, 1 x 80 GB – not for use with C200/BE6K
• 5000 users: 4 vCPU, 4 GB, 2 x 80 GB
Unified CCX:
• 100 users: 2 vCPU, 4 GB, 2 x 146 GB – UCS C200 or BE6K only
• 300 users: 2 vCPU, 4 GB, 2 x 146 GB – not for use with C200/BE6K
• 400 users: 4 vCPU, 8 GB, 2 x 146 GB
Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for latest.
* I.e., user count for particular values of BHCA, trace level, encryption, CTI, and other factors. Actual supportable user count may vary by deployment.
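One way to use the table above is to encode the rows and pick the smallest template that covers a target user count. A sketch with the Unified CM rows transcribed from the table (illustrative only; check the docwiki for current OVAs, and note that per-hardware restrictions such as "C200/BE6K only" are not modeled here):

```python
# (users, vcpu, vram_gb, vdisk) rows for Unified CM, from the table above
CUCM_TEMPLATES = [
    (1000, 2, 4, "1 x 80 GB"),     # UCS C200 or BE6K only
    (2500, 1, 2.25, "1 x 80 GB"),  # not for use with C200/BE6K
    (7500, 2, 6, "2 x 80 GB"),
    (10000, 4, 6, "2 x 80 GB"),
]

def pick_template(users, templates=CUCM_TEMPLATES):
    """Smallest template whose rated scale covers the requested user count."""
    for row in sorted(templates):
        if users <= row[0]:
            return row
    raise ValueError("no single-VM template covers %d users" % users)

print(pick_template(6000))  # (7500, 2, 6, '2 x 80 GB')
```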
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server: SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices: DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server: SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  - But note some UC apps restrict this, e.g., BE6K, CUCCE. See their rules on their docwiki "child pages"
  - NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
  - Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps, e.g., N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
UC VMs may not share a server with VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus (CTDP) apps, or unaffiliated 3rd-party apps. Different blades in the same chassis: OK. Same blade, same chassis: not OK.
• App to HW: some apps, e.g., CUCCE, don't allow any of their OVAs on certain TRCs. See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU. See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA%2FOVF_Templates%29
Why? Usually due to CPU model/speed dependencies. For example:
• C200 M2 TRC1 (E5506, 2.13 GHz): UCM 1K OVA only; the UCM 2.5K, 7.5K, and 10K OVAs are not allowed
• C200 M2 specs-based (56xx at 2.53+ GHz) and B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx at 2.53+ GHz on specs-based): UCM 1K, 2.5K, 7.5K, and 10K OVAs allowed
VM placement options per host size ("small," "medium," "large," and "jumbo" VMs):
• Dual-socket 4-core host (e.g., UCS C210 M2 TRC1 with dual E5640, 8 physical cores): jumbo VM + 1 core reserved, or mixed sizes + 1 core reserved, or mixed sizes, or 2:1 large (e.g., UCM 10K), or 4:1 medium (e.g., UCM 7.5K), or 8:1 small (e.g., UCM 2.5K)
• Dual-socket 6-core host (e.g., UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed, 12 physical cores): mixed sizes + 1 core reserved, or mixed sizes, or 3:1 large (e.g., UCM 10K), or 6:1 medium (e.g., UCM 7.5K), or 12:1 small (e.g., UCM 2.5K)
Virtual Software Switch Options (VM vNICs attach to the software switch in the ESXi hypervisor; vmNICs and the CNA carry FCoE toward the LAN and SAN, e.g., on a UCS B200):
• VMware vSwitch: host-based (local); IEEE 802.1Q VLAN tagging; VLANs only visible to the local ESXi host; EtherChannel; no VM needed
• VMware dvSwitch: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; no VM needed
• Cisco Nexus 1KV: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; Virtual PortChannel; QoS marking (DSCP/CoS); ACL; SPAN; RADIUS/TACACS+; VM needed for the VSM. Strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series
Nexus 1000V (a VSM plus a VEM on each ESXi host, connected to the physical switch):
• Cisco software switch in the hypervisor
• Familiar network and server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS. CUCM marks traffic based on L3 DSCP values; the pSwitch (CAT6K, etc.) can map L3 DSCP to L2 CoS if needed. E.g., a CTL packet leaves CUCM with L2 CoS 0 and L3 CS3; after mapping it carries L2 CoS 3 and L3 CS3.

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch (e.g., CAT6K)
• Default QoS settings on UCS: FCoE ("match cos 3") – no-drop policy; rest ("match any") – best-effort queue
• The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without N1Kv:
• All traffic types from the virtual UC app will get the Platinum CoS value
• Non-UC applications get best-effort class, which might not be acceptable
Compute layer and SAN/storage layer (Cisco SRND): UCS 6100 fabric interconnects and UCS 5100 blade servers (with Nexus 1000V) connect over 4x10GE links to Cisco SAN switches (FC) and FC storage (SP-A/SP-B, 3rd-party layer).
3rd-party SAN example:
• CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
• Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
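The throughput figures follow from IOPS x block size x 8 bits. A sketch (assuming 1 KB = 1000 bytes, which matches the slide's round numbers):

```python
def iops_to_mbps(iops, block_kb=4):
    """Convert an IOPS rate at a given block size to megabits per second."""
    return iops * block_kb * 1000 * 8 / 1_000_000

print(iops_to_mbps(200))    # one CUCM VM at ~200 IOPS   -> 6.4 Mbps
print(iops_to_mbps(14000))  # one controller, 14,000 IOPS -> 448.0 Mbps
# 448 Mbps fits the controller's 600 Mbps budget and sits well within a 4 Gbps FC link
```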
• All UC deployment models are supported
  - No change in the current deployment models
  - Base deployment models – single site, centralized call processing, etc. – are not changing
• VM layout on a blade and/or chassis
  - Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  - No rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same
  - They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  - Redundancy rules are the same
  - Clustering over the WAN / latency numbers
  - Mega Cluster supported in 8.5
  - Determine quantity/role of nodes
  - For HA: no design checks validating proper placement of primary and secondary servers
  - CUCCE private network requirement
• Mixed clusters of HP, IBM, and UCS are supported
  - Subject to "common sense" rules – e.g., don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server
  - MOH live audio stream
  - Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
Upgrade paths from CUCM:
System Release | To Unified CM 8.5 | To UC System 8.5
4.x | multi-hop thru 6.1(x)/7.1(x) | multi-hop thru 6.1(3)
5.1(2) | multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3) | 2-hop thru 7.1(3) | N/A
6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
6.1(2) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(3) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(4) | single hop | N/A
6.1(5) | single hop | N/A
7.0(1) | 2-hop thru 7.1(x) | 2-hop
7.1(2) | 2-hop thru 7.1(x) | 2-hop
7.1(3) | single hop | single hop; multi-stages/BWC supported
7.1(5) | single hop | N/A
8.0(1) | single hop | single hop; multi-stages/BWC supported
8.0(2), 8.0(3) | single hop | N/A
VMware feature support
• VMware feature support varies by application; some features are supported with caveats, some partially
• For example, for Clone Virtual Machine, "Y (C)" means the VM has to be powered off
• For vMotion, "Y (C)" means vMotion is supported for live traffic – calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Feature | CUCM | CUC | CUP | CCX
Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
VMware vMotion | Y (C) | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
Boot from SAN | Y (C) | Y (C) | Y (C) | Y (C)
VMware DRS | No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable; the UC apps' redundancy rules are the same.
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact
  - Primary/secondary on different blades, chassis, sites
  - On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation (Server Hardware | Shared Storage | VMware | Application):
• UC on UCS Tested Reference Configuration: Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
• UC on UCS specs-based (including the Vblock option): Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
• 3rd-party VMware specs-based (HP, IBM): 3rd-party | 3rd-party | 3rd-party | Cisco
• MCS 7800 appliances: Cisco | N/A | N/A | Cisco
• Customer-provided MCS 7800 equivalent: 3rd-party | N/A | N/A | Cisco
Customer Example – Primary Data Center
OLD: 62 physical servers (EU + HQ clusters); software versions 6.1(5) & 8.5(1); Unity Connection 4.2(1); CER 2.0/7.0. Nodes included CM Pub, CM Subs, MOH, TFTP, CER, and Unity Connection servers.
NEW: approx. 14 hardware nodes; software 8.5(1); Unity Connection 8.5(1) – 3 pairs, virtualized; CER 8.6, virtualized.
Deployment Model – Data Center 1: CM Pub + 3 CM Subs; MOH; TFTP; CER; 5 Unity Connection nodes
Deployment Model – Data Center 2: CM Pub + 3 CM Subs; MOH; TFTP; CER; 5 Unity Connection nodes
Customer Design: PSTN and IP WAN connectivity via a SIP proxy (CUSP); Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express virtualized on a Cisco UCS 5108 chassis with UCS B200 blade servers behind UCS 6100 fabric interconnect switches at the main site, and on Cisco UCS C210 or C200 general-purpose rack-mount servers at the smaller sites (sites of 11K, 3K, and 400 phones).
HQ details: six active blades, each with two 4-core CPUs.
• CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2
• Messaging VM OVAs: UCxn-1 (active) and UCxn-2 (active), with cores left idle for UCxn
• Presence VM OVAs: CUP-1, CUP-2
• Contact Center VM OVAs: UCCX-1, UCCX-2
"Spare" blade slots (7 and 8) are available for non-UC workloads such as Cisco Nexus 1000V and VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
Branch office details: rack servers, each with two 4-core CPUs.
• Larger branch (3 rack servers): PUB, SUB-1, SUB-2, TFTP-1, TFTP-2, UCxn-1, UCxn-2 (cores left idle for UCxn), CCX-1, CCX-2, CUP
• Smaller branch (2 rack servers): PUB/TFTP, SUB, UCxn-1 (cores left idle for UCxn), CCX-1, CCX-2, CUP
(CUCM, messaging, contact center, and presence VM OVAs.)
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s
• NAS (network-attached storage): uses the NFS (Network File System) protocol over TCP/IP
Stack comparison (application → file system → volume manager → SCSI device driver → transport): DAS uses a SCSI bus adapter in the host; iSCSI adds an iSCSI driver and TCP/IP stack over a NIC; FC SAN uses an FC HBA. In each case, block I/O travels from the host server over the storage transport (SAN or IP) to the storage media.
NAS/SAN array best practices for UC. Example: five 450 GB 15K RPM HDDs form a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) holding UC VMs 1-3 (PUB, SUB1, UCCX1) and LUN 2 (720 GB) holding UC VMs 4-6 (UCCX2, CUP1, CUP2).
• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g., 450 GB 15K or 300 GB 15K)
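The LUN rules above can be checked mechanically. A sketch (thresholds taken from the best practices above; the per-VM IOPS default and the example vDisk sizes are illustrative assumptions):

```python
def validate_lun(vdisks_gb, lun_size_gb, spindles,
                 iops_per_spindle=180, vm_iops=200):
    """Apply the rule-of-thumb LUN checks from the best practices above."""
    problems = []
    if lun_size_gb >= 2000:
        problems.append("LUN must be < 2 TB")
    if not 4 <= len(vdisks_gb) <= 8:
        problems.append("recommend 4-8 UC VMs per LUN")
    if sum(vdisks_gb) > lun_size_gb:
        problems.append("sum of vDisks exceeds LUN size")
    if len(vdisks_gb) * vm_iops > spindles * iops_per_spindle:
        problems.append("not enough spindle IOPS for the VMs")
    return problems or ["OK"]

# A hypothetical 4-VM LUN on the 5-spindle RAID5 group from the example
print(validate_lun([80, 80, 160, 146], lun_size_gb=720, spindles=5))  # ['OK']
```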
DAS example: UCS C210 M2 TRC1. Ten 146 GB 15K RPM HDDs: HDDs 1-2 form a single RAID1 volume for the vSphere ESXi image; HDDs 3-10 form a single RAID5 volume (1022 GB after RAID overhead) with a VMFS filestore (947 GB after VMFS overhead) holding the UC VMs (e.g., PUB, UCCX1, CUP1).
Notes:
• The VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
DAS example: UCS C200 M2 TRC1 for BE6K. Four 1 TB 7.2K RPM HDDs form a single RAID10 volume (2 TB after RAID overhead) with a VMFS filestore (1.8 TB after VMFS overhead) holding the vSphere ESXi image and the UC VMs (e.g., PUB, UCCX1, CUP1).
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on the supported OVAs for download: the OVA reserves cores, RAM, etc. for the VMs
• Basic rule of thumb: fill up the blade until out of capacity; if the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example: CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova
• The name includes the product, product version, VMware hardware version, and template version
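The naming convention can be unpacked mechanically; a small sketch based on the field breakdown above:

```python
import re

def parse_ova_name(filename):
    """Split e.g. 'CUCM_8.5_vmv7_v2.1.ova' into its four fields."""
    m = re.match(
        r"(?P<product>\w+)_(?P<version>[\d.]+)_vmv(?P<hw>\d+)_v(?P<tmpl>[\d.]+)\.ova$",
        filename,
    )
    if not m:
        raise ValueError("not a recognized OVA name: " + filename)
    return m.groupdict()

print(parse_ova_name("CUCM_8.5_vmv7_v2.1.ova"))
# {'product': 'CUCM', 'version': '8.5', 'hw': '7', 'tmpl': '2.1'}
```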
http://tools.cisco.com/cucst
• Customer-accessible:
  - UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  - UCS in general: http://www.cisco.com/go/ucs
  - Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  - FlexPods: www.cisconetapp.com
  - Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  - Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  - "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
  - "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
  - UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization"
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization"
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
copy 2010 Cisco andor its affiliates All rights reserved 2
Device Diversity is here to stay
89
10
1
User Wants
bull Consistent experience on multiple devices
bull Seamless transitions between devices
bull Separation of work and personal data
bull Keep up with tech and social trends
IT Wants
bull Proactive adoption of consumermobile devices
bull Embrace BYOD without sacrificing security management business standards
bull Lower organizational costs
bull Improved agility
23
36
26
75
22
copy 2010 Cisco andor its affiliates All rights reserved 3
1990s 2010 2005 Future
CUCM 3x4x Cisco UC 50+ Cisco UC 80(2)+ VOIP ICM
Legacy Voice
Enhancement Server (Special-purpose)
Appliance Virtualization
Network Services
~2000
Business
Agility
Footprint
Space Energy
Cabling
Investment
Leverage
Business
Continuity
Management
Simplification
Increasing Architectural Flexibility while Decreasing Barriers to Rapidly DeployTailor
Increasing ldquoMiniaturizationrdquo Consolidation amp Avoidance while Increasing Efficiency
No Forklifts Network Convergence Commodity ServersStorage Virtualization
Increasing Security Resiliency and options for High Availability Disaster Recovery
Increasing Familiarity Centralization Scale and Efficiency
SAF
copy 2010 Cisco andor its affiliates All rights reserved 4
Excessively Centralized with Duplicate Networks
Inflexible
High TCO
Voice Video
Mobility Data
Mainframe PBX
Too Many Networks
Data Center 10 with
Traditional Communications
Excessively Distributed with Duplicate Networks
Sprawl
Medium to High TCO
hellip hellip
Servers and Appliances
Data
Center
Converged
Network
Too Many Fabrics
Data Center 20 with
Unified Communications
Flexible Operations on a Single Network
Agility + Governance
Low TCO
Virtualized ComputeStorage
hellip
Unified Fabric amp Networks
The Network
Data Center 30 with
Virtualized Communications
Same TCO Drivers Technology Facilities Management Burden
copy 2010 Cisco andor its affiliates All rights reserved 5
Eg Business Edition 6000
Eg Hosted Collaboration
Solution
Private
Cloud
Public
Cloud
Provider Cloud
or Hybrid
copy 2010 Cisco andor its affiliates All rights reserved 6
MCS Appliance UC on UCS
Tested Reference Configuration (TRC)
UC on UCS
Specs-based
Configuration-based
Highly Prescribed
Only 1 app per server
IBM x3650-M2 or x3250-M3
Only one CPU model supported
(single 4-core)
Fixed RAM
Exact match at part number level
for adapters
1 storage option only
Cisco owns app performance
Configuration-based
Prescribed
Typical 4-20 VMs per server
UCS B200B230B440 M2
C210C200 M2
E5640 E5506 E7-28704870
(4-core or 10-core)
Fixed RAM
Blades you pick adapter
Rack exact match at part number
level for adapters
Pick from a few storage options
Cisco owns app performance
Specs-based
A Few Restrictions
Typical 4-20 VMs per server
Any UCS that satisfies policy
56xx 75xx CPU 253+ GHz
E7-xxxx 24+ GHz
RAM depends on VMs
Regardless of server you pick the
adapters
Any storage ndash but you design it
Customer owns app performance
B230 M22-Socket Intel E7-2800 2 SSD 32 DIMM
B200 M22-Socket Intel 5600 2 SFF Disk 12 DIMM
B250 M22-Socket Intel 5600 2 SFF Disk 48 DIMM
B440 M24-Socket Intel E7-4800 4 SFF Disk 32 DIMM
C200 M22-Socket Intel 5600 4 Disks 12 DIMM 2 PCIe 1U
C210 M22-Socket Intel 5600 16 Disks 12 DIMM 5 PCIe 2U
C250 M22-Socket Intel 5600 8 Disks 48 DIMM 5 PCIe 2U
C460 M24-Socket Intel E7-4800 12 Disks 64 DIMM 10 PCIe 4U
Bla
de
Rack M
ou
nt
C260 M22-Socket Intel E7-2800 16 Disks 64 DIMM 6 PCIe 2U
Updated
New
Updated
Updated
copy 2010 Cisco andor its affiliates All rights reserved 7
bull Target Customer Profiles
ldquoReady willing ablerdquo to support servers VMware storage
Ready to move off appliance-oriented operations
UCS B-series for centralized medium to high server count
UCS C-series for low to medium server count or highly distributed
3rd-party server options for investment leverage
bull Platform Support
Virtual Machine Templates - defined by each UC app
Application VM Co-residency ndash ldquomix amp match UC with UCrdquo for most UC apps
Requries VMware vSphere 45 ndash ESXi only feature support depends on app vCenter is also required for specs-based
UCS HP and IBM server hardware options
DAS SAN NAS and Diskless storage options
Various NIC HBA CNA and Cisco VIC network options (1GB through 10GB)
copy 2009 Cisco Systems Inc All rights reserved Presentation_ID 8
UC on UCS Solution Components
Cisco Unified Communications 861
LAN
SAN Optional Shared Storage
PSTN
hellip UCS C-Series General-Purpose Rack-Mount Servers
LAN
SAN UCS 6100 Fabric
Interconnect Switches
Required Shared Storage
UCS 5100 Blade Server Chassis UCS 2100 Fabric Extender
UCS B-Series Blade Servers
PSTN
ldquoUC on UCSrdquo B-Series
ldquoUC on UCSrdquo C-Series
copy 2010 Cisco andor its affiliates All rights reserved 9
Unified Communication System
80bull Voice Video Presence Mobility Customer Care
bull Available in flexible deployment models
bull Deliver a unparalleled user experience
HCS Management Systembull Zero-touch fulfillment amp provisioning with self service
bull Service assurance for enabling high quality of services
bull Coordinated management and integration across domains
Optimized Virtualization Platform
(UC on UCS B-series)bull Resource optimized for reduced hardware capex
bull Installation amp upgrade automation
bull Provides flexibility customization amp additional redundancy
Scalable System Architecturebull Customer Aggregation amp SIP Trunking
bull SLA Enablement Security Scalability
bull Cloud Based SaaS Integration
Cisco Hosted Collaboration Solution Combining virtualization management amp
architecture elements for a comprehensive
platform
Cisco Business Edition 6000 Midmarket 100-1000 user solution for call
control mobility rich media presence and
contact center
Infrastructure Solutions Data Center ldquobuilding blocksrdquo
Vblock FlexPod
UC on FlexPod planned not committed
copy 2010 Cisco andor its affiliates All rights reserved 10
B230 M2 2-Socket Intel E7-2800 2 SSD 32 DIMM
B200 M2 2-Socket Intel 5600 2 SFF Disk 12 DIMM
B250 M2 2-Socket Intel 5600 2 SFF Disk 48 DIMM
B440 M2 4-Socket Intel E7-4800 4 SFF Disk 32 DIMM
C200 M2 (LFF) 2-Socket Intel 5600 4 Disks 12 DIMM 2 PCIe 1U
C220 M3 SFF 2-Socket Intel E5-2600 8 Disks 16 DIMM 2 PCIe 1U
C250 M2 2-Socket Intel 5600 8 Disks 48 DIMM 5 PCIe 2U
C460 M2 4-Socket Intel E7-4800 12 Disks 64 DIMM 10 PCIe 4U
Supported Hardware UC on UCS B
lad
e
Rack M
ou
nt
C260 M2 2-Socket Intel E7-2800 16 Disks 64 DIMM 6 PCIe 2U
UC on UCS Tested Reference Configuration UC on UCS Specs-based
BE6K
B200 M3 SFF 2-Socket Intel E5-2600 2 SFF Disk 24 DIMM
C240 M3 SFF 2-Socket Intel E5-2600 24 Disks 24 DIMM 5 PCIe 2U
C210 M2 2-Socket Intel 5600 16 Disks 12 DIMM 5 PCIe 2U
Target support
Fall 2012
Target support
Fall 2012
Target support
Fall 2012
(Note UCS Express
not supported)
copy 2010 Cisco andor its affiliates All rights reserved 11
Must be on VMware
HCL
Allowed Server
Vendors
Server model and IO devices on wwwvmwarecomgohcl
All parts must be supported by server vendor
No hardware oversubscription allowed for UC
VMware vCenter is REQUIRED
Processor
Intel Xeon 56xx75xx 253+ GHz or E7-xxxx 24+ GHz
CPU support varies by UC app
Required physical core count = sumUC VMs vCPU (+1 if Unity Cxn)
Capacity = sumUC VMs vRAM + 2GB for VMware
Follow server vendor for module densityconfiguration
Memory
Storage Network
Must be on VMware HCL and supported by server vendor
Eg 1GbE10GbE NIC ge2Gb FC HBA 10Gb CNA or VIC
UCS BC UCS Express Other 3rd-parties
Adapters (eg LANStorage Access)
SAN (FCoE FC iSCSI) NAS (NFS) Variable DASRAID
Storage capacity = sumUC VMs vDisk + VMwareRAID overhead
Storage performance = sumUC VM IOPS
copy 2010 Cisco andor its affiliates All rights reserved 12
Intel
Program
Nehalem-
EP
Nehalem-
EX
Westmere-
EP
Westmere-
EX
Romley-
EP
CPU Family 55xx
65xx
75xx
56xx E7-28xx
E7-48xx
E7-88xx
E5-26xx
CPU Cores 4 468 46 6810 468
CPU Speed 2-29 GHz 17-27 GHz 19-333 GHz 17-27 GHz 1-3 GHz
Example UCS
Models with
these CPUs
B200250 M1
C210 M1
C250 M1M2
C200 M2
B230 M1
B440 M1
C460 M1
B200250 M2
C210 M2
C250 M2
B230 M2
B440 M2
C260 M2
C460 M2
B200 M3
C220 M3
C240 M3
UC on UCS
Certifications
TRCs for B200 M1
TRCs for C210 M1
(E5540)
TRC for C200M2
(E5506)
Specs-based
(75xx 253+ GHz)
TRCs for B200 M2
TRCs for C210 M2
(E5640)
Specs-based
(56xx at 253+
GHz)
TRC for B230 M2
TRC for B440 M2
(E7-28704870)
Specs-based
(E7 at 24+ GHz)
Not Currently
Supported by UC
copy 2010 Cisco andor its affiliates All rights reserved 13
VM VM VM
VM VM VM
VM VM VM
VM VM VM
VM
VM
VM
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
VM
VM
VM VM
VM VM VM
19 UC VMs with
total 40 vcpursquos
19 MCS Appliances 5 virtualized
servers (dual 4-core
B200 M2
TRC)
4 virtualized
servers (dual 6-core
B200 M2
specs-based)
2 virtualized
servers (dual 10-core
B230 M2
TRC)
19 UC
app
copies
copy 2010 Cisco andor its affiliates All rights reserved 14
UC on UCS Products with Owner UC on UCS TRC UC on UCS Specs-based
Unified Communications Manager
Business Edition 6000 C200 M2 only Not supported
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco Emergency Responder
Session Manager Edition
InterCompany Media Engine
Unified Attendant Consoles
Unity
Unified Workforce Optimization (WFO)
Unified Contact Center Enterprise Planned
Unified Intelligence Center
Unified Customer Voice Portal Planned
MediaSense Planned
Unified Contact Center Mgmt Portal
SocialMiner
Finesse Planned
Unified EmailWeb Interaction Mgr
Prime UCMS (OMPMSMSSM)
Webex Premise Planned Planned
Unified MeetingPlace Planned Planned
TMSCTMS Planned Planned
VCS Planned Planned
copy 2010 Cisco andor its affiliates All rights reserved 15
Why virtualize
your UC
Why virtualize
on UCS
Lower TCO
Business
Agility
Additional
Savings and
Increased
Agility
End to End
Solution
Single
Support
Tested Reference Configurations
Vblocks
Cisco options
VCE Vblock options
Infrastructure Simplification (Cables Adapters Switching)
Converge Communications and DC Networks ndash ldquowire oncerdquo
Consolidates System Mgmt
Easier Service Provisioning
Reduce ServersStorage
Reduced Power Cooling Cabling Space Weight
Investment Leverage amp Easy Server Repurposing
Efficient App Expansion
Accelerated UC rollouts
Better Business Continuity
PortableMobile VMs
UCS is the industryrsquos only
fully unified and virtualization-
aware compute solution
copy 2010 Cisco andor its affiliates All rights reserved 16
CAPEX
bull Reduced Server Count (50-75)
bull NetworkStorage Consolidation (50+)
bull Reduced Cabling (50+)
OPEX
Reduced Rack amp Floor Space (36)
Reduced PowerCooling (20+)
Fewer Servers to Manage (50-75 less)
Reduced MaintenanceSupport Costs (~20)
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 17
CAPEXOPEX
bull Similar Consolidation and Operational EfficiencyScale benefits as with UC on UCS B-series
Other Benefits
Lower initial investment
Simple entrymigration to virtualized UC ndash Data Center expertise not required unless using SAN option
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 18
$-
$500
$1000
$1500
$2000
$2500
$3000
2 4 8 10 12 20 50 100
UCS B230 M2 TRC OPEX ($K)
UCS B230 M2 TRC CAPEX ($K)
UCS B200 M2 TRC OPEX ($K)
UCS B200 M2 TRC CAPEX ($K)
UCS C210 M2 TRC OPEX ($K)
UCS C210 M2 TRC CAPEX ($K)
MCS 7845-I3 OPEX ($K)
MCS 7845-I3 CAPEX ($K)
Assumptions
bull UC only no other business applications included ldquoSparerdquo or ldquohot standbyrdquo hosts not included
bull ldquoServerrdquo is either an MCS Appliance or a 2-vcpu-core ldquoVirtual Machinerdquo
bull Dual sites split MCS or UCS TRC servers across sites no single point of failure ndash redundant sites switching blade chassis rackblade servers
bull Using list pricing for MCS-7845-I3-IPC1 UCS-C210M2-VCD2 UCS-B200M2-VCS1 UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Appliance or VM Count
PSTN
2104
2104
2104
2104
SANLAN
Dual Site Scenario PSTN
2104
2104
2104
2104
SANLAN
hellip
hellip
hellip
hellip
UC on UCS
B200 B230
UC on UCS
C210
MCS 7845
B230 M2
vs B200 M2
C210 M2
vs MCS 7845
20 copy 2010 Cisco andor its affiliates All rights reserved
Current Offers Technical Overview
21 copy 2010 Cisco andor its affiliates All rights reserved
Eg 4 physical servers
Each MCS 7800 hosts only
one UC app instance
4 virtual servers (VMrsquos) on 1 physical server
Single virtualized server with total 8 physical
cores hosts all UC app instances
Unity
Connection
Unified CM
VM for
Unified
CM
Sub
Unified CCX
VM for
Unity
Cxn
VM for
Unified
CCX
VM for
Unified
CM
Pub
or
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
copy 2010 Cisco andor its affiliates All rights reserved 23
UC app Scale
(ldquousersrdquo)
vCPU (cores) Usually 253+ GHz
per core required
vRAM
(GB)
vDisk
(GB)
Notes
Unified
CM
1000 2 4 1 x 80 UCS C200 or BE6K only
2500 1 225 1 x 80 Not for use with C200BE6K
7500 2 6 2 x 80
10000 4 6 2 x 80
Unity
Connection
500 1 2 1 x 160
1000 1 4 1 x 160
5000 2 4 1 x 200
10000 4 4 2 x 146 Not for use with C200BE6K
20000 7 8 2 x 300
Unified
Presence
1000 1 2 1 x 80
2500 2 4 1 x 80 Not for use with C200BE6K
5000 4 4 2 x 80
Unified CCX 100 2 4 2 x 146 UCS C200 or BE6K only
300 2 4 2 x 146 Not for use with C200BE6K
400 4 8 2 x 146
Not exhaustive subject to change see wwwciscocomgouc-virtualized for latest
ie user count for particular values of BHCA trace level encryption CTI and other factors Actual
supportable user count may vary by deployment
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
Low TCO
Virtualized Compute/Storage
Unified Fabric & Networks
The Network
Data Center 3.0 with Virtualized Communications
Same TCO Drivers: Technology, Facilities, Management Burden
Deployment options:
• Private Cloud, e.g. Business Edition 6000
• Provider Cloud or Hybrid, e.g. Hosted Collaboration Solution
• Public Cloud
MCS Appliance vs. UC on UCS

MCS Appliance (highly prescribed):
• Configuration-based; only 1 app per server
• IBM x3650-M2 or x3250-M3; only one CPU model supported (single 4-core); fixed RAM
• Exact match at the part-number level for adapters; 1 storage option only
• Cisco owns app performance

UC on UCS Tested Reference Configuration (prescribed):
• Configuration-based; typically 4-20 VMs per server
• UCS B200/B230/B440 M2 or C210/C200 M2; E5640, E5506, E7-2870/4870 (4-core or 10-core); fixed RAM
• Blades: you pick the adapter; racks: exact match at the part-number level for adapters
• Pick from a few storage options
• Cisco owns app performance

UC on UCS Specs-based (a few restrictions):
• Typically 4-20 VMs per server
• Any UCS that satisfies policy: 56xx/75xx CPU at 2.53+ GHz, E7-xxxx at 2.4+ GHz; RAM depends on the VMs
• Regardless of server, you pick the adapters
• Any storage, but you design it
• Customer owns app performance

Server models (blade / rack-mount):
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
• C200 M2: 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
• Target Customer Profiles
 - "Ready, willing, able" to support servers, VMware and storage; ready to move off appliance-oriented operations
 - UCS B-Series for centralized, medium to high server count
 - UCS C-Series for low to medium server count or highly distributed deployments
 - 3rd-party server options for investment leverage
• Platform Support
 - Virtual machine templates, defined by each UC app
 - Application VM co-residency: "mix & match UC with UC" for most UC apps
 - Requires VMware vSphere 4.x/5.x, ESXi only; feature support depends on the app; vCenter is also required for specs-based
 - UCS, HP and IBM server hardware options
 - DAS, SAN, NAS and diskless storage options
 - Various NIC, HBA, CNA and Cisco VIC network options (1Gb through 10Gb)
UC on UCS Solution Components (Cisco Unified Communications 8.6(1))
• "UC on UCS" C-Series: UCS C-Series general-purpose rack-mount servers, with LAN/SAN, optional shared storage and PSTN connectivity
• "UC on UCS" B-Series: UCS B-Series blade servers in UCS 5100 blade server chassis with UCS 2100 fabric extenders, behind UCS 6100 fabric interconnect switches, with LAN/SAN, required shared storage and PSTN connectivity
• Unified Communications System 8.0: voice, video, presence, mobility, customer care; available in flexible deployment models; delivers an unparalleled user experience
• HCS Management System: zero-touch fulfillment & provisioning with self service; service assurance for enabling high quality of services; coordinated management and integration across domains
• Optimized Virtualization Platform (UC on UCS B-Series): resource-optimized for reduced hardware capex; installation & upgrade automation; provides flexibility, customization & additional redundancy
• Scalable System Architecture: customer aggregation & SIP trunking; SLA enablement, security, scalability; cloud-based SaaS integration
• Cisco Hosted Collaboration Solution: combining virtualization, management & architecture elements for a comprehensive platform
• Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence and contact center
• Infrastructure Solutions: Data Center "building blocks" such as Vblock and FlexPod (UC on FlexPod planned, not committed)
Supported Hardware: UC on UCS (covered either as Tested Reference Configurations or under the specs-based policy)

Blade:
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
• B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)

Rack-mount:
• C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
• C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U

(Note: UCS Express not supported.)
UC on UCS specs-based policy:
• Allowed server vendors: UCS B/C-Series or other 3rd-party servers on the VMware HCL (not UCS Express); server model and I/O devices must be on www.vmware.com/go/hcl; all parts must be supported by the server vendor; no hardware oversubscription allowed for UC; VMware vCenter is REQUIRED
• Processor: Intel Xeon 56xx/75xx at 2.53+ GHz or E7-xxxx at 2.4+ GHz; CPU support varies by UC app; required physical core count = sum of the UC VMs' vCPUs (+1 if Unity Connection)
• Memory: capacity = sum of the UC VMs' vRAM + 2 GB for VMware; follow the server vendor for module density/configuration
• Adapters (e.g. LAN/storage access): must be on the VMware HCL and supported by the server vendor; e.g. 1GbE/10GbE NIC, >=2Gb FC HBA, 10Gb CNA or VIC
• Storage: SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID; storage capacity = sum of the UC VMs' vDisk + VMware/RAID overhead; storage performance = sum of the UC VMs' IOPS
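The specs-based sizing rules above are all simple sums over the planned UC VMs, so they can be sketched as a quick capacity check. This helper is illustrative only (not a Cisco tool); the example VM figures are transcribed from the sizing table elsewhere in this deck, and the Unity Connection rule is modeled as one extra physical core:

```python
# Hypothetical helper encoding the specs-based sizing rules:
# cores = sum of vCPUs (+1 physical core if Unity Connection is present),
# RAM = sum of vRAM + 2 GB for VMware, disk = sum of vDisk before
# RAID/VMFS overhead, IOPS = sum of per-VM IOPS.
def required_capacity(vms, unity_connection_present=False):
    cores = sum(vm["vcpu"] for vm in vms) + (1 if unity_connection_present else 0)
    ram_gb = sum(vm["vram_gb"] for vm in vms) + 2   # +2 GB for the hypervisor
    disk_gb = sum(vm["vdisk_gb"] for vm in vms)     # add RAID/VMFS overhead on top
    iops = sum(vm["iops"] for vm in vms)
    return {"cores": cores, "ram_gb": ram_gb, "disk_gb": disk_gb, "iops": iops}

# Example mix: a CUCM 7500-user node and a Unity Connection 5000-user node
# (vCPU/vRAM/vDisk from the sizing table; the CUC IOPS figure is a placeholder,
# only CUCM's ~200 IOPS is quoted in this deck).
vms = [
    {"name": "CUCM-7500", "vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},
    {"name": "CUC-5000",  "vcpu": 2, "vram_gb": 4, "vdisk_gb": 200, "iops": 200},
]
print(required_capacity(vms, unity_connection_present=True))
```

With these two VMs the check calls for 5 physical cores, 12 GB of RAM and 360 GB of vDisk before overhead.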
Intel CPU generations and UC on UCS certifications:
• Nehalem-EP (55xx; 4 cores; 2-2.9 GHz). Example UCS models: B200/B250 M1, C210 M1, C250 M1/M2, C200 M2. UC on UCS: TRCs for B200 M1 and C210 M1 (E5540); TRC for C200 M2 (E5506)
• Nehalem-EX (65xx, 75xx; 4/6/8 cores; 1.7-2.7 GHz). Example UCS models: B230 M1, B440 M1, C460 M1. UC on UCS: specs-based (75xx at 2.53+ GHz)
• Westmere-EP (56xx; 4/6 cores; 1.9-3.33 GHz). Example UCS models: B200/B250 M2, C210 M2, C250 M2. UC on UCS: TRCs for B200 M2 and C210 M2 (E5640); specs-based (56xx at 2.53+ GHz)
• Westmere-EX (E7-28xx/48xx/88xx; 6/8/10 cores; 1.7-2.7 GHz). Example UCS models: B230 M2, B440 M2, C260 M2, C460 M2. UC on UCS: TRCs for B230 M2 and B440 M2 (E7-2870/4870); specs-based (E7 at 2.4+ GHz)
• Romley-EP (E5-26xx; 4/6/8 cores; 1-3 GHz). Example UCS models: B200 M3, C220 M3, C240 M3. Not currently supported by UC
Consolidation example: 19 UC app copies ("Small", "Medium" and "Large" VMs totaling 40 vCPUs) require either 19 MCS appliances, or 5 virtualized servers (dual 4-core B200 M2 TRC), or 4 virtualized servers (dual 6-core B200 M2 specs-based), or 2 virtualized servers (dual 10-core B230 M2 TRC).
UC on UCS products (with owner) and support:
• Supported on both TRC and specs-based: Unified Communications Manager, Unity Connection, Unified Presence, Unified Contact Center Express, Cisco Emergency Responder, Session Manager Edition, InterCompany Media Engine, Unified Attendant Consoles, Unity, Unified Workforce Optimization (WFO), Unified Intelligence Center, Unified Contact Center Mgmt Portal, SocialMiner, Unified Email/Web Interaction Mgr, Prime UCMS (OM/PM/SM/SSM)
• Business Edition 6000: TRC on C200 M2 only; not supported specs-based
• Support planned (per the original matrix): Unified Contact Center Enterprise, Unified Customer Voice Portal, MediaSense, Finesse
• Planned for both TRC and specs-based: WebEx Premise, Unified MeetingPlace, TMS/CTMS, VCS
Why virtualize your UC? Lower TCO and business agility:
• Reduce servers/storage; reduced power, cooling, cabling, space and weight
• Investment leverage and easy server repurposing; efficient app expansion
• Accelerated UC rollouts; better business continuity; portable/mobile VMs
Why virtualize on UCS? Additional savings, increased agility, an end-to-end solution with single support:
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and data center networks ("wire once")
• Consolidated system management; easier service provisioning
• Tested Reference Configurations (Cisco options) and Vblocks (VCE Vblock options)
UCS is the industry's only fully unified and virtualization-aware compute solution.
UC on UCS B-Series savings (example: 5000 users with dial tone, voicemail and Presence, 10% of them contact center agents; 11 non-virtualized rack servers required for UC, more for other business apps):
• CAPEX: reduced server count (50-75%); network/storage consolidation (50%+); reduced cabling (50%+)
• OPEX: reduced rack & floor space (36%); reduced power/cooling (20%+); fewer servers to manage (50-75% less); reduced maintenance/support costs (~20%)
UC on UCS C-Series savings (same 5000-user example):
• CAPEX/OPEX: similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
• Other benefits: lower initial investment; simple entry/migration to virtualized UC, with data center expertise not required unless using the SAN option
TCO comparison (original chart): CAPEX and OPEX in $K plotted against appliance or VM count (2, 4, 8, 10, 12, 20, 50, 100) for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC and UCS B230 M2 TRC, in a dual-site scenario (PSTN and SAN/LAN at each site) with MCS or UCS TRC servers split across sites and no single point of failure (redundant sites, switching, blade chassis, rack/blade servers). Comparisons highlighted: B230 M2 vs. B200 M2, and C210 M2 vs. MCS 7845.
Assumptions:
• UC only; no other business applications included; "spare" or "hot standby" hosts not included
• A "server" is either an MCS appliance or a 2-vCPU-core virtual machine
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Current Offers: Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX), become 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all the UC app instances (VMs for Unified CM Pub, Unified CM Sub, Unity Connection and Unified CCX).
Tested Reference Configurations (server model / TRC / CPU / RAM / storage / adapters):
• UCS B200 M2 blade server, TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (RAID 1) for VMware + FC SAN for UC apps; Cisco VIC
• UCS B200 M2 blade server, TRC 2: dual E5640 (8 cores); 48 GB; diskless; Cisco VIC
• UCS B230 M2 blade server, TRC 1: dual E7-2870 (20 physical cores total); 128 GB; diskless; Cisco VIC
• UCS B440 M2 blade server, TRC 1: quad E7-4870 (40 physical cores total); 256 GB; diskless; Cisco VIC
• UCS C260 M2 rack-mount server, TRC 1: dual E7-2870 (20 cores); 128 GB; DAS (2x RAID 5); 1GbE NIC
• UCS C210 M2 general-purpose rack-mount server, TRC 1: dual E5640 (8 cores); 48 GB; DAS (2 disks, RAID 1) for VMware + DAS (8 disks, RAID 5) for UC apps; 1GbE NIC
• UCS C210 M2, TRC 2: dual E5640 (8 cores); 48 GB; DAS (2 disks, RAID 1) for VMware + FC SAN for UC apps; 1GbE NIC and 4G FC HBA
• UCS C210 M2, TRC 3: dual E5640 (8 cores); 48 GB; diskless; 1GbE NIC and 4G FC HBA
• UCS C200 M2 general-purpose rack-mount server, TRC 1: dual E5506 (8 cores); 24 GB; DAS (4 disks, RAID 10) for VMware + UC apps; 1GbE NIC
VM templates per app (vCPU cores usually require 2.53+ GHz per core):

UC app            Scale ("users"*)  vCPU  vRAM (GB)  vDisk (GB)  Notes
Unified CM        1000              2     4          1 x 80      UCS C200 or BE6K only
                  2500              1     2.25       1 x 80      Not for use with C200/BE6K
                  7500              2     6          2 x 80
                  10000             4     6          2 x 80
Unity Connection  500               1     2          1 x 160
                  1000              1     4          1 x 160
                  5000              2     4          1 x 200
                  10000             4     4          2 x 146     Not for use with C200/BE6K
                  20000             7     8          2 x 300
Unified Presence  1000              1     2          1 x 80
                  2500              2     4          1 x 80      Not for use with C200/BE6K
                  5000              4     4          2 x 80
Unified CCX       100               2     4          2 x 146     UCS C200 or BE6K only
                  300               2     4          2 x 146     Not for use with C200/BE6K
                  400               4     8          2 x 146

Not exhaustive and subject to change; see www.cisco.com/go/uc-virtualized for the latest.
* i.e. the user count for particular values of BHCA, trace level, encryption, CTI and other factors. The actual supportable user count may vary by deployment.
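Choosing a template from a table like this is just a lookup of the smallest OVA whose rated scale covers the expected load. A sketch with a few rows transcribed from the table above (`pick_ova` and `OVA_SPECS` are illustrative names, not a Cisco tool):

```python
# A few rows of the sizing table: (app, rated user scale) -> virtual hardware.
OVA_SPECS = {
    ("CUCM", 7500):  {"vcpu": 2, "vram_gb": 6, "vdisk_gb": [80, 80]},
    ("CUCM", 10000): {"vcpu": 4, "vram_gb": 6, "vdisk_gb": [80, 80]},
    ("CUC", 5000):   {"vcpu": 2, "vram_gb": 4, "vdisk_gb": [200]},
    ("CUP", 2500):   {"vcpu": 2, "vram_gb": 4, "vdisk_gb": [80]},
}

def pick_ova(app, users):
    """Return the smallest template whose rated scale covers the load."""
    rows = sorted((scale, spec) for (a, scale), spec in OVA_SPECS.items() if a == app)
    for scale, spec in rows:
        if users <= scale:
            return scale, spec
    raise ValueError("no single-VM template covers this load")

print(pick_ova("CUCM", 6000))  # picks the 7500-user template
```

A 6000-user CUCM node therefore lands on the 7500-user OVA (2 vCPU, 6 GB vRAM).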
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
 - But note some UC apps restrict this, e.g. BE6K and CUCCE; see their rules on their docwiki "child pages"
 - NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
 - Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps, e.g. N1KV VSM, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
(Original figure: UC VMs must not share a blade with VMs for VMware vCenter, Nexus 1000V VSM, Solutions Plus / CTDP apps, or unaffiliated 3rd-party apps. Different blades in the same chassis: OK. Same blade in the same chassis: not OK.)
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs. See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU. See the co-residency policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
 Why? Usually due to CPU model/speed dependencies.
(Original figure, CUCM OVAs per platform: the C200 M2 TRC1 with E5506 at 2.13 GHz is limited to the UCM 1K OVA, while the C200 M2 specs-based with 56xx at 2.53+ GHz and the B200/C210 M2, whether TRC with E5640 at 2.66 GHz or specs-based with 56xx/75xx at 2.53+ GHz, can run the UCM 1K, 2.5K, 7.5K and 10K OVAs.)
VM counts per server (VM size classes "Small", "Medium", "Large", "Jumbo"):
Dual-socket 4-core server, 8 physical cores (e.g. UCS C210 M2 TRC1 with dual E5640):
• Jumbo VM + 1 reserved core, or mixed sizes + 1 reserved core, or mixed sizes, or
• 2:1 Large (e.g. UCM 10K), or 4:1 Medium (e.g. UCM 7.5K), or 8:1 Small (e.g. UCM 2.5K)
Dual-socket 6-core server, 12 physical cores (e.g. UCS C210 M2 specs-based with a UC-supported CPU model at the minimum speed):
• Mixed sizes + 1 reserved core, or mixed sizes, or
• 3:1 Large (e.g. UCM 10K), or 6:1 Medium (e.g. UCM 7.5K), or 12:1 Small (e.g. UCM 2.5K)
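The ratios in this figure follow directly from dividing the physical core count by the vCPU count of each VM size class (per the sizing table: Large UCM 10K = 4 vCPU, Medium UCM 7.5K = 2, Small UCM 2.5K = 1, with no oversubscription). A sketch:

```python
# Illustrative only: max whole VMs of one size per server, optionally
# holding one core back for ESXi (the messaging co-residency rule).
def max_vms(physical_cores, vcpu_per_vm, reserve_core_for_esxi=False):
    usable = physical_cores - (1 if reserve_core_for_esxi else 0)
    return usable // vcpu_per_vm

# Dual-socket 4-core server (8 cores): Large (4 vCPU), Medium (2), Small (1)
print([max_vms(8, v) for v in (4, 2, 1)])   # [2, 4, 8]
# Dual-socket 6-core server (12 cores)
print([max_vms(12, v) for v in (4, 2, 1)])  # [3, 6, 12]
```

Reserving a core for ESXi drops a dual 4-core server from two Large VMs to one, which is why the mixed-size layouts in the figure show an idle core.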
Virtual Software Switch Options (VM vNIC through the ESXi hypervisor software switch to the vmNIC/CNA, FCoE to LAN and SAN, e.g. on a UCS B200):

Feature                 VMware vSwitch       VMware dvSwitch   Cisco Nexus 1000V
Scope                   Host-based (local)   Distributed       Distributed
VLAN tagging            IEEE 802.1Q          IEEE 802.1Q       IEEE 802.1Q
VLAN visibility         Local ESXi host only All ESXi hosts    All ESXi hosts
EtherChannel            Yes                  Yes               Yes
Virtual PortChannel     --                   --                Yes
QoS marking (DSCP/CoS)  --                   --                Yes
ACL                     --                   --                Yes
SPAN                    --                   --                Yes
RADIUS/TACACS+          --                   --                Yes
VSM                     No VM needed         No VM needed      VM needed for VSM

The Nexus 1000V is strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
Nexus 1000V architecture:
• Cisco software switch in the hypervisor: a Nexus 1000V VEM in each ESXi host, managed by the Nexus 1000V VSM and uplinked to the physical switch
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS:
• CUCM marks traffic based on L3 DSCP values; e.g. a CTL packet leaves CUCM with L2 CoS 0 and L3 CS3
• The pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS (if needed):
 dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
 dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
• After the mapping, the traffic carries L2 CoS 3 with L3 CS3
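The two `mls qos map dscp-cos` commands above pin DSCP 24 (CS3, call signaling) to CoS 3 and DSCP 46 (EF, voice bearer) to CoS 5, which matches the common convention of deriving CoS from the top three DSCP bits. A one-line sketch:

```python
def dscp_to_cos(dscp):
    """Class-selector style DSCP -> CoS: take the top 3 of the 6 DSCP bits."""
    return dscp >> 3

assert dscp_to_cos(24) == 3  # CS3 (call signaling) -> CoS 3
assert dscp_to_cos(46) == 5  # EF  (voice bearer)   -> CoS 5
```

This is why the explicit maps and the "default-style" mapping agree for the two values CUCM uses here.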
UCS 6100 and QoS:
• The UCS 6100 doesn't look into the L3 IP header; the DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS: FCoE ("match cos 3") gets a no-drop policy; the rest ("match any") goes to the best-effort queue
• Neither the vSwitch nor the UCS 6100 can map L3 DSCP to L2 CoS, so traffic from CUCM reaches the CAT6K with L2 CoS 0 and L3 CS3
QoS on UCS B-Series blades:
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
Caveat without N1Kv:
• All traffic types from a virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3) and is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Compute and SAN/storage layers (Cisco SRND): UCS 6100 fabric interconnects uplink the UCS 5100 blade servers (4x10GE links, Nexus 1000V) and connect over FC through Cisco SAN switches to FC storage (SP-A/SP-B, 3rd-party layer).
3rd-party SAN example:
• CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
• Array: total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ~ 428 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
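The throughput arithmetic in the SAN example is just IOPS times block size. A sketch using decimal units (1 KB = 1000 bytes, which the slide's round per-VM number suggests); it reproduces the 6.4 Mbps per-VM figure and lands in the same ballpark as the slide's ~428 Mbps per-controller figure, well under the 600 Mbps controller limit:

```python
# Back-of-envelope: sustained throughput implied by an IOPS rating.
def iops_to_mbps(iops, block_kb=4):
    return iops * block_kb * 1000 * 8 / 1e6  # megabits per second

print(iops_to_mbps(200))    # one CUCM VM: 6.4 Mbps
print(iops_to_mbps(14000))  # one controller: 448.0 Mbps (slide quotes ~428)
```

Either figure fits comfortably within a single 4 Gbps FC interface, which is the slide's conclusion.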
• All UC deployment models are supported; no change in the current deployment models (base deployment models such as Single Site and Centralized Call Processing are not changing)
• VM layout on a blade and/or chassis: Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules: no rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same; they do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
 - Redundancy rules are the same; clustering-over-the-WAN latency numbers; Mega Cluster supported in 8.5
 - Determine quantity/role of nodes
 - For HA: no design checks validating proper placement of primary and secondary servers
 - CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported, subject to "common sense" rules, e.g. don't make the Pub or Primary less powerful than a Sub or Secondary
• Direct-attach devices must be on a physical MCS server: MoH live audio stream; tape backup/floppy
• New factors to consider for end-to-end QoS design/configuration
Upgrade paths FROM CUCM:

System Release   To Unified CM 8.5               To UC System 8.5
4.x              Multi-hop thru 6.1(x)/7.1(x)    Multi-hop thru 6.1(3)
5.1(2)           Multi-hop thru 6.1(x)/7.1(x)    N/A
5.1(3)           2-hop thru 7.1(3)               N/A
6.1(1)           2-hop thru 6.1(x)/7.1(x)        2-hop
6.1(2)           2-hop thru 6.1(x)/7.1(x)        N/A
6.1(3)           2-hop thru 6.1(x)/7.1(x)        N/A
6.1(4)           Single hop                      N/A
6.1(5)           Single hop                      N/A
7.0(1)           2-hop thru 7.1(x)               2-hop
7.1(2)           2-hop thru 7.1(x)               2-hop
7.1(3)           Single hop                      Single hop; multi-stage/BWC supported
7.1(5)           Single hop                      N/A
8.0(1)           Single hop                      Single hop; multi-stage/BWC supported
8.0(2), 8.0(3)   Single hop                      N/A
VMware feature support
• VMware feature support varies by application; some features are supported with caveats, some partially. For example:
 - Clone Virtual Machine: "Y (C)" means the VM has to be powered off
 - vMotion: "Y (C)" means vMotion is supported for live traffic and calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Feature            CUCM     CUC      CUP      CCX
Clone Virtual Machine   Y (C)    Y (C)    Y (C)    Y (C)
VMware vMotion          Y (C)    Partial  Partial  Y (C)
Resize Virtual Machine  Partial  Partial  Partial  Partial
VMware HA               Y (C)    Y (C)    Y (C)    Y (C)
Boot From SAN           Y (C)    Y (C)    Y (C)    Y (C)
VMware DRS              No       No       No       No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable, and the UC apps' redundancy rules are the same:
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact: primary/secondary on different blades, chassis or sites; on the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation (server hardware / shared storage / VMware / application):
• UC on UCS Tested Reference Configuration: server hardware Cisco; shared storage 3rd-party (VCE for Vblock); VMware Cisco (VCE for Vblock); application Cisco
• UC on UCS specs-based (including the Vblock option): same demarcation as TRC
• 3rd-party VMware specs-based (HP, IBM): server hardware, shared storage and VMware 3rd-party; application Cisco
• MCS 7800 appliances: server hardware Cisco; shared storage and VMware N/A; application Cisco
• Customer-provided MCS 7800 equivalent: server hardware 3rd-party; shared storage and VMware N/A; application Cisco
Customer Example: Primary Data Center (old vs. new)
• Hardware nodes: 62 physical servers (EU HQ clusters) reduced to approx. 14
• Software version: 6.1(5) & 8.5(1) consolidated to 8.5(1)
• Unity Connection version: 4.2(1) to 8.5(1), 3 pairs, virtualized
• CER: 2.0 to 7.0/8.6, virtualized
(The figure shows the new layout of CM PUB/SUB, MOH/TFTP, CER and UCxn VMs across the virtualized hosts.)
Deployment Model, Data Center 1: hosts running CM PUB with three CM SUBs; MOH/TFTP with two UCxn; CER with three UCxn.
Deployment Model, Data Center 2: same layout as Data Center 1.
Customer design: PSTN and IP WAN connect through CUSP (SIP proxy) to Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express. The UC VMs run on a Cisco UCS 5108 chassis with UCS B200 blade servers behind Cisco UCS 6100 fabric interconnect switches, and on Cisco UCS C210 or C200 general-purpose rack-mount servers, serving sites of 11K, 3K and 400 phones.
HQ Details
Six blades (each with two 4-core CPUs) plus two spare blade slots (7 and 8). The original figure distributes the VM OVAs across blades 1-6: CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2), messaging VM OVAs (UCxn-1 and UCxn-2, both active, with cores left idle for UCxn), contact center VM OVAs (UCCX-1, UCCX-2) and presence VM OVAs (CUP-1, CUP-2). "Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.
Branch Office Details
The original figure shows two branch layouts using the same VM OVA families as HQ (CUCM, messaging, contact center, presence):
• Three rack servers (two 4-core CPUs each) hosting PUB, TFTP-1, TFTP-2, SUB-1, SUB-2, UCxn-1, UCxn-2, CCX-1, CCX-2 and CUP, with cores left idle for UCxn
• Two rack servers hosting PUB/TFTP, SUB, UCxn-1, CCX-1, CCX-2 and CUP, with cores left idle for UCxn
Storage access options:
• DAS: rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI. Stack: application > file system > volume manager > SCSI device driver > SCSI bus adapter > SCSI device
• iSCSI: access SCSI storage media using an IP network. Stack: application > file system > volume manager > SCSI device driver > iSCSI driver > TCP/IP stack > NIC, with block I/O carried over IP to the storage (NIC > TCP/IP stack > iSCSI layer > bus adapter)
• FC SAN: Fibre Channel is the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s. Stack: application > file system > volume manager > SCSI device driver > FC HBA > FC SAN
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
(Roles in each stack: host/server, storage transport, storage media.)
NAS/SAN array best practices for UC:
• Example: five 450 GB 15K RPM HDDs in a single RAID 5 group (1.4 TB usable space), carved into LUN 1 (720 GB) for PUB, SUB1 and UCCX1 (UC VMs 1-3) and LUN 2 (720 GB) for UCCX2, CUP1 and CUP2 (UC VMs 4-6)
• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
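The LUN guidance above reduces to three checks. A minimal sketch under an assumed reading of that guidance (under 2 TB per LUN, 4 to 8 UC VMs per LUN, and the vDisks must fit); `lun_ok` is illustrative, not a Cisco tool:

```python
# Sanity-check one LUN against the best-practice limits stated above.
def lun_ok(lun_gb, vm_vdisks_gb):
    return (lun_gb < 2000                     # must be < 2 TB per LUN
            and 4 <= len(vm_vdisks_gb) <= 8   # 4 to 8 UC VMs per LUN
            and sum(vm_vdisks_gb) <= lun_gb)  # sum of vDisks must fit

# The 720 GB LUN from the example, holding five VMs' vDisks:
print(lun_ok(720, [80, 80, 160, 146, 146]))  # True
```

The same call flags an oversized 2.5 TB LUN, or one whose vDisks total more than its capacity.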
DAS example: UCS C210 M2 TRC1
• Ten 146 GB 15K RPM HDDs: HDDs 1-2 form a single RAID 1 volume holding the vSphere ESXi image; HDDs 3-10 form a single RAID 5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) for the UC VMs (e.g. PUB, UCCX1, CUP1 and other UC VMs)
Notes:
• VMFS block size limits max vDisk size
• Could have >1 VMFS datastore on a RAID volume
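The "VMFS block size limits max vDisk size" note refers to VMFS-3, where the maximum file (and hence vDisk) size is fixed by the block size chosen when the datastore is created. The figures below are the commonly documented VMFS-3 limits of that era and should be treated as an assumption, not something stated in this deck:

```python
# Assumed VMFS-3 limits: block size (MB) -> approx. max file/vDisk size (GB).
vmfs3_max_vdisk_gb = {1: 256, 2: 512, 4: 1024, 8: 2048}

# The UC vDisk sizes in this deck (80-300 GB) fit even small block sizes;
# a 2 MB block size already allows vDisks up to ~512 GB.
print(vmfs3_max_vdisk_gb[2])  # 512
```

In practice the block size chosen at datastore creation, not the 947 GB datastore size, is what caps any single vDisk.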
DAS example: UCS C200 M2 TRC1 for BE6K
• Four 1 TB 7.2K RPM HDDs form a single RAID 10 volume (2 TB after RAID overhead) holding the vSphere ESXi image and a VMFS filestore (1.8 TB after VMFS overhead) for the UC VMs (e.g. PUB, UCCX1, CUP1 and other UC VMs)
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download; an OVA reserves cores, RAM, etc. for its VM
• Basic rule of thumb: fill up the blade until out of capacity; if the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription not supported

Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file; there are usually different VM templates per release, for example CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova (the name includes product, product version, VMware hardware version and template version)
http://tools.cisco.com/cucst
References (customer-accessible):
• UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
• UCS in general: http://www.cisco.com/go/ucs
• Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
• FlexPods: www.cisconetapp.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
• "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html

Partner references:
• Partner Central, "Servers OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community, "Servers OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
© 2010 Cisco and/or its affiliates. All rights reserved.
Excessively Centralized with Duplicate Networks
Inflexible
High TCO
Voice, Video, Mobility, Data
Mainframe, PBX
Too Many Networks
Data Center 1.0 with Traditional Communications
Excessively Distributed with Duplicate Networks
Sprawl
Medium to High TCO
Servers and Appliances
Data Center
Converged Network
Too Many Fabrics
Data Center 2.0 with Unified Communications
Flexible Operations on a Single Network
Agility + Governance
Low TCO
Virtualized Compute/Storage
Unified Fabric & Networks
The Network
Data Center 3.0 with Virtualized Communications
Same TCO drivers: technology, facilities, management burden
E.g. Business Edition 6000
E.g. Hosted Collaboration Solution
Private Cloud
Public Cloud
Provider Cloud or Hybrid
MCS Appliance:
• Configuration-based, highly prescribed
• Only 1 app per server
• IBM x3650-M2 or x3250-M3
• Only one CPU model supported (single 4-core)
• Fixed RAM
• Exact match at part number level for adapters
• 1 storage option only
• Cisco owns app performance

UC on UCS Tested Reference Configuration (TRC):
• Configuration-based, prescribed
• Typical 4-20 VMs per server
• UCS B200/B230/B440 M2, C210/C200 M2
• E5640, E5506, E7-2870/4870 (4-core or 10-core)
• Fixed RAM
• Blades: you pick the adapter; Rack: exact match at part number level for adapters
• Pick from a few storage options
• Cisco owns app performance

UC on UCS Specs-based:
• A few restrictions
• Typical 4-20 VMs per server
• Any UCS that satisfies policy
• 56xx/75xx CPU 2.53+ GHz; E7-xxxx 2.4+ GHz
• RAM depends on VMs
• Regardless of server, you pick the adapters
• Any storage - but you design it
• Customer owns app performance
Blade:
• B230 M2: 2-Socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-Socket Intel 5600, 2 SFF Disk, 12 DIMM
• B250 M2: 2-Socket Intel 5600, 2 SFF Disk, 48 DIMM
• B440 M2: 4-Socket Intel E7-4800, 4 SFF Disk, 32 DIMM

Rack-Mount:
• C200 M2: 2-Socket Intel 5600, 4 Disks, 12 DIMM, 2 PCIe, 1U
• C210 M2: 2-Socket Intel 5600, 16 Disks, 12 DIMM, 5 PCIe, 2U
• C250 M2: 2-Socket Intel 5600, 8 Disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-Socket Intel E7-2800, 16 Disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-Socket Intel E7-4800, 12 Disks, 64 DIMM, 10 PCIe, 4U
• Target Customer Profiles
  - "Ready, willing, able" to support servers, VMware, storage
  - Ready to move off appliance-oriented operations
  - UCS B-series for centralized, medium to high server count
  - UCS C-series for low to medium server count or highly distributed
  - 3rd-party server options for investment leverage
• Platform Support
  - Virtual machine templates - defined by each UC app
  - Application VM co-residency - "mix & match UC with UC" for most UC apps
  - Requires VMware vSphere 4.x/5.0 - ESXi only; feature support depends on app; vCenter is also required for specs-based
  - UCS, HP and IBM server hardware options
  - DAS, SAN, NAS and diskless storage options
  - Various NIC, HBA, CNA and Cisco VIC network options (1Gb through 10Gb)
© 2009 Cisco Systems, Inc. All rights reserved.
UC on UCS Solution Components
Cisco Unified Communications 8.6.1
LAN
SAN Optional Shared Storage
PSTN
… UCS C-Series General-Purpose Rack-Mount Servers
LAN
SAN UCS 6100 Fabric
Interconnect Switches
Required Shared Storage
UCS 5100 Blade Server Chassis UCS 2100 Fabric Extender
UCS B-Series Blade Servers
PSTN
"UC on UCS" B-Series
"UC on UCS" C-Series
Cisco Hosted Collaboration Solution - combining virtualization, management & architecture elements for a comprehensive platform:
• Unified Communication System 8.0: voice, video, presence, mobility, customer care; available in flexible deployment models; delivers an unparalleled user experience
• HCS Management System: zero-touch fulfillment & provisioning with self service; service assurance for enabling high quality of services; coordinated management and integration across domains
• Optimized Virtualization Platform (UC on UCS B-series): resource-optimized for reduced hardware capex; installation & upgrade automation; provides flexibility, customization & additional redundancy
• Scalable System Architecture: customer aggregation & SIP trunking; SLA enablement, security, scalability; cloud-based SaaS integration

Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence and contact center

Infrastructure Solutions: Data Center "building blocks" - Vblock, FlexPod (UC on FlexPod planned, not committed)
Supported Hardware: UC on UCS

Blade:
• B230 M2: 2-Socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-Socket Intel 5600, 2 SFF Disk, 12 DIMM
• B200 M3 SFF: 2-Socket Intel E5-2600, 2 SFF Disk, 24 DIMM
• B250 M2: 2-Socket Intel 5600, 2 SFF Disk, 48 DIMM
• B440 M2: 4-Socket Intel E7-4800, 4 SFF Disk, 32 DIMM

Rack-Mount:
• C200 M2 (LFF): 2-Socket Intel 5600, 4 Disks, 12 DIMM, 2 PCIe, 1U (BE6K)
• C210 M2: 2-Socket Intel 5600, 16 Disks, 12 DIMM, 5 PCIe, 2U
• C220 M3 SFF: 2-Socket Intel E5-2600, 8 Disks, 16 DIMM, 2 PCIe, 1U
• C240 M3 SFF: 2-Socket Intel E5-2600, 24 Disks, 24 DIMM, 5 PCIe, 2U
• C250 M2: 2-Socket Intel 5600, 8 Disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-Socket Intel E7-2800, 16 Disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-Socket Intel E7-4800, 12 Disks, 64 DIMM, 10 PCIe, 4U

Covers UC on UCS Tested Reference Configurations and UC on UCS Specs-based; B200 M3, C220 M3 and C240 M3 target support Fall 2012. (Note: UCS Express not supported.)
Specs-based requirements:
• Allowed server vendors (UCS B/C, UCS Express, other 3rd parties): server model and I/O devices must be on the VMware HCL at www.vmware.com/go/hcl; all parts must be supported by the server vendor; no hardware oversubscription allowed for UC; VMware vCenter is REQUIRED
• Processor: Intel Xeon 56xx/75xx 2.53+ GHz or E7-xxxx 2.4+ GHz; CPU support varies by UC app; required physical core count = sum of UC VMs' vCPUs (+1 if Unity Connection)
• Memory: capacity = sum of UC VMs' vRAM + 2 GB for VMware; follow the server vendor for module density/configuration
• Adapters (e.g. LAN/storage access): must be on the VMware HCL and supported by the server vendor; e.g. 1GbE/10GbE NIC, ≥2Gb FC HBA, 10Gb CNA or VIC
• Storage: SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID; storage capacity = sum of UC VMs' vDisks + VMware/RAID overhead; storage performance = sum of UC VM IOPS
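The sizing sums above can be collected into a quick calculator. This is a sketch of the arithmetic only; the VM figures in the example are illustrative, and the authoritative vCPU/vRAM/vDisk values come from each app's published OVA:

```python
def specs_based_sizing(vms, has_unity_connection=False):
    """Apply the specs-based rules: cores = sum of vCPUs (+1 for Unity
    Connection), RAM = sum of vRAM + 2 GB for VMware, storage = sum of
    vDisks (before VMware/RAID overhead), IOPS = sum of VM IOPS."""
    cores = sum(vm["vcpu"] for vm in vms) + (1 if has_unity_connection else 0)
    ram_gb = sum(vm["vram_gb"] for vm in vms) + 2      # +2 GB for VMware
    disk_gb = sum(vm["vdisk_gb"] for vm in vms)        # add RAID/VMFS overhead on top
    iops = sum(vm.get("iops", 0) for vm in vms)
    return {"cores": cores, "ram_gb": ram_gb, "disk_gb": disk_gb, "iops": iops}

# Illustration: one 2-vCPU/6-GB CUCM VM plus one 2-vCPU/4-GB Unity Connection VM
need = specs_based_sizing(
    [{"vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},
     {"vcpu": 2, "vram_gb": 4, "vdisk_gb": 200, "iops": 200}],
    has_unity_connection=True)
print(need)  # {'cores': 5, 'ram_gb': 12, 'disk_gb': 360, 'iops': 400}
```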
CPU families and UC on UCS support:

• Nehalem-EP (55xx; 4 cores; 2-2.9 GHz), e.g. B200/B250 M1, C210 M1, C250 M1/M2, C200 M2 - UC on UCS certifications: TRCs for B200 M1, TRCs for C210 M1 (E5540), TRC for C200 M2 (E5506)
• Nehalem-EX (65xx/75xx; 4/6/8 cores; 1.7-2.7 GHz), e.g. B230 M1, B440 M1, C460 M1 - certifications: specs-based (75xx at 2.53+ GHz)
• Westmere-EP (56xx; 4/6 cores; 1.9-3.33 GHz), e.g. B200/B250 M2, C210 M2, C250 M2 - certifications: TRCs for B200 M2, TRCs for C210 M2 (E5640), specs-based (56xx at 2.53+ GHz)
• Westmere-EX (E7-28xx/E7-48xx/E7-88xx; 6/8/10 cores; 1.7-2.7 GHz), e.g. B230 M2, B440 M2, C260 M2, C460 M2 - certifications: TRC for B230 M2, TRC for B440 M2 (E7-2870/4870), specs-based (E7 at 2.4+ GHz)
• Romley-EP (E5-26xx; 4/6/8 cores; 1-3 GHz), e.g. B200 M3, C220 M3, C240 M3 - not currently supported by UC
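The specs-based policy in this table reduces to a family-plus-minimum-clock check. A sketch, assuming naive prefix matching on Intel model names (not a real Cisco tool):

```python
# Minimum clock (GHz) per family for specs-based UC, per the table above;
# None marks a family not (yet) supported.
SPECS_BASED_POLICY = [
    ("E5-26", None),   # Romley-EP: not currently supported by UC
    ("E7-",   2.40),   # E7 (Westmere-EX) needs 2.4+ GHz
    ("X56", 2.53), ("E56", 2.53), ("L56", 2.53),  # Westmere-EP 56xx
    ("X75", 2.53), ("E75", 2.53), ("L75", 2.53),  # Nehalem-EX 75xx
]

def uc_specs_based_ok(model, ghz):
    """True if this CPU model/speed qualifies for specs-based UC."""
    for prefix, min_ghz in SPECS_BASED_POLICY:
        if model.startswith(prefix):
            return min_ghz is not None and ghz >= min_ghz
    return False  # family not listed (e.g. 55xx/65xx) -> not eligible

print(uc_specs_based_ok("X5650", 2.66))    # True  (56xx at 2.53+ GHz)
print(uc_specs_based_ok("E7-2870", 2.40))  # True  (E7 at 2.4+ GHz)
print(uc_specs_based_ok("E5-2680", 2.70))  # False (Romley not yet supported)
```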
Consolidation example: 19 UC app copies - a mix of "Small", "Medium" and "Large" VMs, 19 UC VMs with 40 vCPUs total - require:
• 19 MCS appliances (non-virtualized), or
• 5 virtualized servers (dual 4-core B200 M2 TRC), or
• 4 virtualized servers (dual 6-core B200 M2 specs-based), or
• 2 virtualized servers (dual 10-core B230 M2 TRC)
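The server counts above follow from simple core arithmetic. A sketch, which ignores the per-blade ESXi core reservation for messaging covered later:

```python
import math

def servers_needed(total_vcpus, cores_per_server):
    # UC virtualization reserves one physical core per vCPU,
    # so the server count is just a ceiling division.
    return math.ceil(total_vcpus / cores_per_server)

total_vcpus = 40  # 19 UC VMs with 40 vCPUs total
print(servers_needed(total_vcpus, 2 * 4))   # dual 4-core B200 M2 TRC -> 5
print(servers_needed(total_vcpus, 2 * 6))   # dual 6-core specs-based -> 4
print(servers_needed(total_vcpus, 2 * 10))  # dual 10-core B230 M2 TRC -> 2
```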
UC on UCS Products with Owner UC on UCS TRC UC on UCS Specs-based
Unified Communications Manager
Business Edition 6000: TRC on C200 M2 only; specs-based not supported
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco Emergency Responder
Session Manager Edition
InterCompany Media Engine
Unified Attendant Consoles
Unity
Unified Workforce Optimization (WFO)
Unified Contact Center Enterprise Planned
Unified Intelligence Center
Unified Customer Voice Portal Planned
MediaSense Planned
Unified Contact Center Mgmt Portal
SocialMiner
Finesse Planned
Unified E-Mail/Web Interaction Mgr
Prime UCMS (OM/PM/SM/SSM)
Webex Premise Planned Planned
Unified MeetingPlace Planned Planned
TMSCTMS Planned Planned
VCS Planned Planned
Why virtualize your UC? Lower TCO and business agility:
• Reduce servers/storage; reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing; efficient app expansion
• Accelerated UC rollouts; better business continuity; portable/mobile VMs

Why virtualize on UCS? Additional savings and increased agility, plus an end-to-end solution with single support:
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and DC networks - "wire once"
• Consolidated system mgmt; easier service provisioning
• Tested Reference Configurations and Vblocks (Cisco options, VCE Vblock options)

UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX:
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)

OPEX:
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% less)
• Reduced maintenance/support costs (~20%)

Example: 5000 users; dial tone, voicemail and Presence; 10% are Contact Center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
CAPEX/OPEX:
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-series

Other benefits:
• Lower initial investment
• Simple entry/migration to virtualized UC - Data Center expertise not required unless using the SAN option

Example: 5000 users; dial tone, voicemail and Presence; 10% are Contact Center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
[TCO chart: CAPEX and OPEX ($K, 0-3000) vs. appliance or VM count (2-100) for UCS B230 M2 TRC, UCS B200 M2 TRC, UCS C210 M2 TRC and MCS 7845-I3; comparisons drawn: B230 M2 vs B200 M2, C210 M2 vs MCS 7845. Dual-site scenario with PSTN and SAN/LAN at each site.]

Assumptions:
• UC only, no other business applications included; "spare" or "hot standby" hosts not included
• "Server" is either an MCS appliance or a 2-vcpu-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure - redundant sites, switching, blade chassis, rack/blade servers
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Current Offers: Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX), versus 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all UC app instances (VM for Unified CM Pub, VM for Unified CM Sub, VM for Unity Cxn, VM for Unified CCX).
Server model | TRC | CPU | RAM | Storage | Adapters:

• UCS B200 M2 Blade Server
  - TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (RAID1) for VMware, FC SAN for UC apps; Cisco VIC
  - TRC 2: dual E5640 (8 physical cores total); 48 GB; diskless; Cisco VIC
• UCS B230 M2 Blade Server
  - TRC 1: dual E7-2870 (20 physical cores total); 128 GB; diskless; Cisco VIC
• UCS B440 M2 Blade Server
  - TRC 1: dual E7-4870 (40 physical cores total); 256 GB; diskless; Cisco VIC
• UCS C260 M2 Rack-Mount Server
  - TRC 1: dual E7-2870 (20 physical cores total); 128 GB; DAS (2x RAID5); 1GbE NIC
• UCS C210 M2 General-Purpose Rack-Mount Server
  - TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps; 1GbE NIC
  - TRC 2: dual E5640 (8 physical cores total); 48 GB; DAS (2 disks RAID1) for VMware, FC SAN for UC apps; 1GbE NIC and 4G FC HBA
  - TRC 3: dual E5640 (8 physical cores total); 48 GB; diskless; 1GbE NIC and 4G FC HBA
• UCS C200 M2 General-Purpose Rack-Mount Server
  - TRC 1: dual E5506 (8 physical cores total); 24 GB; DAS (4 disks RAID10) for VMware + UC apps; 1GbE NIC
UC app | scale ("users"*) | vCPU (cores; usually 2.53+ GHz per core required) | vRAM (GB) | vDisk (GB) | notes

Unified CM:
• 1000: 2 vCPU, 4 GB vRAM, 1 x 80 GB - UCS C200 or BE6K only
• 2500: 1 vCPU, 2.25 GB vRAM, 1 x 80 GB - not for use with C200/BE6K
• 7500: 2 vCPU, 6 GB vRAM, 2 x 80 GB
• 10000: 4 vCPU, 6 GB vRAM, 2 x 80 GB

Unity Connection:
• 500: 1 vCPU, 2 GB vRAM, 1 x 160 GB
• 1000: 1 vCPU, 4 GB vRAM, 1 x 160 GB
• 5000: 2 vCPU, 4 GB vRAM, 1 x 200 GB
• 10000: 4 vCPU, 4 GB vRAM, 2 x 146 GB - not for use with C200/BE6K
• 20000: 7 vCPU, 8 GB vRAM, 2 x 300 GB

Unified Presence:
• 1000: 1 vCPU, 2 GB vRAM, 1 x 80 GB
• 2500: 2 vCPU, 4 GB vRAM, 1 x 80 GB - not for use with C200/BE6K
• 5000: 4 vCPU, 4 GB vRAM, 2 x 80 GB

Unified CCX:
• 100: 2 vCPU, 4 GB vRAM, 2 x 146 GB - UCS C200 or BE6K only
• 300: 2 vCPU, 4 GB vRAM, 2 x 146 GB - not for use with C200/BE6K
• 400: 4 vCPU, 8 GB vRAM, 2 x 146 GB

Not exhaustive, subject to change - see www.cisco.com/go/uc-virtualized for latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI and other factors; actual supportable user count may vary by deployment.
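A table like this lends itself to a small lookup helper. The values below are transcribed from the table above for illustration only; the authoritative numbers are in the published OVAs:

```python
# (users, vcpu, vram_gb, vdisks in GB) per app, smallest scale first
OVA_TABLE = {
    "cucm": [(1000, 2, 4, [80]), (2500, 1, 2.25, [80]),
             (7500, 2, 6, [80, 80]), (10000, 4, 6, [80, 80])],
    "cuc":  [(500, 1, 2, [160]), (1000, 1, 4, [160]), (5000, 2, 4, [200]),
             (10000, 4, 4, [146, 146]), (20000, 7, 8, [300, 300])],
    "cup":  [(1000, 1, 2, [80]), (2500, 2, 4, [80]), (5000, 4, 4, [80, 80])],
}

def pick_ova(app, users):
    """Return the smallest template whose rated scale covers `users`."""
    for scale, vcpu, vram, vdisk in OVA_TABLE[app]:
        if users <= scale:
            return {"scale": scale, "vcpu": vcpu, "vram_gb": vram, "vdisk_gb": vdisk}
    raise ValueError(f"{users} users exceeds the largest {app} template")

print(pick_ova("cucm", 6000))  # -> the 7500-user template: 2 vCPU, 6 GB, 2 x 80 GB
```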
Co-residency policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy

Three aspects:
1. Allowed app mix on the same physical server - SAME RULES for TRC vs specs-based UCS/HP/IBM
2. Allowed VM OVA choices - DIFFERENT RULES for TRC vs specs-based, due to CPU differences
3. Max number of VMs on the same physical server - SAME RULES for TRC vs specs-based to determine the max, but specs-based might allow more VMs

Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  - But note some UC apps restrict this, e.g. BE6K, CUCCE - see their rules on their docwiki "child pages"
  - NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
  - Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps. E.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
[Diagram: hosts running UC VMs alongside VMs for VMware vCenter, Nexus 1KV VSM, Solutions Plus (CTDP) apps, and unaffiliated 3rd-party apps. UC and non-UC VMs on different blades in the same chassis: OK. On the same blade in the same chassis: not OK.]
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs. See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU. See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  - Why? Usually due to CPU model/speed dependencies.
Example (Unified CM OVAs):
• C200 M2 TRC1 (E5506 2.13 GHz): UCM 1K OVA only
• C200 M2 specs-based (56xx 2.53+ GHz): UCM 1K, 2.5K, 7.5K and 10K OVAs
• B200/C210 M2, TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based): UCM 2.5K, 7.5K and 10K OVAs
[Diagram: packing VM sizes ("Small", "Medium", "Large", "Jumbo") onto physical cores]
• Dual-socket 4-core (e.g. UCS C210 M2 TRC1 with dual E5640): Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes, or 2x Large (e.g. UCM 10K), or 4x Medium (e.g. UCM 7.5K), or 8x Small (e.g. UCM 2.5K)
• Dual-socket 6-core (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed): mixed sizes + 1 reserved, or mixed sizes, or 3x Large (e.g. UCM 10K), or 6x Medium (e.g. UCM 7.5K), or 12x Small (e.g. UCM 2.5K)
Virtual Software Switch Options

[Diagram: VM vNICs attach to a software switch in the ESXi hypervisor, which feeds the host's vmNICs/CNA (FCoE) toward LAN and SAN, e.g. on a UCS B200]

• VMware vSwitch: host-based (local); IEEE 802.1Q VLAN tagging; VLANs only visible to the local ESXi host; EtherChannel; no VM needed
• VMware dvSwitch: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; no VM needed
• Cisco Nexus 1KV: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; Virtual PortChannel; QoS marking (DSCP/CoS); ACL; SPAN; RADIUS/TACACS+; VM needed for the VSM

Nexus 1KV is strongly recommended for UC on UCS B-Series, and not required but recommended for UC on UCS C-Series.
Nexus 1000V:
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
[Diagram: physical switch uplinks to ESXi hosts running Nexus 1000V VEMs, managed by a Nexus 1000V VSM]
Physical switch maps L3 DSCP to L2 CoS:
• CUCM marks traffic based on L3 DSCP values (e.g. a CTL packet leaves CUCM with L2 CoS 0, L3 CS3)
• The pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS if needed, so the packet continues with L2 CoS 3, L3 CS3:

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
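The DSCP-to-CoS map configured above is simply the class-selector (top three) bits of the DSCP value; a quick illustration:

```python
def dscp_to_cos(dscp):
    """Default L3->L2 mapping: CoS = DSCP class selector (high 3 bits)."""
    return dscp >> 3

# CS3 (signaling) and EF (voice bearer), as in the CAT6K config above
print(dscp_to_cos(24))  # DSCP 24 (CS3) -> CoS 3
print(dscp_to_cos(46))  # DSCP 46 (EF)  -> CoS 5
```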
• UCS 6100 doesn't look into the L3 IP header; the DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink pEthernet switch (e.g. CUCM sends L2 CoS 0, L3 CS3 through the UCS 6100 to the CAT6K)
• Default QoS settings on UCS: FCoE ("match cos 3") - no-drop policy; rest ("match any") - best-effort queue
• vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS
Without N1Kv:
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
• Caveat: all traffic types from a virtual UC app will get the Platinum CoS value, while a non-UC application gets the best-effort class - might not be acceptable

N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
[Diagram: CUCM (L2 CoS 3, L3 CS3) through N1KV and UCS 6100 to CAT6K]
Compute layer and SAN/storage layer - Cisco SRND:
[Diagram: UCS 5100 blade server chassis (Nexus 1000V) uplinked via 4x10GE to Cisco UCS 6100 Fabric Interconnects, then FC to Cisco SAN switches and FC storage SP-A/SP-B (3rd-party layer)]

3rd-party SAN example:
• CUCM VM IOPS ~200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
• Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ~ 428 Mbps, within the ~600 Mbps throughput per controller

Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
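The bandwidth figures above come from IOPS times block size; a sketch of the arithmetic, using decimal units, so the results are approximate against the deck's rounded numbers:

```python
def iops_to_mbps(iops, block_kb=4):
    # throughput = IOPS * block size * 8 bits, expressed in megabits/s
    return iops * block_kb * 1000 * 8 / 1e6

print(iops_to_mbps(200))    # one CUCM VM at 200 IOPS: ~6.4 Mbps
print(iops_to_mbps(14000))  # one controller at 14,000 IOPS: ~448 Mbps
# Either way, a 4 Gbps FC interface comfortably carries one controller's load.
```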
• All UC deployment models are supported - no change in the current deployment models; base deployment models (single site, centralized call processing, etc.) are not changing
• VM machine layout on a blade and/or chassis: Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules: no rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-WAN rules / latency requirements are the same - they do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  - Redundancy rules are the same
  - Clustering over the WAN / latency numbers
  - Mega Cluster supported in 8.5
  - Determine quantity/role of nodes
  - For HA: no design checks validating proper placement of primary and secondary servers
  - CUCCE private network requirement
• Mixed clusters of HP, IBM, UCS are supported
  - Subject to "common sense" rules - e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server
  - MOH live audio stream
  - Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
FROM CUCM:

System release | to Unified CM 8.5 | to UC System 8.5
• 4.x: multi-hop thru 6.1(x)/7.1(x) | multi-hop thru 6.1(3)
• 5.1(2): multi-hop thru 6.1(x)/7.1(x) | N/A
• 5.1(3): 2-hop thru 7.1(3) | N/A
• 6.1(1): 2-hop thru 6.1(x)/7.1(x) | 2-hop
• 6.1(2): 2-hop thru 6.1(x)/7.1(x) | N/A
• 6.1(3): 2-hop thru 6.1(x)/7.1(x) | N/A
• 6.1(4): single hop | N/A
• 6.1(5): single hop | N/A
• 7.0(1): 2-hop thru 7.1(x) | 2-hop
• 7.1(2): 2-hop thru 7.1(x) | 2-hop
• 7.1(3): single hop | single hop; multi stages/BWC supported
• 7.1(5): single hop | N/A
• 8.0(1): single hop | single hop; multi stages/BWC supported
• 8.0(2), 8.0(3): single hop | N/A
VMware feature support
• VMware feature support varies by application; some features are supported with caveats, some partially
• For example:
  - Clone virtual machine: "Y (C)" means the VM has to be powered off
  - vMotion: "Y (C)" means vMotion is supported for live traffic - calls shouldn't be dropped (but not guaranteed); "Partial" means in maintenance mode only

ESXi feature | CUCM | CUC | CUP | CCX
• Clone Virtual Machine: Y (C) | Y (C) | Y (C) | Y (C)
• VMware vMotion: Y (C) | Partial | Partial | Y (C)
• Resize Virtual Machine: Partial | Partial | Partial | Partial
• VMware HA: Y (C) | Y (C) | Y (C) | Y (C)
• Boot From SAN: Y (C) | Y (C) | Y (C) | Y (C)
• VMware DRS: No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable:
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
  - Primary/secondary on different blade, chassis, sites
  - On the same blade, mix Subs with TFTP/MoH vs. just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation

Deployment model | server hardware | shared storage | VMware | application
• UC on UCS Tested Reference Configuration: Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
• UC on UCS Specs-based (including Vblock option): Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
• 3rd-party VMware specs-based (HP, IBM): 3rd-party | 3rd-party | 3rd-party | Cisco
• MCS 7800 appliances: Cisco | N/A | N/A | Cisco
• Customer-provided MCS 7800 equivalent: 3rd-party | N/A | N/A | Cisco
Customer Example - Primary Data Center

OLD vs. NEW:
• Hardware nodes: 62 physical servers (EU + HQ clusters) vs. approx. 14
• Software version: 6.1.5 & 8.5.1 vs. 8.5.1
• Unity Connection version: 4.2.1 vs. 8.5.1 - 3 pairs, virtualized
• CER: 2.0 vs. 7.0/8.6 - virtualized
[Server layout: CM PUB, CM SUBs, MOH, TFTP, CER, UCxn nodes]
Deployment Model - Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
Deployment Model - Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
HQ Details

[Blade layout - 8 blade slots, each with two 4-core CPUs:
• CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2
• Messaging VM OVAs: UCxn-1 (active) and UCxn-2 (active), with cores left idle for UCxn
• Contact Center VM OVAs: UCCX-1, UCCX-2
• Presence VM OVAs: CUP-1, CUP-2
• "Spare" blade slots (7-8) available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications]
Branch Office Details

[Rack-server layouts, each server with two 4-core CPUs:
• Three-server option: PUB, TFTP-1, TFTP-2, SUB-1, SUB-2, UCxn-1, UCxn-2 (cores left idle for UCxn), CCX-1, CCX-2, CUP
• Two-server option: PUB/TFTP, SUB, UCxn-1 (cores left idle for UCxn), CCX-1, CCX-2, CUP
• VM OVA groups: CUCM, Messaging, Contact Center, Presence]
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP

[Diagram: host-to-storage stacks
- DAS (SCSI): application -> file system -> volume manager -> SCSI device driver -> SCSI bus adapter
- iSCSI: application -> file system -> volume manager -> SCSI device driver -> iSCSI driver -> TCP/IP stack -> NIC; on the storage side NIC -> TCP/IP stack -> iSCSI layer -> bus adapter
- FC SAN: application -> file system -> volume manager -> SCSI device driver -> FC HBA; block I/O over the SAN]
NAS/SAN Array Best Practices for UC

[Example: five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) and LUN 2 (720 GB); LUN 1 holds UC VMs 1-3 (PUB, SUB1, UCCX1), LUN 2 holds UC VMs 4-6 (UCCX2, CUP1, CUP2)]

• 4 to 8 UC VMs per LUN (max dependent on sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
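Those LUN rules can be wrapped in a small validator. The thresholds come from the bullets above; note the 4-8 VMs per LUN figure is a guideline, not a hard limit:

```python
def check_lun(size_gb, vm_vdisks_gb):
    """Validate a proposed LUN against the UC best practices above."""
    problems = []
    if size_gb >= 2000:
        problems.append("LUN must be <2 TB")
    if not 500 <= size_gb <= 1500:
        problems.append("recommend 500 GB to 1.5 TB per LUN")
    if not 4 <= len(vm_vdisks_gb) <= 8:
        problems.append("guideline is 4 to 8 UC VMs per LUN")
    if sum(vm_vdisks_gb) > size_gb:
        problems.append("sum of vDisks exceeds LUN size")
    return problems or ["ok"]

# The 720 GB LUN above with three ~160 GB VMs trips only the VM-count guideline
print(check_lun(720, [160, 160, 160]))
```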
DAS Example: UCS C210 M2 TRC1

[Ten 146 GB 15K RPM HDDs: HDDs 1-2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3-10 form a single RAID5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) with the UC VMs, e.g. PUB, UCCX1, CUP1]

Notes:
• VMFS block size limits max vDisk size
• Could have >1 VMFS datastore on the RAID volume
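The usable-capacity numbers in this example follow a simple chain. A sketch - the ~7.3% VMFS overhead factor here is inferred from the 1022 GB to 947 GB step in this example, not a published constant:

```python
def raid5_usable_gb(disk_gb, n_disks):
    # RAID5 loses one disk's worth of capacity to parity
    return disk_gb * (n_disks - 1)

def vmfs_usable_gb(raid_gb, overhead=0.073):
    # VMFS formatting/metadata overhead, inferred from this example
    return round(raid_gb * (1 - overhead))

raid = raid5_usable_gb(146, 8)  # 8 of the 10 drives form the RAID5 volume
print(raid)                     # 1022, matching the slide's "1022 GB"
print(vmfs_usable_gb(raid))     # 947, matching the slide's "947 GB"
```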
DAS Example: UCS C200 M2 TRC1 for BE6K

[Four 1 TB 7.2K RPM HDDs form a single RAID10 volume (2 TB after RAID overhead); the VMFS filestore (1.8 TB after VMFS overhead) holds the vSphere ESXi image and the UC VMs, e.g. PUB, UCCX1, CUP1]
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on the supported OVAs for download: the OVA reserves cores, RAM, etc. for the VMs
• Basic rule of thumb: fill up the blade until out of capacity; if the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription not supported
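The "fill up the blade" rule can be sketched as a capacity check. This is a simplification: real placement must also honor the app-mix and OVA rules covered earlier, and the 2 GB hypervisor RAM figure comes from the specs-based memory rule:

```python
def fits_on_blade(vms, phys_cores, phys_ram_gb, has_messaging_vm):
    """True if the VM set fits: vCPUs reserve physical cores 1:1 (+1 core
    for ESXi when a messaging VM is present), and total vRAM must fit
    after a 2 GB hypervisor reservation."""
    cores_needed = sum(vm["vcpu"] for vm in vms) + (1 if has_messaging_vm else 0)
    ram_needed = sum(vm["vram_gb"] for vm in vms) + 2
    return cores_needed <= phys_cores and ram_needed <= phys_ram_gb

# Dual E7-2870 B230 M2 TRC: 20 physical cores, 128 GB RAM
vms = [{"vcpu": 4, "vram_gb": 6}] * 4 + [{"vcpu": 2, "vram_gb": 4}]  # 18 vCPUs
print(fits_on_blade(vms, 20, 128, has_messaging_vm=True))                # True (19 <= 20)
print(fits_on_blade(vms + [{"vcpu": 2, "vram_gb": 4}], 20, 128, True))   # False (21 > 20)
```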
Virtual Machine Sizing
• Virtual machine virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There are usually different VM templates per release, e.g. CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova - the name includes product, product version, VMware hardware version and template version
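That naming convention can be unpacked mechanically. A sketch, assuming the underscore layout shown in the examples above:

```python
def parse_ova_name(filename):
    """Split e.g. 'CUCM_8.6_vmv7_v1.5.ova' into its four components."""
    stem = filename[:-len(".ova")]
    product, version, hw, template = stem.split("_")
    return {
        "product": product,                         # e.g. CUCM
        "product_version": version,                 # e.g. 8.6
        "vmware_hw_version": hw.lstrip("vm"),       # e.g. 7 (from 'vmv7')
        "template_version": template.lstrip("v"),   # e.g. 1.5 (from 'v1.5')
    }

print(parse_ova_name("CUCM_8.6_vmv7_v1.5.ova"))
# {'product': 'CUCM', 'product_version': '8.6', 'vmware_hw_version': '7', 'template_version': '1.5'}
```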
http://tools.cisco.com/cucst
• Customer-accessible:
  - UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  - UCS in general: http://www.cisco.com/go/ucs
  - Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  - FlexPods: www.cisconetapp.com
  - Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  - Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  - "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
  - "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
  - UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
Deployment options: Private Cloud (e.g. Business Edition 6000), Provider Cloud or Hybrid (e.g. Hosted Collaboration Solution), and Public Cloud.
copy 2010 Cisco andor its affiliates All rights reserved 6
MCS Appliance UC on UCS
Tested Reference Configuration (TRC)
UC on UCS
Specs-based
Configuration-based
Highly Prescribed
Only 1 app per server
IBM x3650-M2 or x3250-M3
Only one CPU model supported
(single 4-core)
Fixed RAM
Exact match at part number level
for adapters
1 storage option only
Cisco owns app performance
Configuration-based
Prescribed
Typical 4-20 VMs per server
UCS B200B230B440 M2
C210C200 M2
E5640 E5506 E7-28704870
(4-core or 10-core)
Fixed RAM
Blades you pick adapter
Rack exact match at part number
level for adapters
Pick from a few storage options
Cisco owns app performance
Specs-based
A Few Restrictions
Typical 4-20 VMs per server
Any UCS that satisfies policy
56xx 75xx CPU 253+ GHz
E7-xxxx 24+ GHz
RAM depends on VMs
Regardless of server you pick the
adapters
Any storage ndash but you design it
Customer owns app performance
B230 M22-Socket Intel E7-2800 2 SSD 32 DIMM
B200 M22-Socket Intel 5600 2 SFF Disk 12 DIMM
B250 M22-Socket Intel 5600 2 SFF Disk 48 DIMM
B440 M24-Socket Intel E7-4800 4 SFF Disk 32 DIMM
C200 M22-Socket Intel 5600 4 Disks 12 DIMM 2 PCIe 1U
C210 M22-Socket Intel 5600 16 Disks 12 DIMM 5 PCIe 2U
C250 M22-Socket Intel 5600 8 Disks 48 DIMM 5 PCIe 2U
C460 M24-Socket Intel E7-4800 12 Disks 64 DIMM 10 PCIe 4U
Bla
de
Rack M
ou
nt
C260 M22-Socket Intel E7-2800 16 Disks 64 DIMM 6 PCIe 2U
Updated
New
Updated
Updated
copy 2010 Cisco andor its affiliates All rights reserved 7
bull Target Customer Profiles
ldquoReady willing ablerdquo to support servers VMware storage
Ready to move off appliance-oriented operations
UCS B-series for centralized medium to high server count
UCS C-series for low to medium server count or highly distributed
3rd-party server options for investment leverage
bull Platform Support
Virtual Machine Templates - defined by each UC app
Application VM Co-residency ndash ldquomix amp match UC with UCrdquo for most UC apps
Requries VMware vSphere 45 ndash ESXi only feature support depends on app vCenter is also required for specs-based
UCS HP and IBM server hardware options
DAS SAN NAS and Diskless storage options
Various NIC HBA CNA and Cisco VIC network options (1GB through 10GB)
UC on UCS Solution Components (Cisco Unified Communications 8.6(1)):
• "UC on UCS" C-Series: UCS C-Series general-purpose rack-mount servers connected to the LAN and PSTN, with optional shared storage (SAN)
• "UC on UCS" B-Series: UCS B-Series blade servers in the UCS 5100 blade server chassis with UCS 2100 fabric extenders, UCS 6100 fabric interconnect switches, LAN/SAN, PSTN, and required shared storage
Cisco Hosted Collaboration Solution – combining virtualization, management & architecture elements for a comprehensive platform:
• Unified Communications System 8.0: voice, video, presence, mobility, customer care; available in flexible deployment models; delivers an unparalleled user experience
• HCS Management System: zero-touch fulfillment & provisioning with self service; service assurance for enabling high quality of service; coordinated management and integration across domains
• Optimized Virtualization Platform (UC on UCS B-series): resource-optimized for reduced hardware capex; installation & upgrade automation; provides flexibility, customization & additional redundancy
• Scalable System Architecture: customer aggregation & SIP trunking; SLA enablement, security, scalability; cloud-based SaaS integration

Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence and contact center.
Infrastructure Solutions: data center "building blocks" – Vblock, FlexPod (UC on FlexPod planned, not committed).
Supported Hardware: UC on UCS (Tested Reference Configuration and Specs-based; BE6K on C200 M2)

Blade:
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
Rack-mount:
• C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
• C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
(Note: UCS Express not supported.)
Specs-based hardware policy:
• Allowed server vendors (UCS B/C, UCS Express, other 3rd parties): server model and I/O devices must be on the VMware HCL (www.vmware.com/go/hcl); all parts must be supported by the server vendor; no hardware oversubscription allowed for UC; VMware vCenter is REQUIRED
• Processor: Intel Xeon 56xx/75xx at 2.53+ GHz or E7-xxxx at 2.4+ GHz; CPU support varies by UC app; required physical core count = sum of UC VMs' vCPUs (+1 if Unity Connection)
• Memory: capacity = sum of UC VMs' vRAM + 2 GB for VMware; follow the server vendor for module density/configuration
• Storage: SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID; storage capacity = sum of UC VMs' vDisk + VMware/RAID overhead; storage performance = sum of UC VMs' IOPS
• Adapters (e.g. LAN/storage access): must be on the VMware HCL and supported by the server vendor; e.g. 1GbE/10GbE NIC, 2Gb or faster FC HBA, 10Gb CNA or VIC
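The sizing arithmetic above can be sketched as a small calculator. This is an illustration of the summation rules only; the VM figures below are placeholders, not official OVA values.

```python
# Sketch of the specs-based sizing rules: physical cores, RAM, disk and IOPS
# are each summed over the planned UC VMs, plus the fixed overheads above.
# VM entries (vcpu, vram_gb, vdisk_gb, iops) are illustrative numbers only.

def size_host(vms, has_unity_connection=False):
    cores = sum(vm["vcpu"] for vm in vms) + (1 if has_unity_connection else 0)
    ram_gb = sum(vm["vram_gb"] for vm in vms) + 2   # +2 GB reserved for VMware
    disk_gb = sum(vm["vdisk_gb"] for vm in vms)     # before VMware/RAID overhead
    iops = sum(vm["iops"] for vm in vms)
    return {"cores": cores, "ram_gb": ram_gb, "disk_gb": disk_gb, "iops": iops}

vms = [
    {"vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},  # e.g. a CUCM node
    {"vcpu": 2, "vram_gb": 4, "vdisk_gb": 200, "iops": 150},  # e.g. a Unity Connection node
]
print(size_host(vms, has_unity_connection=True))
# {'cores': 5, 'ram_gb': 12, 'disk_gb': 360, 'iops': 350}
```

The +1 core term mirrors the Unity Connection rule in the processor bullet; everything else is straight addition.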
UC on UCS support by Intel CPU program:
• Nehalem-EP (CPU family 55xx; 4 cores; 2-2.9 GHz). Example UCS models: B200/250 M1, C210 M1, C250 M1/M2, C200 M2. UC on UCS certifications: TRCs for B200 M1, TRCs for C210 M1 (E5540), TRC for C200 M2 (E5506).
• Nehalem-EX (CPU family 65xx/75xx; 4/6/8 cores; 1.7-2.7 GHz). Example UCS models: B230 M1, B440 M1, C460 M1. Certifications: specs-based (75xx at 2.53+ GHz).
• Westmere-EP (CPU family 56xx; 4/6 cores; 1.9-3.33 GHz). Example UCS models: B200/250 M2, C210 M2, C250 M2. Certifications: TRCs for B200 M2, TRCs for C210 M2 (E5640); specs-based (56xx at 2.53+ GHz).
• Westmere-EX (CPU family E7-28xx/48xx/88xx; 6/8/10 cores; 1.7-2.7 GHz). Example UCS models: B230 M2, B440 M2, C260 M2, C460 M2. Certifications: TRC for B230 M2, TRC for B440 M2 (E7-2870/4870); specs-based (E7 at 2.4+ GHz).
• Romley-EP (CPU family E5-26xx; 4/6/8 cores; 1-3 GHz). Example UCS models: B200 M3, C220 M3, C240 M3. Not currently supported by UC.
Consolidation example: 19 UC app copies on 19 MCS appliances become 19 UC VMs ("small", "medium" and "large") with 40 vCPUs total, hosted on:
• 5 virtualized servers (dual 4-core B200 M2 TRC), or
• 4 virtualized servers (dual 6-core B200 M2 specs-based), or
• 2 virtualized servers (dual 10-core B230 M2 TRC)
UC on UCS products (TRC vs. specs-based support):
• Unified Communications Manager
• Business Edition 6000: TRC on C200 M2 only; not supported specs-based
• Unity Connection
• Unified Presence
• Unified Contact Center Express
• Cisco Emergency Responder
• Session Manager Edition
• InterCompany Media Engine
• Unified Attendant Consoles
• Unity
• Unified Workforce Optimization (WFO)
• Unified Contact Center Enterprise: planned
• Unified Intelligence Center
• Unified Customer Voice Portal: planned
• MediaSense: planned
• Unified Contact Center Mgmt Portal
• SocialMiner
• Finesse: planned
• Unified Email/Web Interaction Mgr
• Prime UCMS (OM/PM/SM/SSM)
• Webex Premise: planned for both TRC and specs-based
• Unified MeetingPlace: planned for both
• TMS/CTMS: planned for both
• VCS: planned for both
Why virtualize your UC: lower TCO and business agility.
Why virtualize on UCS: additional savings and increased agility, an end-to-end solution, and single support.
• Tested Reference Configurations (Cisco options) and Vblocks (VCE Vblock options)
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and data center networks – "wire once"
• Consolidated system management; easier service provisioning
• Reduced servers/storage; reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing; efficient app expansion
• Accelerated UC rollouts
• Better business continuity; portable/mobile VMs
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX:
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX:
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% fewer)
• Reduced maintenance/support costs (~20%)
Example: 5000 users with dial tone, voicemail and Presence, 10% of whom are contact center agents; 11 non-virtualized rack servers required for UC, more for other business apps.
CAPEX/OPEX:
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-series
Other benefits:
• Lower initial investment
• Simple entry/migration to virtualized UC – data center expertise not required unless using the SAN option
Example: 5000 users with dial tone, voicemail and Presence, 10% of whom are contact center agents; 11 non-virtualized rack servers required for UC, more for other business apps.
TCO comparison (chart): CAPEX and OPEX in $K ($0 to $3000K) plotted against appliance or VM count (2, 4, 8, 10, 12, 20, 50, 100) for UCS B230 M2 TRC, UCS B200 M2 TRC, UCS C210 M2 TRC and MCS 7845-I3 – i.e. B230 M2 vs. B200 M2, and C210 M2 vs. MCS 7845, in a dual-site scenario (PSTN and SAN/LAN at each site).
Assumptions:
• UC only; no other business applications included; "spare" or "hot standby" hosts not included
• "Server" is either an MCS appliance or a 2-vCPU-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure (redundant sites, switching, blade chassis, rack/blade servers)
• List pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Current Offers: Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM, Unity Connection, Unified CCX), versus 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores hosts all UC app instances – VMs for Unified CM Pub, Unified CM Sub, Unity Connection and Unified CCX.
Tested Reference Configurations (server model / TRC / CPU / RAM / storage / adapters):
• UCS B200 M2 Blade Server – TRC 1: dual E5640 (8 physical cores total), 48 GB, DAS (RAID1) for VMware + FC SAN for UC apps, Cisco VIC. TRC 2: dual E5640 (8 physical cores total), 48 GB, diskless, Cisco VIC.
• UCS B230 M2 Blade Server – TRC 1: dual E7-2870 (20 physical cores total), 128 GB, diskless, Cisco VIC.
• UCS B440 M2 Blade Server – TRC 1: dual E7-4870 (40 physical cores total), 256 GB, diskless, Cisco VIC.
• UCS C260 M2 Rack-Mount Server – TRC 1: dual E7-2870 (20 physical cores total), 128 GB, DAS (2x RAID5), 1GbE NIC.
• UCS C210 M2 General-Purpose Rack-Mount Server – TRC 1: dual E5640 (8 physical cores total), 48 GB, DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps, 1GbE NIC. TRC 2: dual E5640, 48 GB, DAS (2 disks RAID1) for VMware + FC SAN for UC apps, 1GbE NIC and 4G FC HBA. TRC 3: dual E5640, 48 GB, diskless, 1GbE NIC and 4G FC HBA.
• UCS C200 M2 General-Purpose Rack-Mount Server – TRC 1: dual E5506 (8 physical cores total), 24 GB, DAS (4 disks RAID10) for VMware + UC apps, 1GbE NIC.
VM sizing per UC app: scale ("users") / vCPU (cores, usually 2.53+ GHz per core required) / vRAM (GB) / vDisk (GB):

Unified CM:
• 1000 users: 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk – UCS C200 or BE6K only
• 2500 users: 1 vCPU, 2.25 GB, 1 x 80 GB – not for use with C200/BE6K
• 7500 users: 2 vCPU, 6 GB, 2 x 80 GB
• 10000 users: 4 vCPU, 6 GB, 2 x 80 GB
Unity Connection:
• 500 users: 1 vCPU, 2 GB, 1 x 160 GB
• 1000 users: 1 vCPU, 4 GB, 1 x 160 GB
• 5000 users: 2 vCPU, 4 GB, 1 x 200 GB
• 10000 users: 4 vCPU, 4 GB, 2 x 146 GB – not for use with C200/BE6K
• 20000 users: 7 vCPU, 8 GB, 2 x 300 GB
Unified Presence:
• 1000 users: 1 vCPU, 2 GB, 1 x 80 GB
• 2500 users: 2 vCPU, 4 GB, 1 x 80 GB – not for use with C200/BE6K
• 5000 users: 4 vCPU, 4 GB, 2 x 80 GB
Unified CCX:
• 100 users: 2 vCPU, 4 GB, 2 x 146 GB – UCS C200 or BE6K only
• 300 users: 2 vCPU, 4 GB, 2 x 146 GB – not for use with C200/BE6K
• 400 users: 4 vCPU, 8 GB, 2 x 146 GB

Not exhaustive and subject to change; see www.cisco.com/go/uc-virtualized for the latest. "Users" means the user count for particular values of BHCA, trace level, encryption, CTI and other factors; the actual supportable user count may vary by deployment.
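As a rough illustration of how such a table drives template selection, the sketch below picks the smallest Unified CM tier that covers a target user count. The tier values are transcribed from the table above; platform restrictions such as "C200/BE6K only" are deliberately ignored for simplicity.

```python
# Pick the smallest Unified CM OVA tier that covers a target user count,
# using the scale tiers from the table above (vCPU, vRAM GB, vDisk GB).
CUCM_OVAS = [
    (1000,  {"vcpu": 2, "vram_gb": 4,    "vdisks_gb": [80]}),
    (2500,  {"vcpu": 1, "vram_gb": 2.25, "vdisks_gb": [80]}),
    (7500,  {"vcpu": 2, "vram_gb": 6,    "vdisks_gb": [80, 80]}),
    (10000, {"vcpu": 4, "vram_gb": 6,    "vdisks_gb": [80, 80]}),
]

def pick_ova(users):
    for scale, spec in CUCM_OVAS:
        if users <= scale:
            return scale, spec
    raise ValueError("beyond single-node scale; cluster across nodes")

scale, spec = pick_ova(6000)
print(scale, spec["vcpu"])  # 7500 2  (the 7500-user tier, 2 vCPU)
```

Note the tiers are not monotonic in vCPU (the 1000-user tier has more vCPU than the 2500-user tier because it targets the slower C200/BE6K CPU), which is why real selection also has to honor the platform notes.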
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server – SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices – DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server – SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  - Note that some UC apps restrict this (e.g. BE6K, CUCCE); see their rules on their docwiki "child pages"
  - NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
  - Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix
• A SEPARATE PHYSICAL SERVER is required for non-UC or 3rd-party apps, e.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
(Figure: UC VMs on one server; VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus/CTDP apps and unaffiliated 3rd-party apps on a separate server. Different blades in the same chassis are OK; the same blade in the same chassis is not OK.)
• App to HW: some apps (e.g. CUCCE) don't allow any of their OVAs on certain TRCs. See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU. See the co-residency policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
• Why? Usually due to CPU model/speed dependencies.
(Figure: C200 M2 TRC1 (E5506, 2.13 GHz) supports only the UCM 1K OVA, while C200 M2 specs-based (56xx, 2.53+ GHz) and B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based) support the UCM 1K, 2.5K, 7.5K and 10K OVAs.)
VM placement options per host ("small", "medium", "large" and "jumbo" VM classes; examples: UCM 10K = large, 4 vCPU; UCM 7.5K = medium, 2 vCPU; UCM 2.5K = small, 1 vCPU; one vCPU maps to one physical core):
• Dual-socket 4-core host (e.g. UCS C210 M2 TRC1 with dual E5640, 8 cores): jumbo + 1 reserved core; or mixed sizes + 1 reserved core; or mixed sizes; or 2:1 large (e.g. UCM 10K); or 4:1 medium (e.g. UCM 7.5K); or 8:1 small (e.g. UCM 2.5K).
• Dual-socket 6-core host (e.g. UCS C210 M2 specs-based with a UC-supported CPU model at minimum speed, 12 cores): mixed sizes + 1 reserved core; or mixed sizes; or 3:1 large (e.g. UCM 10K); or 6:1 medium (e.g. UCM 7.5K); or 12:1 small (e.g. UCM 2.5K).
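The packing ratios above follow from mapping one vCPU to one physical core. A minimal sketch of that arithmetic, assuming the one-reserved-core rule applies when a messaging VM requires ESXi headroom:

```python
# How many VMs of a given vCPU size fit on a host when each vCPU maps to one
# physical core, optionally reserving one core (e.g. for ESXi with messaging).
def max_vms(physical_cores, vcpu_per_vm, reserve_core_for_esxi=False):
    usable = physical_cores - (1 if reserve_core_for_esxi else 0)
    return usable // vcpu_per_vm

print(max_vms(8, 4))        # 2 "large" VMs (e.g. UCM 10K) on a dual 4-core host
print(max_vms(12, 2))       # 6 "medium" VMs on a dual 6-core host
print(max_vms(8, 1, True))  # 7 "small" VMs when one core is reserved
```

The integer division is the whole trick: mixed-size layouts just partition the same usable core count among different vCPU sizes.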
Virtual software switch options (VM → vNIC → software switch in the ESXi hypervisor → vmNIC/CNA → FCoE → LAN/SAN, e.g. on a UCS B200):
• VMware vSwitch: host-based (local); IEEE 802.1Q VLAN tagging; VLANs only visible to the local ESXi host; EtherChannel; no VM needed.
• VMware dvSwitch: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; no VM needed.
• Cisco Nexus 1KV: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; virtual PortChannel; QoS marking (DSCP/CoS); ACL; SPAN; RADIUS/TACACS+; a VM is needed for the VSM. Strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
Nexus 1000V:
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
(Architecture: a Nexus 1000V VEM runs on each ESXi host, managed by the Nexus 1000V VSM and uplinked to the physical switch.)
Physical switch maps L3 DSCP to L2 CoS. CUCM marks traffic based on L3 DSCP values; the pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS if needed – e.g. a CTL packet leaves CUCM with L2 CoS 0 but L3 CS3, and the CAT6K rewrites the L2 CoS to 3:

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
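In software terms, the rewrite those Catalyst commands configure is just a DSCP-keyed lookup. A minimal sketch (the frame dict is a stand-in for real packet metadata, not an actual packet API):

```python
# Minimal model of the DSCP-to-CoS rewrite configured above: map DSCP 24
# (CS3, call signaling) to CoS 3 and DSCP 46 (EF, voice) to CoS 5.
DSCP_TO_COS = {24: 3, 46: 5}

def rewrite_cos(frame):
    # frame: {"dscp": int, "cos": int}; unmapped DSCP values keep their CoS.
    frame = dict(frame)
    frame["cos"] = DSCP_TO_COS.get(frame["dscp"], frame["cos"])
    return frame

print(rewrite_cos({"dscp": 24, "cos": 0}))  # {'dscp': 24, 'cos': 3}
```

The L3 DSCP is left untouched, matching the slide's point that only the L2 marking changes at the pSwitch.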
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS: FCoE ("match cos 3") gets the no-drop policy; the rest ("match any") goes to the best-effort queue
• The vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS
(Path: CUCM → UCS 6100 → CAT6K; the frame leaves with L2 CoS 0 and L3 CS3 unchanged.)
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without N1Kv:
• All traffic types from a virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
Compute layer and SAN/storage layer (Cisco SRND): UCS 6100 fabric interconnects and UCS 5100 blade servers (with Nexus 1000V) uplink over 4x10GE; Cisco SAN switches carry FC to the storage array (SP-A/SP-B, 3rd-party layer).
3rd-party SAN example:
• CUCM VM IOPS ~200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
• Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
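The bandwidth figures above can be reproduced with decimal units (4 KB taken as 4000 bytes), which matches the per-VM ~6.4 Mbps number:

```python
# IOPS-to-throughput conversion used in the SAN example above: bytes moved per
# second times 8 bits, with a 4 KB block taken as 4000 bytes (decimal units).
def iops_to_mbps(iops, block_bytes=4000):
    return iops * block_bytes * 8 / 1_000_000

print(iops_to_mbps(200))    # 6.4   Mbps for one CUCM VM at 200 IOPS
print(iops_to_mbps(14000))  # 448.0 Mbps per 14,000-IOPS controller
```

Either result stays comfortably under both the 600 Mbps controller throughput and a single 4 Gbps FC interface, which is the slide's conclusion.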
• All UC deployment models are supported – no change to the current deployment models; the base deployment models (single site, centralized call processing, etc.) are not changing
• VM layout on a blade and/or chassis: Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules: no rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same; they do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
  - Redundancy rules are the same
  - Clustering over the WAN / latency numbers
  - Mega cluster supported in 8.5
  - Determine quantity/role of nodes
  - For HA: no design checks validate proper placement of primary and secondary servers
  - CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported, subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than a Sub or Secondary
• Direct-attach devices must be on a physical MCS server: MOH live audio stream; tape backup/floppy
• New factors to consider for end-to-end QoS design/configuration
Migration paths from a CUCM system release (to Unified CM 8.5 | to UC System 8.5):
• 4.x: multi-hop through 6.1(x)/7.1(x) | multi-hop through 6.1(3)
• 5.1(2): multi-hop through 6.1(x)/7.1(x) | N/A
• 5.1(3): 2-hop through 7.1(3) | N/A
• 6.1(1): 2-hop through 6.1(x)/7.1(x) | 2-hop
• 6.1(2): 2-hop through 6.1(x)/7.1(x) | N/A
• 6.1(3): 2-hop through 6.1(x)/7.1(x) | N/A
• 6.1(4): single hop | N/A
• 6.1(5): single hop | N/A
• 7.0(1): 2-hop through 7.1(x) | 2-hop
• 7.1(2): 2-hop through 7.1(x) | 2-hop
• 7.1(3): single hop | single hop, multi stages/BWC supported
• 7.1(5): single hop | N/A
• 8.0(1): single hop | single hop, multi stages/BWC supported
• 8.0(2), 8.0(3): single hop | N/A
VMware feature support:
• VMware feature support varies by application; some features are supported with caveats, some partially.
• For example, for Clone Virtual Machine, "Y (C)" means the VM has to be powered off. For vMotion, "Y (C)" means vMotion is supported for live traffic and calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only.

ESXi feature (CUCM / CUC / CUP / CCX):
• Clone Virtual Machine: Y (C) / Y (C) / Y (C) / Y (C)
• VMware vMotion: Y (C) / Partial / Partial / Y (C)
• Resize Virtual Machine: Partial / Partial / Partial / Partial
• VMware HA: Y (C) / Y (C) / Y (C) / Y (C)
• Boot from SAN: Y (C) / Y (C) / Y (C) / Y (C)
• VMware DRS: No / No / No / No
Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable, and the UC apps' redundancy rules are the same:
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact (primary/secondary on different blades, chassis, sites; on the same blade, mix Subs with TFTP/MoH rather than just Subs)
• Redundancy of UCS components (blade, chassis, FEX links, interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC support demarcation (server hardware / shared storage / VMware / application):
• UC on UCS Tested Reference Configuration: server hardware Cisco; shared storage 3rd-party; VMware Cisco; application Cisco (VCE owns the server/storage/VMware components for the Vblock option)
• UC on UCS Specs-based (including Vblock option): server hardware Cisco; shared storage 3rd-party; VMware Cisco; application Cisco (VCE for Vblock components)
• 3rd-party VMware servers (HP, IBM), specs-based: server hardware 3rd-party; shared storage 3rd-party; VMware 3rd-party; application Cisco
• MCS 7800 appliances: server hardware Cisco; shared storage N/A; VMware N/A; application Cisco
• Customer-provided MCS 7800 equivalent: server hardware 3rd-party; shared storage N/A; VMware N/A; application Cisco
Customer example – primary data center (old vs. new):
• Hardware nodes: 62 physical servers (EU HQ clusters) → approx. 14
• Software version: 6.1(5) & 8.5(1) → 8.5(1)
• Unity Connection version: 4.2(1) → 8.5(1), 3 pairs, virtualized
• CER: 2.0/7.0 → 8.6, virtualized
(Old layout: CM Pub, CM Subs, MOH/TFTP, CER and Unity Connection each on dedicated servers.)
Deployment model – data center 1: CM Pub, CM Subs, MOH/TFTP, CER and Unity Connection (UCxn) VMs distributed across hosts.
Deployment model – data center 2: the same roles mirrored (CM Pub, CM Subs, MOH/TFTP, CER, UCxn).
Customer design (figure): PSTN and IP WAN connectivity through a CUSP SIP proxy front-ending Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express. The large site (11K phones) runs on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a UCS 6100 fabric interconnect switch; the smaller sites (3K phones, 400 phones) use Cisco UCS C210 or C200 general-purpose rack-mount servers.
HQ details (figure): six B-series blades, each with two 4-core CPUs (one VM per core). CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1/TFTP-2), messaging VM OVAs (UCxn-1 and UCxn-2 active, with a core left idle on each UCxn blade), presence VM OVAs (CUP-1/CUP-2) and contact center VM OVAs (UCCX-1/UCCX-2) are distributed across the blades so that primary/secondary pairs never share a blade. "Spare" blade slots (slots 7 and 8) remain available for non-UC workloads such as Cisco Nexus 1000V and VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.
Branch office details (figure): C-series rack servers, each with two 4-core CPUs (one VM per core). A three-server site spreads PUB, TFTP-1/TFTP-2, SUB-1/SUB-2, UCxn-1/UCxn-2 (leaving a core idle on each UCxn host), CCX-1/CCX-2 and CUP across the hosts; a two-server site consolidates to PUB/TFTP and SUB plus the UCxn pair, CCX-1/CCX-2 and CUP. The CUCM, messaging, contact center and presence VM OVAs are the same as in the HQ layout.
• DAS: rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI. Stack: application → file system → volume manager → SCSI device driver → SCSI bus adapter → SCSI storage media.
• iSCSI: access SCSI storage media over an IP network. Host stack: application → file system → volume manager → SCSI device driver → iSCSI driver → TCP/IP stack → NIC; the storage side terminates iSCSI (NIC → TCP/IP stack → iSCSI layer → bus adapter).
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s. Stack: application → file system → volume manager → SCSI device driver → FC HBA → FC SAN.
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP.
(In each case the host server reaches the storage media over a storage transport carrying block I/O, via SAN or IP.)
NAS/SAN array best practices for UC (example): five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) holding UC VMs 1-3 (PUB, SUB1, UCCX1) and LUN 2 (720 GB) holding UC VMs 4-6 (UCCX2, CUP1, CUP2):
• 4 to 8 UC VMs per LUN (the max depends on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
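A quick way to apply these LUN rules when planning is to sum the vDisks per LUN and check the thresholds. The limits are transcribed from the bullets above; this is a planning-aid sketch, not an official tool.

```python
# Check a planned LUN against the best-practice rules above: 4-8 UC VMs per
# LUN, size below 2 TB, 500 GB to 1.5 TB recommended (sizes in GB, decimal).
def check_lun(vdisk_sizes_gb):
    size_gb = sum(vdisk_sizes_gb)
    problems = []
    if not 4 <= len(vdisk_sizes_gb) <= 8:
        problems.append("aim for 4-8 UC VMs per LUN")
    if size_gb >= 2000:
        problems.append("LUN must be < 2 TB")
    elif not 500 <= size_gb <= 1500:
        problems.append("recommend 500 GB to 1.5 TB per LUN")
    return problems

print(check_lun([80, 80, 146, 146]))        # 452 GB -> size recommendation warning
print(check_lun([160, 200, 146, 146, 80]))  # 732 GB, 5 VMs -> []
```

A real plan would also check the array's aggregate IOPS against the per-VM IOPS figures discussed earlier.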
DAS example: UCS C210 M2 TRC1. Ten 146 GB 15K RPM HDDs: HDDs 1-2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3-10 form a single RAID5 volume (1022 GB after RAID overhead) carrying a VMFS filestore (947 GB after VMFS overhead) that holds the UC VMs (e.g. PUB, UCCX1 and CUP1 as UC VMs 1, 3 and 5).
Notes:
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on the RAID volume
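The usable-capacity figures in these DAS examples follow standard RAID arithmetic. A rough sketch (the further VMFS formatting overhead, e.g. 1022 GB shrinking to 947 GB above, is not modeled):

```python
# Usable capacity after RAID overhead: RAID5 loses one disk's worth of
# capacity to parity; RAID1/RAID10 mirror, halving raw capacity.
def raid_usable_gb(disks, disk_gb, level):
    if level == "raid5":
        return (disks - 1) * disk_gb
    if level in ("raid1", "raid10"):
        return disks * disk_gb // 2
    raise ValueError(level)

print(raid_usable_gb(8, 146, "raid5"))    # 1022 GB, as in the C210 M2 TRC1
print(raid_usable_gb(4, 1000, "raid10"))  # 2000 GB (2 TB), as in the BE6K C200
```

Both results match the volume sizes quoted in the two DAS examples.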
DAS example: UCS C200 M2 TRC1 for BE6K. Four 1 TB 7.2K RPM HDDs form a single RAID10 volume (2 TB after RAID overhead) carrying a VMFS filestore (1.8 TB after VMFS overhead) that holds the vSphere ESXi image and the UC VMs (e.g. PUB, UCCX1 and CUP1 as UC VMs 1, 3 and 5).
UC application co-residency:
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download; the OVA reserves cores, RAM, etc. for VMs
• Basic rule of thumb: fill up the blade until it is out of capacity; if the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
Virtual machine sizing:
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There are usually different VM templates per release, e.g. CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova – the name includes the product, product version, VMware hardware version and template version
http://tools.cisco.com/cucst
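That naming convention can be pulled apart mechanically. The regex below is an assumed sketch of the pattern (product_productVersion_vmvN_vTemplateVersion.ova), not an official parser:

```python
# Parse the OVA naming convention described above into its components.
import re

def parse_ova(name):
    m = re.fullmatch(r"(\w+?)_([\d.]+)_vmv(\d+)_v([\d.]+)\.ova", name)
    if not m:
        raise ValueError(name)
    product, prod_ver, vmv, tmpl = m.groups()
    return {"product": product, "product_version": prod_ver,
            "vm_hw_version": int(vmv), "template_version": tmpl}

print(parse_ova("CUCM_8.5_vmv7_v2.1.ova"))
# {'product': 'CUCM', 'product_version': '8.5', 'vm_hw_version': 7, 'template_version': '2.1'}
```

Splitting the name like this makes it easy to verify, for instance, that the VMware hardware version (vmv7) matches what the target ESXi host supports.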
• Customer-accessible:
  - UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  - UCS in general: http://www.cisco.com/go/ucs
  - Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  - FlexPods: www.cisco.netapp.com
  - Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  - Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  - "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
  - "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
  - UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
© 2010 Cisco and/or its affiliates. All rights reserved.

MCS Appliance vs. UC on UCS Tested Reference Configuration (TRC) vs. UC on UCS Specs-based

MCS Appliance (highly prescribed):
- Only 1 app per server
- IBM x3650-M2 or x3250-M3
- Only one CPU model supported (single 4-core)
- Fixed RAM
- Exact match at part-number level for adapters
- 1 storage option only
- Cisco owns app performance

UC on UCS Tested Reference Configuration (configuration-based, prescribed):
- Typically 4-20 VMs per server
- UCS B200/B230/B440 M2 or C210/C200 M2
- E5640, E5506, E7-2870/4870 CPUs (4-core or 10-core)
- Fixed RAM
- Blades: you pick the adapter; rack: exact match at part-number level for adapters
- Pick from a few storage options
- Cisco owns app performance

UC on UCS Specs-based (a few restrictions):
- Typically 4-20 VMs per server
- Any UCS that satisfies policy
- 56xx/75xx CPUs at 2.53+ GHz; E7-xxxx at 2.4+ GHz
- RAM depends on VMs
- Regardless of server, you pick the adapters
- Any storage, but you design it
- Customer owns app performance

Server portfolio:

Blade:
- B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
- B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
- B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
- B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM

Rack-mount:
- C200 M2: 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U
- C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
- C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
- C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
- C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
• Target Customer Profiles
  "Ready, willing, able" to support servers, VMware, storage
  Ready to move off appliance-oriented operations
  UCS B-series for centralized, medium to high server count
  UCS C-series for low to medium server count, or highly distributed
  3rd-party server options for investment leverage
• Platform Support
  Virtual machine templates: defined by each UC app
  Application VM co-residency: "mix & match UC with UC" for most UC apps
  Requires VMware vSphere 4/5, ESXi only; feature support depends on app; vCenter is also required for specs-based
  UCS, HP and IBM server hardware options
  DAS, SAN, NAS and diskless storage options
  Various NIC, HBA, CNA and Cisco VIC network options (1Gb through 10Gb)
UC on UCS Solution Components (Cisco Unified Communications 8.6.1)

"UC on UCS" C-Series: UCS C-Series general-purpose rack-mount servers connected to the LAN and PSTN, with optional shared storage (SAN).

"UC on UCS" B-Series: UCS B-Series blade servers in the UCS 5100 blade server chassis with UCS 2100 fabric extenders, attached through UCS 6100 Fabric Interconnect switches to the LAN, SAN and PSTN, with required shared storage.
Cisco Hosted Collaboration Solution: combining virtualization, management and architecture elements for a comprehensive platform

Unified Communications System 8.0
• Voice, video, presence, mobility, customer care
• Available in flexible deployment models
• Delivers an unparalleled user experience

HCS Management System
• Zero-touch fulfillment & provisioning with self-service
• Service assurance for enabling high quality of services
• Coordinated management and integration across domains

Optimized Virtualization Platform (UC on UCS B-series)
• Resource-optimized for reduced hardware capex
• Installation & upgrade automation
• Provides flexibility, customization & additional redundancy

Scalable System Architecture
• Customer aggregation & SIP trunking
• SLA enablement, security, scalability
• Cloud-based SaaS integration

Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence and contact center

Infrastructure Solutions: data center "building blocks": Vblock, FlexPod (UC on FlexPod planned, not committed)
Supported Hardware: UC on UCS (Tested Reference Configuration and Specs-based)

Blade:
- B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
- B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)
- B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
- B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
- B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM

Rack-mount:
- C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
- C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
- C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
- C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
- C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
- C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
- C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U

(Note: UCS Express is not supported.)
copy 2010 Cisco andor its affiliates All rights reserved 11
Must be on VMware
HCL
Allowed Server
Vendors
Server model and IO devices on wwwvmwarecomgohcl
All parts must be supported by server vendor
No hardware oversubscription allowed for UC
VMware vCenter is REQUIRED
Processor
Intel Xeon 56xx75xx 253+ GHz or E7-xxxx 24+ GHz
CPU support varies by UC app
Required physical core count = sumUC VMs vCPU (+1 if Unity Cxn)
Capacity = sumUC VMs vRAM + 2GB for VMware
Follow server vendor for module densityconfiguration
Memory
Storage Network
Must be on VMware HCL and supported by server vendor
Eg 1GbE10GbE NIC ge2Gb FC HBA 10Gb CNA or VIC
UCS BC UCS Express Other 3rd-parties
Adapters (eg LANStorage Access)
SAN (FCoE FC iSCSI) NAS (NFS) Variable DASRAID
Storage capacity = sumUC VMs vDisk + VMwareRAID overhead
Storage performance = sumUC VM IOPS
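The sizing arithmetic above can be sketched as a small calculator. This is a minimal illustration, not a Cisco tool: the CUCM figures below come from the 7500-user and 5000-user OVA rows later in this deck, while the per-VM IOPS values (200 for CUCM, 150 for Unity Connection) are assumptions for the example.

```python
# Specs-based sizing rules, as stated above:
#   cores = sum of vCPUs (+1 physical core if Unity Connection is present)
#   RAM   = sum of vRAM + 2 GB for VMware
#   disk  = sum of vDisk (VMware/RAID overhead not shown)
#   IOPS  = sum of the per-VM IOPS
def size_host(vms, has_unity_connection=False):
    cores = sum(vm["vcpu"] for vm in vms) + (1 if has_unity_connection else 0)
    ram_gb = sum(vm["vram_gb"] for vm in vms) + 2   # +2 GB for VMware
    disk_gb = sum(vm["vdisk_gb"] for vm in vms)
    iops = sum(vm["iops"] for vm in vms)
    return {"cores": cores, "ram_gb": ram_gb, "disk_gb": disk_gb, "iops": iops}

# Example mix: CUCM pub + sub (7500-user OVA) and Unity Connection (5000-user OVA)
vms = [
    {"vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},  # CUCM pub
    {"vcpu": 2, "vram_gb": 6, "vdisk_gb": 160, "iops": 200},  # CUCM sub
    {"vcpu": 2, "vram_gb": 4, "vdisk_gb": 200, "iops": 150},  # Unity Cxn (IOPS assumed)
]
print(size_host(vms, has_unity_connection=True))
# {'cores': 7, 'ram_gb': 18, 'disk_gb': 520, 'iops': 550}
```

A host that meets this result (7 physical cores, 18 GB RAM, 520 GB of disk, 550 IOPS) would satisfy the policy for this VM mix.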
Intel CPU programs and UC on UCS support:

Nehalem-EP (CPU family 55xx; 4 cores; 2-2.9 GHz)
- Example UCS models: B200/B250 M1, C210 M1, C250 M1/M2, C200 M2
- UC on UCS certifications: TRCs for B200 M1 and C210 M1 (E5540); TRC for C200 M2 (E5506)

Nehalem-EX (65xx, 75xx; 4/6/8 cores; 1.7-2.7 GHz)
- Example UCS models: B230 M1, B440 M1, C460 M1
- UC on UCS certifications: specs-based (75xx at 2.53+ GHz)

Westmere-EP (56xx; 4/6 cores; 1.9-3.33 GHz)
- Example UCS models: B200/B250 M2, C210 M2, C250 M2
- UC on UCS certifications: TRCs for B200 M2 and C210 M2 (E5640); specs-based (56xx at 2.53+ GHz)

Westmere-EX (E7-28xx, E7-48xx, E7-88xx; 6/8/10 cores; 1.7-2.7 GHz)
- Example UCS models: B230 M2, B440 M2, C260 M2, C460 M2
- UC on UCS certifications: TRCs for B230 M2 and B440 M2 (E7-2870/4870); specs-based (E7 at 2.4+ GHz)

Romley-EP (E5-26xx; 4/6/8 cores; 1-3 GHz)
- Example UCS models: B200 M3, C220 M3, C240 M3
- UC on UCS certifications: not currently supported by UC
Consolidation example: 19 UC application copies (19 UC VMs totaling 40 vCPUs, in a mix of "small", "medium" and "large" VM sizes) can be hosted on:
- 19 MCS appliances, or
- 5 virtualized servers (dual 4-core B200 M2 TRC), or
- 4 virtualized servers (dual 6-core B200 M2 specs-based), or
- 2 virtualized servers (dual 10-core B230 M2 TRC)
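The server counts above follow directly from dividing the 40 vCPUs by the physical cores per server (one vCPU per physical core, no oversubscription). A quick sketch of that arithmetic:

```python
import math

# 19 UC VMs totaling 40 vCPUs; one vCPU per physical core, no oversubscription
total_vcpus = 40
for label, cores_per_server in [("dual 4-core (B200 M2 TRC)", 8),
                                ("dual 6-core (B200 M2 specs-based)", 12),
                                ("dual 10-core (B230 M2 TRC)", 20)]:
    servers = math.ceil(total_vcpus / cores_per_server)
    print(f"{label}: {servers} servers")
# dual 4-core (B200 M2 TRC): 5 servers
# dual 6-core (B200 M2 specs-based): 4 servers
# dual 10-core (B230 M2 TRC): 2 servers
```

Note this ignores the extra reserved core a messaging VM requires; real placement also has to respect the per-app co-residency rules later in this deck.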
UC on UCS product support (TRC and specs-based):

Supported on both UC on UCS TRC and specs-based:
- Unified Communications Manager
- Unity Connection
- Unified Presence
- Unified Contact Center Express
- Cisco Emergency Responder
- Session Manager Edition
- InterCompany Media Engine
- Unified Attendant Consoles
- Unity
- Unified Workforce Optimization (WFO)
- Unified Intelligence Center
- Unified Contact Center Mgmt Portal
- SocialMiner
- Unified Email/Web Interaction Mgr
- Prime UCMS (OM/PM/SM/SSM)

Exceptions and planned support:
- Business Edition 6000: C200 M2 only (TRC); not supported specs-based
- Unified Contact Center Enterprise: planned
- Unified Customer Voice Portal: planned
- MediaSense: planned
- Finesse: planned
- Webex Premise, Unified MeetingPlace, TMS/CTMS, VCS: planned for both TRC and specs-based
Why virtualize your UC? Lower TCO and business agility:
- Infrastructure simplification (cables, adapters, switching)
- Converge communications and data center networks: "wire once"
- Consolidated system management
- Easier service provisioning
- Reduced servers/storage
- Reduced power, cooling, cabling, space, weight
- Investment leverage & easy server repurposing
- Efficient app expansion
- Accelerated UC rollouts
- Better business continuity
- Portable/mobile VMs

Why virtualize on UCS? Additional savings and increased agility, an end-to-end solution, and single support:
- Cisco options: Tested Reference Configurations
- VCE Vblock options
- UCS is the industry's only fully unified and virtualization-aware compute solution
UC on UCS B-series TCO example:

CAPEX
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)

OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% fewer)
• Reduced maintenance/support costs (~20%)

Example: 5000 users with dial tone, voicemail and presence; 10% are contact center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
UC on UCS C-series TCO example:

CAPEX/OPEX
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-series

Other benefits
• Lower initial investment
• Simple entry/migration to virtualized UC: data center expertise not required unless using the SAN option

Example: 5000 users with dial tone, voicemail and presence; 10% are contact center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
copy 2010 Cisco andor its affiliates All rights reserved 18
$-
$500
$1000
$1500
$2000
$2500
$3000
2 4 8 10 12 20 50 100
UCS B230 M2 TRC OPEX ($K)
UCS B230 M2 TRC CAPEX ($K)
UCS B200 M2 TRC OPEX ($K)
UCS B200 M2 TRC CAPEX ($K)
UCS C210 M2 TRC OPEX ($K)
UCS C210 M2 TRC CAPEX ($K)
MCS 7845-I3 OPEX ($K)
MCS 7845-I3 CAPEX ($K)
Assumptions
bull UC only no other business applications included ldquoSparerdquo or ldquohot standbyrdquo hosts not included
bull ldquoServerrdquo is either an MCS Appliance or a 2-vcpu-core ldquoVirtual Machinerdquo
bull Dual sites split MCS or UCS TRC servers across sites no single point of failure ndash redundant sites switching blade chassis rackblade servers
bull Using list pricing for MCS-7845-I3-IPC1 UCS-C210M2-VCD2 UCS-B200M2-VCS1 UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Appliance or VM Count
PSTN
2104
2104
2104
2104
SANLAN
Dual Site Scenario PSTN
2104
2104
2104
2104
SANLAN
hellip
hellip
hellip
hellip
UC on UCS
B200 B230
UC on UCS
C210
MCS 7845
B230 M2
vs B200 M2
C210 M2
vs MCS 7845
Current Offers: Technical Overview
Appliances vs. virtualized servers, e.g., 4 physical servers vs. 4 virtual machines (VMs) on 1 physical server:
- Before: each MCS 7800 hosts only one UC app instance (Unified CM Publisher, Unified CM Subscriber, Unity Connection and Unified CCX on four separate appliances).
- After: a single virtualized server with 8 physical cores total hosts all the UC app instances (a VM for Unified CM Pub, a VM for Unified CM Sub, a VM for Unity Cxn, and a VM for Unified CCX).
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
Virtual machine template (OVA) sizing. vCPUs usually require 2.53+ GHz per core.

Unified CM
- 1000 users*: 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk (UCS C200 or BE6K only)
- 2500 users*: 1 vCPU, 2.25 GB vRAM, 1 x 80 GB vDisk (not for use with C200/BE6K)
- 7500 users*: 2 vCPU, 6 GB vRAM, 2 x 80 GB vDisk
- 10000 users*: 4 vCPU, 6 GB vRAM, 2 x 80 GB vDisk

Unity Connection
- 500 users*: 1 vCPU, 2 GB vRAM, 1 x 160 GB vDisk
- 1000 users*: 1 vCPU, 4 GB vRAM, 1 x 160 GB vDisk
- 5000 users*: 2 vCPU, 4 GB vRAM, 1 x 200 GB vDisk
- 10000 users*: 4 vCPU, 4 GB vRAM, 2 x 146 GB vDisk (not for use with C200/BE6K)
- 20000 users*: 7 vCPU, 8 GB vRAM, 2 x 300 GB vDisk

Unified Presence
- 1000 users*: 1 vCPU, 2 GB vRAM, 1 x 80 GB vDisk
- 2500 users*: 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk (not for use with C200/BE6K)
- 5000 users*: 4 vCPU, 4 GB vRAM, 2 x 80 GB vDisk

Unified CCX
- 100 users*: 2 vCPU, 4 GB vRAM, 2 x 146 GB vDisk (UCS C200 or BE6K only)
- 300 users*: 2 vCPU, 4 GB vRAM, 2 x 146 GB vDisk (not for use with C200/BE6K)
- 400 users*: 4 vCPU, 8 GB vRAM, 2 x 146 GB vDisk

Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for the latest.
* I.e., user count for particular values of BHCA, trace level, encryption, CTI and other factors. Actual supportable user count may vary by deployment.
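One way to read the table above is as a lookup from target user count to the smallest adequate OVA. The sketch below encodes the Unified CM rows; `pick_ova` is a hypothetical helper for illustration, not a Cisco tool, and it ignores the C200/BE6K platform restrictions noted in the table.

```python
# Unified CM OVA rows from the table above: (rated users, vCPU, vRAM GB, vDisks GB)
CUCM_OVAS = [
    (1000,  2, 4,    [80]),
    (2500,  1, 2.25, [80]),
    (7500,  2, 6,    [80, 80]),
    (10000, 4, 6,    [80, 80]),
]

def pick_ova(users):
    """Return the lowest-rated CUCM OVA that covers the target user count."""
    for rated, vcpu, vram_gb, vdisks_gb in CUCM_OVAS:
        if users <= rated:
            return {"users": rated, "vcpu": vcpu,
                    "vram_gb": vram_gb, "vdisks_gb": vdisks_gb}
    raise ValueError("exceeds the largest single-node OVA; scale across nodes")

print(pick_ova(6000))  # the 7500-user OVA: 2 vCPU, 6 GB vRAM, 2 x 80 GB
```

As the footnote says, the rated user counts assume particular BHCA, trace, encryption and CTI values, so the lookup is a starting point, not a guarantee.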
Co-residency policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy

Three aspects:
1. Allowed app mix on the same physical server: SAME RULES for TRC vs. specs-based, UCS/HP/IBM.
2. Allowed VM OVA choices: DIFFERENT RULES for TRC vs. specs-based, due to CPU differences.
3. Max number of VMs on the same physical server: SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs.

Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g., BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps. E.g., N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.

Diagram: UC VMs may not share a blade with VMs for VMware vCenter, the Nexus 1000V VSM, Solutions Plus / CTDP apps, or unaffiliated 3rd-party apps. Different blades in the same chassis: OK. Same blade, same chassis: not OK.
• App to HW: some apps, e.g., CUCCE, don't allow any of their OVAs on certain TRCs.
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  Why? Usually due to CPU model/speed dependencies.

Diagram: which CUCM OVAs (1K, 2.5K, 7.5K, 10K users) may run on C200 M2 TRC1 (E5506, 2.13 GHz) vs. C200 M2 specs-based (56xx at 2.53+ GHz) vs. B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx at 2.53+ GHz on specs-based).
VM placement examples, one vCPU per physical core (VM sizes: "Small", "Medium", "Large", "Jumbo"):

Dual-socket 4-core host (8 cores; e.g., UCS C210 M2 TRC1 with dual E5640):
- Jumbo VM + 1 reserved core, or
- Mixed sizes + 1 reserved core, or
- Mixed sizes, or
- 2 x Large (e.g., UCM 10K), or
- 4 x Medium (e.g., UCM 7.5K), or
- 8 x Small (e.g., UCM 2.5K)

Dual-socket 6-core host (12 cores; e.g., UCS C210 M2 specs-based with a UC-supported CPU model at minimum speed):
- Mixed sizes + 1 reserved core, or
- Mixed sizes, or
- 3 x Large (e.g., UCM 10K), or
- 6 x Medium (e.g., UCM 7.5K), or
- 12 x Small (e.g., UCM 2.5K)
Virtual Software Switch Options (the hypervisor software switch sits between VM vNICs and the CNA/FCoE uplinks to the LAN and SAN):

VMware vSwitch:
- Host-based (local)
- IEEE 802.1Q VLAN tagging; VLANs only visible to the local ESXi host
- EtherChannel
- No VM needed

VMware dvSwitch:
- Distributed
- IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts
- EtherChannel
- No VM needed

Cisco Nexus 1000V:
- Distributed
- IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts
- EtherChannel and Virtual PortChannel
- QoS marking (DSCP/CoS), ACL, SPAN, RADIUS/TACACS+
- VM needed for the VSM
- Strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series
Nexus 1000V (a VSM plus a VEM on each ESXi host, connected to the physical switch):
• Cisco software switch in the hypervisor
• Familiar network and server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
QoS: physical switch maps L3 DSCP to L2 CoS

CUCM marks traffic based on L3 DSCP values; e.g., a CTL packet leaves CUCM with L2 CoS 0 and L3 CS3. The pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS if needed, so the packet continues with L2 CoS 3, L3 CS3:

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to its uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3"): no-drop policy
  Rest ("match any"): best-effort queue
• The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS, so traffic from CUCM (L2 CoS 0, L3 CS3) reaches the CAT6K unmarked at L2
With the Nexus 1000V:
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort

N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed

Without N1Kv, the caveat:
• All traffic types from a virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
Compute layer and SAN/storage layer (Cisco SRND): Cisco UCS 6100 Fabric Interconnects connect the UCS 5100 blade servers (running the Nexus 1000V) over 4x10GE links, and attach via FC through Cisco SAN switches to FC storage (SP-A/SP-B, 3rd-party layer).

3rd-party SAN example:
- CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
- Total capacity 28,000 IOPS; 14,000 IOPS per controller at 4 KByte block size
- 14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller

Result:
- One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
- HA requires four FC interfaces
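The bandwidth figures above are just IOPS times block size. A one-line conversion makes the arithmetic explicit (decimal megabits, 4 KB = 4000 bytes for round numbers, as the deck does):

```python
# Convert an IOPS figure at a given block size into decimal megabits per second
def iops_to_mbps(iops, block_kb=4):
    return iops * block_kb * 8 / 1000.0  # KB/s -> Mbps

print(iops_to_mbps(200))    # one CUCM VM: 6.4 Mbps
print(iops_to_mbps(14000))  # one controller: 448.0 Mbps
```

Both results sit comfortably below the 600 Mbps per-controller throughput and the 4 Gbps FC link, which is the point of the example.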
• All UC deployment models are supported
  No change in the current deployment models
  Base deployment models (Single Site, Centralized Call Processing, etc.) are not changing
• VM layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules / latency requirements are the same
  Does not depend on CUCM code or hardware
  http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  Redundancy rules are the same
  Clustering-over-the-WAN latency numbers; Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validating proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
  Subject to "common sense" rules, e.g., don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server
  MOH live audio stream
  Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
Upgrade paths FROM CUCM:

From system release | To Unified CM 8.5            | To UC System 8.5
4.x                 | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2)              | Multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3)              | 2-hop thru 7.1(3)            | N/A
6.1(1)              | 2-hop thru 6.1(x)/7.1(x)     | 2-hop
6.1(2)              | 2-hop thru 6.1(x)/7.1(x)     | N/A
6.1(3)              | 2-hop thru 6.1(x)/7.1(x)     | N/A
6.1(4)              | Single hop                   | N/A
6.1(5)              | Single hop                   | N/A
7.0(1)              | 2-hop thru 7.1(x)            | 2-hop
7.1(2)              | 2-hop thru 7.1(x)            | 2-hop
7.1(3)              | Single hop                   | Single hop; multi-stage/BWC supported
7.1(5)              | Single hop                   | N/A
8.0(1)              | Single hop                   | Single hop; multi-stage/BWC supported
8.0(2), 8.0(3)      | Single hop                   | N/A
VMware feature support
• VMware feature support varies by application; some features are supported with caveats, some partially. For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off.
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed). "Partial" means in maintenance mode only.

ESXi feature           | CUCM    | CUC     | CUP     | CCX
Clone Virtual Machine  | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware vMotion         | Y (C)   | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA              | Y (C)   | Y (C)   | Y (C)   | Y (C)
Boot from SAN          | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware DRS             | No      | No      | No      | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable, and the UC apps' redundancy rules are the same:
- Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
  Primary/secondary on different blades, chassis, sites
  On the same blade, mix Subs with TFTP/MoH rather than just Subs
- Redundancy of UCS components (blade chassis, FEX links, Interconnect switching)
- Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
Customer Example: Primary Data Center

                 | OLD                                    | NEW
Hardware nodes   | 62 physical servers (EU + HQ clusters) | approx. 14
Software version | 6.1(5) & 8.5(1)                        | 8.5(1)
Unity Connection | 4.2(1)                                 | 8.5(1), 3 pairs, virtualized
CER              | 2.0 / 7.0                              | 8.6, virtualized

Node layout: CM SUB, CM PUB, CM SUB, CM SUB / MOH, TFTP, CER, CM SUB / CER, UCxn, UCxn, CM SUB
Deployment Model: Data Center 1
CM SUB, CM PUB, CM SUB, CM SUB / MOH, TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn

Deployment Model: Data Center 2
CM SUB, CM PUB, CM SUB, CM SUB / MOH, TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn
Customer Design (sites of 11K, 3K and 400 phones)

- Cisco UCS 5108 chassis with UCS B200 blade servers behind Cisco UCS 6100 Fabric Interconnect switches, running Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express
- Cisco UCS C210 or C200 general-purpose rack-mount servers at the smaller sites
- CUSP SIP proxy; sites connected over the IP WAN, with PSTN access
HQ Details (UCS blades, two 4-core CPUs per blade)

- Blades 1-6 host the UC VMs, with primaries and secondaries split across blades: CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2), messaging VM OVAs (UCxn-1 active, UCxn-2 active, with cores left idle and reserved for UCxn), presence VM OVAs (CUP-1, CUP-2) and contact center VM OVAs (UCCX-1, UCCX-2).
- Blade slots 7 and 8: "spare" blade slots available for non-UC workloads such as Cisco Nexus 1000V and VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.
Branch Office Details (UCS C-series rack servers, two 4-core CPUs per server)

- Larger branch (3 rack servers): CUCM VM OVAs (PUB, SUB-1, SUB-2, TFTP-1, TFTP-2), messaging VM OVAs (UCxn-1, UCxn-2, with cores left idle and reserved for UCxn), contact center VM OVAs (CCX-1, CCX-2) and a presence VM OVA (CUP).
- Smaller branch (2 rack servers): PUB/TFTP, SUB, CCX-1, CCX-2, CUP and UCxn-1, with cores left idle and reserved for UCxn.
Storage access options:
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI. Stack: application, file system, volume manager, SCSI device driver, SCSI bus adapter.
• iSCSI: access SCSI storage media over an IP network. Stack: application, file system, volume manager, SCSI device driver, iSCSI driver/layer, TCP/IP stack, NIC; block I/O is carried over the IP SAN.
• Fibre Channel: the most popular SAN protocol today. Cable distance ~2 km; popular speed 4 Gb/s. Stack: application, file system, volume manager, SCSI device driver, FC HBA into the FC SAN.
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP.
NAS/SAN array example:
5 HDDs (450 GB, 15K RPM) in a single RAID5 group (1.4 TB usable space), carved into two LUNs:
- LUN 1 (720 GB): UC VM 1 (PUB), UC VM 2 (SUB1), UC VM 3 (UCCX1)
- LUN 2 (720 GB): UC VM 4 (UCCX2), UC VM 5 (CUP1), UC VM 6 (CUP2)

NAS/SAN array best practices for UC:
- 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
- Must be < 2 TB per LUN; recommend 500 GB to 1.5 TB
- Use FC-class disks with ~180 IOPS (e.g., 450 GB 15K or 300 GB 15K)
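The best practices above lend themselves to a quick layout check. This is a hypothetical helper for illustration, encoding only the three rules listed here (LUN size, VM count per LUN, and vDisk fit); the 160 GB vDisks in the example are borrowed from the Unity Connection OVA rows earlier in the deck.

```python
# Check a proposed LUN against the NAS/SAN best practices above:
#   < 2 TB per LUN (recommend 500 GB - 1.5 TB), 4-8 UC VMs per LUN,
#   and the LUN must hold the sum of its VMs' vDisks.
def check_lun(size_gb, vdisks_gb):
    issues = []
    if not 500 <= size_gb <= 1500:
        issues.append("outside recommended 500 GB - 1.5 TB")
    if size_gb >= 2000:
        issues.append("must be < 2 TB")
    if not 4 <= len(vdisks_gb) <= 8:
        issues.append("expect 4-8 UC VMs per LUN")
    if sum(vdisks_gb) > size_gb:
        issues.append("vDisks exceed LUN capacity")
    return issues or ["ok"]

# A 720 GB LUN holding four VMs with 160 GB vDisks each passes all checks:
print(check_lun(720, [160, 160, 160, 160]))  # ['ok']
```

Note this ignores the IOPS dimension: per the disk guidance above, the spindle count behind the RAID group also has to supply the VMs' aggregate IOPS (~180 IOPS per FC-class disk).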
DAS Example: UCS C210 M2 TRC1
10 HDDs (146 GB, 15K RPM):
- HDD 1-2: single RAID1 volume holding the vSphere ESXi image
- HDD 3-10: single RAID5 volume (1022 GB after RAID overhead), formatted as a VMFS filestore (947 GB after VMFS overhead) holding the UC VMs (e.g., UC VM 1: PUB, UC VM 3: UCCX1, UC VM 5: CUP1)

Notes:
- VMFS block size limits the max vDisk size
- Could have more than one VMFS datastore on a RAID volume
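The usable-capacity figures in this layout follow from the standard RAID arithmetic. A short sketch; the 0.927 VMFS factor is back-derived from the deck's own 1022 GB to 947 GB numbers, not a fixed VMFS constant.

```python
# Usable-capacity math for the C210 M2 TRC1 disk layout above
def raid1_gb(disk_gb):            # mirrored pair: capacity of one disk
    return disk_gb

def raid5_gb(n_disks, disk_gb):   # one disk's worth of capacity lost to parity
    return (n_disks - 1) * disk_gb

esxi_volume = raid1_gb(146)        # 2 x 146 GB RAID1 -> 146 GB for ESXi
uc_volume = raid5_gb(8, 146)       # 8 x 146 GB RAID5 -> 1022 GB
vmfs = int(uc_volume * 0.927)      # ~947 GB after VMFS overhead (factor back-derived)
print(esxi_volume, uc_volume, vmfs)  # 146 1022 947
```

The same arithmetic applied to the BE6K C200 M2 layout on the next slide (4 x 1 TB in RAID10) gives 2 TB raw mirrored capacity before VMFS overhead.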
DAS Example: UCS C200 M2 TRC1 for BE6K
4 HDDs (1 TB, 7.2K RPM) in a single RAID10 volume (2 TB after RAID overhead), formatted as a VMFS filestore (1.8 TB after VMFS overhead) holding the vSphere ESXi image and the UC VMs (e.g., UC VM 1: PUB, UC VM 3: UCCX1, UC VM 5: CUP1)
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download; an OVA reserves cores, RAM, etc. for its VM
• Basic rule of thumb: fill up the blade until it is out of capacity
  If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes the product, product version, VMware hardware version and template version.
http://tools.cisco.com/cucst

Customer-accessible references:
• UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
• UCS in general: http://www.cisco.com/go/ucs
• Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
• FlexPods: www.cisconetapp.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
• "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
Partner resources:
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
copy 2010 Cisco andor its affiliates All rights reserved 7
bull Target Customer Profiles
ldquoReady willing ablerdquo to support servers VMware storage
Ready to move off appliance-oriented operations
UCS B-series for centralized medium to high server count
UCS C-series for low to medium server count or highly distributed
3rd-party server options for investment leverage
bull Platform Support
Virtual Machine Templates - defined by each UC app
Application VM Co-residency ndash ldquomix amp match UC with UCrdquo for most UC apps
Requries VMware vSphere 45 ndash ESXi only feature support depends on app vCenter is also required for specs-based
UCS HP and IBM server hardware options
DAS SAN NAS and Diskless storage options
Various NIC HBA CNA and Cisco VIC network options (1GB through 10GB)
copy 2009 Cisco Systems Inc All rights reserved Presentation_ID 8
UC on UCS Solution Components
Cisco Unified Communications 861
LAN
SAN Optional Shared Storage
PSTN
hellip UCS C-Series General-Purpose Rack-Mount Servers
LAN
SAN UCS 6100 Fabric
Interconnect Switches
Required Shared Storage
UCS 5100 Blade Server Chassis UCS 2100 Fabric Extender
UCS B-Series Blade Servers
PSTN
ldquoUC on UCSrdquo B-Series
ldquoUC on UCSrdquo C-Series
Unified Communication System 8.0
• Voice, video, presence, mobility, customer care
• Available in flexible deployment models
• Delivers an unparalleled user experience

HCS Management System
• Zero-touch fulfillment & provisioning with self-service
• Service assurance for enabling high quality of services
• Coordinated management and integration across domains

Optimized Virtualization Platform (UC on UCS B-Series)
• Resource-optimized for reduced hardware capex
• Installation & upgrade automation
• Provides flexibility, customization & additional redundancy

Scalable System Architecture
• Customer aggregation & SIP trunking
• SLA enablement, security, scalability
• Cloud-based SaaS integration

Cisco Hosted Collaboration Solution: combines virtualization, management & architecture elements for a comprehensive platform.
Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence and contact center.
Infrastructure Solutions: Data Center "building blocks" - Vblock, FlexPod (UC on FlexPod planned, not committed).
Supported Hardware: UC on UCS
Blade:
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
Rack-Mount:
• C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
• C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
Models are color-coded in the original slide as UC on UCS Tested Reference Configuration, UC on UCS Specs-based, or BE6K.
(Note: UCS Express not supported.)
Allowed server vendors: UCS B/C and other 3rd parties (UCS Express: not supported).
• Must be on the VMware HCL: server model and I/O devices on www.vmware.com/go/hcl
• All parts must be supported by the server vendor
• No hardware oversubscription allowed for UC
• VMware vCenter is REQUIRED
Processor:
• Intel Xeon 56xx/75xx at 2.53+ GHz, or E7-xxxx at 2.4+ GHz (CPU support varies by UC app)
• Required physical core count = Σ (each UC VM's vCPU) (+1 if Unity Connection)
Memory:
• Capacity = Σ (each UC VM's vRAM) + 2 GB for VMware
• Follow the server vendor for module density/configuration
Storage:
• SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID
• Storage capacity = Σ (each UC VM's vDisk) + VMware/RAID overhead
• Storage performance = Σ (each UC VM's IOPS)
Network/adapters (e.g. LAN/storage access):
• Must be on the VMware HCL and supported by the server vendor
• E.g. 1GbE/10GbE NIC, ≥2 Gb FC HBA, 10 Gb CNA or VIC
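The summation rules above can be sketched as a small calculator. This is only an illustration of the arithmetic; the VM figures below are sample OVA-style values, not a substitute for the official sizing tables.

```python
# Sketch of the specs-based sizing math above (illustrative only).
# Each VM dict: vCPU cores, vRAM GB, vDisk GB, IOPS -- sample values.
def size_host(vms, has_unity_connection=False):
    cores = sum(v["vcpu"] for v in vms) + (1 if has_unity_connection else 0)
    ram_gb = sum(v["vram"] for v in vms) + 2   # + 2 GB for VMware/ESXi
    disk_gb = sum(v["vdisk"] for v in vms)     # before RAID/VMFS overhead
    iops = sum(v["iops"] for v in vms)
    return {"cores": cores, "ram_gb": ram_gb, "disk_gb": disk_gb, "iops": iops}

vms = [
    {"vcpu": 2, "vram": 6, "vdisk": 160, "iops": 200},  # e.g. a CUCM 7500-user VM
    {"vcpu": 2, "vram": 4, "vdisk": 200, "iops": 200},  # e.g. a Unity Connection VM
]
req = size_host(vms, has_unity_connection=True)
# -> 5 physical cores, 12 GB RAM, 360 GB disk, 400 IOPS
```

The "+1 core if Unity Connection" term models the extra ESXi core that messaging deployments must reserve, per the co-residency rules later in this deck.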
Intel CPU programs and UC on UCS certifications:
• Nehalem-EP (55xx; 4 cores; 2-2.9 GHz). Example UCS models: B200/B250 M1, C210 M1, C250 M1/M2, C200 M2. Certifications: TRCs for B200 M1 and C210 M1 (E5540); TRC for C200 M2 (E5506).
• Nehalem-EX (65xx, 75xx; 4/6/8 cores; 1.7-2.7 GHz). Example UCS models: B230 M1, B440 M1, C460 M1. Certifications: specs-based (75xx at 2.53+ GHz).
• Westmere-EP (56xx; 4/6 cores; 1.9-3.33 GHz). Example UCS models: B200/B250 M2, C210 M2, C250 M2. Certifications: TRCs for B200 M2 and C210 M2 (E5640); specs-based (56xx at 2.53+ GHz).
• Westmere-EX (E7-28xx, E7-48xx, E7-88xx; 6/8/10 cores; 1.7-2.7 GHz). Example UCS models: B230 M2, B440 M2, C260 M2, C460 M2. Certifications: TRC for B230 M2 and B440 M2 (E7-2870/4870); specs-based (E7 at 2.4+ GHz).
• Romley-EP (E5-26xx; 4/6/8 cores; 1-3 GHz). Example UCS models: B200 M3, C220 M3, C240 M3. Not currently supported by UC.
Consolidation example: 19 UC app copies - 19 UC VMs ("small", "medium", "large") with 40 vCPUs in total - replace 19 MCS appliances with:
• 5 virtualized servers (dual 4-core B200 M2 TRC), or
• 4 virtualized servers (dual 6-core B200 M2 specs-based), or
• 2 virtualized servers (dual 10-core B230 M2 TRC)
UC on UCS application support (UC on UCS TRC and UC on UCS Specs-based; supported unless noted):
• Unified Communications Manager
• Business Edition 6000 (TRC: C200 M2 only; specs-based: not supported)
• Unity Connection
• Unified Presence
• Unified Contact Center Express
• Cisco Emergency Responder
• Session Manager Edition
• InterCompany Media Engine
• Unified Attendant Consoles
• Unity
• Unified Workforce Optimization (WFO)
• Unified Contact Center Enterprise (planned)
• Unified Intelligence Center
• Unified Customer Voice Portal (planned)
• MediaSense (planned)
• Unified Contact Center Mgmt Portal
• SocialMiner
• Finesse (planned)
• Unified Email/Web Interaction Mgr
• Prime UCMS (OM/PM/SM/SSM)
• Webex Premise (planned for both)
• Unified MeetingPlace (planned for both)
• TMS/CTMS (planned for both)
• VCS (planned for both)
Why virtualize your UC? Lower TCO; business agility.
Why virtualize on UCS? Additional savings and increased agility; an end-to-end solution; single support; Tested Reference Configurations; Vblocks (Cisco options, VCE Vblock options).
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and DC networks - "wire once"
• Consolidated system management
• Easier service provisioning
• Reduced servers/storage
• Reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing
• Efficient app expansion
• Accelerated UC rollouts
• Better business continuity
• Portable/mobile VMs
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX:
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX:
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% less)
• Reduced maintenance/support costs (~20%)
Example: 5,000 users; dial tone, voicemail and presence; 10% are contact center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
CAPEX/OPEX:
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
Other benefits:
• Lower initial investment
• Simple entry/migration to virtualized UC - Data Center expertise not required unless using the SAN option
Example: 5,000 users; dial tone, voicemail and presence; 10% are contact center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
TCO comparison chart: CAPEX and OPEX ($K) vs. appliance or VM count (2, 4, 8, 10, 12, 20, 50, 100) for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC and UCS B230 M2 TRC in a dual-site scenario with SAN/LAN and PSTN access at each site (comparisons: B230 M2 vs. B200 M2; C210 M2 vs. MCS 7845).
Assumptions:
• UC only; no other business applications included; "spare" or "hot standby" hosts not included
• A "server" is either an MCS appliance or a 2-vcpu-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure - redundant sites, switching, blade chassis, rack/blade servers
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Current Offers: Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX), vs. 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores in total hosts all UC app instances (VMs for Unified CM Pub, Unified CM Sub, Unity Cxn and Unified CCX).
Tested Reference Configurations (server model / TRC / CPU / RAM / storage / adapters):
• UCS B200 M2 Blade Server, TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (RAID1) for VMware, FC SAN for UC apps; Cisco VIC
• UCS B200 M2 Blade Server, TRC 2: dual E5640 (8 physical cores total); 48 GB; diskless; Cisco VIC
• UCS B230 M2 Blade Server, TRC 1: dual E7-2870 (20 physical cores total); 128 GB; diskless; Cisco VIC
• UCS B440 M2 Blade Server, TRC 1: dual E7-4870 (40 physical cores total); 256 GB; diskless; Cisco VIC
• UCS C260 M2 Rack-Mount Server, TRC 1: dual E7-2870 (20 physical cores total); 128 GB; DAS (2x RAID5); 1GbE NIC
• UCS C210 M2 General-Purpose Rack-Mount Server, TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (2 disks, RAID1) for VMware + DAS (8 disks, RAID5) for UC apps; 1GbE NIC
• UCS C210 M2, TRC 2: dual E5640 (8 physical cores total); 48 GB; DAS (2 disks, RAID1) for VMware, FC SAN for UC apps; 1GbE NIC and 4G FC HBA
• UCS C210 M2, TRC 3: dual E5640 (8 physical cores total); 48 GB; diskless; 1GbE NIC and 4G FC HBA
• UCS C200 M2 General-Purpose Rack-Mount Server, TRC 1: dual E5506 (8 physical cores total); 24 GB; DAS (4 disks, RAID10) for VMware + UC apps; 1GbE NIC
UC app OVA sizing - scale ("users"), vCPU (cores; usually 2.53+ GHz per core required), vRAM (GB), vDisk (GB):
Unified CM:
• 1,000 users: 2 vCPU, 4 GB vRAM, 1 x 80 GB (UCS C200 or BE6K only)
• 2,500 users: 1 vCPU, 2.25 GB vRAM, 1 x 80 GB (not for use with C200/BE6K)
• 7,500 users: 2 vCPU, 6 GB vRAM, 2 x 80 GB
• 10,000 users: 4 vCPU, 6 GB vRAM, 2 x 80 GB
Unity Connection:
• 500 users: 1 vCPU, 2 GB vRAM, 1 x 160 GB
• 1,000 users: 1 vCPU, 4 GB vRAM, 1 x 160 GB
• 5,000 users: 2 vCPU, 4 GB vRAM, 1 x 200 GB
• 10,000 users: 4 vCPU, 4 GB vRAM, 2 x 146 GB (not for use with C200/BE6K)
• 20,000 users: 7 vCPU, 8 GB vRAM, 2 x 300 GB
Unified Presence:
• 1,000 users: 1 vCPU, 2 GB vRAM, 1 x 80 GB
• 2,500 users: 2 vCPU, 4 GB vRAM, 1 x 80 GB (not for use with C200/BE6K)
• 5,000 users: 4 vCPU, 4 GB vRAM, 2 x 80 GB
Unified CCX:
• 100 users: 2 vCPU, 4 GB vRAM, 2 x 146 GB (UCS C200 or BE6K only)
• 300 users: 2 vCPU, 4 GB vRAM, 2 x 146 GB (not for use with C200/BE6K)
• 400 users: 4 vCPU, 8 GB vRAM, 2 x 146 GB
Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for the latest.
"Users" means user count for particular values of BHCA, trace level, encryption, CTI and other factors; the actual supportable user count may vary by deployment.
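Choosing a template from a table like the one above is a simple "smallest OVA that covers the target scale" lookup. The sketch below copies only the Unified CM rows; it is an illustrative helper, not a Cisco tool, and ignores the per-platform restrictions noted in the table.

```python
# Pick the smallest CUCM OVA template that covers a target user count.
# Rows: (max_users, vcpu, vram_gb, vdisk) -- copied from the table above.
CUCM_OVAS = [
    (1000, 2, 4, "1 x 80 GB"),     # UCS C200 or BE6K only
    (2500, 1, 2.25, "1 x 80 GB"),
    (7500, 2, 6, "2 x 80 GB"),
    (10000, 4, 6, "2 x 80 GB"),
]

def pick_ova(users):
    for max_users, vcpu, vram, vdisk in CUCM_OVAS:
        if users <= max_users:
            return (max_users, vcpu, vram, vdisk)
    raise ValueError("beyond the single-node OVA sizes in this table")

print(pick_ova(5000))  # -> (7500, 2, 6, '2 x 80 GB')
```

In practice the platform restrictions (e.g. the 1K OVA being C200/BE6K-only) would also have to be filtered on, per the co-residency rules that follow.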
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server: SAME RULES for TRC vs. specs-based UCS/HP/IBM.
2. Allowed VM OVA choices: DIFFERENT RULES for TRC vs. specs-based, due to CPU differences.
3. Max number of VMs on the same physical server: SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs.
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this, e.g. BE6K, CUCCE; see their rules on their docwiki "child pages".
NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps, e.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
Diagram: each server hosts UC VMs only; VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus (CTDP) apps and unaffiliated 3rd-party apps run on separate physical servers. Different blades in the same chassis: OK. Same blade: not OK.
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs.
See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
Why? Usually due to CPU model/speed dependencies. Example (UCM OVAs by platform):
• C200 M2 TRC1 (E5506, 2.13 GHz): UCM 1K only; the 2.5K/7.5K/10K OVAs are not allowed.
• C200 M2 specs-based (56xx, 2.53+ GHz): UCM 1K, 2.5K, 7.5K and 10K.
• B200/C210 M2, TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based): UCM 1K, 2.5K, 7.5K and 10K.
VM placement examples ("small", "medium", "large", "jumbo" VM sizes):
• Dual-socket 4-core host (e.g. UCS C210 M2 TRC1 with dual E5640): jumbo VM + 1 core reserved, or mixed sizes + 1 core reserved, or mixed sizes, or 2 large VMs per host (e.g. UCM 10K), or 4 medium (e.g. UCM 7.5K), or 8 small (e.g. UCM 2.5K).
• Dual-socket 6-core host (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed): mixed sizes + 1 core reserved, or mixed sizes, or 3 large VMs per host (e.g. UCM 10K), or 6 medium (e.g. UCM 7.5K), or 12 small (e.g. UCM 2.5K).
Virtual Software Switch Options (VM vNIC → software switch in the ESXi hypervisor → vmNIC/CNA → FCoE to LAN and SAN, e.g. on a UCS B200):
• VMware vSwitch: host-based (local); IEEE 802.1Q VLAN tagging; VLANs only visible to the local ESXi host; EtherChannel; no VM needed.
• VMware dvSwitch: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; no VM needed.
• Cisco Nexus 1KV: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; Virtual PortChannel; QoS marking (DSCP/CoS); ACL; SPAN; RADIUS/TACACS+; VM needed for the VSM.
The Nexus 1KV is strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
• Cisco software switch in the hypervisor: a Nexus 1000V VEM runs in each ESXi host, managed by the Nexus 1000V VSM and uplinked to the physical switch (pSwitch)
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS:
• CUCM marks traffic based on L3 DSCP values; e.g. a CTL packet marked CS3 at L3 leaves CUCM with L2 CoS 0.
• The pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS if needed, so the packet continues with L2 CoS 3, L3 CS3:
dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
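The two mappings in the config above follow the common class-selector convention of deriving the CoS from the top three bits of the DSCP (CS3 = DSCP 24 → CoS 3; EF = DSCP 46 → CoS 5). A one-line check, purely illustrative:

```python
# CoS derived from the top 3 bits of the 6-bit DSCP field --
# matches the two CAT6K dscp-cos mappings above.
def dscp_to_cos(dscp):
    return dscp >> 3

assert dscp_to_cos(24) == 3  # CS3: call signaling
assert dscp_to_cos(46) == 5  # EF: voice media
```

Switches let you override this default mapping (as the `mls qos map dscp-cos` commands do explicitly), but for CS3 and EF the explicit mappings and the bit-shift convention agree.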
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch (e.g. CAT6K)
• Default QoS settings on UCS:
FCoE ("match cos 3") - no-drop policy
Rest ("match any") - best-effort queue
• The vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS, so traffic from CUCM arrives upstream with L2 CoS 0, L3 CS3
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv, caveats:
• All traffic types from a virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
(Path: CUCM → N1KV (L2 CoS 3, L3 CS3) → UCS 6100 → CAT6K.)
Compute layer: UCS 6100 fabric interconnects and a UCS 5100 blade server chassis (with Nexus 1000V), connected via 4x10GE uplinks and FC to Cisco SAN switches.
SAN/storage layer (Cisco SRND): FC storage with dual service processors (SP-A/SP-B); 3rd-party layer.
3rd-party SAN example:
• CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
• Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KB block size
• 14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
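The throughput arithmetic above (using the decimal 4 KB = 4,000-byte convention) can be reproduced directly:

```python
# Convert IOPS at a given block size into Mbps (decimal units,
# matching the slide's 4 KB = 4,000-byte convention).
def iops_to_mbps(iops, block_bytes=4000):
    return iops * block_bytes * 8 / 1e6

per_vm = iops_to_mbps(200)            # ~6.4 Mbps per CUCM VM
per_controller = iops_to_mbps(14000)  # ~448 Mbps, under the 600 Mbps controller limit
fc_4g_ok = per_controller < 4000      # one 4 Gbps FC link covers one controller
```

With binary units (4 KiB = 4,096 bytes) the controller figure rises to ~459 Mbps, still comfortably inside both the 600 Mbps controller limit and a single 4 Gbps FC link.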
• All UC deployment models are supported
No change in the current deployment models; the base deployment models (single site, centralized call processing, etc.) are not changing
• VM machine layout on a blade and/or chassis
Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same
Does not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
Redundancy rules are the same
Clustering-over-the-WAN latency numbers
Mega cluster supported in 8.5
Determine quantity/role of nodes
For HA: no design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
Subject to "common sense" rules - e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
- MOH live audio stream
- Tape backup/floppy
• New factors to consider for end-to-end QoS design/configuration
Migration paths (FROM CUCM system release → to Unified CM 8.5 / to UC System 8.5):
• 4.x: multi-hop thru 6.1(x)/7.1(x) / multi-hop thru 6.1(3)
• 5.1(2): multi-hop thru 6.1(x)/7.1(x) / N/A
• 5.1(3): 2-hop thru 7.1(3) / N/A
• 6.1(1): 2-hop thru 6.1(x)/7.1(x) / 2-hop
• 6.1(2): 2-hop thru 6.1(x)/7.1(x) / N/A
• 6.1(3): 2-hop thru 6.1(x)/7.1(x) / N/A
• 6.1(4): single hop / N/A
• 6.1(5): single hop / N/A
• 7.0(1): 2-hop thru 7.1(x) / 2-hop
• 7.1(2): 2-hop thru 7.1(x) / 2-hop
• 7.1(3): single hop / single hop, multi stages/BWC supported
• 7.1(5): single hop / N/A
• 8.0(1): single hop / single hop, multi stages/BWC supported
• 8.0(2), 8.0(3): single hop / N/A
VMware Feature Support
• VMware feature support varies by application; some features are supported with caveats, some partially. For example:
• Clone Virtual Machine: "Y (C)" means the VM has to be powered off
• vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only
ESXi features (CUCM / CUC / CUP / CCX):
• Clone Virtual Machine: Y (C) / Y (C) / Y (C) / Y (C)
• VMware vMotion: Y (C) / Partial / Partial / Y (C)
• Resize Virtual Machine: Partial / Partial / Partial / Partial
• VMware HA: Y (C) / Y (C) / Y (C) / Y (C)
• Boot From SAN: Y (C) / Y (C) / Y (C) / Y (C)
• VMware DRS: No / No / No / No
Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable:
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
Primary/secondary on different blades, chassis, sites
On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade chassis, FEX links, interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation (Server Hardware / Shared Storage / VMware / Application):
• UC on UCS Tested Reference Configuration: Cisco / 3rd-party (VCE for Vblock) / Cisco (VCE for Vblock) / Cisco
• UC on UCS Specs-based (including Vblock option): Cisco / 3rd-party (VCE for Vblock) / Cisco (VCE for Vblock) / Cisco
• 3rd-party VMware (HP, IBM) Specs-based: 3rd-party / 3rd-party / 3rd-party / Cisco
• MCS 7800 Appliances: Cisco / N/A / N/A / Cisco
• Customer-provided MCS 7800 equivalent: 3rd-party / N/A / N/A / Cisco
Customer Example - Primary Data Center (old → new):
• Hardware nodes: 62 physical servers (EU HQ clusters) → approx. 14
• Software version: 6.1(5) & 8.5(1) → 8.5(1)
• Unity Connection version: 4.2(1) → 8.5(1), 3 pairs, virtualized
• CER: 2.0/7.0 → 8.6, virtualized
Blade layout: CM SUB, CM PUB, CM SUB, CM SUB / MOH+TFTP, CER, CM SUB / CER, UCxn, UCxn, CM SUB
Deployment Model - Data Center 1:
CM SUB, CM PUB, CM SUB, CM SUB / MOH+TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn
Deployment Model - Data Center 2:
CM SUB, CM PUB, CM SUB, CM SUB / MOH+TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn
Customer Design
PSTN and IP WAN access via a SIP proxy (CUSP).
Applications: Unified Communications Manager, Unity Connection, Unified Presence, Unified Contact Center Express.
Hardware: Cisco UCS 5108 chassis with UCS B200 blade servers behind Cisco UCS 6100 fabric interconnect switches; Cisco UCS C210 or C200 general-purpose rack-mount servers.
(Sites: 11K phones, 3K phones, 400 phones.)
HQ Details
Blades 1-6 (each with two 4-core CPUs) host the UC VMs: CUCM PUB, SUB-1 through SUB-8 and TFTP-1/TFTP-2 (CUCM VM OVAs); UCxn-1 and UCxn-2 active, each with a core left idle for UCxn (Messaging VM OVAs); UCCX-1/UCCX-2 (Contact Center VM OVAs); CUP-1/CUP-2 (Presence VM OVAs).
"Spare" blade slots 7-8 are available for non-UC workloads such as Cisco Nexus 1000V and VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.
Branch Office Details
Three-rack-server site (each server with two 4-core CPUs): Rack Server 1 hosts PUB, TFTP-1, SUB-1 and CCX-2; Rack Server 2 hosts UCxn-1, TFTP-2 and SUB-2, with cores left idle for UCxn; Rack Server 3 hosts UCxn-2, CCX-1 and CUP, with cores left idle for UCxn.
Two-rack-server site: Rack Server 1 hosts PUB+TFTP, CCX-1 and CUP; Rack Server 2 hosts SUB, CCX-2 and UCxn-1, with cores left idle for UCxn.
(CUCM, Messaging, Contact Center and Presence VM OVAs.)
• DAS: direct-attached storage in a rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
Protocol stacks (host/server → storage transport → storage media):
• DAS: application → file system → volume manager → SCSI device driver → SCSI bus adapter → SCSI device
• iSCSI: application → file system → volume manager → SCSI device driver → iSCSI driver → TCP/IP stack → NIC; block I/O over IP to the storage system's NIC, TCP/IP stack, iSCSI layer and bus adapter
• FC SAN: application → file system → volume manager → SCSI device driver → FC HBA → FC fabric
NAS/SAN Array Best Practices for UC
Example: five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) holding UC VMs 1-3 (PUB, SUB1, UCCX1) and LUN 2 (720 GB) holding UC VMs 4-6 (UCCX2, CUP1, CUP2).
• 4 to 8 UC VMs per LUN (max dependent on Σ vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
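A proposed LUN layout can be sanity-checked against the guidelines above by summing vDisks. This is an illustrative sketch; the vDisk figures are taken from the sample OVA sizes earlier in the deck.

```python
# Check a proposed LUN layout against the best practices above:
# at most 8 VMs per LUN, and the sum of vDisks must fit within the LUN.
def lun_fits(lun_gb, vdisks_gb, max_vms=8):
    return len(vdisks_gb) <= max_vms and sum(vdisks_gb) <= lun_gb

# LUN 1 from the example: PUB and SUB1 (2 x 80 GB each), UCCX1 (2 x 146 GB)
lun1 = [160, 160, 292]
print(lun_fits(720, lun1))  # True: 612 GB of vDisk on a 720 GB LUN
```

A fuller check would also add the per-VM VMFS overhead and compare Σ IOPS against the ~180 IOPS per spindle noted above.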
DAS Example: UCS C210 M2 TRC1
Ten 146 GB 15K RPM HDDs: HDDs 1-2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3-10 form a single RAID5 volume (1022 GB after RAID overhead) with a VMFS filestore (947 GB after VMFS overhead) holding the UC VMs (e.g. PUB, UCCX1, CUP1).
Notes:
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
DAS Example: UCS C200 M2 TRC1 for BE6K
Four 1 TB 7.2K RPM HDDs form a single RAID10 volume (2 TB after RAID overhead) with a VMFS filestore (1.8 TB after VMFS overhead) holding the vSphere ESXi image and the UC VMs (e.g. PUB, UCCX1, CUP1).
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on supported OVAs for download
OVAs reserve cores, RAM, etc. for VMs
Basic rule of thumb: fill up the blade until out of capacity
If the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example:
CUCM_80_vmv7_v21.ova
CUCM_85_vmv7_v21.ova
CUCM_86_vmv7_v15.ova
The name includes the product, product version, VMware hardware version and template version.
http://tools.cisco.com/cucst
• Customer-accessible:
UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
UCS in general: http://www.cisco.com/go/ucs
Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
FlexPods: www.cisconetapp.com
Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
"CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central, "Servers, OS and Virtualization":
http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community, "Servers, OS and Virtualization":
https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product Literature:
http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering Guide:
http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
copy 2009 Cisco Systems Inc All rights reserved Presentation_ID 8
UC on UCS Solution Components
Cisco Unified Communications 861
LAN
SAN Optional Shared Storage
PSTN
hellip UCS C-Series General-Purpose Rack-Mount Servers
LAN
SAN UCS 6100 Fabric
Interconnect Switches
Required Shared Storage
UCS 5100 Blade Server Chassis UCS 2100 Fabric Extender
UCS B-Series Blade Servers
PSTN
ldquoUC on UCSrdquo B-Series
ldquoUC on UCSrdquo C-Series
copy 2010 Cisco andor its affiliates All rights reserved 9
Unified Communication System
80bull Voice Video Presence Mobility Customer Care
bull Available in flexible deployment models
bull Deliver a unparalleled user experience
HCS Management Systembull Zero-touch fulfillment amp provisioning with self service
bull Service assurance for enabling high quality of services
bull Coordinated management and integration across domains
Optimized Virtualization Platform
(UC on UCS B-series)bull Resource optimized for reduced hardware capex
bull Installation amp upgrade automation
bull Provides flexibility customization amp additional redundancy
Scalable System Architecturebull Customer Aggregation amp SIP Trunking
bull SLA Enablement Security Scalability
bull Cloud Based SaaS Integration
Cisco Hosted Collaboration Solution Combining virtualization management amp
architecture elements for a comprehensive
platform
Cisco Business Edition 6000 Midmarket 100-1000 user solution for call
control mobility rich media presence and
contact center
Infrastructure Solutions Data Center ldquobuilding blocksrdquo
Vblock FlexPod
UC on FlexPod planned not committed
copy 2010 Cisco andor its affiliates All rights reserved 10
B230 M2 2-Socket Intel E7-2800 2 SSD 32 DIMM
B200 M2 2-Socket Intel 5600 2 SFF Disk 12 DIMM
B250 M2 2-Socket Intel 5600 2 SFF Disk 48 DIMM
B440 M2 4-Socket Intel E7-4800 4 SFF Disk 32 DIMM
C200 M2 (LFF) 2-Socket Intel 5600 4 Disks 12 DIMM 2 PCIe 1U
C220 M3 SFF 2-Socket Intel E5-2600 8 Disks 16 DIMM 2 PCIe 1U
C250 M2 2-Socket Intel 5600 8 Disks 48 DIMM 5 PCIe 2U
C460 M2 4-Socket Intel E7-4800 12 Disks 64 DIMM 10 PCIe 4U
Supported Hardware UC on UCS B
lad
e
Rack M
ou
nt
C260 M2 2-Socket Intel E7-2800 16 Disks 64 DIMM 6 PCIe 2U
UC on UCS Tested Reference Configuration UC on UCS Specs-based
BE6K
B200 M3 SFF 2-Socket Intel E5-2600 2 SFF Disk 24 DIMM
C240 M3 SFF 2-Socket Intel E5-2600 24 Disks 24 DIMM 5 PCIe 2U
C210 M2 2-Socket Intel 5600 16 Disks 12 DIMM 5 PCIe 2U
Target support
Fall 2012
Target support
Fall 2012
Target support
Fall 2012
(Note UCS Express
not supported)
copy 2010 Cisco andor its affiliates All rights reserved 11
Must be on VMware
HCL
Allowed Server
Vendors
Server model and IO devices on wwwvmwarecomgohcl
All parts must be supported by server vendor
No hardware oversubscription allowed for UC
VMware vCenter is REQUIRED
Processor
Intel Xeon 56xx75xx 253+ GHz or E7-xxxx 24+ GHz
CPU support varies by UC app
Required physical core count = sumUC VMs vCPU (+1 if Unity Cxn)
Capacity = sumUC VMs vRAM + 2GB for VMware
Follow server vendor for module densityconfiguration
Memory
Storage Network
Must be on VMware HCL and supported by server vendor
Eg 1GbE10GbE NIC ge2Gb FC HBA 10Gb CNA or VIC
UCS BC UCS Express Other 3rd-parties
Adapters (eg LANStorage Access)
SAN (FCoE FC iSCSI) NAS (NFS) Variable DASRAID
Storage capacity = sumUC VMs vDisk + VMwareRAID overhead
Storage performance = sumUC VM IOPS
copy 2010 Cisco andor its affiliates All rights reserved 12
Intel
Program
Nehalem-
EP
Nehalem-
EX
Westmere-
EP
Westmere-
EX
Romley-
EP
CPU Family 55xx
65xx
75xx
56xx E7-28xx
E7-48xx
E7-88xx
E5-26xx
CPU Cores 4 468 46 6810 468
CPU Speed 2-29 GHz 17-27 GHz 19-333 GHz 17-27 GHz 1-3 GHz
Example UCS
Models with
these CPUs
B200250 M1
C210 M1
C250 M1M2
C200 M2
B230 M1
B440 M1
C460 M1
B200250 M2
C210 M2
C250 M2
B230 M2
B440 M2
C260 M2
C460 M2
B200 M3
C220 M3
C240 M3
UC on UCS
Certifications
TRCs for B200 M1
TRCs for C210 M1
(E5540)
TRC for C200M2
(E5506)
Specs-based
(75xx 253+ GHz)
TRCs for B200 M2
TRCs for C210 M2
(E5640)
Specs-based
(56xx at 253+
GHz)
TRC for B230 M2
TRC for B440 M2
(E7-28704870)
Specs-based
(E7 at 24+ GHz)
Not Currently
Supported by UC
[Consolidation example: 19 UC app copies, run either as 19 MCS appliances (one app each) or as 19 UC VMs with 40 vCPUs total in "small", "medium" and "large" sizes, hosted on: 5 virtualized servers (dual 4-core B200 M2 TRC), 4 virtualized servers (dual 6-core B200 M2 specs-based), or 2 virtualized servers (dual 10-core B230 M2 TRC).]
UC on UCS product support (TRC / specs-based):
- Supported on both: Unified Communications Manager, Unity Connection, Unified Presence, Unified Contact Center Express, Cisco Emergency Responder, Session Manager Edition, InterCompany Media Engine, Unified Attendant Consoles, Unity, Unified Workforce Optimization (WFO), Unified Intelligence Center, Unified Contact Center Mgmt Portal, SocialMiner, Unified Email/Web Interaction Mgr, Prime UCMS (OM/PM/SM/SSM).
- Business Edition 6000: C200 M2 TRC only; not supported specs-based.
- Planned: Unified Contact Center Enterprise, Unified Customer Voice Portal, MediaSense, Finesse.
- Planned on both TRC and specs-based: Webex Premise, Unified MeetingPlace, TMS/CTMS, VCS.
Why virtualize your UC?
- Lower TCO: reduce servers/storage; reduced power, cooling, cabling, space, weight; investment leverage & easy server repurposing.
- Business agility: efficient app expansion; accelerated UC rollouts; better business continuity; portable/mobile VMs.
Why virtualize on UCS?
- Additional savings and increased agility: infrastructure simplification (cables, adapters, switching); converge communications and DC networks ("wire once"); consolidated system management; easier service provisioning.
- End-to-end solution with single support: Tested Reference Configurations; Vblocks; Cisco options; VCE Vblock options.
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX:
- Reduced server count (50-75%)
- Network/storage consolidation (50%+)
- Reduced cabling (50%+)
OPEX:
- Reduced rack & floor space (36%)
- Reduced power/cooling (20%+)
- Fewer servers to manage (50-75% less)
- Reduced maintenance/support costs (~20%)
Example: 5000 users with dial tone, voicemail and Presence; 10% are Contact Center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
CAPEX/OPEX:
- Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-series.
Other benefits:
- Lower initial investment.
- Simple entry/migration to virtualized UC: Data Center expertise is not required unless using the SAN option.
Example: 5000 users with dial tone, voicemail and Presence; 10% are Contact Center agents. 11 non-virtualized rack servers required for UC; more for other business apps.
[Chart: CAPEX and OPEX ($K) vs. appliance/VM count (2 to 100) for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC, and UCS B230 M2 TRC in a dual-site scenario (PSTN and SAN/LAN at each site). Comparisons: B230 M2 vs. B200 M2; C210 M2 vs. MCS 7845.]
Assumptions:
- UC only; no other business applications included. "Spare" or "hot standby" hosts not included.
- A "server" is either an MCS appliance or a 2-vcpu-core "virtual machine".
- Dual sites; MCS or UCS TRC servers split across sites; no single point of failure (redundant sites, switching, blade chassis, rack/blade servers).
- Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition.
Current Offers Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM, Unity Connection, Unified CCX), vs. 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all the UC app instances (VMs for Unified CM Pub, Unified CM Sub, Unity Connection, and Unified CCX).
Tested Reference Configurations (server model, TRC, CPU, RAM, storage, adapters):
- UCS B200 M2 Blade Server, TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (RAID1) for VMware, FC SAN for UC apps; Cisco VIC.
- UCS B200 M2 Blade Server, TRC 2: dual E5640 (8 cores); 48 GB; diskless; Cisco VIC.
- UCS B230 M2 Blade Server, TRC 1: dual E7-2870 (20 cores); 128 GB; diskless; Cisco VIC.
- UCS B440 M2 Blade Server, TRC 1: dual E7-4870 (40 cores); 256 GB; diskless; Cisco VIC.
- UCS C260 M2 Rack-Mount Server, TRC 1: dual E7-2870 (20 cores); 128 GB; DAS (2x RAID5); 1GbE NIC.
- UCS C210 M2 General-Purpose Rack-Mount Server, TRC 1: dual E5640 (8 cores); 48 GB; DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps; 1GbE NIC.
- UCS C210 M2, TRC 2: dual E5640 (8 cores); 48 GB; DAS (2 disks RAID1) for VMware, FC SAN for UC apps; 1GbE NIC and 4G FC HBA.
- UCS C210 M2, TRC 3: dual E5640 (8 cores); 48 GB; diskless; 1GbE NIC and 4G FC HBA.
- UCS C200 M2 General-Purpose Rack-Mount Server, TRC 1: dual E5506 (8 cores); 24 GB; DAS (4 disks RAID10) for VMware + UC apps; 1GbE NIC.
UC app OVA sizing (vCPU cores usually require 2.53+ GHz per core):
Unified CM:
- 1000 users: 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk (UCS C200 or BE6K only)
- 2500 users: 1 vCPU, 2.25 GB vRAM, 1 x 80 GB vDisk (not for use with C200/BE6K)
- 7500 users: 2 vCPU, 6 GB vRAM, 2 x 80 GB vDisk
- 10000 users: 4 vCPU, 6 GB vRAM, 2 x 80 GB vDisk
Unity Connection:
- 500 users: 1 vCPU, 2 GB, 1 x 160 GB
- 1000 users: 1 vCPU, 4 GB, 1 x 160 GB
- 5000 users: 2 vCPU, 4 GB, 1 x 200 GB
- 10000 users: 4 vCPU, 4 GB, 2 x 146 GB (not for use with C200/BE6K)
- 20000 users: 7 vCPU, 8 GB, 2 x 300 GB
Unified Presence:
- 1000 users: 1 vCPU, 2 GB, 1 x 80 GB
- 2500 users: 2 vCPU, 4 GB, 1 x 80 GB (not for use with C200/BE6K)
- 5000 users: 4 vCPU, 4 GB, 2 x 80 GB
Unified CCX:
- 100 users: 2 vCPU, 4 GB, 2 x 146 GB (UCS C200 or BE6K only)
- 300 users: 2 vCPU, 4 GB, 2 x 146 GB (not for use with C200/BE6K)
- 400 users: 4 vCPU, 8 GB, 2 x 146 GB
Not exhaustive and subject to change; see www.cisco.com/go/uc-virtualized for the latest. "Users" means the user count for particular values of BHCA, trace level, encryption, CTI and other factors; the actual supportable user count may vary by deployment.
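A table like this is easy to consume programmatically. As an illustration (the dictionary below copies a few Unified CM rows from above; the helper name is our own), picking the smallest OVA that covers a planned user count:

```python
# (max users) -> (vCPU, vRAM GB, list of vDisks in GB), from the Unified CM rows above
UCM_OVAS = {
    1000:  (2, 4,    [80]),
    2500:  (1, 2.25, [80]),
    7500:  (2, 6,    [80, 80]),
    10000: (4, 6,    [80, 80]),
}

def smallest_ucm_ova(users):
    """Pick the smallest OVA whose scale covers the given user count."""
    for scale in sorted(UCM_OVAS):
        if users <= scale:
            return scale, UCM_OVAS[scale]
    raise ValueError("deployment exceeds the largest OVA in this sketch")

print(smallest_ucm_ova(3000))   # -> (7500, (2, 6, [80, 80]))
```

Remember the per-platform notes (C200/BE6K restrictions) still apply after the size is chosen.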
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server: SAME rules for TRC vs. specs-based UCS/HP/IBM.
2. Allowed VM OVA choices: DIFFERENT rules for TRC vs. specs-based, due to CPU differences.
3. Max number of VMs on the same physical server: SAME rules for TRC vs. specs-based to determine the max, but specs-based might allow more VMs.
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
- Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  Note that some UC apps restrict this, e.g. BE6K and CUCCE; see their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this purpose.
  Note: UCS C200 M2 TRC1 (non-BE6K) no longer has special restrictions on UC app mix.
- A SEPARATE PHYSICAL SERVER is required for non-UC or 3rd-party apps, e.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
[Diagram: UC VMs may not share a blade with VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus (CTDP) apps, or unaffiliated 3rd-party apps. Different blades in the same chassis: OK. Same blade, same chassis: not OK.]
- App to HW: some apps (e.g. CUCCE) don't allow any of their OVAs on certain TRCs. See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
- OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU. See the co-residency policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  Why? Usually due to CPU model/speed dependencies.
[Diagram: the C200 M2 TRC1 (E5506, 2.13 GHz) runs only the UCM 1K OVA; the C200 M2 specs-based (56xx at 2.53+ GHz) and B200/C210 M2, TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx at 2.53+ GHz on specs-based), run the UCM 2.5K/7.5K/10K OVAs.]
[Diagram: VM packing examples with "small", "medium", "large" and "jumbo" VM sizes; unused cores remain idle.
Dual-socket 4-core host (e.g. UCS C210 M2 TRC1 with dual E5640): jumbo VM + 1 core reserved, or mixed sizes + 1 core reserved, or mixed sizes, or 2x large (e.g. UCM 10K), or 4x medium (e.g. UCM 7.5K), or 8x small (e.g. UCM 2.5K).
Dual-socket 6-core host (e.g. UCS C210 M2 specs-based with a UC-supported CPU model at minimum speed): mixed sizes + 1 core reserved, or mixed sizes, or 3x large (e.g. UCM 10K), or 6x medium (e.g. UCM 7.5K), or 12x small (e.g. UCM 2.5K).]
Virtual software switch options (VM vNIC -> software switch in the ESXi hypervisor -> CNA/FCoE -> LAN and SAN):
- VMware vSwitch: host-based (local); IEEE 802.1Q VLAN tagging; VLANs only visible to the local ESXi host; EtherChannel; no VM needed.
- VMware dvSwitch: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel; no VM needed.
- Cisco Nexus 1KV: distributed; IEEE 802.1Q VLAN tagging; VLANs visible to all ESXi hosts; EtherChannel and Virtual PortChannel; QoS marking (DSCP/CoS); ACL; SPAN; RADIUS/TACACS+; a VM is needed for the VSM.
The Nexus 1KV is strongly recommended for UC on UCS B-Series, and not required but recommended for UC on UCS C-Series.
Nexus 1000V:
- Cisco software switch in the hypervisor.
- Familiar network server operations & management model.
- Enhanced diagnostic & monitoring capability.
- Visibility direct to the VM.
[Diagram: Nexus 1000V VEMs on each ESXi host, managed by a Nexus 1000V VSM and uplinked to a physical switch (pSwitch).]
The physical switch maps L3 DSCP to L2 CoS. CUCM marks traffic based on L3 DSCP values; the pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS if needed. For example, a CTL packet marked CS3 arrives with L2 CoS 0 and leaves the CAT6K with L2 CoS 3 after:

  dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
  dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
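The two map commands amount to a small DSCP-to-CoS lookup. A sketch of that mapping (values copied from the config above; the function name is our own):

```python
# DSCP -> CoS as configured above: CS3 (DSCP 24) -> CoS 3, EF (DSCP 46) -> CoS 5
DSCP_TO_COS = {24: 3, 46: 5}

def map_dscp_to_cos(dscp, default=0):
    # Unmapped DSCP values fall back to the switch's default CoS
    return DSCP_TO_COS.get(dscp, default)
```

Any DSCP not explicitly mapped (e.g. best-effort traffic) keeps the default CoS of 0 in this sketch.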
- The UCS 6100 doesn't look into the L3 IP header; the DSCP/ToS setting in the IP header is not altered by UCS.
- The 6100 sends the packet to the uplink physical Ethernet switch.
- Default QoS settings on UCS: FCoE ("match cos 3") gets a no-drop policy; the rest ("match any") goes to the best-effort queue.
- The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS.
[Diagram: CUCM traffic (L2 CoS 0, L3 CS3) passes unchanged through the UCS 6100 to the CAT6K.]
- UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop).
- Non-UC blades: network adapter QoS policy set to best effort.
N1Kv considerations:
- UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3).
- UC signaling traffic is given lossless behavior.
- The default CoS value of 3 for FCoE traffic should never be changed.
Caveats without N1Kv:
- All traffic types from a virtual UC app get the Platinum CoS value.
- Non-UC applications get the best-effort class, which might not be acceptable.
[Diagram: CUCM -> N1KV (L2 CoS 3, L3 CS3) -> UCS 6100 -> CAT6K.]
[Diagram: compute layer (UCS 6100 Fabric Interconnects, UCS 5100 blade chassis, Nexus 1000V; 4x10GE uplinks) connected over FC through Cisco SAN switches to a 3rd-party FC storage array (SP-A/SP-B); see the Cisco SRND for the SAN/storage layer.]
CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM.
3rd-party SAN example:
- Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KB block size.
- 14,000 IOPS x 4 KB ~ 448 Mbps, within the ~600 Mbps throughput per controller.
Result:
- One 4 Gbps FC interface is enough to handle the entire capacity of one storage array.
- HA requires four FC interfaces.
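The bandwidth figures follow from IOPS times block size. A quick conversion helper (an illustrative sketch; 4 KB is taken as 4000 bytes and decimal megabits are used):

```python
def iops_to_mbps(iops, block_bytes=4000):
    """Convert an IOPS rate at a given block size to megabits per second."""
    return iops * block_bytes * 8 / 1_000_000

# One CUCM VM at ~200 IOPS, and one 14,000-IOPS storage controller:
print(iops_to_mbps(200), iops_to_mbps(14000))   # -> 6.4 448.0
```

Either result is far below the ~4 Gbps of a single FC interface, which is the point of the sizing example.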
- All UC deployment models are supported; no change to the current deployment models. Base deployment models (Single Site, Centralized Call Processing, etc.) are not changing.
- VM layout on a blade and/or chassis: Unity Connection requires one extra CPU core (vCPU) on the blade.
- Software checks for design rules: no rules or restrictions are in place in the UC apps to check whether you are running the primary and subscriber on the same blade.
- Clustering-over-WAN rules and latency requirements are the same; they do not depend on CUCM code or hardware.
http://www.cisco.com/go/ucsrnd
- SRND application-layer guidelines are the same as when on MCS:
  Redundancy rules are the same.
  Clustering over the WAN / latency numbers.
  Mega Cluster supported in 8.5.
  Determine quantity/role of nodes.
  For HA: no design checks validating proper placement of primary and secondary servers.
  CUCCE private network requirement.
- Mixed clusters of HP, IBM and UCS are supported, subject to "common sense" rules; e.g. don't make the Pub or Primary less powerful than a Sub or Secondary.
- Direct-attach devices must be on a physical MCS server: MoH live audio stream; tape backup/floppy.
- New factors to consider for end-to-end QoS design/configuration.
Upgrade paths FROM CUCM (system release -> to Unified CM 8.5 | to UC System 8.5):
- 4.x: multi-hop thru 6.1(x)/7.1(x) | multi-hop thru 6.1(3)
- 5.1(2): multi-hop thru 6.1(x)/7.1(x) | N/A
- 5.1(3): 2-hop thru 7.1(3) | N/A
- 6.1(1): 2-hop thru 6.1(x)/7.1(x) | 2-hop
- 6.1(2): 2-hop thru 6.1(x)/7.1(x) | N/A
- 6.1(3): 2-hop thru 6.1(x)/7.1(x) | N/A
- 6.1(4): single hop | N/A
- 6.1(5): single hop | N/A
- 7.0(1): 2-hop thru 7.1(x) | 2-hop
- 7.1(2): 2-hop thru 7.1(x) | 2-hop
- 7.1(3): single hop | single hop (multi-stage/BWC supported)
- 7.1(5): single hop | N/A
- 8.0(1): single hop | single hop (multi-stage/BWC supported)
- 8.0(2), 8.0(3): single hop | N/A
VMware feature support:
- VMware feature support varies by application; some features are supported with caveats, some partially.
- Clone Virtual Machine: "Y (C)" means the VM has to be powered off.
- vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed). "Partial" means in maintenance mode only.

ESXi Feature           | CUCM    | CUC     | CUP     | CCX
Clone Virtual Machine  | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware vMotion         | Y (C)   | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA              | Y (C)   | Y (C)   | Y (C)   | Y (C)
Boot From SAN          | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware DRS             | No      | No      | No      | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable, and the UC apps' redundancy rules are the same.
- Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact: primary/secondary on different blades, chassis, sites; on the same blade, mix Subs with TFTP/MoH rather than just Subs.
- Redundancy of UCS components (blade chassis, FEX links, Interconnect switching).
- Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.).
TAC support demarcation (Server Hardware | Shared Storage | VMware | Application):
- UC on UCS Tested Reference Configuration: Cisco | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
- UC on UCS Specs-based (including Vblock option): Cisco | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
- 3rd-party VMware (HP, IBM) Specs-based: 3rd-party | 3rd-party | 3rd-party | Cisco
- MCS 7800 Appliances: Cisco | N/A | N/A | Cisco
- Customer-provided MCS 7800 equivalent: 3rd-party | N/A | N/A | Cisco
Customer Example: Primary Data Center (old vs. new)
- Hardware nodes: 62 physical servers (EU + HQ clusters) reduced to approximately 14.
- Software version: 6.1.5 & 8.5.1 consolidated to 8.5.1.
- Unity Connection: 4.2.1 to 8.5.1; 3 pairs; virtualized.
- CER: 2.0 to 7.0/8.6; virtualized.
[Diagram: blades hosting CM Pub/Sub, MoH, TFTP, CER and Unity Connection VMs.]
Deployment Model: Data Center 1
[Diagram: CM Pub, CM Subs, MoH, TFTP, CER and multiple Unity Connection VMs.]
Deployment Model: Data Center 2
[Diagram: CM Pub, CM Subs, MoH, TFTP, CER and multiple Unity Connection VMs.]
Customer Design
[Diagram: PSTN and IP WAN feeding a CUSP SIP proxy. Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express run on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch (11K phones), with Cisco UCS C210 or C200 general-purpose rack-mount servers at the smaller sites (3K phones; 400 phones).]
HQ Details
[Diagram: six dual-4-core B200 blades. CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1/TFTP-2. Messaging VM OVAs: UCxn-1 and UCxn-2 (both active), with one core per hosting blade left idle for Unity Connection. Presence VM OVAs: CUP-1, CUP-2. Contact Center VM OVAs: UCCX-1, UCCX-2. "Spare" blade slots 7-8 are available for non-UC workloads such as Cisco Nexus 1000V and VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.]
Branch Office Details
[Diagram: a larger branch with three dual-4-core rack servers hosting PUB, SUB-1/SUB-2, TFTP-1/TFTP-2, UCxn-1/UCxn-2 (one core left idle for Unity Connection), CCX-1/CCX-2 and CUP; and a smaller branch with two rack servers hosting PUB/TFTP, SUB, CCX-1/CCX-2, CUP and a UCxn-1 pair, again leaving a core idle for Unity Connection. Same CUCM, messaging, contact center and presence VM OVA types as at HQ.]
- DAS: rack-mount servers (Cisco C-Series); the most popular DAS protocol is SCSI.
- iSCSI: access SCSI storage media over an IP network.
- Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s.
- NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP.
[Diagram: host stacks from application through file system, volume manager and SCSI device driver down to: a SCSI bus adapter (DAS); an iSCSI driver/layer over a TCP/IP stack and NIC, carrying block I/O over IP; or an FC HBA, carrying block I/O over the SAN. Each path runs from the host server through the storage transport to the storage media.]
NAS/SAN array example:
- Five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) and LUN 2 (720 GB).
- LUN 1 hosts UC VMs 1-3 (PUB, SUB1, UCCX1); LUN 2 hosts UC VMs 4-6 (UCCX2, CUP1, CUP2).
- 4 to 8 UC VMs per LUN (the max depends on the sum of the vDisks).
NAS/SAN array best practices for UC:
- Must be <2 TB per LUN; recommend 500 GB to 1.5 TB.
- Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K).
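The LUN guidance above can be expressed as a simple sanity check (an illustrative sketch only; the function name and the exact thresholds-as-code are our own):

```python
def lun_layout_ok(vm_vdisks_gb, lun_gb):
    """Check a proposed UC LUN against the best practices above."""
    small_enough = lun_gb < 2048                 # must be < 2 TB per LUN
    vm_count_ok = 4 <= len(vm_vdisks_gb) <= 8    # 4 to 8 UC VMs per LUN
    fits = sum(vm_vdisks_gb) <= lun_gb           # max depends on sum of vDisks
    return small_enough and vm_count_ok and fits

# Four VMs totalling 466 GB on a 720 GB LUN:
print(lun_layout_ok([80, 80, 160, 146], 720))   # -> True
```

A real design would also check the IOPS budget of the underlying disks, not just capacity.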
DAS example: UCS C210 M2 TRC1
- Ten 146 GB 15K RPM HDDs: HDDs 1-2 in a single RAID1 volume holding the vSphere ESXi image; HDDs 3-10 in a single RAID5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) for the UC VMs (e.g. PUB, UCCX1, CUP1).
Notes:
- The VMFS block size limits the max vDisk size.
- There could be more than one VMFS datastore on a RAID volume.
DAS example: UCS C200 M2 TRC1 for BE6K
- Four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) holding the vSphere ESXi image and a VMFS filestore (1.8 TB after VMFS overhead) for the UC VMs (e.g. PUB, UCCX1, CUP1).
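The usable-capacity figures in the two DAS examples follow from standard RAID arithmetic; a small sketch (assumes equal-size disks; VMFS overhead is not included):

```python
def raid_usable_gb(level, disks, disk_gb):
    """Usable capacity before filesystem overhead for common RAID levels."""
    if level in ("RAID1", "RAID10"):
        return disks // 2 * disk_gb    # half the disks hold mirror copies
    if level == "RAID5":
        return (disks - 1) * disk_gb   # one disk's worth of parity
    raise ValueError("unsupported RAID level: %s" % level)

# 8 x 146 GB in RAID5 (C210 example) and 4 x 1000 GB in RAID10 (C200/BE6K):
print(raid_usable_gb("RAID5", 8, 146), raid_usable_gb("RAID10", 4, 1000))   # -> 1022 2000
```

These match the 1022 GB and 2 TB pre-VMFS figures quoted in the examples.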
UC Application Co-residency
- See www.cisco.com/go/uc-virtualized for the latest.
- Based on the supported OVAs for download; an OVA reserves cores, RAM, etc. for its VM.
- Basic rule of thumb: fill up the blade until it is out of capacity. If the blade contains a VM for messaging, a core must be reserved for ESXi.
- Hardware oversubscription is not supported.
Virtual Machine Sizing
- A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs.
- Capacity: a VM template is associated with a specific capacity.
- VM templates are packaged in an OVA file.
- There is usually a different VM template per release, for example CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova. The name includes the product, product version, VMware hardware version and template version.
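That naming convention can be unpacked mechanically. A sketch (the dotted form of the filenames is assumed, and the regex and function name are our own, not a Cisco tool):

```python
import re

def parse_ova_name(filename):
    """Split e.g. 'CUCM_8.5_vmv7_v2.1.ova' into its naming-convention parts."""
    m = re.match(r"^(\w+?)_([\d.]+)_vmv(\d+)_v([\d.]+)\.ova$", filename)
    if m is None:
        raise ValueError("not an OVA template name: %s" % filename)
    product, app_ver, hw_ver, tmpl_ver = m.groups()
    return {"product": product,
            "product_version": app_ver,
            "vmware_hw_version": int(hw_ver),
            "template_version": tmpl_ver}

print(parse_ova_name("CUCM_8.5_vmv7_v2.1.ova"))
```

Such a helper is handy when auditing which template versions are deployed across a cluster.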
http://tools.cisco.com/cucst
Customer-accessible links:
- UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
- UCS in general: http://www.cisco.com/go/ucs
- Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
- FlexPods: www.cisconetapp.com
- Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
- Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
- "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
- "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
- UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
- Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
- Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
- Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
- "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
- Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
- Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
- Virtual Computing Environment portal: http://www.vceportal.com
- Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
Unified Communications System 8.0:
- Voice, video, presence, mobility, customer care.
- Available in flexible deployment models.
- Delivers an unparalleled user experience.
HCS Management System:
- Zero-touch fulfillment & provisioning with self service.
- Service assurance for enabling high quality of service.
- Coordinated management and integration across domains.
Optimized Virtualization Platform (UC on UCS B-series):
- Resource-optimized for reduced hardware CAPEX.
- Installation & upgrade automation.
- Provides flexibility, customization & additional redundancy.
Scalable System Architecture:
- Customer aggregation & SIP trunking.
- SLA enablement, security, scalability.
- Cloud-based SaaS integration.
Cisco Hosted Collaboration Solution: combines the virtualization, management & architecture elements into a comprehensive platform.
Cisco Business Edition 6000: midmarket 100-1000 user solution for call control, mobility, rich media, presence and contact center.
Infrastructure Solutions: Data Center "building blocks" such as Vblock and FlexPod (UC on FlexPod planned, not committed).
Supported hardware for UC on UCS (Tested Reference Configurations and specs-based; UCS Express not supported):
Blade:
- B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
- B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
- B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
- B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
- B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)
Rack-mount:
- C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
- C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
- C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
- C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
- C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
- C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
- C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
(Note UCS Express
not supported)
copy 2010 Cisco andor its affiliates All rights reserved 11
Must be on VMware
HCL
Allowed Server
Vendors
Server model and IO devices on wwwvmwarecomgohcl
All parts must be supported by server vendor
No hardware oversubscription allowed for UC
VMware vCenter is REQUIRED
Processor
Intel Xeon 56xx75xx 253+ GHz or E7-xxxx 24+ GHz
CPU support varies by UC app
Required physical core count = sumUC VMs vCPU (+1 if Unity Cxn)
Capacity = sumUC VMs vRAM + 2GB for VMware
Follow server vendor for module densityconfiguration
Memory
Storage Network
Must be on VMware HCL and supported by server vendor
Eg 1GbE10GbE NIC ge2Gb FC HBA 10Gb CNA or VIC
UCS BC UCS Express Other 3rd-parties
Adapters (eg LANStorage Access)
SAN (FCoE FC iSCSI) NAS (NFS) Variable DASRAID
Storage capacity = sumUC VMs vDisk + VMwareRAID overhead
Storage performance = sumUC VM IOPS
copy 2010 Cisco andor its affiliates All rights reserved 12
Intel
Program
Nehalem-
EP
Nehalem-
EX
Westmere-
EP
Westmere-
EX
Romley-
EP
CPU Family 55xx
65xx
75xx
56xx E7-28xx
E7-48xx
E7-88xx
E5-26xx
CPU Cores 4 468 46 6810 468
CPU Speed 2-29 GHz 17-27 GHz 19-333 GHz 17-27 GHz 1-3 GHz
Example UCS
Models with
these CPUs
B200250 M1
C210 M1
C250 M1M2
C200 M2
B230 M1
B440 M1
C460 M1
B200250 M2
C210 M2
C250 M2
B230 M2
B440 M2
C260 M2
C460 M2
B200 M3
C220 M3
C240 M3
UC on UCS
Certifications
TRCs for B200 M1
TRCs for C210 M1
(E5540)
TRC for C200M2
(E5506)
Specs-based
(75xx 253+ GHz)
TRCs for B200 M2
TRCs for C210 M2
(E5640)
Specs-based
(56xx at 253+
GHz)
TRC for B230 M2
TRC for B440 M2
(E7-28704870)
Specs-based
(E7 at 24+ GHz)
Not Currently
Supported by UC
copy 2010 Cisco andor its affiliates All rights reserved 13
VM VM VM
VM VM VM
VM VM VM
VM VM VM
VM
VM
VM
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
VM
VM
VM VM
VM VM VM
19 UC VMs with
total 40 vcpursquos
19 MCS Appliances 5 virtualized
servers (dual 4-core
B200 M2
TRC)
4 virtualized
servers (dual 6-core
B200 M2
specs-based)
2 virtualized
servers (dual 10-core
B230 M2
TRC)
19 UC
app
copies
copy 2010 Cisco andor its affiliates All rights reserved 14
UC on UCS Products with Owner UC on UCS TRC UC on UCS Specs-based
Unified Communications Manager
Business Edition 6000 C200 M2 only Not supported
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco Emergency Responder
Session Manager Edition
InterCompany Media Engine
Unified Attendant Consoles
Unity
Unified Workforce Optimization (WFO)
Unified Contact Center Enterprise Planned
Unified Intelligence Center
Unified Customer Voice Portal Planned
MediaSense Planned
Unified Contact Center Mgmt Portal
SocialMiner
Finesse Planned
Unified EmailWeb Interaction Mgr
Prime UCMS (OMPMSMSSM)
Webex Premise Planned Planned
Unified MeetingPlace Planned Planned
TMSCTMS Planned Planned
VCS Planned Planned
copy 2010 Cisco andor its affiliates All rights reserved 15
Why virtualize
your UC
Why virtualize
on UCS
Lower TCO
Business
Agility
Additional
Savings and
Increased
Agility
End to End
Solution
Single
Support
Tested Reference Configurations
Vblocks
Cisco options
VCE Vblock options
Infrastructure Simplification (Cables Adapters Switching)
Converge Communications and DC Networks ndash ldquowire oncerdquo
Consolidates System Mgmt
Easier Service Provisioning
Reduce ServersStorage
Reduced Power Cooling Cabling Space Weight
Investment Leverage amp Easy Server Repurposing
Efficient App Expansion
Accelerated UC rollouts
Better Business Continuity
PortableMobile VMs
UCS is the industryrsquos only
fully unified and virtualization-
aware compute solution
copy 2010 Cisco andor its affiliates All rights reserved 16
CAPEX
bull Reduced Server Count (50-75)
bull NetworkStorage Consolidation (50+)
bull Reduced Cabling (50+)
OPEX
Reduced Rack amp Floor Space (36)
Reduced PowerCooling (20+)
Fewer Servers to Manage (50-75 less)
Reduced MaintenanceSupport Costs (~20)
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 17
CAPEXOPEX
bull Similar Consolidation and Operational EfficiencyScale benefits as with UC on UCS B-series
Other Benefits
Lower initial investment
Simple entrymigration to virtualized UC ndash Data Center expertise not required unless using SAN option
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 18
$-
$500
$1000
$1500
$2000
$2500
$3000
2 4 8 10 12 20 50 100
UCS B230 M2 TRC OPEX ($K)
UCS B230 M2 TRC CAPEX ($K)
UCS B200 M2 TRC OPEX ($K)
UCS B200 M2 TRC CAPEX ($K)
UCS C210 M2 TRC OPEX ($K)
UCS C210 M2 TRC CAPEX ($K)
MCS 7845-I3 OPEX ($K)
MCS 7845-I3 CAPEX ($K)
Assumptions
bull UC only no other business applications included ldquoSparerdquo or ldquohot standbyrdquo hosts not included
bull ldquoServerrdquo is either an MCS Appliance or a 2-vcpu-core ldquoVirtual Machinerdquo
bull Dual sites split MCS or UCS TRC servers across sites no single point of failure ndash redundant sites switching blade chassis rackblade servers
bull Using list pricing for MCS-7845-I3-IPC1 UCS-C210M2-VCD2 UCS-B200M2-VCS1 UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Appliance or VM Count
PSTN
2104
2104
2104
2104
SANLAN
Dual Site Scenario PSTN
2104
2104
2104
2104
SANLAN
hellip
hellip
hellip
hellip
UC on UCS
B200 B230
UC on UCS
C210
MCS 7845
B230 M2
vs B200 M2
C210 M2
vs MCS 7845
20 copy 2010 Cisco andor its affiliates All rights reserved
Current Offers Technical Overview
21 copy 2010 Cisco andor its affiliates All rights reserved
Eg 4 physical servers
Each MCS 7800 hosts only
one UC app instance
4 virtual servers (VMrsquos) on 1 physical server
Single virtualized server with total 8 physical
cores hosts all UC app instances
Unity
Connection
Unified CM
VM for
Unified
CM
Sub
Unified CCX
VM for
Unity
Cxn
VM for
Unified
CCX
VM for
Unified
CM
Pub
or
copy 2010 Cisco andor its affiliates All rights reserved 22
| Server Model | TRC | CPU | RAM | Storage | Adapters |
| UCS B200 M2 Blade Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (RAID1) for VMware, FC SAN for UC apps | Cisco VIC |
| UCS B200 M2 Blade Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | Cisco VIC |
| UCS B230 M2 Blade Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | Diskless | Cisco VIC |
| UCS B440 M2 Blade Server | TRC 1 | Dual E7-4870 (40 physical cores total) | 256 GB | Diskless | Cisco VIC |
| UCS C260 M2 Rack-Mount Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | DAS (2x RAID5) | 1GbE NIC |
| UCS C210 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps | 1GbE NIC |
| UCS C210 M2 General-Purpose Rack-Mount Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware, FC SAN for UC apps | 1GbE NIC and 4G FC HBA |
| UCS C210 M2 General-Purpose Rack-Mount Server | TRC 3 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | 1GbE NIC and 4G FC HBA |
| UCS C200 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5506 (8 physical cores total) | 24 GB | DAS (4 disks RAID10) for VMware + UC apps | 1GbE NIC |
| UC app | Scale ("users"*) | vCPU (cores; usually 2.53+ GHz per core required) | vRAM (GB) | vDisk (GB) | Notes |
| Unified CM | 1000 | 2 | 4 | 1 x 80 | UCS C200 or BE6K only |
| Unified CM | 2500 | 1 | 2.25 | 1 x 80 | Not for use with C200/BE6K |
| Unified CM | 7500 | 2 | 6 | 2 x 80 | |
| Unified CM | 10000 | 4 | 6 | 2 x 80 | |
| Unity Connection | 500 | 1 | 2 | 1 x 160 | |
| Unity Connection | 1000 | 1 | 4 | 1 x 160 | |
| Unity Connection | 5000 | 2 | 4 | 1 x 200 | |
| Unity Connection | 10000 | 4 | 4 | 2 x 146 | Not for use with C200/BE6K |
| Unity Connection | 20000 | 7 | 8 | 2 x 300 | |
| Unified Presence | 1000 | 1 | 2 | 1 x 80 | |
| Unified Presence | 2500 | 2 | 4 | 1 x 80 | Not for use with C200/BE6K |
| Unified Presence | 5000 | 4 | 4 | 2 x 80 | |
| Unified CCX | 100 | 2 | 4 | 2 x 146 | UCS C200 or BE6K only |
| Unified CCX | 300 | 2 | 4 | 2 x 146 | Not for use with C200/BE6K |
| Unified CCX | 400 | 4 | 8 | 2 x 146 | |

Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI and other factors. Actual supportable user count may vary by deployment.
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices
   DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps. E.g. N1KV, ARC, SingleWire, vCenter, File/Print, Directory, CRM/ERP, Groupware, non-CUCM TFTP, Nuance, etc.
[Diagram: hosts running UC VMs must not also run VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus / CTDP apps, or unaffiliated 3rd-party apps. Different blades in the same chassis: OK. Same blade in the same chassis: not OK.]
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs.
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29
  Why? Usually due to CPU model/speed dependencies.
[Diagram: UCM OVA choices (1K, 2.5K, 7.5K, 10K) mapped against C200 M2 TRC1 (E5506, 2.13 GHz), C200 M2 specs-based (56xx, 2.53+ GHz), and B200/C210 M2 TRC or specs-based (E5640, 2.66 GHz on TRC; 56xx/75xx, 2.53+ GHz on specs-based), illustrating that the allowed OVAs differ by CPU model and speed.]
[Diagram: VM placement examples using "Small", "Medium", "Large" and "Jumbo" VM sizes.
Dual-socket 4-core host (e.g. UCS C210 M2 TRC1 with dual E5640): Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes, or 2:1 Large (e.g. UCM 10K), or 4:1 Medium (e.g. UCM 7.5K), or 8:1 Small (e.g. UCM 2.5K).
Dual-socket 6-core host (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed): mixed sizes + 1 reserved, or mixed sizes, or 3:1 Large (e.g. UCM 10K), or 6:1 Medium (e.g. UCM 7.5K), or 12:1 Small (e.g. UCM 2.5K).]
Virtual Software Switch Options
[Diagram: VM vNICs attach to a software switch in the ESXi hypervisor, which connects through a CNA (vmNIC, FCoE) to the LAN and SAN, e.g. on a UCS B200.]

| | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV |
| Scope | Host-based (local) | Distributed | Distributed |
| VLAN tagging | IEEE 802.1Q | IEEE 802.1Q | IEEE 802.1Q |
| VLAN visibility | Local ESXi host only | All ESXi hosts | All ESXi hosts |
| EtherChannel | Yes | Yes | Yes |
| Virtual PortChannel | -- | -- | Yes |
| QoS marking (DSCP/CoS) | -- | -- | Yes |
| ACL | -- | -- | Yes |
| SPAN | -- | -- | Yes |
| RADIUS/TACACS+ | -- | -- | Yes |
| VSM | No VM needed | No VM needed | VM needed for VSM |

Strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
[Diagram: Nexus 1000V VEMs run inside each ESXi host and are managed by the Nexus 1000V VSM, uplinking to the physical switch (pSwitch).]
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS
• CUCM marks traffic based on L3 DSCP values (e.g. CTL packets at L3 CS3)
• The pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS if needed:

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

[Diagram: CUCM emits frames with L2 CoS 0 / L3 CS3; the CAT6K rewrites them to L2 CoS 3 / L3 CS3.]
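The rewrite above is just a table lookup keyed on DSCP (CS3 = DSCP 24, EF = DSCP 46; the values come from the `mls qos map dscp-cos` commands shown, while the function itself is only an illustration, not a real switch API):

```python
# DSCP -> CoS rewrite performed by the physical switch (values from the
# "mls qos map dscp-cos" commands above; CS3 = DSCP 24, EF = DSCP 46).
DSCP_TO_COS = {24: 3, 46: 5}

def rewrite_cos(dscp: int, default_cos: int = 0) -> int:
    """Return the L2 CoS the switch would stamp for a given L3 DSCP."""
    return DSCP_TO_COS.get(dscp, default_cos)

print(rewrite_cos(24))  # CUCM signaling (CS3) -> CoS 3
print(rewrite_cos(46))  # voice bearer (EF) -> CoS 5
```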
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3") – no-drop policy
  Rest ("match any") – best-effort queue
• The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS
[Diagram: CUCM emits L2 CoS 0 / L3 CS3; the frame passes unchanged through the UCS 6100 to the CAT6K.]
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without N1Kv
• All traffic types from a virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
[Diagram: CUCM emits L3 CS3; the N1KV marks L2 CoS 3, and the frame traverses the UCS 6100 to the CAT6K with L2 CoS 3 / L3 CS3.]
Compute layer and SAN/storage layer – Cisco SRND
[Diagram: a UCS 5100 blade chassis (with Nexus 1000V) uplinks via 4x10GE to redundant UCS 6100 Fabric Interconnects, which connect over FC through Cisco SAN switches to FC storage (SP-A, SP-B) in the 3rd-party layer.]
3rd-party SAN example
• CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ~ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ~ 448 Mbps, vs. 600 Mbps throughput per controller
Result
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
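The bandwidth arithmetic above is simply IOPS x block size x 8 bits; a quick sketch (treating 4 KB as 4,000 bytes, which is how the slide's per-VM figure rounds out):

```python
def iops_to_mbps(iops: int, block_bytes: int = 4000) -> float:
    """Convert an IOPS figure at a given block size into megabits per second."""
    return iops * block_bytes * 8 / 1_000_000

print(iops_to_mbps(200))     # one CUCM VM at ~200 IOPS -> 6.4 Mbps
print(iops_to_mbps(14_000))  # one storage controller  -> 448.0 Mbps
```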
• All UC deployment models are supported
  No change in the current deployment models; base deployment models (single site, centralized call processing, etc.) are not changing
• VM layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same
  They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  Redundancy rules are the same
  Clustering-over-the-WAN latency numbers
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validating proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
  Subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server
  - MOH live audio stream
  - Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
FROM CUCM

| System Release | To Unified CM 8.5 | To UC System 8.5 |
| 4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3) |
| 5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | N/A |
| 5.1(3) | 2-hop thru 7.1(3) | N/A |
| 6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop |
| 6.1(2) | 2-hop thru 6.1(x)/7.1(x) | N/A |
| 6.1(3) | 2-hop thru 6.1(x)/7.1(x) | N/A |
| 6.1(4) | Single hop | N/A |
| 6.1(5) | Single hop | N/A |
| 7.0(1) | 2-hop thru 7.1(x) | 2-hop |
| 7.1(2) | 2-hop thru 7.1(x) | 2-hop |
| 7.1(3) | Single hop | Single hop; multi-stages/BWC supported |
| 7.1(5) | Single hop | N/A |
| 8.0(1) | Single hop | Single hop; multi-stages/BWC supported |
| 8.0(2), 8.0(3) | Single hop | N/A |
VMware feature support
• VMware feature support varies by application; some features are supported with caveats, some partially. For example:
• Clone Virtual Machine: "Y (C)" means the VM has to be powered off
• vMotion: "Y (C)" means vMotion is supported for live traffic – calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

| ESXi Features | CUCM | CUC | CUP | CCX |
| Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C) |
| VMware vMotion | Y (C) | Partial | Partial | Y (C) |
| Resize Virtual Machine | Partial | Partial | Partial | Partial |
| VMware HA | Y (C) | Y (C) | Y (C) | Y (C) |
| Boot From SAN | Y (C) | Y (C) | Y (C) | Y (C) |
| VMware DRS | No | No | No | No |

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable; the UC apps' redundancy rules are the same.
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
  Primary/secondary on different blade, chassis, site
  On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect, switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation

| | Server Hardware | Shared Storage | VMware | Application |
| UC on UCS Tested Reference Configuration | Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco |
| UC on UCS Specs-based (including Vblock option) | Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco |
| 3rd-party VMware (HP, IBM) Specs-based | 3rd-party | 3rd-party | 3rd-party | Cisco |
| MCS 7800 Appliances | Cisco | N/A | N/A | Cisco |
| Customer-provided MCS 7800 equivalent | 3rd-party | N/A | N/A | Cisco |
Customer Example – Primary Data Center

| | OLD | NEW |
| Hardware nodes | 62 physical servers (EU + HQ clusters) | ~14 |
| Software version | 6.1.5 & 8.5.1 | 8.5.1 |
| Ucxn version | 4.2.1 | 8.5.1 – 3 pairs – virtualized |
| CER | 2.0, 7.0 | 8.6 – virtualized |

[Diagram: virtualized server layout with CM PUB, CM SUBs, MOH, TFTP, CER and UCxn VMs.]
Deployment Model – Data Center 1
[Diagram: CM PUB, three CM SUBs, MOH/TFTP, CER and four UCxn VMs distributed across the virtualized servers.]
Deployment Model – Data Center 2
[Diagram: same layout as Data Center 1.]
Customer Design
[Diagram: PSTN and IP WAN connectivity through a CUSP SIP proxy; Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express hosted on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch (11K phones), plus Cisco UCS C210 or C200 general-purpose rack-mount servers for the smaller sites (3K phones, 400 phones).]
HQ Details
[Diagram: eight dual-socket, quad-core blade slots. CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2), Messaging VM OVAs (UCxn-1 Active, UCxn-2 Active, each with a core left idle for UCxn), Presence VM OVAs (CUP-1, CUP-2) and Contact Center VM OVAs (UCCX-1, UCCX-2) are spread across blades 1-6. "Spare" blade slots 7-8 are available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.]
[Diagram: a three-rack-server site hosting PUB, TFTP-1/2, SUB-1/2, UCxn-1/2 (with cores left idle for UCxn), CCX-1/2 and CUP.]
Branch Office Details
[Diagram: two rack servers hosting PUB/TFTP, SUB, CCX-1, CCX-2, CUP and UCxn-1, with cores left idle for UCxn; VM types shown are CUCM, Messaging, Contact Center and Presence VM OVAs.]
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
[Diagram: protocol stacks from host server through storage transport to storage media – DAS (application → file system → volume manager → SCSI device driver → SCSI bus adapter), iSCSI (SCSI device driver → iSCSI driver → TCP/IP stack → NIC; block I/O over an IP SAN), and FC SAN (SCSI device driver → volume manager → FC HBA).]
NAS/SAN Array Best Practices for UC
[Diagram: five 450 GB 15K RPM HDDs form a single RAID5 group (1.4 TB usable space), carved into LUN 1 and LUN 2 (720 GB each); LUN 1 holds UC VMs for PUB, SUB1 and UCCX1, LUN 2 holds UCCX2, CUP1 and CUP2.]
• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
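The best practices above can be expressed as a small validation sketch (the helper and the per-VM vDisk figures are illustrative, not a Cisco tool):

```python
def check_lun(vm_vdisks_gb, lun_size_gb):
    """Validate one LUN plan against the rules of thumb above."""
    if not 4 <= len(vm_vdisks_gb) <= 8:
        raise ValueError("aim for 4 to 8 UC VMs per LUN")
    if not 500 <= lun_size_gb <= 1500:  # also keeps us under the 2 TB hard limit
        raise ValueError("recommended LUN size is 500 GB to 1.5 TB")
    if sum(vm_vdisks_gb) > lun_size_gb:
        raise ValueError("sum of vDisks exceeds the LUN")
    return True

# Hypothetical 720 GB LUN holding four UC VMs (per-VM total vDisk in GB)
print(check_lun([160, 160, 292, 80], 720))  # -> True
```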
DAS Example: UCS C210 M2 TRC1
[Diagram: ten 146 GB 15K RPM HDDs – two in a RAID1 volume holding the vSphere ESXi image, eight in a RAID5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) with UC VMs such as PUB, UCCX1 and CUP1.]
Notes
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
DAS Example: UCS C200 M2 TRC1 for BE6K
[Diagram: four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) holding the vSphere ESXi image and a VMFS filestore (1.8 TB after VMFS overhead) with UC VMs such as PUB, UCCX1 and CUP1.]
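The usable-capacity figures in these two DAS examples follow from standard RAID arithmetic; a quick sketch (capacities in GB, ignoring base-2 vs. base-10 rounding and filesystem overhead):

```python
def raid_usable_gb(n_disks: int, disk_gb: int, level: str) -> int:
    """Rough usable capacity for common RAID levels, before VMFS overhead."""
    if level == "raid1":
        return disk_gb                   # mirrored pair
    if level == "raid5":
        return (n_disks - 1) * disk_gb   # one disk's worth of parity
    if level == "raid10":
        return n_disks // 2 * disk_gb    # half the disks mirror the other half
    raise ValueError(level)

print(raid_usable_gb(8, 146, "raid5"))    # C210 M2 TRC1 data volume -> 1022
print(raid_usable_gb(4, 1000, "raid10"))  # C200 M2 TRC1 BE6K volume -> 2000
```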
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on the supported OVAs for download; an OVA reserves cores, RAM, etc. for its VM
• Basic rule of thumb: fill up the blade until it is out of capacity
  If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova
  The name encodes the product, product version, VMware hardware version and template version
http://tools.cisco.com/cucst
• Customer-accessible
  UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
Supported Hardware: UC on UCS
Blade
• B230 M2: 2-socket Intel E7-2800, 2 SSD, 32 DIMM
• B200 M2: 2-socket Intel 5600, 2 SFF disks, 12 DIMM
• B200 M3 SFF: 2-socket Intel E5-2600, 2 SFF disks, 24 DIMM (target support Fall 2012)
• B250 M2: 2-socket Intel 5600, 2 SFF disks, 48 DIMM
• B440 M2: 4-socket Intel E7-4800, 4 SFF disks, 32 DIMM
Rack-mount
• C200 M2 (LFF): 2-socket Intel 5600, 4 disks, 12 DIMM, 2 PCIe, 1U (BE6K)
• C210 M2: 2-socket Intel 5600, 16 disks, 12 DIMM, 5 PCIe, 2U
• C220 M3 SFF: 2-socket Intel E5-2600, 8 disks, 16 DIMM, 2 PCIe, 1U (target support Fall 2012)
• C240 M3 SFF: 2-socket Intel E5-2600, 24 disks, 24 DIMM, 5 PCIe, 2U (target support Fall 2012)
• C250 M2: 2-socket Intel 5600, 8 disks, 48 DIMM, 5 PCIe, 2U
• C260 M2: 2-socket Intel E7-2800, 16 disks, 64 DIMM, 6 PCIe, 2U
• C460 M2: 4-socket Intel E7-4800, 12 disks, 64 DIMM, 10 PCIe, 4U
Models span UC on UCS Tested Reference Configurations, UC on UCS Specs-based support, and BE6K. (Note: UCS Express not supported.)
Specs-based requirements (UCS B/C, UCS Express, other 3rd parties)
• Allowed server vendors: must be on the VMware HCL (server model and I/O devices on www.vmware.com/go/hcl); all parts must be supported by the server vendor; no hardware oversubscription allowed for UC; VMware vCenter is REQUIRED
• Processor: Intel Xeon 56xx/75xx at 2.53+ GHz or E7-xxxx at 2.4+ GHz; CPU support varies by UC app; required physical core count = sum of the UC VMs' vCPUs (+1 if Unity Connection)
• Memory: capacity = sum of the UC VMs' vRAM + 2 GB for VMware; follow the server vendor for module density/configuration
• Adapters (e.g. LAN/storage access): must be on the VMware HCL and supported by the server vendor; e.g. 1GbE/10GbE NIC, ≥2Gb FC HBA, 10Gb CNA or VIC
• Storage: SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID; storage capacity = sum of the UC VMs' vDisk + VMware/RAID overhead; storage performance = sum of the UC VMs' IOPS
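These sizing rules are simple sums; a sketch with a hypothetical VM mix (the per-VM vCPU/vRAM/vDisk figures follow the OVA table earlier in the deck, while the IOPS figures are only illustrative):

```python
# (name, vCPU, vRAM GB, vDisk GB, IOPS) -- vCPU/vRAM/vDisk per the OVA table;
# IOPS illustrative (the SRND slide cites ~200 IOPS for a CUCM VM)
vms = [
    ("cucm_7500_user", 2, 6, 160, 200),
    ("cuc_5000_user",  2, 4, 200, 200),
    ("cup_2500_user",  2, 4,  80, 100),
]

has_unity_connection = any(name.startswith("cuc_") for name, *_ in vms)

required_cores   = sum(v[1] for v in vms) + (1 if has_unity_connection else 0)
required_ram_gb  = sum(v[2] for v in vms) + 2   # +2 GB for VMware
required_disk_gb = sum(v[3] for v in vms)       # plus VMware/RAID overhead
required_iops    = sum(v[4] for v in vms)

print(required_cores, required_ram_gb, required_disk_gb, required_iops)
# -> 7 16 440 500
```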
| Intel program | Nehalem-EP | Nehalem-EX | Westmere-EP | Westmere-EX | Romley-EP |
| CPU family | 55xx | 65xx, 75xx | 56xx | E7-28xx, E7-48xx, E7-88xx | E5-26xx |
| CPU cores | 4 | 4/6/8 | 4/6 | 6/8/10 | 4/6/8 |
| CPU speed | 2-2.9 GHz | 1.7-2.7 GHz | 1.9-3.33 GHz | 1.7-2.7 GHz | 1-3 GHz |
| Example UCS models with these CPUs | B200/250 M1, C210 M1, C250 M1/M2, C200 M2 | B230 M1, B440 M1, C460 M1 | B200/250 M2, C210 M2, C250 M2 | B230 M2, B440 M2, C260 M2, C460 M2 | B200 M3, C220 M3, C240 M3 |
| UC on UCS certifications | TRCs for B200 M1, TRCs for C210 M1 (E5540), TRC for C200 M2 (E5506) | Specs-based (75xx at 2.53+ GHz) | TRCs for B200 M2, TRCs for C210 M2 (E5640), specs-based (56xx at 2.53+ GHz) | TRC for B230 M2, TRC for B440 M2 (E7-2870/4870), specs-based (E7 at 2.4+ GHz) | Not currently supported by UC |
[Diagram: the same 19 UC app copies deployed as 19 MCS appliances, or as 19 UC VMs (a mix of "Small", "Medium" and "Large" sizes totaling 40 vCPUs) on 5 virtualized servers (dual 4-core B200 M2 TRC), 4 virtualized servers (dual 6-core B200 M2 specs-based), or 2 virtualized servers (dual 10-core B230 M2 TRC).]
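With no hardware oversubscription (one vCPU reserved per physical core, per the sizing rules elsewhere in the deck), the server counts in this example fall out of a ceiling division:

```python
import math

total_vcpus = 40  # the deck's example: 19 UC VMs totaling 40 vCPUs

# cores per host = sockets x cores per socket; 1 vCPU per physical core
hosts = {"dual 4-core B200 M2 TRC": 2 * 4,
         "dual 6-core B200 M2 specs-based": 2 * 6,
         "dual 10-core B230 M2 TRC": 2 * 10}

servers_needed = {h: math.ceil(total_vcpus / c) for h, c in hosts.items()}
print(servers_needed)  # 5, 4 and 2 servers respectively, matching the slide
```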
UC on UCS products, with support on UC on UCS TRC vs. UC on UCS Specs-based:
Unified Communications Manager
Business Edition 6000 (TRC: C200 M2 only; specs-based: not supported)
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco Emergency Responder
Session Manager Edition
InterCompany Media Engine
Unified Attendant Consoles
Unity
Unified Workforce Optimization (WFO)
Unified Contact Center Enterprise Planned
Unified Intelligence Center
Unified Customer Voice Portal Planned
MediaSense Planned
Unified Contact Center Mgmt Portal
SocialMiner
Finesse Planned
Unified EmailWeb Interaction Mgr
Prime UCMS (OMPMSMSSM)
Webex Premise Planned Planned
Unified MeetingPlace Planned Planned
TMSCTMS Planned Planned
VCS Planned Planned
Why virtualize your UC? Lower TCO and business agility.
Why virtualize on UCS? Additional savings and increased agility, an end-to-end solution with single support, Tested Reference Configurations, and Vblocks (Cisco options, VCE Vblock options).
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and DC networks – "wire once"
• Consolidated system management; easier service provisioning
• Reduced servers/storage; reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing; efficient app expansion
• Accelerated UC rollouts; better business continuity; portable/mobile VMs
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% fewer)
• Reduced maintenance/support costs (~20%)
Example: 5000 users; dial tone, voicemail and Presence; 10% are Contact Center Agents. 11 non-virtualized rack servers required for UC, more for other business apps.
copy 2010 Cisco andor its affiliates All rights reserved 17
CAPEXOPEX
bull Similar Consolidation and Operational EfficiencyScale benefits as with UC on UCS B-series
Other Benefits
Lower initial investment
Simple entrymigration to virtualized UC ndash Data Center expertise not required unless using SAN option
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 18
$-
$500
$1000
$1500
$2000
$2500
$3000
2 4 8 10 12 20 50 100
UCS B230 M2 TRC OPEX ($K)
UCS B230 M2 TRC CAPEX ($K)
UCS B200 M2 TRC OPEX ($K)
UCS B200 M2 TRC CAPEX ($K)
UCS C210 M2 TRC OPEX ($K)
UCS C210 M2 TRC CAPEX ($K)
MCS 7845-I3 OPEX ($K)
MCS 7845-I3 CAPEX ($K)
Assumptions
bull UC only no other business applications included ldquoSparerdquo or ldquohot standbyrdquo hosts not included
bull ldquoServerrdquo is either an MCS Appliance or a 2-vcpu-core ldquoVirtual Machinerdquo
bull Dual sites split MCS or UCS TRC servers across sites no single point of failure ndash redundant sites switching blade chassis rackblade servers
bull Using list pricing for MCS-7845-I3-IPC1 UCS-C210M2-VCD2 UCS-B200M2-VCS1 UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Appliance or VM Count
PSTN
2104
2104
2104
2104
SANLAN
Dual Site Scenario PSTN
2104
2104
2104
2104
SANLAN
hellip
hellip
hellip
hellip
UC on UCS
B200 B230
UC on UCS
C210
MCS 7845
B230 M2
vs B200 M2
C210 M2
vs MCS 7845
20 copy 2010 Cisco andor its affiliates All rights reserved
Current Offers Technical Overview
21 copy 2010 Cisco andor its affiliates All rights reserved
Eg 4 physical servers
Each MCS 7800 hosts only
one UC app instance
4 virtual servers (VMrsquos) on 1 physical server
Single virtualized server with total 8 physical
cores hosts all UC app instances
Unity
Connection
Unified CM
VM for
Unified
CM
Sub
Unified CCX
VM for
Unity
Cxn
VM for
Unified
CCX
VM for
Unified
CM
Pub
or
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
copy 2010 Cisco andor its affiliates All rights reserved 23
UC app Scale
(ldquousersrdquo)
vCPU (cores) Usually 253+ GHz
per core required
vRAM
(GB)
vDisk
(GB)
Notes
Unified
CM
1000 2 4 1 x 80 UCS C200 or BE6K only
2500 1 225 1 x 80 Not for use with C200BE6K
7500 2 6 2 x 80
10000 4 6 2 x 80
Unity
Connection
500 1 2 1 x 160
1000 1 4 1 x 160
5000 2 4 1 x 200
10000 4 4 2 x 146 Not for use with C200BE6K
20000 7 8 2 x 300
Unified
Presence
1000 1 2 1 x 80
2500 2 4 1 x 80 Not for use with C200BE6K
5000 4 4 2 x 80
Unified CCX 100 2 4 2 x 146 UCS C200 or BE6K only
300 2 4 2 x 146 Not for use with C200BE6K
400 4 8 2 x 146
Not exhaustive subject to change see wwwciscocomgouc-virtualized for latest
ie user count for particular values of BHCA trace level encryption CTI and other factors Actual
supportable user count may vary by deployment
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
[Diagram: VM packing options per physical server, shown as physical cores vs. "Small", "Medium", "Large" and "Jumbo" VMs.]

Dual-socket 4-core (e.g. UCS C210 M2 TRC1 with dual E5640):
• Jumbo + 1 reserved, or
• Mixed sizes + 1 reserved, or
• Mixed sizes, or
• 2:1 Large (e.g. UCM 10K), or
• 4:1 Med (e.g. UCM 7.5K), or
• 8:1 Small (e.g. UCM 2.5K)

Dual-socket 6-core (e.g. UCS C210 M2 Specs-based with UC-supported CPU model and min speed):
• Mixed sizes + 1 reserved, or
• Mixed sizes, or
• 3:1 Large (e.g. UCM 10K), or
• 6:1 Med (e.g. UCM 7.5K), or
• 12:1 Small (e.g. UCM 2.5K)
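The packing ratios above imply per-size vCPU footprints; a minimal sketch, assuming Small=1, Medium=2, Large=4 and Jumbo=8 vCPUs (inferred from the 8:1/4:1/2:1 ratios on an 8-core host) and the no-oversubscription rule of 1 vCPU per physical core:

```python
# Sketch (assumptions): vCPU sizes inferred from the packing ratios above --
# Small=1, Medium=2, Large=4, Jumbo=8 vCPUs -- with a strict 1:1
# vCPU-to-physical-core rule (no hardware oversubscription for UC).
VCPU = {"small": 1, "medium": 2, "large": 4, "jumbo": 8}

def fits(vm_mix, physical_cores, reserve_one=False):
    """Return True if the VM mix fits the host's physical core budget."""
    needed = sum(VCPU[size] * count for size, count in vm_mix.items())
    budget = physical_cores - (1 if reserve_one else 0)
    return needed <= budget

# Dual-socket 4-core host (8 cores): four Medium VMs fit exactly (4:1 Med).
print(fits({"medium": 4}, 8))                    # True
# A Jumbo VM plus 1 reserved core needs 9 cores -- too big for 8 cores.
print(fits({"jumbo": 1}, 8, reserve_one=True))   # False
```

The same helper reproduces the 6-core column: `fits({"large": 3}, 12)` is True, matching the 3:1 Large ratio.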
Virtual Software Switch Options

[Diagram: a VM's vNIC connects through the ESXi hypervisor's software switch and a CNA (FCoE) to the LAN and SAN; vmNIC on a UCS B200.]

                        VMware vSwitch       VMware dvSwitch   Cisco Nexus 1KV
Scope                   Host-based (local)   Distributed       Distributed
VLAN tagging            IEEE 802.1Q          IEEE 802.1Q       IEEE 802.1Q
VLAN visibility         Local ESXi host only All ESXi hosts    All ESXi hosts
EtherChannel            Yes                  Yes               Yes
Virtual PortChannel     --                   --                Yes
QoS marking (DSCP/CoS)  --                   --                Yes
ACL                     --                   --                Yes
SPAN                    --                   --                Yes
RADIUS/TACACS+          --                   --                Yes
VM requirement          No VM needed         No VM needed      VM needed for VSM

Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM

[Diagram: a Nexus 1000V VSM managing Nexus 1000V VEMs on each ESXi host, uplinked to the physical switch (pSwitch).]
CUCM marks traffic based on L3 DSCP values.
The physical switch (CAT6K etc.) can map L3 DSCP to L2 CoS if needed:

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

[Diagram: CUCM sends CTL packets marked L3 CS3; the CAT6K maps L2 CoS 0 / L3 CS3 on ingress to L2 CoS 3 / L3 CS3 on egress.]
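The mapping the switch applies can be sketched as a simple lookup (illustrative only; the real marking happens in switch hardware per the `mls qos map dscp-cos` commands above):

```python
# Illustrative sketch of the DSCP-to-CoS mapping configured above:
# DSCP 24 (CS3, call signaling) -> CoS 3; DSCP 46 (EF, voice media) -> CoS 5.
DSCP_TO_COS = {24: 3, 46: 5}

def l2_cos(dscp, default=0):
    """Return the 802.1p CoS a frame receives for a given L3 DSCP value."""
    return DSCP_TO_COS.get(dscp, default)

print(l2_cos(46))  # 5 -- voice media rides the priority queue
print(l2_cos(24))  # 3 -- signaling
print(l2_cos(0))   # 0 -- everything else stays best effort
```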
• UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3") - no-drop policy
  Rest ("match any") - best-effort queue
The vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS.

[Diagram: CUCM traffic (L2 CoS 0 / L3 CS3) passes through the UCS 6100 unchanged to the CAT6K.]
• UC blades: network adapters' QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapters' QoS policy set to best effort

N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed

Without N1Kv, caveat:
• All traffic types from the virtual UC app get the Platinum CoS value
• Non-UC applications get best-effort class, which might not be acceptable

[Diagram: CUCM → N1KV → UCS 6100 → CAT6K, with the L2 CoS 3 / L3 CS3 marking preserved end to end.]
[Diagram: compute layer (Cisco UCS 6100 Fabric Interconnects and UCS 5100 blade servers with Nexus 1000V, 4x10GE uplinks) attached through Cisco SAN switches (FC) to a 3rd-party FC storage array with service processors SP-A and SP-B; SAN/storage layer per the Cisco SRND.]

3rd-party SAN example:
CUCM VM IOPS ~ 200
200 IOPS @ 4 KB ~ 6.4 Mbps per VM

Storage array:
• Total capacity: 28,000 IOPS
• 14,000 IOPS per controller
• 4 KByte block size
14,000 IOPS x 4 KB ~ 428 Mbps, under the 600 Mbps throughput/controller

Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
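The IOPS-to-throughput arithmetic above can be reproduced directly; a sketch using decimal units (1 KB = 1000 bytes -- the slide's rounded per-controller figure differs slightly depending on the KB convention used):

```python
# Sketch: convert an IOPS figure at a given block size to Mbit/s.
# Decimal units assumed (1 KB = 1000 bytes, 1 Mbit = 1e6 bits).
def iops_to_mbps(iops, block_bytes=4000):
    """Throughput in Mbit/s generated by `iops` I/Os of `block_bytes` each."""
    return iops * block_bytes * 8 / 1_000_000

per_vm = iops_to_mbps(200)             # one CUCM VM at ~200 IOPS
per_controller = iops_to_mbps(14_000)  # one storage controller

print(per_vm)          # 6.4   Mbps per VM
print(per_controller)  # 448.0 Mbps -- well under a 4 Gbps FC link
```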
• All UC deployment models are supported
  No change in the current deployment models
  Base deployment models - Single Site, Centralized Call Processing, etc. - are not changing
• VM machine layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-WAN rules / latency requirements are the same
  Does not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
  Redundancy rules are the same
  Clustering over the WAN / latency numbers
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validating proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
  Subject to "common sense" rules - e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
  - MOH live audio stream
  - Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
FROM CUCM

System Release   To Unified CM 8.5                To UC System 8.5
4.x              Multi-hop thru 6.1(x)/7.1(x)     Multi-hop thru 6.1(3)
5.1(2)           Multi-hop thru 6.1(x)/7.1(x)     N/A
5.1(3)           2-hop thru 7.1(3)                N/A
6.1(1)           2-hop thru 6.1(x)/7.1(x)         2-hop
6.1(2)           2-hop thru 6.1(x)/7.1(x)         N/A
6.1(3)           2-hop thru 6.1(x)/7.1(x)         N/A
6.1(4)           Single hop                       N/A
6.1(5)           Single hop                       N/A
7.0(1)           2-hop thru 7.1(x)                2-hop
7.1(2)           2-hop thru 7.1(x)                2-hop
7.1(3)           Single hop                       Single hop; multi-stages/BWC supported
7.1(5)           Single hop                       N/A
8.0(1)           Single hop                       Single hop; multi-stages/BWC supported
8.0(2), 8.0(3)   Single hop                       N/A
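Read as data, the "To Unified CM 8.5" column of the table above is just a lookup from source release to upgrade path; a minimal sketch (paths abbreviated, releases not listed fall back to a generic 2-hop path for illustration):

```python
# Sketch: the "To Unified CM 8.5" column of the table above as a lookup.
PATH_TO_CM85 = {
    "4.x":    "multi-hop thru 6.1(x)/7.1(x)",
    "5.1(2)": "multi-hop thru 6.1(x)/7.1(x)",
    "5.1(3)": "2-hop thru 7.1(3)",
    "6.1(4)": "single hop",
    "6.1(5)": "single hop",
    "7.1(3)": "single hop",
    "7.1(5)": "single hop",
    "8.0(1)": "single hop",
}

def hops_needed(release):
    """Return the upgrade path to CM 8.5, defaulting to a 2-hop path."""
    return PATH_TO_CM85.get(release, "2-hop thru 6.1(x)/7.1(x)")

print(hops_needed("7.1(3)"))  # single hop
print(hops_needed("6.1(1)"))  # 2-hop thru 6.1(x)/7.1(x)
```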
VMware feature support:
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Features            CUCM     CUC      CUP      CCX
Clone Virtual Machine    Y (C)    Y (C)    Y (C)    Y (C)
VMware vMotion           Y (C)    Partial  Partial  Y (C)
Resize Virtual Machine   Partial  Partial  Partial  Partial
VMware HA                Y (C)    Y (C)    Y (C)    Y (C)
Boot From SAN            Y (C)    Y (C)    Y (C)    Y (C)
VMware DRS               No       No       No       No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable.
The UC apps' redundancy rules are the same.
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact:
  Primary/secondary on different blade, chassis, sites
  On the same blade, mix Subs with TFTP/MoH vs. just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation

                                              Server Hardware  Shared Storage              VMware       Application
UC on UCS Tested Reference Configuration      Cisco            3rd-party (VCE for Vblock)  Cisco (VCE)  Cisco
UC on UCS Specs-based (incl. Vblock option)   Cisco            3rd-party (VCE)             Cisco (VCE)  Cisco
3rd-party VMware (HP, IBM) Specs-based        3rd-party        3rd-party                   3rd-party    Cisco
MCS 7800 Appliances                           Cisco            N/A                         N/A          Cisco
Customer-provided MCS 7800 equivalent         3rd-party        N/A                         N/A          Cisco
Customer Example - Primary Data Center

                    OLD                                      NEW
Hardware nodes      62 physical servers (EU + HQ clusters)   ~14
Software version    6.1.5 & 8.5.1                            8.5.1
UCxn version        4.2.1                                    8.5.1 - 3 pairs - virtualized
CER                 2.0, 7.0                                 8.6 - virtualized

[Diagram: old physical server layout - CM PUB, CM SUBs, MOH/TFTP, CER and UCxn nodes.]
Deployment Model - Data Center 1
[Diagram: CM PUB, CM SUBs, MOH/TFTP, CER and UCxn VM placement.]

Deployment Model - Data Center 2
[Diagram: CM PUB, CM SUBs, MOH/TFTP, CER and UCxn VM placement.]
Customer Design

[Diagram: PSTN and IP WAN feeding, via CUSP (SIP proxy), a central site running Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express on a Cisco UCS 5108 chassis with UCS B200 blade servers behind Cisco UCS 6100 Fabric Interconnect switches; other sites use Cisco UCS C210 or C200 general-purpose rack-mount servers. Sites are sized for 11K, 3K and 400 phones respectively.]
HQ Details

[Diagram: eight dual-socket, quad-core blade slots and their VM placement:]
• CUCM VM OVAs: PUB, SUB-1 thru SUB-8, TFTP-1, TFTP-2
• Messaging VM OVAs: UCxn-1 (active) and UCxn-2 (active), with cores left idle for UCxn
• Presence VM OVAs: CUP-1, CUP-2
• Contact Center VM OVAs: UCCX-1, UCCX-2
• "Spare" blade slots available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications
Branch Office Details

[Diagram: dual-socket, quad-core rack servers and their VM placement:]
• Larger site (three rack servers): PUB, SUB-1, SUB-2, TFTP-1, TFTP-2 (CUCM VM OVAs); UCxn-1 and UCxn-2 with cores left idle for UCxn (Messaging VM OVAs); CCX-1, CCX-2 (Contact Center VM OVAs); CUP (Presence VM OVA)
• Smaller site (two rack servers): PUB/TFTP and SUB (CUCM VM OVAs); UCxn-1 with cores left idle for UCxn (Messaging VM OVA); CCX-1, CCX-2 (Contact Center VM OVAs); CUP (Presence VM OVA)
• DAS: rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
  Cable distance ~ 2 km
  Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP

[Diagram: host storage stacks compared - DAS (application, file system, volume manager, SCSI device driver, SCSI bus adapter), iSCSI (SCSI device driver, iSCSI driver, TCP/IP stack, NIC; block I/O over IP) and FC SAN (SCSI device driver, FC HBA; block I/O over SAN) - each running from the application down to the storage transport and storage media.]
NAS/SAN Array Best Practices for UC

[Diagram: five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into two 720 GB LUNs; LUN 1 hosts PUB, SUB1 and UCCX1 (UC VMs 1-3), LUN 2 hosts UCCX2, CUP1 and CUP2 (UC VMs 4-6).]

• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
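The LUN rules above lend themselves to a quick sanity check; a sketch with the thresholds taken from this slide (<2 TB hard limit, 500 GB - 1.5 TB recommended, 4-8 UC VMs per LUN, vDisks must fit):

```python
# Sketch of the LUN sizing rules above. Returns a list of rule violations
# (empty list = the LUN layout follows the best practices).
def check_lun(lun_gb, vdisk_gb_list):
    problems = []
    if lun_gb >= 2000:
        problems.append("LUN must be < 2 TB")
    if not 500 <= lun_gb <= 1500:
        problems.append("outside recommended 500 GB - 1.5 TB range")
    if not 4 <= len(vdisk_gb_list) <= 8:
        problems.append("recommend 4-8 UC VMs per LUN")
    if sum(vdisk_gb_list) > lun_gb:
        problems.append("sum of vDisks exceeds LUN capacity")
    return problems

# A 720 GB LUN with only three UC VMs (vDisks: 2x80, 2x80, 2x146 GB)
# trips just the VM-count recommendation:
print(check_lun(720, [160, 160, 292]))
```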
DAS Example: UCS C210 M2 TRC1

[Diagram: ten 146 GB 15K RPM HDDs - disks 1-2 as a single RAID1 volume holding the vSphere ESXi image; disks 3-10 as a single RAID5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) with UC VMs such as PUB, UCCX1 and CUP1.]

Notes:
• The VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
DAS Example: UCS C200 M2 TRC1 for BE6K

[Diagram: four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) holding the vSphere ESXi image and a VMFS filestore (1.8 TB after VMFS overhead) with UC VMs such as PUB, UCCX1 and CUP1.]
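The usable capacities quoted in the two DAS examples follow from standard RAID arithmetic; a minimal sketch (capacity before filesystem overhead, mirrored levels approximated as half the raw capacity):

```python
# Sketch: usable capacity for the RAID layouts in the DAS examples above.
def raid_usable_gb(disks, disk_gb, level):
    """Approximate usable capacity in GB before filesystem overhead."""
    if level == "RAID1":   # mirrored pair(s): half the raw capacity
        return disks * disk_gb // 2
    if level == "RAID5":   # one disk's worth of capacity lost to parity
        return (disks - 1) * disk_gb
    if level == "RAID10":  # striped mirrors: half the raw capacity
        return disks * disk_gb // 2
    raise ValueError(f"unknown RAID level: {level}")

# C210 M2 TRC1: 8 x 146 GB in RAID5 -> 1022 GB, matching the slide.
print(raid_usable_gb(8, 146, "RAID5"))    # 1022
# C200 M2 TRC1 (BE6K): 4 x 1000 GB in RAID10 -> 2000 GB (~2 TB).
print(raid_usable_gb(4, 1000, "RAID10"))  # 2000
```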
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download
  The OVA reserves cores, RAM, etc. for the VMs
  Basic rule of thumb: fill up the blade until out of capacity
  If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template:
  vCPU, vRAM, vDisk, vNICs
• Capacity:
  A VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There are usually different VM templates per release. For example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes product, product version, VMware hardware version and template version.
http://tools.cisco.com/cucst
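The naming convention just described can be unpacked mechanically; a sketch assuming the field layout product_version_vmhwversion_templateversion.ova seen in the (dotted form of the) examples above:

```python
# Sketch: unpack the OVA naming convention shown above, e.g.
# "CUCM_8.5_vmv7_v2.1.ova" -> product, product version,
# VMware hardware version, template version.
def parse_ova_name(filename):
    stem = filename.removesuffix(".ova")
    product, version, vm_hw, template = stem.split("_")
    return {
        "product": product,            # e.g. CUCM
        "product_version": version,    # e.g. 8.5
        "vmware_hw_version": vm_hw,    # e.g. vmv7 -> VM hardware version 7
        "template_version": template,  # e.g. v2.1
    }

info = parse_ova_name("CUCM_8.5_vmv7_v2.1.ova")
print(info["product"], info["product_version"])  # CUCM 8.5
```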
• Customer-accessible:
  UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
Specs-based requirements:
• Allowed server vendors: UCS B/C, UCS Express, other 3rd parties
  Must be on the VMware HCL: server model and I/O devices on www.vmware.com/go/hcl
  All parts must be supported by the server vendor
  No hardware oversubscription allowed for UC
  VMware vCenter is REQUIRED
• Processor: Intel Xeon 56xx/75xx at 2.53+ GHz or E7-xxxx at 2.4+ GHz
  CPU support varies by UC app
  Required physical core count = sum of UC VMs' vCPUs (+1 if Unity Connection)
• Memory: capacity = sum of UC VMs' vRAM + 2 GB for VMware
  Follow the server vendor for module density/configuration
• Adapters (e.g. LAN/storage access): must be on the VMware HCL and supported by the server vendor
  E.g. 1GbE/10GbE NIC, 2Gb+ FC HBA, 10Gb CNA or VIC
• Storage: SAN (FCoE, FC, iSCSI), NAS (NFS), variable DAS/RAID
  Storage capacity = sum of UC VMs' vDisk + VMware/RAID overhead
  Storage performance = sum of UC VM IOPS
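The core and memory rules above reduce to two sums; a minimal sketch (the "+1 core for Unity Connection" and "+2 GB for VMware" terms are taken from this slide):

```python
# Sketch of the specs-based sizing rules above: physical cores must cover
# the sum of UC VM vCPUs (+1 if Unity Connection is on the host), and RAM
# must cover the sum of VM vRAM plus 2 GB for VMware.
def host_ok(vms, cores, ram_gb):
    """vms: list of (name, vcpu, vram_gb). True if the host suffices."""
    need_cores = sum(vcpu for _, vcpu, _ in vms)
    if any(name.lower().startswith("unity") for name, _, _ in vms):
        need_cores += 1  # extra physical core when Unity Connection present
    need_ram = sum(vram for _, _, vram in vms) + 2  # +2 GB for VMware
    return cores >= need_cores and ram_gb >= need_ram

# Dual E5640 host (8 cores, 48 GB): two UCM 7.5K VMs (2 vCPU / 6 GB each)
# plus a Unity Connection 5K VM (2 vCPU / 4 GB) needs 6+1 cores, 18 GB.
print(host_ok([("UCM-pub", 2, 6), ("UCM-sub", 2, 6), ("UnityCxn", 2, 4)], 8, 48))  # True
```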
Intel CPU programs and UC on UCS certifications:
• Nehalem-EP: CPU family 55xx; 4 cores; 2-2.9 GHz. Example UCS models: B200/250 M1, C210 M1, C250 M1/M2, C200 M2. UC on UCS certifications: TRCs for B200 M1, TRCs for C210 M1 (E5540), TRC for C200 M2 (E5506).
• Nehalem-EX: CPU family 65xx, 75xx; 4/6/8 cores; 1.7-2.7 GHz. Example UCS models: B230 M1, B440 M1, C460 M1. UC on UCS certifications: specs-based (75xx, 2.53+ GHz).
• Westmere-EP: CPU family 56xx; 4/6 cores; 1.9-3.33 GHz. Example UCS models: B200/250 M2, C210 M2, C250 M2. UC on UCS certifications: TRCs for B200 M2, TRCs for C210 M2 (E5640), specs-based (56xx at 2.53+ GHz).
• Westmere-EX: CPU family E7-28xx, E7-48xx, E7-88xx; 6/8/10 cores; 1.7-2.7 GHz. Example UCS models: B230 M2, B440 M2, C260 M2, C460 M2. UC on UCS certifications: TRC for B230 M2, TRC for B440 M2 (E7-2870/4870), specs-based (E7 at 2.4+ GHz).
• Romley-EP: CPU family E5-26xx; 4/6/8 cores; 1-3 GHz. Example UCS models: B200 M3, C220 M3, C240 M3. Not currently supported by UC.
[Diagram: 19 UC app copies - 19 UC VMs ("Small", "Medium" and "Large") totaling 40 vCPUs - consolidated from 19 MCS appliances onto:]
• 5 virtualized servers (dual 4-core B200 M2 TRC), or
• 4 virtualized servers (dual 6-core B200 M2 specs-based), or
• 2 virtualized servers (dual 10-core B230 M2 TRC)
UC on UCS Products (with owner)   |   UC on UCS TRC   |   UC on UCS Specs-based
Unified Communications Manager
Business Edition 6000 C200 M2 only Not supported
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco Emergency Responder
Session Manager Edition
InterCompany Media Engine
Unified Attendant Consoles
Unity
Unified Workforce Optimization (WFO)
Unified Contact Center Enterprise Planned
Unified Intelligence Center
Unified Customer Voice Portal Planned
MediaSense Planned
Unified Contact Center Mgmt Portal
SocialMiner
Finesse Planned
Unified Email/Web Interaction Mgr
Prime UCMS (OM/PM/SM/SSM)
Webex Premise Planned Planned
Unified MeetingPlace Planned Planned
TMS/CTMS Planned Planned
VCS Planned Planned
Why virtualize your UC? Why virtualize on UCS?
• Lower TCO
• Business agility
• Additional savings and increased agility
• End-to-end solution
• Single support
• Tested Reference Configurations; Vblocks (Cisco options; VCE Vblock options)
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and DC networks - "wire once"
• Consolidates system mgmt
• Easier service provisioning
• Reduce servers/storage
• Reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing
• Efficient app expansion
• Accelerated UC rollouts
• Better business continuity
• Portable/mobile VMs
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX:
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX:
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% less)
• Reduced maintenance/support costs (~20%)
Example: 5000 users; dial tone, voicemail and Presence; 10% are contact center agents.
11 non-virtualized rack servers required for UC, more for other business apps.
CAPEX/OPEX:
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
Other benefits:
• Lower initial investment
• Simple entry/migration to virtualized UC - data center expertise not required unless using the SAN option
Example: 5000 users; dial tone, voicemail and Presence; 10% are contact center agents.
11 non-virtualized rack servers required for UC, more for other business apps.
[Chart: CAPEX and OPEX ($K, $0-$3000) vs. appliance or VM count (2 to 100) for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC and UCS B230 M2 TRC; comparisons of B230 M2 vs. B200 M2 and C210 M2 vs. MCS 7845.]

Assumptions:
• UC only; no other business applications included; "spare" or "hot standby" hosts not included
• "Server" is either an MCS appliance or a 2-vCPU-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure - redundant sites, switching, blade chassis, rack/blade servers
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition

[Diagram: dual-site scenario - each site with PSTN and SAN/LAN connections.]
Current Offers Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX), become 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all the UC app instances.
Server model / TRC / CPU / RAM / Storage / Adapters:
• UCS B200 M2 Blade Server, TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (RAID1) for VMware, FC SAN for UC apps; Cisco VIC
• UCS B200 M2 Blade Server, TRC 2: dual E5640 (8 physical cores total); 48 GB; diskless; Cisco VIC
• UCS B230 M2 Blade Server, TRC 1: dual E7-2870 (20 physical cores total); 128 GB; diskless; Cisco VIC
• UCS B440 M2 Blade Server, TRC 1: dual E7-4870 (40 physical cores total); 256 GB; diskless; Cisco VIC
• UCS C260 M2 Rack-Mount Server, TRC 1: dual E7-2870 (20 physical cores total); 128 GB; DAS (2x RAID5); 1GbE NIC
• UCS C210 M2 General-Purpose Rack-Mount Server, TRC 1: dual E5640 (8 physical cores total); 48 GB; DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps; 1GbE NIC
• UCS C210 M2 General-Purpose Rack-Mount Server, TRC 2: dual E5640 (8 physical cores total); 48 GB; DAS (2 disks RAID1) for VMware, FC SAN for UC apps; 1GbE NIC and 4G FC HBA
• UCS C210 M2 General-Purpose Rack-Mount Server, TRC 3: dual E5640 (8 physical cores total); 48 GB; diskless; 1GbE NIC and 4G FC HBA
• UCS C200 M2 General-Purpose Rack-Mount Server, TRC 1: dual E5506 (8 physical cores total); 24 GB; DAS (4 disks RAID10) for VMware + UC apps; 1GbE NIC
UC app / scale ("users"*) / vCPU (cores; usually 2.53+ GHz per core required) / vRAM (GB) / vDisk (GB) / notes:

Unified CM:
• 1000 users: 2 vCPU, 4 GB, 1 x 80 GB - UCS C200 or BE6K only
• 2500 users: 1 vCPU, 2.25 GB, 1 x 80 GB - not for use with C200/BE6K
• 7500 users: 2 vCPU, 6 GB, 2 x 80 GB
• 10000 users: 4 vCPU, 6 GB, 2 x 80 GB

Unity Connection:
• 500 users: 1 vCPU, 2 GB, 1 x 160 GB
• 1000 users: 1 vCPU, 4 GB, 1 x 160 GB
• 5000 users: 2 vCPU, 4 GB, 1 x 200 GB
• 10000 users: 4 vCPU, 4 GB, 2 x 146 GB - not for use with C200/BE6K
• 20000 users: 7 vCPU, 8 GB, 2 x 300 GB

Unified Presence:
• 1000 users: 1 vCPU, 2 GB, 1 x 80 GB
• 2500 users: 2 vCPU, 4 GB, 1 x 80 GB - not for use with C200/BE6K
• 5000 users: 4 vCPU, 4 GB, 2 x 80 GB

Unified CCX:
• 100 users: 2 vCPU, 4 GB, 2 x 146 GB - UCS C200 or BE6K only
• 300 users: 2 vCPU, 4 GB, 2 x 146 GB - not for use with C200/BE6K
• 400 users: 4 vCPU, 8 GB, 2 x 146 GB

Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for the latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI and other factors. Actual supportable user count may vary by deployment.
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server:
   SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices:
   DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server:
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps' redundancy rules are the same
Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
  Primary/secondary on different blade / chassis / site
  On the same blade, mix Subs with TFTP/MoH vs. just Subs
Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation

Deployment | Server Hardware | Shared Storage | VMware | Application
UC on UCS Tested Reference Configuration | Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
UC on UCS Specs-based (including Vblock option) | Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
3rd-party Specs-based VMware (HP, IBM) | 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances | Cisco | N/A | N/A | Cisco
Customer-provided MCS 7800 equivalent | 3rd-party | N/A | N/A | Cisco
Customer Example – Primary Data Center

                  OLD                                     NEW
Hardware Nodes    62 physical servers (EU + HQ clusters)  ~14
Software Version  6.1.5 & 8.5.1                           8.5.1
Ucxn Version      4.2.1                                   8.5.1 – 3 pairs – virtualized
CER               2.0, 7.0                                8.6 – virtualized

[Figure: VM layout – CM SUB, CM PUB, CM SUB, CM SUB / MOH+TFTP, CER, CM SUB / CER, UCxn, UCxn, CM SUB]
Deployment Model – Data Center 1

[Figure: VM layout – CM SUB, CM PUB, CM SUB, CM SUB / MOH+TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn]
Deployment Model – Data Center 2

[Figure: VM layout – CM SUB, CM PUB, CM SUB, CM SUB / MOH+TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn]
Customer Design

[Figure: PSTN and IP WAN connect via a SIP Proxy (CUSP) to three sites:
• HQ (11K phones): Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express on a Cisco UCS 5108 Chassis with UCS B200 Blade Servers behind a Cisco UCS 6100 Fabric Interconnect Switch
• Two smaller sites (3K phones and 400 phones): Cisco UCS C210 or C200 General-Purpose Rack-mount Servers]
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
[Figure: three rack servers (each with two 4-core CPUs) hosting the CUCM VM OVAs (PUB, SUB-1, SUB-2, TFTP-1, TFTP-2), Messaging VM OVAs (UCxn-1, UCxn-2, with cores left idle for UCxn), Contact Center VM OVAs (CCX-1, CCX-2) and a Presence VM (CUP)]

Branch Office Details

[Figure: two rack servers (each with two 4-core CPUs) hosting PUB/TFTP, SUB, CCX-1, CCX-2, CUP and UCxn-1, with cores left idle for UCxn]
• DAS: rack-mount server (Cisco C-Series)
• Popular DAS protocol: SCSI
• iSCSI: access SCSI storage media using an IP network
• Fibre Channel: the most popular SAN protocol today
  Cable distance: ~2 km
  Popular speed: 4 Gb/s
• NAS (Network Attached Storage) uses the NFS (Network File System) protocol over TCP/IP

[Figure: protocol stacks for DAS (SCSI), iSCSI and FC SAN. In each computer system the Application sits on a File System, Volume Manager and SCSI Device Driver; DAS then uses a SCSI Bus Adapter, iSCSI adds an iSCSI Driver, TCP/IP Stack and NIC, and FC SAN uses an FC HBA. Block I/O travels from the host server over the storage transport to the storage media; iSCSI rides the IP network, FC rides the SAN]
NAS/SAN Array Best Practices for UC

[Figure: five 450 GB 15K RPM HDDs in a single RAID5 group (~1.4 TB usable space), carved into LUN 1 (720 GB) holding UC VMs 1–3 (PUB, SUB1, UCCX1) and LUN 2 (720 GB) holding UC VMs 4–6 (UCCX2, CUP1, CUP2)]

• 4 to 8 UC VMs per LUN (max dependent on sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
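The LUN guidelines above can be expressed as a quick sanity check. This is a sketch only; `check_lun` is a hypothetical helper, not a Cisco tool, and it encodes just the rules stated on this slide.

```python
def check_lun(size_gb, vdisks_gb):
    """Return a list of best-practice violations for one LUN (empty = OK).

    size_gb   -- LUN size in GB
    vdisks_gb -- sizes of the UC VMs' vDisks placed on this LUN
    """
    issues = []
    if size_gb >= 2000:
        issues.append("LUN must be < 2 TB")
    if not 500 <= size_gb <= 1500:
        issues.append("outside recommended 500 GB - 1.5 TB range")
    if not 4 <= len(vdisks_gb) <= 8:
        issues.append("4 to 8 UC VMs per LUN")
    if sum(vdisks_gb) > size_gb:
        issues.append("sum of vDisks exceeds LUN size")
    return issues

# A 720 GB LUN (as in the figure) holding four UC vDisks passes all checks
print(check_lun(720, [80, 80, 146, 146]))  # []
```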
DAS Example: UCS C210 M2 TRC1

[Figure: HDDs 1–2 (146 GB 15K RPM) form a single RAID1 volume holding the vSphere ESXi image; HDDs 3–10 (146 GB 15K RPM) form a single RAID5 volume (1022 GB after RAID overhead) with a VMFS filestore (947 GB after VMFS overhead) holding UC VM 1 (PUB), UC VM 3 (UCCX1) and UC VM 5 (CUP1)]

Notes:
• VMFS block size limits max vDisk size
• Could have >1 VMFS datastore on the RAID volume
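The capacity figures in these DAS examples follow from standard RAID arithmetic; the sketch below reproduces them (the VMFS-overhead numbers come from the slides, not from a formula).

```python
def raid_usable_gb(level, disk_gb, n_disks):
    """Usable capacity in GB before filesystem overhead, standard RAID math."""
    if level == "RAID1":    # mirrored: half the raw capacity
        return disk_gb * n_disks // 2
    if level == "RAID5":    # one disk's worth of capacity lost to parity
        return disk_gb * (n_disks - 1)
    if level == "RAID10":   # striped mirrors: half the raw capacity
        return disk_gb * n_disks // 2
    raise ValueError("unknown RAID level: %s" % level)

# C210 M2 TRC1: 8 x 146 GB in RAID5 -> 1022 GB (947 GB after VMFS overhead)
print(raid_usable_gb("RAID5", 146, 8))    # 1022
# C200 M2 TRC1 for BE6K: 4 x 1 TB in RAID10 -> 2000 GB (~1.8 TB after VMFS)
print(raid_usable_gb("RAID10", 1000, 4))  # 2000
```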
DAS Example: UCS C200 M2 TRC1 for BE6K

[Figure: HDDs 1–4 (1 TB 7.2K RPM) form a single RAID10 volume (2 TB after RAID overhead) with a VMFS filestore (1.8 TB after VMFS overhead) holding the vSphere ESXi image and UC VM 1 (PUB), UC VM 3 (UCCX1) and UC VM 5 (CUP1)]
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on supported OVA for download
  OVA reserves cores, RAM, etc. to VMs
  Basic rule of thumb: fill up the blade until out of capacity
  If the blade contains a VM for messaging, must reserve a core for ESXi
• Hardware oversubscription not supported
Virtual Machine Sizing
• Virtual machine virtual hardware is defined by a VM template
  vCPU, vRAM, vDisk, vNICs
• Capacity
  • A VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There are usually different VM templates per release. For example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  Includes product, product version, VMware hardware version, template version
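The naming convention just described (product, product version, VMware hardware version, template version) can be unpacked with a small regex. The pattern and field names below are this sketch's own, written against the dotted form of the example filenames; they are not a Cisco-published schema.

```python
import re

# CUCM_8.5_vmv7_v2.1.ova -> product / product version / VMware HW version / template version
OVA_RE = re.compile(
    r"(?P<product>[A-Za-z]+)_(?P<version>[\d.]+)_vmv(?P<hw>\d+)_v(?P<template>[\d.]+)\.ova"
)

def parse_ova(name):
    """Split an OVA filename into its fields, or return None if it doesn't match."""
    m = OVA_RE.fullmatch(name)
    return m.groupdict() if m else None

print(parse_ova("CUCM_8.5_vmv7_v2.1.ova"))
# {'product': 'CUCM', 'version': '8.5', 'hw': '7', 'template': '2.1'}
```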
http://tools.cisco.com/cucst
• Customer-accessible
  • UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  • UCS in general: http://www.cisco.com/go/ucs
  • Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  • FlexPods: wwwcisconetappcom
  • Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  • Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  • "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers OS and Virtualization"
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers OS and Virtualization"
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html~7
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
Intel Program | CPU Family | CPU Cores | CPU Speed | Example UCS Models with these CPUs | UC on UCS Certifications
Nehalem-EP | 55xx | 4 | 2–2.9 GHz | B200/250 M1, C210 M1, C250 M1/M2, C200 M2 | TRCs for B200 M1, TRCs for C210 M1 (E5540), TRC for C200 M2 (E5506)
Nehalem-EX | 65xx, 75xx | 4/6/8 | 1.7–2.7 GHz | B230 M1, B440 M1, C460 M1 | Specs-based (75xx, 2.53+ GHz)
Westmere-EP | 56xx | 4/6 | 1.9–3.33 GHz | B200/250 M2, C210 M2, C250 M2 | TRCs for B200 M2, TRCs for C210 M2 (E5640), Specs-based (56xx at 2.53+ GHz)
Westmere-EX | E7-28xx, E7-48xx, E7-88xx | 6/8/10 | 1.7–2.7 GHz | B230 M2, B440 M2, C260 M2, C460 M2 | TRC for B230 M2, TRC for B440 M2 (E7-2870/4870), Specs-based (E7 at 2.4+ GHz)
Romley-EP | E5-26xx | 4/6/8 | 1–3 GHz | B200 M3, C220 M3, C240 M3 | Not currently supported by UC
[Figure: 19 UC app copies. 19 MCS appliances become 19 UC VMs ("Small", "Medium" and "Large") with a total of 40 vCPUs, consolidated onto either:
• 5 virtualized servers (dual 4-core B200 M2 TRC), or
• 4 virtualized servers (dual 6-core B200 M2 specs-based), or
• 2 virtualized servers (dual 10-core B230 M2 TRC)]
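The consolidation counts in the figure are straightforward bin-packing arithmetic under a 1:1 vCPU-to-physical-core rule (no oversubscription). A minimal sketch:

```python
import math

def hosts_needed(total_vcpus, cores_per_host):
    """Minimum number of hosts for a vCPU total at 1 vCPU per physical core."""
    return math.ceil(total_vcpus / cores_per_host)

# 19 UC VMs totaling 40 vCPUs, as in the figure:
print(hosts_needed(40, 2 * 4))   # dual 4-core B200 M2 TRC      -> 5 servers
print(hosts_needed(40, 2 * 6))   # dual 6-core B200 M2 specs    -> 4 servers
print(hosts_needed(40, 2 * 10))  # dual 10-core B230 M2 TRC     -> 2 servers
```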
UC on UCS Products with Owner (support on UC on UCS TRC and UC on UCS Specs-based):
• Unified Communications Manager
• Business Edition 6000 — TRC: C200 M2 only; Specs-based: not supported
• Unity Connection
• Unified Presence
• Unified Contact Center Express
• Cisco Emergency Responder
• Session Manager Edition
• InterCompany Media Engine
• Unified Attendant Consoles
• Unity
• Unified Workforce Optimization (WFO)
• Unified Contact Center Enterprise — Planned
• Unified Intelligence Center
• Unified Customer Voice Portal — Planned
• MediaSense — Planned
• Unified Contact Center Mgmt Portal
• SocialMiner
• Finesse — Planned
• Unified Email/Web Interaction Mgr
• Prime UCMS (OM/PM/SM/SSM)
• Webex Premise — Planned (TRC and Specs-based)
• Unified MeetingPlace — Planned (TRC and Specs-based)
• TMS/CTMS — Planned (TRC and Specs-based)
• VCS — Planned (TRC and Specs-based)
Why virtualize your UC? Lower TCO; business agility.
Why virtualize on UCS? Additional savings and increased agility; end-to-end solution; single support.
  Tested Reference Configurations and Vblocks; Cisco options and VCE Vblock options.
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and DC networks – "wire once"
• Consolidates system management
• Easier service provisioning
• Reduce servers/storage
• Reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing
• Efficient app expansion
• Accelerated UC rollouts
• Better business continuity
• Portable/mobile VMs
UCS is the industry's only fully unified and virtualization-aware compute solution
CAPEX
• Reduced server count (50–75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50–75% less)
• Reduced maintenance/support costs (~20%)
Example: 5000 users with dial tone, voicemail and Presence; 10% are Contact Center agents.
11 non-virtualized rack servers required for UC, more for other business apps.
CAPEX/OPEX
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
Other benefits:
  Lower initial investment
  Simple entry/migration to virtualized UC – Data Center expertise not required unless using the SAN option
Example: 5000 users with dial tone, voicemail and Presence; 10% are Contact Center agents.
11 non-virtualized rack servers required for UC, more for other business apps.
[Figure: CAPEX and OPEX ($K) vs. appliance or VM count (2 to 100) for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC and UCS B230 M2 TRC, comparing B230 M2 vs. B200 M2 and C210 M2 vs. MCS 7845 in a dual-site scenario (PSTN and SAN/LAN at each site)]

Assumptions:
• UC only, no other business applications included; "spare" or "hot standby" hosts not included
• "Server" is either an MCS appliance or a 2-vCPU-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure – redundant sites, switching, blade chassis, rack/blade servers
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Current Offers Technical Overview
[Figure: e.g. 4 physical servers, each MCS 7800 hosting only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX) — or 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores in total hosts all UC app instances (VM for Unified CM Pub, VM for Unified CM Sub, VM for Unity Cxn, VM for Unified CCX)]
Server Model | TRC | CPU | RAM | Storage | Adapters
UCS B200 M2 Blade Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (RAID1) for VMware, FC SAN for UC apps | Cisco VIC
UCS B200 M2 Blade Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | Cisco VIC
UCS B230 M2 Blade Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | Diskless | Cisco VIC
UCS B440 M2 Blade Server | TRC 1 | Dual E7-4870 (40 physical cores total) | 256 GB | Diskless | Cisco VIC
UCS C260 M2 Rack-Mount Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | DAS (2x RAID5) | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware, FC SAN for UC apps | 1GbE NIC and 4G FC HBA
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 3 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | 1GbE NIC and 4G FC HBA
UCS C200 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5506 (8 physical cores total) | 24 GB | DAS (4 disks RAID10) for VMware + UC apps | 1GbE NIC
UC app | Scale ("users"*) | vCPU (cores, usually 2.53+ GHz per core required) | vRAM (GB) | vDisk (GB) | Notes
Unified CM | 1000 | 2 | 4 | 1 x 80 | UCS C200 or BE6K only
Unified CM | 2500 | 1 | 2.25 | 1 x 80 | Not for use with C200/BE6K
Unified CM | 7500 | 2 | 6 | 2 x 80 |
Unified CM | 10000 | 4 | 6 | 2 x 80 |
Unity Connection | 500 | 1 | 2 | 1 x 160 |
Unity Connection | 1000 | 1 | 4 | 1 x 160 |
Unity Connection | 5000 | 2 | 4 | 1 x 200 |
Unity Connection | 10000 | 4 | 4 | 2 x 146 | Not for use with C200/BE6K
Unity Connection | 20000 | 7 | 8 | 2 x 300 |
Unified Presence | 1000 | 1 | 2 | 1 x 80 |
Unified Presence | 2500 | 2 | 4 | 1 x 80 | Not for use with C200/BE6K
Unified Presence | 5000 | 4 | 4 | 2 x 80 |
Unified CCX | 100 | 2 | 4 | 2 x 146 | UCS C200 or BE6K only
Unified CCX | 300 | 2 | 4 | 2 x 146 | Not for use with C200/BE6K
Unified CCX | 400 | 4 | 8 | 2 x 146 |
Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI and other factors. Actual supportable user count may vary by deployment.
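A table like this is naturally a lookup keyed by app and scale. The sketch below transcribes only the Unified CM rows from above as an illustration; the helper name and structure are this sketch's own, and the real policy on the Cisco docwiki is authoritative.

```python
# Excerpt of the sizing table: (app, max users) -> OVA virtual hardware
OVA_SIZING = {
    ("Unified CM", 1000):  {"vcpu": 2, "vram_gb": 4,    "vdisk_gb": [80]},
    ("Unified CM", 2500):  {"vcpu": 1, "vram_gb": 2.25, "vdisk_gb": [80]},
    ("Unified CM", 7500):  {"vcpu": 2, "vram_gb": 6,    "vdisk_gb": [80, 80]},
    ("Unified CM", 10000): {"vcpu": 4, "vram_gb": 6,    "vdisk_gb": [80, 80]},
}

def smallest_ova(app, users):
    """Smallest listed OVA for `app` that covers `users`, or None if off-table."""
    scales = sorted(s for (a, s) in OVA_SIZING if a == app and s >= users)
    return OVA_SIZING[(app, scales[0])] if scales else None

print(smallest_ova("Unified CM", 6000))
# -> the 7500-user OVA: {'vcpu': 2, 'vram_gb': 6, 'vdisk_gb': [80, 80]}
```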
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices
   DIFFERENT RULES for TRC vs. specs-based due to CPU differences
3. Max number of VMs on the same physical server
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS IO bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient
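Aspect 3 can be sketched as a capacity check under the no-oversubscription rule, together with the earlier note that a blade hosting a messaging (Unity Connection) VM must reserve a core for ESXi. This is an illustrative helper, not the full co-residency policy.

```python
def fits_on_host(vm_vcpus, physical_cores, has_messaging_vm=False):
    """True if the VMs' reserved vCPUs fit the host's physical cores.

    vm_vcpus         -- list of vCPU reservations, one per VM
    physical_cores   -- physical cores on the host
    has_messaging_vm -- reserve one core for ESXi when Unity Connection is present
    """
    usable = physical_cores - (1 if has_messaging_vm else 0)
    return sum(vm_vcpus) <= usable

# Dual E5640 C210 M2 (8 cores): four 2-vCPU VMs fit...
print(fits_on_host([2, 2, 2, 2], 8))                         # True
# ...but not once a messaging VM forces an ESXi core reservation
print(fits_on_host([2, 2, 2, 2], 8, has_messaging_vm=True))  # False
```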
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party. E.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.

[Figure: four servers, each hosting UC VMs alongside VMs for VMware vCenter, Nexus 1000V VSM, a Solutions Plus / CTDP app, and an unaffiliated 3rd-party app. Different blades in the same chassis: OK. Same blade, same chassis: not OK]
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  Why? Usually due to CPU model/speed dependencies

[Figure: the C200 M2 TRC1 (E5506, 2.13 GHz) runs only the UCM 1K OVA, while the C200 M2 specs-based (56xx, 2.53+ GHz) and B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based) can run the UCM 1K, 2.5K, 7.5K and 10K OVAs]
[Figure: VM packing options, 1 vCPU per physical core, with "Small"/"Medium"/"Large"/"Jumbo" VM sizes and some cores left idle.
• Dual-socket 4-core host (e.g. UCS C210 M2 TRC1 with dual E5640), 8 cores: Jumbo + 1 reserved; or mixed sizes + 1 reserved; or mixed sizes; or 2 x Large (e.g. UCM 10K); or 4 x Medium (e.g. UCM 7.5K); or 8 x Small (e.g. UCM 2.5K)
• Dual-socket 6-core host (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed), 12 cores: mixed sizes + 1 reserved; or mixed sizes; or 3 x Large (e.g. UCM 10K); or 6 x Medium (e.g. UCM 7.5K); or 12 x Small (e.g. UCM 2.5K)]
Virtual Software Switch Options

[Figure: a VM connects via vNIC to the software switch in the ESXi hypervisor, then via vmNIC / CNA (FCoE) on a UCS B200 to the LAN and SAN]

Feature | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV
Scope | Host-based (local) | Distributed | Distributed
VLAN tagging | IEEE 802.1Q | IEEE 802.1Q | IEEE 802.1Q
VLAN visibility | Local ESXi host only | All ESXi hosts | All ESXi hosts
EtherChannel | Yes | Yes | Yes
Virtual PortChannel | -- | -- | Yes
QoS marking (DSCP/CoS) | -- | -- | Yes
ACL | -- | -- | Yes
SPAN | -- | -- | Yes
RADIUS/TACACS+ | -- | -- | Yes
VM requirement | No VM needed | No VM needed | VM needed for VSM

Strongly recommended for UC on UCS B-Series.
Not required but recommended for UC on UCS C-Series.
[Figure: a Nexus 1000V VSM managing Nexus 1000V VEMs inside each ESXi host, uplinked to a physical switch (pSwitch)]

• Cisco software switch in the hypervisor
• Familiar network and server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed)
CTL packet: L3 CS3

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

[Figure: CUCM sends packets marked L2 CoS 0 / L3 CS3; the CAT6K forwards them as L2 CoS 3 / L3 CS3]
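The two `mls qos map` commands above define a small DSCP-to-CoS table; the sketch below expresses the same mapping in Python purely to make the rewrite explicit (the actual rewrite happens in switch hardware).

```python
# DSCP -> CoS, as configured on the CAT6K above:
# DSCP 24 (CS3, call signaling) -> CoS 3; DSCP 46 (EF, voice media) -> CoS 5
DSCP_TO_COS = {24: 3, 46: 5}

def l2_cos(dscp, default=0):
    """CoS value the switch would write for a given L3 DSCP marking."""
    return DSCP_TO_COS.get(dscp, default)

print(l2_cos(24))  # 3 : CS3-marked signaling (e.g. CTL packets)
print(l2_cos(46))  # 5 : EF-marked media
print(l2_cos(0))   # 0 : unmapped traffic keeps the default CoS
```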
• UCS 6100 doesn't look into the L3 IP header
• DSCP/ToS setting in the IP header is not altered by UCS
• 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3") – no-drop policy
  Rest ("match any") – Best Effort queue
• vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS

[Figure: CUCM sends L2 CoS 0 / L3 CS3 through the UCS 6100 to the CAT6K]
• UC blades: Network Adapter QoS policy set to Platinum (CoS=5, No Drop)
• Non-UC blades: Network Adapter QoS policy set to Best Effort

N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed

Caveat without N1Kv:
• All traffic types from the virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable

[Figure: CUCM → N1KV (marks L2 CoS 3 / L3 CS3) → UCS 6100 → CAT6K]
Compute layer and SAN/storage layer – Cisco SRND

[Figure: a UCS 5100 blade server chassis (running Nexus 1000V) uplinked via 4x10GE to a pair of Cisco UCS 6100 Fabric Interconnects, which connect via FC to Cisco SAN switches and FC storage (SP-A / SP-B, 3rd-party layer)]

3rd-party SAN example:
CUCM VM IOPS ~ 200
200 IOPS @ 4 KB ~ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS
• 14,000 IOPS per controller
• 4 KByte block size
14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller

Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
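The IOPS-to-throughput arithmetic in this example is a one-liner (using 1 KB = 1000 bytes, which is how the per-VM figure works out):

```python
def iops_to_mbps(iops, block_kb=4):
    """Convert an IOPS rate at a given block size to megabits per second."""
    return iops * block_kb * 8 / 1000  # KB/s -> Mbps

print(iops_to_mbps(200))    # 6.4   Mbps per CUCM VM
print(iops_to_mbps(14000))  # 448.0 Mbps per controller, under the 600 Mbps limit
```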
• All UC deployment models are supported
  No change in the current deployment models
  Base deployment models – Single Site, Centralized Call Processing, etc. – are not changing
• VM machine layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-WAN (CoW) rules and latency requirements are the same
  They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  Redundancy rules are the same
  Clustering over the WAN / latency numbers
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
• Customer-accessible:
  – UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  – UCS in general: http://www.cisco.com/go/ucs
  – Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  – FlexPods: www.cisconetapp.com
  – Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  – Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  – "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
Consolidation example: 19 UC app copies ("Small", "Medium", and "Large" VMs, 40 vCPUs total) run as either:
• 19 MCS appliances, or
• 5 virtualized servers (dual 4-core B200 M2 TRC), or
• 4 virtualized servers (dual 6-core B200 M2 specs-based), or
• 2 virtualized servers (dual 10-core B230 M2 TRC)
UC on UCS Products with Owner | UC on UCS TRC | UC on UCS Specs-based
Unified Communications Manager | |
Business Edition 6000 | C200 M2 only | Not supported
Unity Connection | |
Unified Presence | |
Unified Contact Center Express | |
Cisco Emergency Responder | |
Session Manager Edition | |
InterCompany Media Engine | |
Unified Attendant Consoles | |
Unity | |
Unified Workforce Optimization (WFO) | |
Unified Contact Center Enterprise | Planned |
Unified Intelligence Center | |
Unified Customer Voice Portal | Planned |
MediaSense | Planned |
Unified Contact Center Mgmt Portal | |
SocialMiner | |
Finesse | Planned |
Unified Email/Web Interaction Mgr | |
Prime UCMS (OM/PM/SM/SSM) | |
Webex Premise | Planned | Planned
Unified MeetingPlace | Planned | Planned
TMS/CTMS | Planned | Planned
VCS | Planned | Planned
Why virtualize your UC? Lower TCO and business agility:
• Reduce servers/storage
• Reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing
• Efficient app expansion
• Accelerated UC rollouts
• Better business continuity
• Portable/mobile VMs
Why virtualize on UCS? Additional savings and increased agility, an end-to-end solution with single support:
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and DC networks – "wire once"
• Consolidated system mgmt
• Easier service provisioning
• Tested Reference Configurations and Vblocks (Cisco options, VCE Vblock options)
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX
• Reduced server count (50-75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50-75% less)
• Reduced maintenance/support costs (~20%)
Example: 5000 users with dial tone, voicemail and Presence; 10% are Contact Center Agents.
11 non-virtualized rack servers required for UC; more for other business apps.
CAPEX/OPEX
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
Other benefits
• Lower initial investment
• Simple entry/migration to virtualized UC – Data Center expertise not required unless using the SAN option
Example: 5000 users with dial tone, voicemail and Presence; 10% are Contact Center Agents.
11 non-virtualized rack servers required for UC; more for other business apps.
(Chart: CAPEX and OPEX in $K, y-axis $0 to $3000K, versus appliance or VM count of 2, 4, 8, 10, 12, 20, 50, 100, for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC, and UCS B230 M2 TRC. Comparisons: B230 M2 vs B200 M2, and C210 M2 vs MCS 7845. Dual-site scenario with PSTN and SAN/LAN at each site.)
Assumptions:
• UC only; no other business applications included. "Spare" or "hot standby" hosts not included.
• "Server" is either an MCS appliance or a 2-vcpu-core "virtual machine".
• Dual sites: split MCS or UCS TRC servers across sites for no single point of failure – redundant sites, switching, blade chassis, rack/blade servers.
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1, and VMware Enterprise Plus Edition.
Current Offers Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX), versus 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all UC app instances (VM for Unified CM Pub, VM for Unified CM Sub, VM for Unity Cxn, VM for Unified CCX).
Server Model | TRC | CPU | RAM | Storage | Adapters
UCS B200 M2 Blade Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (RAID1) for VMware, FC SAN for UC apps | Cisco VIC
UCS B200 M2 Blade Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | Cisco VIC
UCS B230 M2 Blade Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | Diskless | Cisco VIC
UCS B440 M2 Blade Server | TRC 1 | Dual E7-4870 (40 physical cores total) | 256 GB | Diskless | Cisco VIC
UCS C260 M2 Rack-Mount Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | DAS (2x RAID5) | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware, FC SAN for UC apps | 1GbE NIC and 4G FC HBA
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 3 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | 1GbE NIC and 4G FC HBA
UCS C200 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5506 (8 physical cores total) | 24 GB | DAS (4 disks RAID10) for VMware + UC apps | 1GbE NIC
UC app | Scale ("users"*) | vCPU (cores), usually 2.53+ GHz per core required | vRAM (GB) | vDisk (GB) | Notes
Unified CM | 1000 | 2 | 4 | 1 x 80 | UCS C200 or BE6K only
Unified CM | 2500 | 1 | 2.25 | 1 x 80 | Not for use with C200/BE6K
Unified CM | 7500 | 2 | 6 | 2 x 80 |
Unified CM | 10000 | 4 | 6 | 2 x 80 |
Unity Connection | 500 | 1 | 2 | 1 x 160 |
Unity Connection | 1000 | 1 | 4 | 1 x 160 |
Unity Connection | 5000 | 2 | 4 | 1 x 200 |
Unity Connection | 10000 | 4 | 4 | 2 x 146 | Not for use with C200/BE6K
Unity Connection | 20000 | 7 | 8 | 2 x 300 |
Unified Presence | 1000 | 1 | 2 | 1 x 80 |
Unified Presence | 2500 | 2 | 4 | 1 x 80 | Not for use with C200/BE6K
Unified Presence | 5000 | 4 | 4 | 2 x 80 |
Unified CCX | 100 | 2 | 4 | 2 x 146 | UCS C200 or BE6K only
Unified CCX | 300 | 2 | 4 | 2 x 146 | Not for use with C200/BE6K
Unified CCX | 400 | 4 | 8 | 2 x 146 |
Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI, and other factors. Actual supportable user count may vary by deployment.
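A sizing table like the one above can be queried mechanically. This Python sketch is a hypothetical helper covering only the Unified CM rows shown; it picks the smallest OVA whose rated scale covers a requested user count:

```python
# (app, users) -> (vCPU, vRAM_GB, vDisks_GB), transcribed from the CUCM rows above
OVA_TABLE = {
    ("CUCM", 1000):  (2, 4,    [80]),
    ("CUCM", 2500):  (1, 2.25, [80]),
    ("CUCM", 7500):  (2, 6,    [80, 80]),
    ("CUCM", 10000): (4, 6,    [80, 80]),
}

def pick_cucm_ova(users):
    """Smallest CUCM OVA whose rated scale covers the requested user count.
    Rated scale still assumes particular BHCA/trace/encryption values."""
    for scale in sorted(u for app, u in OVA_TABLE if app == "CUCM"):
        if users <= scale:
            return scale, OVA_TABLE[("CUCM", scale)]
    raise ValueError("no single-VM OVA covers %d users" % users)

scale, (vcpu, vram, vdisks) = pick_cucm_ova(6000)  # lands on the 7500-user OVA
```

The platform restrictions in the Notes column (C200/BE6K only, etc.) would still have to be checked separately.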
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   – SAME RULES for TRC vs specs-based, UCS/HP/IBM
2. Allowed VM OVA choices
   – DIFFERENT RULES for TRC vs specs-based, due to CPU differences
3. Max number of VMs on the same physical server
   – SAME RULES for TRC vs specs-based to determine the max, but specs-based might allow more VMs
Note: DAS IO bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
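The third aspect, max VMs per physical server, reduces to a core-budget check under the no-oversubscription rule. The sketch below is a rule of thumb only (the docwiki policy is authoritative, and DAS IO limits may cap the count sooner); the function name and arguments are illustrative:

```python
def fits_on_host(vm_vcpus, physical_cores, has_messaging_vm):
    """Rule-of-thumb co-residency check: summed vCPU reservations must
    fit within the physical cores, holding one core back for ESXi when
    a messaging (Unity Connection) VM is present on the blade."""
    available = physical_cores - (1 if has_messaging_vm else 0)
    return sum(vm_vcpus) <= available

# Dual 4-core blade (8 cores): 3 x 2-vCPU VMs + 1 x 1-vCPU VM,
# with a messaging VM present -> 7 vCPUs in 7 available cores
ok = fits_on_host([2, 2, 2, 1], 8, has_messaging_vm=True)
```

Two 4-vCPU VMs plus a messaging reservation would not fit on the same 8-core blade under this rule.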
• Which apps can share the same physical server? In general any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  – But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  – NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  – Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party, e.g. N1KV, ARC, SingleWire, vCenter, File/Print, Directory, CRM/ERP, Groupware, non-CUCM TFTP, Nuance, etc.
(Diagram: each physical server hosts UC VMs only; VMs for VMware vCenter, Nexus 1KV VSM, Solutions Plus / CTDP apps, and unaffiliated 3rd-party apps run on a separate physical server.)
Different blades in the same chassis: OK. Same blade in the same chassis: not OK.
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs.
  – See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  – See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA%2FOVF_Templates%29
  – Why? Usually due to CPU model/speed dependencies.
(Diagram: which UCM OVAs run where – C200 M2 TRC1 (E5506, 2.13 GHz), C200 M2 specs-based (56xx, 2.53+ GHz), and B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based) – e.g. the UCM 1K OVA is limited to C200/BE6K, while the UCM 2.5K/7.5K/10K OVAs require the faster CPUs.)
(Diagram: packing VM sizes onto physical cores, one vCPU per physical core.)
Dual-socket 4-core host (8 cores), e.g. UCS C210 M2 TRC1 with dual E5640:
• Jumbo VM + 1 core reserved, or
• mixed sizes + 1 core reserved, or
• mixed sizes, or
• 2x Large (e.g. UCM 10K), or
• 4x Medium (e.g. UCM 7.5K), or
• 8x Small (e.g. UCM 2.5K)
Dual-socket 6-core host (12 cores), e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed:
• mixed sizes + 1 core reserved, or
• mixed sizes, or
• 3x Large (e.g. UCM 10K), or
• 6x Medium (e.g. UCM 7.5K), or
• 12x Small (e.g. UCM 2.5K)
VM sizes: "Small", "Medium", "Large", "Jumbo".
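The per-host ratios above follow directly from one vCPU per physical core with no oversubscription, optionally holding a core back for ESXi. A minimal sketch (hypothetical helper) reproduces both columns:

```python
def max_vms_per_host(physical_cores, vm_vcpu, reserve_core=False):
    """How many same-size VMs fit on a host at 1 vCPU : 1 physical core
    (no oversubscription), optionally reserving one core for ESXi."""
    usable = physical_cores - (1 if reserve_core else 0)
    return usable // vm_vcpu

# 8-core host:  2 Large (4 vCPU), 4 Medium (2 vCPU), 8 Small (1 vCPU)
eight = [max_vms_per_host(8, v) for v in (4, 2, 1)]
# 12-core host: 3 Large, 6 Medium, 12 Small
twelve = [max_vms_per_host(12, v) for v in (4, 2, 1)]
```

With a reserved core, an 8-core host drops to 1 Large plus smaller VMs in the remaining cores, matching the "mixed sizes + 1 reserved" options.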
Virtual Software Switch Options
VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV
Host-based (local) | Distributed | Distributed
IEEE 802.1Q VLAN tagging | IEEE 802.1Q VLAN tagging | IEEE 802.1Q VLAN tagging
VLANs only visible to local ESXi host | VLANs visible to all ESXi hosts | VLANs visible to all ESXi hosts
EtherChannel | EtherChannel | EtherChannel
-- | -- | Virtual PortChannel
-- | -- | QoS marking (DSCP/CoS)
-- | -- | ACL
-- | -- | SPAN
-- | -- | RADIUS/TACACS+
No VM needed | No VM needed | VM needed for VSM
(Diagram: VM vNIC → software switch in the ESXi hypervisor → vmNIC → CNA → FCoE → LAN/SAN, on a UCS B200.)
Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
• Cisco software switch in the hypervisor
• Familiar network and server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
(Diagram: a Nexus 1000V VEM runs in each ESXi host, managed by the Nexus 1000V VSM and uplinked to the physical switch (pSwitch).)
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values; the pSwitch (CAT6K etc.) can map L3 DSCP to L2 CoS (if needed):
dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
(Example: a CTL packet leaves CUCM marked L3 CS3 with L2 CoS 0; the CAT6K rewrites L2 CoS to 3.)
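The two `mls qos map dscp-cos` entries above match the conventional default of taking the three class-selector bits of the DSCP as the CoS. A Python sketch of that mapping (illustrative only, not a Cisco API) shows why CS3 (24) lands on CoS 3 and EF (46) on CoS 5:

```python
def dscp_to_cos(dscp, overrides=None):
    """Map an L3 DSCP value to an L2 CoS value: the default takes the
    top three (class-selector) bits, i.e. dscp >> 3; per-platform
    overrides mirror 'mls qos map dscp-cos' entries."""
    overrides = overrides or {}
    return overrides.get(dscp, dscp >> 3)

# CS3 (24) -> CoS 3 for UC signaling, EF (46) -> CoS 5 for voice bearer
cat6k_map = {24: 3, 46: 5}
sig = dscp_to_cos(24, cat6k_map)
voice = dscp_to_cos(46, cat6k_map)
```

Here the explicit entries coincide with the bit-shift default; the commands make the mapping deterministic regardless of platform defaults.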
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  – FCoE ("match cos 3") – no-drop policy
  – Rest ("match any") – best-effort queue
• The vSwitch and UCS 6100 can not map L3 DSCP to L2 CoS
(Path: CUCM → UCS 6100 → CAT6K; the packet leaves CUCM with L2 CoS 0, L3 CS3.)
• UC blades: network adapters' QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapters' QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without N1Kv:
• All traffic types from the virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
(Path: CUCM → N1KV → UCS 6100 → CAT6K, with L2 CoS 3 and L3 CS3 carried through.)
(Diagram: compute layer – Cisco UCS 6100 Fabric Interconnect and UCS 5100 blade server with Nexus 1000V, 4x10GE uplinks; SAN/storage layer per the Cisco SRND – Cisco SAN switch, FC links to 3rd-party FC storage with controllers SP-A and SP-B.)
3rd-party SAN example:
• CUCM VM IOPS ~ 200; 200 IOPS x 4 KB ~ 6.4 Mbps per VM
• Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
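The throughput figures in the SAN example above are straight IOPS times block-size arithmetic. A quick check in Python (decimal units, 1 KB = 1000 bytes, as the slide's round numbers imply):

```python
def iops_to_mbps(iops, block_kb):
    """Approximate throughput implied by an IOPS figure at a fixed
    block size, in megabits per second (decimal units)."""
    return iops * block_kb * 8 / 1000.0  # KB/s -> Mb/s

per_vm = iops_to_mbps(200, 4)        # ~6.4 Mbps per CUCM VM
per_ctrl = iops_to_mbps(14000, 4)    # ~448 Mbps per controller
```

448 Mbps per controller sits comfortably under a 4 Gbps FC link, which is the slide's conclusion that one FC interface can carry a whole array (with four interfaces for HA).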
• All UC deployment models are supported
  – No change in the current deployment models
  – Base deployment models – single site, centralized call processing, etc. – are not changing
• VM layout on a blade and/or chassis
  – Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  – No rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-the-WAN rules and latency requirements are the same
  – They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
  – Redundancy rules are the same
  – Clustering-over-the-WAN latency numbers
  – Mega cluster supported in 8.5
  – Determine quantity/role of nodes
  – For HA: no design checks validating proper placement of primary and secondary servers
  – CUCCE private network requirement
• Mixed clusters of HP, IBM, and UCS are supported
  – Subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than a Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
  – MOH live audio stream
  – Tape backup/floppy
• New factors to consider for end-to-end QoS design/configuration
From CUCM system release | To Unified CM 8.5 | To UC System 8.5
4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | NA
5.1(3) | 2-hop thru 7.1(3) | NA
6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
6.1(2) | 2-hop thru 6.1(x)/7.1(x) | NA
6.1(3) | 2-hop thru 6.1(x)/7.1(x) | NA
6.1(4) | Single hop | NA
6.1(5) | Single hop | NA
7.0(1) | 2-hop thru 7.1(x) | 2-hop
7.1(2) | 2-hop thru 7.1(x) | 2-hop
7.1(3) | Single hop | Single hop; multi-stage/BWC supported
7.1(5) | Single hop | NA
8.0(1) | Single hop | Single hop; multi-stage/BWC supported
8.0(2), 8.0(3) | Single hop | NA
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially. For example:
  – Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  – vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only
ESXi Feature | CUCM | CUC | CUP | CCX
Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
VMware vMotion | Y (C) | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
Boot From SAN | Y (C) | Y (C) | Y (C) | Y (C)
VMware DRS | No | No | No | No
Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable:
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact
  – Primary/secondary on a different blade, chassis, or site
  – On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation
Deployment | Server hardware | Shared storage | VMware | Application
UC on UCS Tested Reference Configuration | Cisco | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
UC on UCS Specs-based (including Vblock option) | Cisco | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
3rd-party VMware (HP, IBM) Specs-based | 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances | Cisco | NA | NA | Cisco
Customer-provided MCS 7800 equivalent | 3rd-party | NA | NA | Cisco
Customer Example – Primary Data Center
| OLD | NEW
Hardware nodes | 62 physical servers (EU + HQ clusters) | approx. 14
Software version | 6.1(5) & 8.5(1) | 8.5(1)
Ucxn version | 4.2(1) | 8.5(1) – 3 pairs – virtualized
CER | 2.0 / 7.0 | 8.6 – virtualized
(Rack layout: CM SUB, CM PUB, CM SUB, CM SUB; MOH, TFTP, CER, CM SUB; CER, UCxn, UCxn, CM SUB.)
Deployment Model – Data Center 1
(CM SUB, CM PUB, CM SUB, CM SUB; MOH, TFTP, UCxn, UCxn; CER, UCxn, UCxn, UCxn.)
Deployment Model – Data Center 2
(CM SUB, CM PUB, CM SUB, CM SUB; MOH, TFTP, UCxn, UCxn; CER, UCxn, UCxn, UCxn.)
Customer Design
(Diagram: PSTN and IP WAN reached via a CUSP SIP proxy. Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express run on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch at the main site (11K phones); Cisco UCS C210 or C200 general-purpose rack-mount servers serve the other sites (3K phones, 400 phones).)
HQ Details
(Blade layout, two 4-core CPUs per blade, blades 1-6 hosting the VM OVAs: CUCM – PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2; Messaging – UCxn-1 and UCxn-2 active, with cores left idle for UCxn; Presence – CUP-1, CUP-2; Contact Center – UCCX-1, UCCX-2.)
"Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
(Rack-server layout, two 4-core CPUs per server. Rack servers 1-3: PUB, SUB-1, SUB-2, TFTP-1, TFTP-2, UCxn-1, UCxn-2, CCX-1, CCX-2, CUP, with cores left idle for UCxn.)
Branch Office Details
(Rack servers 1-2: PUB/TFTP, SUB, CCX-1, CCX-2, CUP, UCxn-1, with cores left idle for UCxn. CUCM, Messaging, Contact Center, and Presence VM OVAs.)
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
  – Cable distance ~ 2 km
  – Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
(Diagram: host-side stacks. DAS: application → file system → volume manager → SCSI device driver → SCSI bus adapter → SCSI device. iSCSI: application → file system → volume manager → SCSI device driver → iSCSI driver → TCP/IP stack → NIC, block IO over an IP network. FC SAN: application → file system → volume manager → SCSI device driver → FC HBA, block IO over the FC SAN to the storage media.)
NAS/SAN Array Best Practices for UC
(Example: HDDs 1-5, each 450 GB 15K RPM, form a single RAID5 group (1.4 TB usable space) carved into LUN 1 (720 GB) and LUN 2 (720 GB). LUN 1 hosts UC VMs 1-3: PUB, SUB1, UCCX1; LUN 2 hosts UC VMs 4-6: UCCX2, CUP1, CUP2.)
• 4 to 8 UC VMs per LUN (max dependent on sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 14
UC on UCS Products with Owner UC on UCS TRC UC on UCS Specs-based
Unified Communications Manager
Business Edition 6000 C200 M2 only Not supported
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco Emergency Responder
Session Manager Edition
InterCompany Media Engine
Unified Attendant Consoles
Unity
Unified Workforce Optimization (WFO)
Unified Contact Center Enterprise Planned
Unified Intelligence Center
Unified Customer Voice Portal Planned
MediaSense Planned
Unified Contact Center Mgmt Portal
SocialMiner
Finesse Planned
Unified EmailWeb Interaction Mgr
Prime UCMS (OMPMSMSSM)
Webex Premise Planned Planned
Unified MeetingPlace Planned Planned
TMSCTMS Planned Planned
VCS Planned Planned
copy 2010 Cisco andor its affiliates All rights reserved 15
Why virtualize
your UC
Why virtualize
on UCS
Lower TCO
Business
Agility
Additional
Savings and
Increased
Agility
End to End
Solution
Single
Support
Tested Reference Configurations
Vblocks
Cisco options
VCE Vblock options
Infrastructure Simplification (Cables Adapters Switching)
Converge Communications and DC Networks ndash ldquowire oncerdquo
Consolidates System Mgmt
Easier Service Provisioning
Reduce ServersStorage
Reduced Power Cooling Cabling Space Weight
Investment Leverage amp Easy Server Repurposing
Efficient App Expansion
Accelerated UC rollouts
Better Business Continuity
PortableMobile VMs
UCS is the industryrsquos only
fully unified and virtualization-
aware compute solution
copy 2010 Cisco andor its affiliates All rights reserved 16
CAPEX
bull Reduced Server Count (50-75)
bull NetworkStorage Consolidation (50+)
bull Reduced Cabling (50+)
OPEX
Reduced Rack amp Floor Space (36)
Reduced PowerCooling (20+)
Fewer Servers to Manage (50-75 less)
Reduced MaintenanceSupport Costs (~20)
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 17
CAPEXOPEX
bull Similar Consolidation and Operational EfficiencyScale benefits as with UC on UCS B-series
Other Benefits
Lower initial investment
Simple entrymigration to virtualized UC ndash Data Center expertise not required unless using SAN option
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
copy 2010 Cisco andor its affiliates All rights reserved 18
$-
$500
$1000
$1500
$2000
$2500
$3000
2 4 8 10 12 20 50 100
UCS B230 M2 TRC OPEX ($K)
UCS B230 M2 TRC CAPEX ($K)
UCS B200 M2 TRC OPEX ($K)
UCS B200 M2 TRC CAPEX ($K)
UCS C210 M2 TRC OPEX ($K)
UCS C210 M2 TRC CAPEX ($K)
MCS 7845-I3 OPEX ($K)
MCS 7845-I3 CAPEX ($K)
Assumptions
bull UC only no other business applications included ldquoSparerdquo or ldquohot standbyrdquo hosts not included
bull ldquoServerrdquo is either an MCS Appliance or a 2-vcpu-core ldquoVirtual Machinerdquo
bull Dual sites split MCS or UCS TRC servers across sites no single point of failure ndash redundant sites switching blade chassis rackblade servers
bull Using list pricing for MCS-7845-I3-IPC1 UCS-C210M2-VCD2 UCS-B200M2-VCS1 UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Appliance or VM Count
PSTN
2104
2104
2104
2104
SANLAN
Dual Site Scenario PSTN
2104
2104
2104
2104
SANLAN
hellip
hellip
hellip
hellip
UC on UCS
B200 B230
UC on UCS
C210
MCS 7845
B230 M2
vs B200 M2
C210 M2
vs MCS 7845
20 copy 2010 Cisco andor its affiliates All rights reserved
Current Offers Technical Overview
21 copy 2010 Cisco andor its affiliates All rights reserved
Eg 4 physical servers
Each MCS 7800 hosts only
one UC app instance
4 virtual servers (VMrsquos) on 1 physical server
Single virtualized server with total 8 physical
cores hosts all UC app instances
Unity
Connection
Unified CM
VM for
Unified
CM
Sub
Unified CCX
VM for
Unity
Cxn
VM for
Unified
CCX
VM for
Unified
CM
Pub
or
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
copy 2010 Cisco andor its affiliates All rights reserved 23
UC app Scale
(ldquousersrdquo)
vCPU (cores) Usually 253+ GHz
per core required
vRAM
(GB)
vDisk
(GB)
Notes
Unified
CM
1000 2 4 1 x 80 UCS C200 or BE6K only
2500 1 225 1 x 80 Not for use with C200BE6K
7500 2 6 2 x 80
10000 4 6 2 x 80
Unity
Connection
500 1 2 1 x 160
1000 1 4 1 x 160
5000 2 4 1 x 200
10000 4 4 2 x 146 Not for use with C200BE6K
20000 7 8 2 x 300
Unified
Presence
1000 1 2 1 x 80
2500 2 4 1 x 80 Not for use with C200BE6K
5000 4 4 2 x 80
Unified CCX 100 2 4 2 x 146 UCS C200 or BE6K only
300 2 4 2 x 146 Not for use with C200BE6K
400 4 8 2 x 146
Not exhaustive subject to change see wwwciscocomgouc-virtualized for latest
ie user count for particular values of BHCA trace level encryption CTI and other factors Actual
supportable user count may vary by deployment
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   SAME RULES for TRC vs. specs-based (UCS/HP/IBM)
2. Allowed VM OVA choices
   DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on the UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps. E.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
[Diagram: four hosts, each running UC VMs alongside a VM for VMware vCenter, a VM for the Nexus 1KV VSM, a VM for a Solutions Plus / CTDP app, and a VM for an unaffiliated 3rd-party app]
Different blades in the same chassis: OK
Same blade, same chassis: not OK
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs.
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  Why? Usually due to CPU model/speed dependencies.
[Diagram: UCM OVA compatibility by platform — C200 M2 TRC1 (E5506, 2.13 GHz); C200 M2 specs-based (56xx, 2.53+ GHz); B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based) — showing which of the UCM 1K / 2.5K / 7.5K / 10K OVAs each platform supports. Per the sizing table, the C200 M2 TRC1 runs only the UCM 1K OVA.]
[Diagram: VM placement examples, 1 vCPU = 1 physical core]
Dual-socket 4-core (8 physical cores), e.g. UCS C210 M2 TRC1 with dual E5640:
  Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes, or
  2:1 Large (e.g. UCM 10K), or 4:1 Medium (e.g. UCM 7.5K), or 8:1 Small (e.g. UCM 2.5K)
Dual-socket 6-core (12 physical cores), e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed:
  Mixed sizes + 1 reserved, or mixed sizes, or
  3:1 Large (e.g. UCM 10K), or 6:1 Medium (e.g. UCM 7.5K), or 12:1 Small (e.g. UCM 2.5K)
VM sizes: "Small", "Medium", "Large", "Jumbo"
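The packing ratios in the diagram all follow from one rule: 1 vCPU maps to 1 physical core with no oversubscription, minus one core reserved for ESXi when a messaging VM is on the blade. A small sketch of that arithmetic (function name is illustrative):

```python
def max_vms_per_server(physical_cores, vcpu_per_vm, reserve_esxi_core=False):
    """1 vCPU = 1 physical core, no oversubscription (per the co-residency policy).

    reserve_esxi_core models the rule that a blade hosting a messaging
    (Unity Connection) VM must leave one core for ESXi.
    """
    usable = physical_cores - (1 if reserve_esxi_core else 0)
    return usable // vcpu_per_vm

# Dual-socket 4-core (8 cores): 2:1 Large (UCM 10K = 4 vCPU),
# 4:1 Medium (UCM 7.5K = 2 vCPU), 8:1 Small (UCM 2.5K = 1 vCPU).
# Dual-socket 6-core (12 cores): 3:1 / 6:1 / 12:1.
```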
Virtual Software Switch Options
[Diagram: VM vNIC → software switch in the ESXi hypervisor → vmNIC/CNA → FCoE to LAN and SAN, e.g. on a UCS B200]

Feature | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV
Scope | Host-based (local) | Distributed | Distributed
IEEE 802.1Q VLAN tagging | Yes | Yes | Yes
VLAN visibility | Local ESXi host only | All ESXi hosts | All ESXi hosts
EtherChannel | Yes | Yes | Yes
Virtual PortChannel | -- | -- | Yes
QoS marking (DSCP/CoS) | -- | -- | Yes
ACL | -- | -- | Yes
SPAN | -- | -- | Yes
RADIUS/TACACS+ | -- | -- | Yes
VM requirement | No VM needed | No VM needed | VM needed for VSM

Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
[Diagram: Nexus 1000V VEMs inside the ESXi hosts, managed by the Nexus 1000V VSM, uplinked to the pSwitch]
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values.
The pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed):

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

[Diagram: a CTL packet leaves CUCM marked L3 DSCP CS3 but L2 CoS 0; the CAT6K rewrites the frame so downstream hops carry L2 CoS 3 / L3 CS3]
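The two `mls qos map` commands amount to a two-entry table. A toy sketch of the same mapping (everything unmapped defaults to CoS 0 here; a real switch has a fuller default map):

```python
# DSCP-to-CoS entries configured on the CAT6K above:
# CS3 (DSCP 24) -> CoS 3 for signaling, EF (DSCP 46) -> CoS 5 for voice media.
DSCP_TO_COS = {24: 3, 46: 5}

def cos_for_dscp(dscp, default=0):
    """Return the L2 CoS a frame would be rewritten to for a given L3 DSCP."""
    return DSCP_TO_COS.get(dscp, default)
```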
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3") – no-drop policy
  Rest ("match any") – best-effort queue
The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS.
[Diagram: CUCM → UCS 6100 → CAT6K; frames leave with L2 CoS 0 while carrying L3 CS3]
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv, a caveat:
• All traffic types from the virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
[Diagram: CUCM → N1KV → UCS 6100 → CAT6K, with frames carrying L2 CoS 3 / L3 CS3 end to end]
Compute layer and SAN/storage layer – Cisco SRND
[Diagram: UCS 6100 Fabric Interconnects with 4x10GE links to a UCS 5100 blade chassis running the Nexus 1000V, FC links to Cisco SAN switches, and FC storage with SP-A/SP-B; the storage itself is a 3rd-party layer]

3rd-party SAN example:
CUCM VM IOPS ~ 200; 200 IOPS x 4 KB ~ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS
• 14,000 IOPS per controller
• 4 KByte block size
14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
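The slide's arithmetic, made explicit (4 KB is taken as 4,000 bytes, which is what makes 200 IOPS come out to 6.4 Mbps):

```python
def iops_to_mbps(iops, block_bytes=4000):
    """Throughput implied by an IOPS figure at a given block size."""
    return iops * block_bytes * 8 / 1_000_000

per_vm = iops_to_mbps(200)            # one CUCM VM at ~200 IOPS -> 6.4 Mbps
per_controller = iops_to_mbps(14000)  # 14,000 IOPS per controller -> 448 Mbps
# Both figures sit well inside a 4 Gbps FC link, which is why one
# interface covers a whole array and HA needs four in total.
```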
• All UC deployment models are supported
  No change to the current deployment models
  Base deployment models – Single Site, Centralized Call Processing, etc. – are not changing
• VM layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules / latency requirements are the same
  Does not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  Redundancy rules are the same
  Clustering over the WAN / latency numbers
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validating proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
  Subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server
  - MOH live audio stream
  - Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
FROM CUCM
System Release | To Unified CM 8.5 | To UC System 8.5
4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3) | 2-hop thru 7.1(3) | N/A
6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
6.1(2) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(3) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(4) | Single hop | N/A
6.1(5) | Single hop | N/A
7.0(1) | 2-hop thru 7.1(x) | 2-hop
7.1(2) | 2-hop thru 7.1(x) | 2-hop
7.1(3) | Single hop | Single hop; multi-stages/BWC supported
7.1(5) | Single hop | N/A
8.0(1) | Single hop | Single hop; multi-stages/BWC supported
8.0(2), 8.0(3) | Single hop | N/A
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Features | CUCM | CUC | CUP | CCX
Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
VMware vMotion | Y (C) | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
Boot From SAN | Y (C) | Y (C) | Y (C) | Y (C)
VMware DRS | No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable.
The UC apps' redundancy rules are the same.
Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact:
  Primary/secondary on different blades, chassis, sites
  On the same blade, mix Subs with TFTP/MoH rather than just Subs
Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation
(Columns: Server Hardware | Shared Storage | VMware | Application)
UC on UCS Tested Reference Configuration: Cisco | 3rd-party | Cisco | Cisco (VCE supports the VCE-sourced layers when delivered through VCE)
UC on UCS Specs-based (including Vblock option): Cisco | 3rd-party | Cisco | Cisco (VCE supports the VCE-sourced layers when delivered as a Vblock)
3rd-party VMware (HP, IBM) Specs-based: 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances: Cisco | N/A | N/A | Cisco
Customer-provided MCS 7800 equivalent: 3rd-party | N/A | N/A | Cisco
Customer Example – Primary Data Center (OLD → NEW)
Hardware nodes: 62 physical servers (EU + HQ clusters) → approx. 14
Software version: 6.1(5) & 8.5(1) → 8.5(1)
UCxn version: 4.2(1) → 8.5(1), 3 pairs, virtualized
CER: 2.0 / 7.0 → 8.6, virtualized
[Diagram: CM PUB, CM SUBs, MOH/TFTP, CER and UCxn VMs across the primary data center blades]

Deployment Model – Data Center 1
[Diagram: CM PUB, CM SUBs, MOH/TFTP, CER and UCxn VMs distributed across the blades]

Deployment Model – Data Center 2
[Diagram: CM PUB, CM SUBs, MOH/TFTP, CER and UCxn VMs distributed across the blades]
Customer Design
[Diagram: PSTN and IP WAN terminating on a CUSP SIP proxy; Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express running on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a UCS 6100 Fabric Interconnect switch (11K phones), plus Cisco UCS C210 or C200 general-purpose rack-mount servers (3K phones and 400 phones)]
HQ Details
[Diagram: VM placement across eight blade slots, each with two 4-core CPUs. The blades host the CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1/TFTP-2), messaging VM OVAs (UCxn-1 and UCxn-2 active, with cores left idle for UCxn), presence VM OVAs (CUP-1, CUP-2) and contact center VM OVAs (UCCX-1, UCCX-2)]
"Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V and VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.
[Diagram: data center rack-server placement. Three rack servers, each with two 4-core CPUs, host the CUCM VM OVAs (PUB, TFTP-1/TFTP-2, SUB-1/SUB-2), messaging VM OVAs (UCxn-1, UCxn-2, with cores left idle for UCxn), presence (CUP) and contact center (CCX-1, CCX-2) VM OVAs]
Branch Office Details
[Diagram: two rack servers, each with two 4-core CPUs, host PUB/TFTP, SUB, CCX-1, CCX-2 and CUP, with cores left idle for UCxn]
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
  Cable distance ~2 km; popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
[Diagram: storage stacks compared. DAS: application → file system → volume manager → SCSI device driver → SCSI bus adapter. iSCSI: SCSI device driver → iSCSI driver → TCP/IP stack → NIC, carrying block I/O over IP. FC SAN: SCSI device driver → volume manager → FC HBA into the SAN]
NAS/SAN Array Best Practices for UC
[Diagram: five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) holding PUB, SUB1 and UCCX1 (UC VMs 1–3) and LUN 2 (720 GB) holding UCCX2, CUP1 and CUP2 (UC VMs 4–6)]
• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
DAS Example: UCS C210 M2 TRC1
[Diagram: ten 146 GB 15K RPM HDDs. HDDs 1–2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3–10 form a single RAID5 volume (1022 GB after RAID overhead) carrying a VMFS filestore (947 GB after VMFS overhead) that holds e.g. PUB, UCCX1 and CUP1 (UC VMs 1, 3 and 5)]
Notes:
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
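The capacity figures in these DAS examples follow standard RAID arithmetic. A sketch of the raw math only (it deliberately ignores the VMFS formatting overhead the slides also subtract):

```python
def raid_usable_gb(disk_gb, disks, level):
    """Raw usable capacity for the RAID levels used in the DAS examples."""
    if level == "raid1":    # mirrored pair: one disk's worth of space
        return disk_gb
    if level == "raid5":    # one disk's worth of capacity lost to parity
        return (disks - 1) * disk_gb
    if level == "raid10":   # mirrored stripes: half the disks usable
        return (disks // 2) * disk_gb
    raise ValueError(f"unsupported level: {level}")

# C210 M2 TRC1: 8 x 146 GB in RAID5 -> 1022 GB, matching the slide.
# C200 M2 TRC1 (BE6K): 4 x 1 TB in RAID10 -> 2000 GB (2 TB).
```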
DAS Example: UCS C200 M2 TRC1 for BE6K
[Diagram: four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) carrying a VMFS filestore (1.8 TB after VMFS overhead) that holds e.g. PUB, UCCX1, CUP1 (UC VMs 1, 3 and 5) and the vSphere ESXi image]
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on the supported OVAs for download
  The OVA reserves cores, RAM, etc. for the VMs
  Basic rule of thumb: fill up the blade until it is out of capacity
  If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
Virtual Machine Sizing
• Virtual machine virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release. For example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes product, product version, VMware hardware version and template version
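Given the naming convention just described (product, product version, VMware virtual hardware version, template version), a filename like CUCM_8.6_vmv7_v1.5.ova can be split mechanically. A hypothetical parser for that convention:

```python
import re

# Matches e.g. "CUCM_8.6_vmv7_v1.5.ova": product, product version,
# VMware virtual hardware version, template version.
OVA_NAME = re.compile(
    r"^(?P<product>[A-Za-z0-9]+)_(?P<version>[\d.]+)"
    r"_vmv(?P<vm_hw>\d+)_v(?P<template>[\d.]+)\.ova$"
)

def parse_ova_name(filename):
    """Return the template's fields as a dict, or None if the name doesn't match."""
    m = OVA_NAME.match(filename)
    return m.groupdict() if m else None
```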
http://tools.cisco.com/cucst
• Customer-accessible:
  UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
Why virtualize your UC? Why virtualize on UCS?
• Lower TCO; business agility
• Additional savings and increased agility
• End-to-end solution; single support
• Tested Reference Configurations; Vblocks (Cisco options; VCE Vblock options)
• Infrastructure simplification (cables, adapters, switching)
• Converge communications and data center networks – "wire once"
• Consolidated system management; easier service provisioning
• Reduced servers/storage; reduced power, cooling, cabling, space, weight
• Investment leverage & easy server repurposing
• Efficient app expansion; accelerated UC rollouts
• Better business continuity; portable/mobile VMs
UCS is the industry's only fully unified and virtualization-aware compute solution.
CAPEX
• Reduced server count (50–75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50–75% less)
• Reduced maintenance/support costs (~20%)
Example: 5,000 users; dial tone, voicemail and Presence; 10% are contact center agents. 11 non-virtualized rack servers required for UC, more for other business apps.
CAPEX/OPEX
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
Other benefits:
• Lower initial investment
• Simple entry/migration to virtualized UC – data center expertise not required unless using the SAN option
Example: 5,000 users; dial tone, voicemail and Presence; 10% are contact center agents. 11 non-virtualized rack servers required for UC, more for other business apps.
[Chart: CAPEX and OPEX ($K) vs. appliance or VM count (2 to 100) for MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC and UCS B230 M2 TRC]
Assumptions:
• UC only; no other business applications included. "Spare" or "hot standby" hosts not included.
• "Server" is either an MCS appliance or a 2-vCPU-core "virtual machine".
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure – redundant sites, switching, blade chassis, rack/blade servers.
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition.
[Diagram: dual-site scenario – each site with PSTN access and SAN/LAN connectivity, comparing UC on UCS B200/B230 and C210 against MCS 7845 (B230 M2 vs. B200 M2; C210 M2 vs. MCS 7845)]
Current Offers: Technical Overview
E.g. 4 physical servers, where each MCS 7800 hosts only one UC app instance (Unified CM, Unity Connection, Unified CCX), versus 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores in total hosts all of the UC app instances (VMs for Unified CM Pub, Unified CM Sub, Unity Cxn and Unified CCX).
Server Model | TRC | CPU | RAM | Storage | Adapters
UCS B200 M2 Blade Server
TRC 1: Dual E5640 (8 physical cores total) | 48 GB | DAS (RAID1) for VMware, FC SAN for UC apps | Cisco VIC
TRC 2: Dual E5640 (8 physical cores total) | 48 GB | Diskless | Cisco VIC
UCS B230 M2 Blade Server
TRC 1: Dual E7-2870 (20 physical cores total) | 128 GB | Diskless | Cisco VIC
UCS B440 M2 Blade Server
TRC 1: Dual E7-4870 (40 physical cores total) | 256 GB | Diskless | Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
copy 2010 Cisco andor its affiliates All rights reserved 23
UC app Scale
(ldquousersrdquo)
vCPU (cores) Usually 253+ GHz
per core required
vRAM
(GB)
vDisk
(GB)
Notes
Unified
CM
1000 2 4 1 x 80 UCS C200 or BE6K only
2500 1 225 1 x 80 Not for use with C200BE6K
7500 2 6 2 x 80
10000 4 6 2 x 80
Unity
Connection
500 1 2 1 x 160
1000 1 4 1 x 160
5000 2 4 1 x 200
10000 4 4 2 x 146 Not for use with C200BE6K
20000 7 8 2 x 300
Unified
Presence
1000 1 2 1 x 80
2500 2 4 1 x 80 Not for use with C200BE6K
5000 4 4 2 x 80
Unified CCX 100 2 4 2 x 146 UCS C200 or BE6K only
300 2 4 2 x 146 Not for use with C200BE6K
400 4 8 2 x 146
Not exhaustive subject to change see wwwciscocomgouc-virtualized for latest
ie user count for particular values of BHCA trace level encryption CTI and other factors Actual
supportable user count may vary by deployment
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current business continuity and disaster recovery strategies are still applicable.
The UC apps' redundancy rules are the same.
Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact:
• Primary/secondary on different blade / chassis / site
• On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
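The distribution rule above can be expressed as a simple placement check. This is a minimal sketch; the node names and blade map are illustrative, not from a real deployment:

```python
# Hypothetical check that primary/secondary UC node pairs never share a blade
# (per the rule: primary/secondary on different blade/chassis/site).

def redundancy_violations(placement, pairs):
    """placement: node -> blade; pairs: (primary, secondary) tuples.
    Returns the pairs that land on the same blade."""
    return [(p, s) for p, s in pairs if placement[p] == placement[s]]

placement = {
    "CUCM-PUB": "blade1", "CUCM-SUB1": "blade2",
    "UCxn-1": "blade3", "UCxn-2": "blade4",
}
pairs = [("CUCM-PUB", "CUCM-SUB1"), ("UCxn-1", "UCxn-2")]
print(redundancy_violations(placement, pairs))  # -> [] (no violations)
```

The same check extends naturally to chassis or site by swapping in a node-to-chassis or node-to-site map.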
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
Customer Example – Primary Data Center
OLD → NEW
• Hardware nodes: 62 physical servers (EU + HQ clusters) → approx. 14
• Software version: 6.1.5 & 8.5.1 → 8.5.1
• Unity Connection version: 4.2.1 → 8.5.1 – 3 pairs – virtualized
• CER: 2.0 / 7.0 → 8.6 – virtualized
(Blade layout: CM SUB, CM PUB, CM SUB, CM SUB / MOH/TFTP, CER, CM SUB / CER, UCxn, UCxn, CM SUB)
Deployment Model – Data Center 1
(Blade layout: CM SUB, CM PUB, CM SUB, CM SUB / MOH/TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn)
Deployment Model – Data Center 2
(Blade layout: CM SUB, CM PUB, CM SUB, CM SUB / MOH/TFTP, UCxn, UCxn / CER, UCxn, UCxn, UCxn)
Customer Design
(Diagram: PSTN and IP WAN connecting three sites – 11K phones, 3K phones, and 400 phones – through a SIP proxy (CUSP). Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express run on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch, plus Cisco UCS C210 or C200 general-purpose rack-mount servers at the smaller sites.)
HQ Details
(Blade layout diagram: eight blade slots, each blade with two 4-core CPUs)
• CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2
• Messaging VM OVAs: UCxn-1 (Active) and UCxn-2 (Active) – leave remaining cores idle for UCxn
• Contact Center VM OVAs: UCCX-1, UCCX-2
• Presence VM OVAs: CUP-1, CUP-2
• "Spare" blade slots (7 and 8) available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications
(Rack-server site layout diagram; each server has two 4-core CPUs)
• Rack Server 1: PUB, TFTP-1, SUB-1, CCX-2 – leave remaining cores idle for UCxn
• Rack Server 2: UCxn-1, TFTP-2, SUB-2 – leave remaining cores idle for UCxn
• Rack Server 3: UCxn-2, CCX-1, CUP
Branch Office Details
(Rack-server layout diagram)
• Rack Server 1: PUB/TFTP, CCX-1, CUP, UCxn-1 – leave remaining cores idle for UCxn
• Rack Server 2: SUB, CCX-2, UCxn-1 – leave remaining cores idle for UCxn
(Legend: CUCM VM OVAs, Messaging VM OVAs, Contact Center VM OVAs, Presence VM OVAs)
• DAS: Rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI
• iSCSI: Access SCSI storage media using an IP network
• Fibre Channel: The most popular SAN protocol today
Cable distance ~2 km
Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
(Diagram: protocol stacks for DAS, iSCSI, and FC SAN. Each computer system runs Application / File System / Volume Manager / SCSI device driver. DAS uses a SCSI bus adapter to local storage; iSCSI adds an iSCSI driver and TCP/IP stack over a NIC, carrying block I/O across the IP network to an iSCSI layer on the storage side; FC SAN uses an FC HBA into the Fibre Channel fabric. In each case: host server → storage transport → storage media.)
NAS/SAN Array Best Practices for UC
• 5 × HDD (450 GB, 15K RPM each) in a single RAID5 group (1.4 TB usable space)
• Two LUNs of 720 GB: one holds PUB, SUB1, UCCX1 (UC VMs 1–3); the other holds UCCX2, CUP1, CUP2 (UC VMs 4–6)
• 4 to 8 UC VMs per LUN (max dependent on sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
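The LUN rules above reduce to three checks that can be scripted. A minimal sketch; the VM names and vDisk sizes in the example are illustrative:

```python
# Sketch of the LUN sizing rules: <2 TB per LUN, 4-8 UC VMs per LUN,
# and the sum of vDisk sizes must fit within the LUN.

def lun_ok(vdisk_gb, lun_size_gb=720):
    """vdisk_gb: list of vDisk sizes (GB) for the VMs placed on one LUN."""
    checks = {
        "lun_under_2tb": lun_size_gb < 2048,
        "vm_count_4_to_8": 4 <= len(vdisk_gb) <= 8,
        "vdisks_fit": sum(vdisk_gb) <= lun_size_gb,
    }
    return all(checks.values()), checks

# e.g. PUB (80), SUB1 (80), UCxn (160), UCCX (146) on one 720 GB LUN
ok, detail = lun_ok([80, 80, 160, 146])
print(ok)  # -> True
```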
DAS Example: UCS C210 M2 TRC1
• 10 × HDD (146 GB, 15K RPM each)
• HDDs 1–2: single RAID1 volume, hosting the vSphere ESXi image
• HDDs 3–10: single RAID5 volume (1022 GB after RAID overhead) with a VMFS filestore (947 GB after VMFS overhead) holding the UC VMs (e.g. PUB, UCCX1, CUP1)
Notes:
• VMFS block size limits max vDisk size
• Could have >1 VMFS datastore on the RAID volume
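The note that "VMFS block size limits max vDisk size" refers to the VMFS-3 file-size caps of that era. As a sketch (the table below is the commonly documented VMFS-3 mapping; verify against the VMware documentation for your vSphere release):

```python
# Approximate VMFS-3 limits: datastore block size (MB) -> max vDisk size (GB).
VMFS3_MAX_VDISK_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def min_block_size_mb(vdisk_gb):
    """Smallest VMFS-3 block size that can hold a vDisk of the given size."""
    for bs, cap in sorted(VMFS3_MAX_VDISK_GB.items()):
        if vdisk_gb <= cap:
            return bs
    raise ValueError("vDisk exceeds the 2 TB VMFS-3 limit")

print(min_block_size_mb(80))   # -> 1 (an 80 GB UC vDisk fits a 1 MB block size)
print(min_block_size_mb(300))  # -> 2
```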
DAS Example: UCS C200 M2 TRC1 for BE6K
• 4 × HDD (1 TB, 7.2K RPM each) in a single RAID10 volume (2 TB after RAID overhead)
• VMFS filestore (1.8 TB after VMFS overhead) holds the vSphere ESXi image and the UC VMs (e.g. PUB, UCCX1, CUP1)
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download: the OVA reserves cores, RAM, etc. for the VMs
• Basic rule of thumb: fill up the blade until out of capacity
If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
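The fill-up-the-blade rule of thumb can be sketched as core arithmetic. The vCPU counts in the example are illustrative OVA reservations, not an authoritative sizing:

```python
# Rule-of-thumb capacity check: sum the vCPU cores the OVAs reserve, add one
# core for ESXi when a messaging (Unity Connection) VM is present, and compare
# against the blade's physical cores. No oversubscription allowed.

def blade_has_room(vm_vcpus, physical_cores, has_messaging_vm):
    reserved = sum(vm_vcpus) + (1 if has_messaging_vm else 0)
    return reserved <= physical_cores

# 8-core blade: three 2-vCPU VMs + 1 ESXi core = 7 of 8 cores -> fits
print(blade_has_room([2, 2, 2], physical_cores=8, has_messaging_vm=True))  # True
# Two 4-vCPU VMs + 1 ESXi core = 9 of 8 cores -> does not fit
print(blade_has_room([4, 4], physical_cores=8, has_messaging_vm=True))     # False
```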
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example: CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova
• The name includes the product, product version, VMware hardware version, and template version
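The naming convention above can be parsed mechanically. A sketch, assuming the `PRODUCT_version_vmvN_vM.ova` pattern shown in the examples holds generally:

```python
import re

# Parse an OVA filename into product, product version, VMware hardware
# version, and template version (per the convention described above).
OVA_RE = re.compile(
    r"^(?P<product>\w+?)_(?P<version>[\d.]+)_vmv(?P<hw>\d+)_v(?P<tmpl>[\d.]+)\.ova$"
)

def parse_ova(name):
    m = OVA_RE.match(name)
    return m.groupdict() if m else None

print(parse_ova("CUCM_8.6_vmv7_v1.5.ova"))
# -> {'product': 'CUCM', 'version': '8.6', 'hw': '7', 'tmpl': '1.5'}
```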
http://tools.cisco.com/cucst
• Customer-accessible:
UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
UCS in general: http://www.cisco.com/go/ucs
Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
FlexPods: www.cisconetapp.com
Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
"CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS, and Virtualization":
http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS, and Virtualization":
https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
CAPEX
• Reduced server count (50–75%)
• Network/storage consolidation (50%+)
• Reduced cabling (50%+)
OPEX
• Reduced rack & floor space (36%)
• Reduced power/cooling (20%+)
• Fewer servers to manage (50–75% less)
• Reduced maintenance/support costs (~20%)
Example: 5000 users; dial tone, voicemail and Presence; 10% are contact center agents.
11 non-virtualized rack servers required for UC; more for other business apps.
CAPEX/OPEX
• Similar consolidation and operational efficiency/scale benefits as with UC on UCS B-Series
Other benefits:
• Lower initial investment
• Simple entry/migration to virtualized UC – data center expertise not required unless using the SAN option
Example: 5000 users; dial tone, voicemail and Presence; 10% are contact center agents.
11 non-virtualized rack servers required for UC; more for other business apps.
(Chart: CAPEX and OPEX in $K for 2 to 100 appliances or VMs, comparing MCS 7845-I3, UCS C210 M2 TRC, UCS B200 M2 TRC, and UCS B230 M2 TRC.)
Assumptions:
• UC only; no other business applications included; "spare" or "hot standby" hosts not included
• "Server" is either an MCS appliance or a 2-vCPU-core "virtual machine"
• Dual sites; split MCS or UCS TRC servers across sites; no single point of failure – redundant sites, switching, blade chassis, rack/blade servers
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1, and VMware Enterprise Plus Edition
(Diagram: dual-site scenario, each site with PSTN and SAN/LAN access, comparing MCS 7845 vs. UC on UCS C210 vs. UC on UCS B200/B230; B230 M2 vs. B200 M2, and C210 M2 vs. MCS 7845.)
Current Offers Technical Overview
E.g. 4 physical servers, each MCS 7800 hosting only one UC app instance (Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX), vs. 4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all of the UC app instances – a VM for Unified CM Pub, a VM for Unified CM Sub, a VM for Unity Cxn, and a VM for Unified CCX.
Server Model | TRC | CPU | RAM | Storage | Adapters
UCS B200 M2 Blade Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (RAID1) for VMware, FC SAN for UC apps | Cisco VIC
UCS B200 M2 Blade Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | Cisco VIC
UCS B230 M2 Blade Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | Diskless | Cisco VIC
UCS B440 M2 Blade Server | TRC 1 | Dual E7-4870 (40 physical cores total) | 256 GB | Diskless | Cisco VIC
UCS C260 M2 Rack-Mount Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | DAS (2x RAID5) | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware, FC SAN for UC apps | 1GbE NIC and 4G FC HBA
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 3 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | 1GbE NIC and 4G FC HBA
UCS C200 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5506 (8 physical cores total) | 24 GB | DAS (4 disks RAID10) for VMware + UC apps | 1GbE NIC
UC app | Scale ("users"*) | vCPU (cores; usually 2.53+ GHz per core required) | vRAM (GB) | vDisk (GB) | Notes
Unified CM | 1000 | 2 | 4 | 1 x 80 | UCS C200 or BE6K only
Unified CM | 2500 | 1 | 2.25 | 1 x 80 | Not for use with C200/BE6K
Unified CM | 7500 | 2 | 6 | 2 x 80 |
Unified CM | 10000 | 4 | 6 | 2 x 80 |
Unity Connection | 500 | 1 | 2 | 1 x 160 |
Unity Connection | 1000 | 1 | 4 | 1 x 160 |
Unity Connection | 5000 | 2 | 4 | 1 x 200 |
Unity Connection | 10000 | 4 | 4 | 2 x 146 | Not for use with C200/BE6K
Unity Connection | 20000 | 7 | 8 | 2 x 300 |
Unified Presence | 1000 | 1 | 2 | 1 x 80 |
Unified Presence | 2500 | 2 | 4 | 1 x 80 | Not for use with C200/BE6K
Unified Presence | 5000 | 4 | 4 | 2 x 80 |
Unified CCX | 100 | 2 | 4 | 2 x 146 | UCS C200 or BE6K only
Unified CCX | 300 | 2 | 4 | 2 x 146 | Not for use with C200/BE6K
Unified CCX | 400 | 4 | 8 | 2 x 146 |
Not exhaustive and subject to change; see www.cisco.com/go/uc-virtualized for the latest.
*I.e., user count for particular values of BHCA, trace level, encryption, CTI, and other factors. Actual supportable user count may vary by deployment.
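A table like the one above is naturally consumed as a lookup. This is a partial, illustrative transcription of only the Unified CM rows; always confirm current values against www.cisco.com/go/uc-virtualized:

```python
# Unified CM OVA sizing rows from the table above: scale -> reservations.
CUCM_OVAS = {
    1000:  {"vcpu": 2, "vram_gb": 4,    "vdisk": "1 x 80 GB"},
    2500:  {"vcpu": 1, "vram_gb": 2.25, "vdisk": "1 x 80 GB"},
    7500:  {"vcpu": 2, "vram_gb": 6,    "vdisk": "2 x 80 GB"},
    10000: {"vcpu": 4, "vram_gb": 6,    "vdisk": "2 x 80 GB"},
}

def smallest_cucm_ova(users):
    """Smallest listed OVA whose scale covers the requested user count."""
    for scale in sorted(CUCM_OVAS):
        if users <= scale:
            return scale, CUCM_OVAS[scale]
    raise ValueError("beyond the listed scales")

print(smallest_cucm_ova(6000))
# -> (7500, {'vcpu': 2, 'vram_gb': 6, 'vdisk': '2 x 80 GB'})
```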
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices
DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server
SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this, e.g. BE6K, CUCCE – see their rules on their docwiki "child pages"
NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on the UC app mix
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps, e.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
(Diagram: UC VMs co-resident on one physical server; VMs for VMware vCenter, Nexus 1000V VSM, Solutions Plus (CTDP) apps, and unaffiliated 3rd-party apps on a separate physical server.)
Different blades in the same chassis: OK
Same blade, same chassis: not OK
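The UC-with-UC co-residency rule above can be sketched as a set check. The app categories here are illustrative, not the full docwiki list:

```python
# Sketch of the co-residency rule: UC VMs may share a host with other UC VMs;
# any non-UC / 3rd-party workload forces a separate physical server.
UC_APPS = {"CUCM", "CUC", "CUP", "CCX"}  # illustrative subset of the UC list

def mix_allowed(apps_on_host):
    """True when every VM on the host is a UC app."""
    return set(apps_on_host) <= UC_APPS

print(mix_allowed(["CUCM", "CUC", "CCX"]))   # True  (UC with UC)
print(mix_allowed(["CUCM", "vCenter"]))      # False (vCenter needs its own server)
```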
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs
See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU
See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
Why? Usually due to CPU model/speed dependencies.
(Diagram: which UCM OVAs – 1K, 2.5K, 7.5K, 10K – are allowed on C200 M2 TRC1 (E5506, 2.13 GHz), on C200 M2 specs-based (56xx, 2.53+ GHz), and on B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based).)
(VM placement diagram)
Dual-socket 4-core host, e.g. UCS C210 M2 TRC1 with dual E5640 (8 physical cores): Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes, or 2:1 Large (e.g. UCM 10K), or 4:1 Med (e.g. UCM 7.5K), or 8:1 Small (e.g. UCM 2.5K).
Dual-socket 6-core host, e.g. UCS C210 M2 specs-based with a UC-supported CPU model at the minimum speed (12 physical cores): mixed sizes + 1 reserved, or mixed sizes, or 3:1 Large (e.g. UCM 10K), or 6:1 Med (e.g. UCM 7.5K), or 12:1 Small (e.g. UCM 2.5K).
(VM size classes: "Small", "Medium", "Large", "Jumbo".)
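The 2:1 / 4:1 / 8:1 style ratios in the diagram above follow from simple core arithmetic. A sketch, with illustrative vCPU sizes per class (Small=1, Medium=2, Large=4, matching e.g. UCM 2.5K / 7.5K / 10K OVAs):

```python
# Max VMs of one size class per host: usable physical cores divided by the
# vCPU reservation of that class (no oversubscription).
VCPU_PER_CLASS = {"Small": 1, "Medium": 2, "Large": 4}

def max_vms(vm_class, physical_cores, reserve_cores=0):
    """reserve_cores models e.g. a core held back for ESXi."""
    usable = physical_cores - reserve_cores
    return usable // VCPU_PER_CLASS[vm_class]

# Dual-socket 4-core host (8 cores): 2 Large, 4 Medium, or 8 Small
print([max_vms(c, 8) for c in ("Large", "Medium", "Small")])   # [2, 4, 8]
# Dual-socket 6-core host (12 cores): 3 Large, 6 Medium, or 12 Small
print([max_vms(c, 12) for c in ("Large", "Medium", "Small")])  # [3, 6, 12]
```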
Virtual Software Switch Options
(Diagram: VM with vNIC on a software switch in the ESXi hypervisor; CNA carrying FCoE toward LAN and SAN; vmNIC on a UCS B200.)
Feature | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV
Scope | Host-based (local) | Distributed | Distributed
IEEE 802.1Q VLAN tagging | Yes | Yes | Yes
VLAN visibility | Only the local ESXi host | All ESXi hosts | All ESXi hosts
EtherChannel | Yes | Yes | Yes
Virtual PortChannel | -- | -- | Yes
QoS marking (DSCP/CoS) | -- | -- | Yes
ACL | -- | -- | Yes
SPAN | -- | -- | Yes
RADIUS/TACACS+ | -- | -- | Yes
VM requirement | No VM needed | No VM needed | VM needed for VSM
Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
• Cisco software switch in the hypervisor
• Familiar network/server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
(Diagram: a Nexus 1000V VEM in each ESXi host, managed by the Nexus 1000V VSM, uplinked to the physical switch (pSwitch).)
Physical switch maps L3 DSCP to L2 CoS
• CUCM marks traffic based on L3 DSCP values
• The pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed):
dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
(Diagram: a CTL packet leaves CUCM with L2 CoS 0 / L3 CS3; after the CAT6K it carries L2 CoS 3 / L3 CS3.)
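The two `mls qos map dscp-cos` lines above make explicit what the default Catalyst mapping does anyway: the 802.1p CoS is taken from the three most-significant bits of the 6-bit DSCP. A one-line sketch:

```python
# Default DSCP-to-CoS derivation: the top 3 bits of the DSCP become the CoS.

def dscp_to_cos(dscp):
    return dscp >> 3

print(dscp_to_cos(24))  # CS3 (call signaling) -> CoS 3
print(dscp_to_cos(46))  # EF (voice media)     -> CoS 5
```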
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
FCoE ("match cos 3") – no-drop policy
Rest ("match any") – best-effort queue
• The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS
(Diagram: packet leaves CUCM with L2 CoS 0 / L3 CS3, passes through the UCS 6100 unchanged, to the CAT6K.)
• UC blades: network adapters' QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapters' QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv, a caveat:
• All traffic types from a virtual UC app will get the Platinum CoS value
• Non-UC applications get best-effort class, which might not be acceptable
(Diagram: packet path CUCM → N1KV → UCS 6100 → CAT6K; L2 CoS 0 / L3 CS3 from the VM, L2 CoS 3 / L3 CS3 upstream.)
Compute layer and SAN/storage layer – Cisco SRND
(Diagram: UCS 6100 Fabric Interconnects and a UCS 5100 blade chassis running Nexus 1000V, with 4x10GE links to the LAN and FC links through Cisco SAN switches to FC storage with service processors SP-A/SP-B – the 3rd-party layer.)
3rd-party SAN example:
• CUCM VM IOPS ~ 200; 200 IOPS x 4 KB ≈ 6.4 Mbps per VM
• Total capacity 28,000 IOPS; 14,000 IOPS per controller; 4 KB block size
• 14,000 IOPS x 4 KB ≈ 448 Mbps, vs. 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
• All UC deployment models are supported
No change in the current deployment models
Base deployment models – single site, centralized call processing, etc. – are not changing
• VM machine layout on a blade and/or chassis
Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-the-WAN rules and latency requirements are the same
Do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
Redundancy rules are the same
Clustering-over-the-WAN latency numbers
Mega Cluster supported in 8.5
Determine quantity/role of nodes
For HA: no design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 17
CAPEXOPEX
bull Similar Consolidation and Operational EfficiencyScale benefits as with UC on UCS B-series
Other Benefits
Lower initial investment
Simple entrymigration to virtualized UC ndash Data Center expertise not required unless using SAN option
Example 5000 users Dial tone voicemail and Presence 10 are Contact Center Agents
11 non-virtualized rack servers required for UC more for other business apps
© 2010 Cisco and/or its affiliates. All rights reserved. 18
[Chart: CAPEX and OPEX ($K, $0–$3,000) vs. appliance or VM count (2, 4, 8, 10, 12, 20, 50, 100) for UCS B230 M2 TRC, UCS B200 M2 TRC, UCS C210 M2 TRC and MCS 7845-I3; comparisons of B230 M2 vs. B200 M2 and C210 M2 vs. MCS 7845]
Assumptions:
• UC only, no other business applications included. "Spare" or "hot standby" hosts not included.
• "Server" is either an MCS appliance or a 2-vcpu-core virtual machine.
• Dual sites: split MCS or UCS TRC servers across sites, no single point of failure – redundant sites, switching, blade chassis, rack/blade servers.
• Using list pricing for MCS-7845-I3-IPC1, UCS-C210M2-VCD2, UCS-B200M2-VCS1, UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition.
[Diagram: dual-site scenario – PSTN and SAN/LAN connectivity for MCS 7845 appliances vs. UC on UCS C210 and UC on UCS B200/B230]
20 © 2010 Cisco and/or its affiliates. All rights reserved.
Current Offers Technical Overview
21 © 2010 Cisco and/or its affiliates. All rights reserved.
E.g. 4 physical servers: each MCS 7800 hosts only one UC app instance (Unified CM, Unity Connection, Unified CCX)
…or…
4 virtual servers (VMs) on 1 physical server: a single virtualized server with 8 physical cores total hosts all UC app instances (VMs for Unified CM Pub, Unified CM Sub, Unity Connection, Unified CCX).
© 2010 Cisco and/or its affiliates. All rights reserved. 22

Server Model                                  | TRC   | CPU                              | RAM    | Storage                                                          | Adapters
UCS B200 M2 Blade Server                      | TRC 1 | Dual E5640 (8 physical cores)    | 48 GB  | DAS (RAID1) for VMware, FC SAN for UC apps                       | Cisco VIC
                                              | TRC 2 | Dual E5640 (8 physical cores)    | 48 GB  | Diskless                                                         | Cisco VIC
UCS B230 M2 Blade Server                      | TRC 1 | Dual E7-2870 (20 physical cores) | 128 GB | Diskless                                                         | Cisco VIC
UCS B440 M2 Blade Server                      | TRC 1 | Dual E7-4870 (40 physical cores) | 256 GB | Diskless                                                         | Cisco VIC
UCS C260 M2 Rack-Mount Server                 | TRC 1 | Dual E7-2870 (20 physical cores) | 128 GB | DAS (2x RAID5)                                                   | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5640 (8 physical cores)    | 48 GB  | DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps | 1GbE NIC
                                              | TRC 2 | Dual E5640 (8 physical cores)    | 48 GB  | DAS (2 disks RAID1) for VMware, FC SAN for UC apps               | 1GbE NIC and 4G FC HBA
                                              | TRC 3 | Dual E5640 (8 physical cores)    | 48 GB  | Diskless                                                         | 1GbE NIC and 4G FC HBA
UCS C200 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5506 (8 physical cores)    | 24 GB  | DAS (4 disks RAID10) for VMware + UC apps                        | 1GbE NIC
© 2010 Cisco and/or its affiliates. All rights reserved. 23

UC app           | Scale ("users"*) | vCPU (cores)** | vRAM (GB) | vDisk (GB) | Notes
Unified CM       | 1,000            | 2              | 4         | 1 x 80     | UCS C200 or BE6K only
                 | 2,500            | 1              | 2.25      | 1 x 80     | Not for use with C200/BE6K
                 | 7,500            | 2              | 6         | 2 x 80     |
                 | 10,000           | 4              | 6         | 2 x 80     |
Unity Connection | 500              | 1              | 2         | 1 x 160    |
                 | 1,000            | 1              | 4         | 1 x 160    |
                 | 5,000            | 2              | 4         | 1 x 200    |
                 | 10,000           | 4              | 4         | 2 x 146    | Not for use with C200/BE6K
                 | 20,000           | 7              | 8         | 2 x 300    |
Unified Presence | 1,000            | 1              | 2         | 1 x 80     |
                 | 2,500            | 2              | 4         | 1 x 80     | Not for use with C200/BE6K
                 | 5,000            | 4              | 4         | 2 x 80     |
Unified CCX      | 100              | 2              | 4         | 2 x 146    | UCS C200 or BE6K only
                 | 300              | 2              | 4         | 2 x 146    | Not for use with C200/BE6K
                 | 400              | 4              | 8         | 2 x 146    |

Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI and other factors. Actual supportable user count may vary by deployment.
** Usually 2.53+ GHz per core required.
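A minimal sketch of how the sizing table above can drive OVA selection – the data is excerpted from the Unified CM rows only, the function name is illustrative, and the per-platform restrictions (C200/BE6K notes) are deliberately ignored here:

```python
# Unified CM OVA sizing rows from the table above: (max users, vCPU, vRAM GB).
# Excerpt only; see www.cisco.com/go/uc-virtualized for the authoritative list.
CUCM_OVAS = [
    (1000, 2, 4),
    (2500, 1, 2.25),
    (7500, 2, 6),
    (10000, 4, 6),
]

def pick_cucm_ova(users):
    """Return the smallest single-VM Unified CM OVA covering a user count."""
    for max_users, vcpu, vram in sorted(CUCM_OVAS):
        if users <= max_users:
            return (max_users, vcpu, vram)
    raise ValueError("no single-VM OVA covers %d users" % users)

print(pick_cucm_ova(3000))   # (7500, 2, 6)
```

Note this ignores which hardware each OVA is allowed on; in practice the "Notes" column constrains the choice further.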
© 2010 Cisco and/or its affiliates. All rights reserved. 24
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices
   DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
© 2010 Cisco and/or its affiliates. All rights reserved. 25
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps. E.g. N1KV, ARC, SingleWire, vCenter, File/Print, Directory, CRM/ERP, Groupware, non-CUCM TFTP, Nuance, etc.
[Diagram: servers hosting UC VMs kept separate from servers hosting VMs for VMware vCenter, Nexus 1KV VSM, Solutions Plus / CTDP apps, and unaffiliated 3rd-party apps.
Different blades in the same chassis: OK.
Same blade, same chassis: not OK.]
© 2010 Cisco and/or its affiliates. All rights reserved. 26
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs.
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29
  Why? Usually due to CPU model/speed dependencies.
[Diagram mapping UCM 1K / 2.5K / 7.5K / 10K OVAs to hardware:
  C200 M2 TRC1 (E5506, 2.13 GHz);
  C200 M2 specs-based (56xx, 2.53+ GHz);
  B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based)]
© 2010 Cisco and/or its affiliates. All rights reserved. 27
[Diagram: VM counts per physical server, by VM size ("Small", "Medium", "Large", "Jumbo").
Dual-socket 4-core (e.g. UCS C210 M2 TRC1 with dual E5640), 8 physical cores:
  Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes,
  or 2:1 Large (e.g. UCM 10K), or 4:1 Medium (e.g. UCM 7.5K), or 8:1 Small (e.g. UCM 2.5K).
Dual-socket 6-core (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed), 12 physical cores:
  Mixed sizes + 1 reserved, or mixed sizes,
  or 3:1 Large (e.g. UCM 10K), or 6:1 Medium (e.g. UCM 7.5K), or 12:1 Small (e.g. UCM 2.5K).]
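The packing ratios above reduce to simple core arithmetic. A hedged sketch (the size-to-vCPU mapping is inferred from the OVA table – Small ≈ UCM 2.5K at 1 vCPU, Medium ≈ UCM 7.5K at 2, Large ≈ UCM 10K at 4, Jumbo ≈ Unity Connection 20K at 7 – and the function name is illustrative):

```python
# Assumed vCPUs per OVA "size", derived from the sizing table earlier in the deck.
VM_CORES = {"small": 1, "medium": 2, "large": 4, "jumbo": 7}

def fits_on_host(vm_sizes, physical_cores, has_messaging=False):
    """True if the VMs' vCPU reservations fit the host's physical cores.

    Reserves one core for ESXi when a messaging (Unity Connection) VM is
    present, per the co-residency rule of thumb; no oversubscription.
    """
    reserved = 1 if has_messaging else 0
    needed = sum(VM_CORES[s] for s in vm_sizes)
    return needed + reserved <= physical_cores

# Dual-socket 4-core host (8 cores), e.g. C210 M2 TRC1:
print(fits_on_host(["large"] * 2, 8))                  # 2:1 Large
print(fits_on_host(["small"] * 8, 8))                  # 8:1 Small
print(fits_on_host(["jumbo"], 8, has_messaging=True))  # Jumbo + 1 reserved
```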
© 2010 Cisco and/or its affiliates. All rights reserved. 28
Virtual Software Switch Options
[Diagram: VM vNIC → software switch in the ESXi hypervisor → CNA (FCoE) → vmNIC on UCS B200 → LAN / SAN]

                         | VMware vSwitch        | VMware dvSwitch    | Cisco Nexus 1KV
Scope                    | Host-based (local)    | Distributed        | Distributed
VLAN tagging             | IEEE 802.1Q           | IEEE 802.1Q        | IEEE 802.1Q
VLAN visibility          | Local ESXi host only  | All ESXi hosts     | All ESXi hosts
Link aggregation         | EtherChannel          | EtherChannel       | EtherChannel
Virtual PortChannel      | --                    | --                 | Yes
QoS marking (DSCP/CoS)   | --                    | --                 | Yes
ACL                      | --                    | --                 | Yes
SPAN                     | --                    | --                 | Yes
RADIUS/TACACS+           | --                    | --                 | Yes
Management VM            | No VM needed          | No VM needed       | VM needed for VSM

Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
© 2010 Cisco and/or its affiliates. All rights reserved. 29
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
[Diagram: Nexus 1000V VSM managing Nexus 1000V VEMs inside each ESXi host, uplinked to the physical switch (pSwitch)]
© 2010 Cisco and/or its affiliates. All rights reserved. 30
CUCM marks traffic based on L3 DSCP values; the physical switch maps L3 DSCP to L2 CoS.
A pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed):

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

[Diagram: CTL packet marked CS3 at L3 leaves CUCM with L2: 0, L3: CS3; after the CAT6K mapping it carries L2: 3, L3: CS3]
© 2010 Cisco and/or its affiliates. All rights reserved. 31
• UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3") – no-drop policy
  Rest ("match any") – best-effort queue
vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS.
[Diagram: packet leaves CUCM with L2: 0, L3: CS3 and is unchanged through the UCS 6100 up to the CAT6K]
© 2010 Cisco and/or its affiliates. All rights reserved. 32
• UC blades: Network Adapters QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: Network Adapters QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv, the caveats are:
• All traffic types from the virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
[Diagram: CUCM → N1KV (marks L2: 3, L3: CS3) → UCS 6100 → CAT6K]
© 2010 Cisco and/or its affiliates. All rights reserved. 33
[Diagram: compute layer – UCS 5100 blade server with Nexus 1000V → Cisco UCS 6100 Fabric Interconnect (4x10GE links) → Cisco SAN switch (FC) → FC storage with SP-A/SP-B controllers. SAN/storage layer per Cisco SRND; the storage array itself is a 3rd-party layer.]
3rd-party SAN example:
• CUCM VM IOPS ~ 200
• 200 IOPS x 4 KB ~ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS (14,000 IOPS per controller, 4 KB block size)
• 14,000 IOPS x 4 KB ~ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
© 2010 Cisco and/or its affiliates. All rights reserved. 34
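The IOPS-to-bandwidth arithmetic above can be sketched in a few lines (assuming the slide's round numbers use 4 KB = 4,000 bytes; real array sizing should use the storage vendor's tools):

```python
def iops_to_mbps(iops, block_bytes=4000):
    """Convert an IOPS figure at a given block size to megabits per second."""
    return iops * block_bytes * 8 / 1_000_000

print(iops_to_mbps(200))     # per-CUCM-VM load: 6.4 Mbps
print(iops_to_mbps(14000))   # per-controller load: 448.0 Mbps
```

Both controllers together (~896 Mbps) still fit comfortably inside a single 4 Gbps FC interface, which is the slide's conclusion.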
© 2010 Cisco and/or its affiliates. All rights reserved. 35
• All UC deployment models are supported
  No change in the current deployment models
  Base deployment models – Single Site, Centralized Call Processing, etc. – are not changing
• VM layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same
  They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
© 2010 Cisco and/or its affiliates. All rights reserved. 36
• SRND application-layer guidelines are the same as when on MCS
  Redundancy rules are the same
  Clustering-over-the-WAN latency numbers are the same
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validate proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
  Subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server
  – MOH live audio stream
  – Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
37 © 2010 Cisco and/or its affiliates. All rights reserved.
FROM CUCM

System Release | To Unified CM 8.5            | To UC System 8.5
4.x            | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2)         | Multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3)         | 2-hop thru 7.1(3)            | N/A
6.1(1)         | 2-hop thru 6.1(x)/7.1(x)     | 2-hop
6.1(2)         | 2-hop thru 6.1(x)/7.1(x)     | N/A
6.1(3)         | 2-hop thru 6.1(x)/7.1(x)     | N/A
6.1(4)         | Single hop                   | N/A
6.1(5)         | Single hop                   | N/A
7.0(1)         | 2-hop thru 7.1(x)            | 2-hop
7.1(2)         | 2-hop thru 7.1(x)            | 2-hop
7.1(3)         | Single hop                   | Single hop; multi-stages/BWC supported
7.1(5)         | Single hop                   | N/A
8.0(1)         | Single hop                   | Single hop; multi-stages/BWC supported
8.0(2), 8.0(3) | Single hop                   | N/A
38 © 2010 Cisco and/or its affiliates. All rights reserved.
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Features          | CUCM    | CUC     | CUP     | CCX
Clone Virtual Machine  | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware vMotion         | Y (C)   | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA              | Y (C)   | Y (C)   | Y (C)   | Y (C)
Boot from SAN          | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware DRS             | No      | No      | No      | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
© 2010 Cisco and/or its affiliates. All rights reserved. 39
Current business continuity and disaster recovery strategies are still applicable.
The UC apps' redundancy rules are the same.
Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact:
  Primary/secondary on different blades, chassis, sites
  On the same blade, mix Subs with TFTP/MoH rather than just Subs
Redundancy of UCS components (blade chassis, FEX links, Interconnect switching)
Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
© 2010 Cisco and/or its affiliates. All rights reserved. 40
TAC Support Demarcation

Deployment option                               | Server Hardware        | Shared Storage             | VMware                 | Application
UC on UCS Tested Reference Configuration        | Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
UC on UCS Specs-based (including Vblock option) | Cisco (VCE for Vblock) | 3rd-party (VCE for Vblock) | Cisco (VCE for Vblock) | Cisco
3rd-party VMware (HP, IBM) Specs-based          | 3rd-party              | 3rd-party                  | 3rd-party              | Cisco
MCS 7800 Appliances                             | Cisco                  | N/A                        | N/A                    | Cisco
Customer-provided MCS 7800 equivalent           | 3rd-party              | N/A                        | N/A                    | Cisco
41 © 2010 Cisco and/or its affiliates. All rights reserved.
Customer Example – Primary Data Center

                 | OLD                                     | NEW
Hardware nodes   | 62 physical servers (EU / HQ clusters)  | ~14
Software version | 6.1(5) & 8.5(1)                         | 8.5(1)
UCxn version     | 4.2(1)                                  | 8.5(1) – 3 pairs – virtualized
CER              | 2.0, 7.0                                | 8.6 – virtualized

[Diagram: blade layout – CM SUB, CM PUB, CM SUB, CM SUB; MOH/TFTP, CER, CM SUB; CER, UCxn, UCxn, CM SUB]
42 © 2010 Cisco and/or its affiliates. All rights reserved.
Deployment Model – Data Center 1
[Diagram: CM SUB, CM PUB, CM SUB, CM SUB; MOH/TFTP, UCxn, UCxn; CER, UCxn, UCxn, UCxn]
43 © 2010 Cisco and/or its affiliates. All rights reserved.
Deployment Model – Data Center 2
[Diagram: CM SUB, CM PUB, CM SUB, CM SUB; MOH/TFTP, UCxn, UCxn; CER, UCxn, UCxn, UCxn]
© 2010 Cisco and/or its affiliates. All rights reserved. 44
Customer Design
[Diagram: PSTN and IP WAN connecting via CUSP (SIP proxy) to Unified Communications Manager, Unity Connection, Unified Presence and Unified Contact Center Express, running on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch, plus Cisco UCS C210 or C200 general-purpose rack-mount servers; sites of 11K, 3K and 400 phones]
© 2010 Cisco and/or its affiliates. All rights reserved. 45
HQ Details
[Diagram: blade layout, each blade with dual 4-core CPUs (8 physical cores). Blades 1-6 host:
  CUCM VM OVAs – PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2;
  Messaging VM OVAs – UCxn-1 active and UCxn-2 active, with cores left idle for UCxn;
  Presence VM OVAs – CUP-1, CUP-2;
  Contact Center VM OVAs – UCCX-1, UCCX-2.
Blade slots 7 and 8 are "spare" and available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware or other business applications.]
© 2010 Cisco and/or its affiliates. All rights reserved. 46
Branch Office Details
[Diagram: rack-server layout, each server with dual 4-core CPUs.
Three-server site: server 1 – PUB, TFTP-1, SUB-1, CCX-2; server 2 – UCxn-1, TFTP-2, SUB-2 (cores left idle for UCxn); server 3 – UCxn-2, CCX-1, CUP.
Two-server site: server 1 – PUB/TFTP, CCX-1, CUP, UCxn-1; server 2 – SUB, CCX-2, UCxn-1 (cores left idle for UCxn).
Legend: CUCM VM OVAs, Messaging VM OVAs, Contact Center VM OVAs, Presence VM OVAs.]
© 2010 Cisco and/or its affiliates. All rights reserved. 47
• DAS: rack-mount server (Cisco C-Series)
• Popular DAS protocol: SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
  Cable distance: ~2 km
  Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
[Diagram: host storage stacks.
DAS (SCSI): Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter.
iSCSI: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC.
FC SAN: Application → File System → Volume Manager → SCSI Device Driver → FC HBA.
Block I/O travels over the SAN or IP transport from the host server to the storage media.]
© 2010 Cisco and/or its affiliates. All rights reserved. 48
NAS/SAN Array Best Practices for UC
[Diagram: five 450 GB 15K RPM HDDs in a single RAID5 group (~1.4 TB usable space), carved into LUN 1 (720 GB) holding UC VMs for PUB, SUB1 and UCCX1, and LUN 2 (720 GB) holding UC VMs for UCCX2, CUP1 and CUP2]
• 4 to 8 UC VMs per LUN (max dependent on the sum of the vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
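The best-practice numbers above lend themselves to a quick sanity check. A sketch, assuming the rule values exactly as stated on the slide (the helper name is illustrative):

```python
def check_lun(size_gb, vm_count):
    """Flag violations of the UC LUN best practices listed above."""
    problems = []
    if size_gb >= 2000:                      # hard limit: <2 TB per LUN
        problems.append("LUN must be < 2 TB")
    if not 500 <= size_gb <= 1500:           # recommendation: 500 GB-1.5 TB
        problems.append("recommend 500 GB to 1.5 TB per LUN")
    if not 4 <= vm_count <= 8:               # guideline: 4-8 UC VMs per LUN
        problems.append("recommend 4 to 8 UC VMs per LUN")
    return problems

print(check_lun(720, 3))   # ['recommend 4 to 8 UC VMs per LUN']
print(check_lun(720, 6))   # []
```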
© 2010 Cisco and/or its affiliates. All rights reserved. 49
DAS Example: UCS C210 M2 TRC1
[Diagram: ten 146 GB 15K RPM HDDs. HDDs 1-2: single RAID1 volume holding the vSphere ESXi image. HDDs 3-10: single RAID5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) with UC VMs: PUB, UCCX1, CUP1.]
Notes:
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
© 2010 Cisco and/or its affiliates. All rights reserved. 50
DAS Example: UCS C200 M2 TRC1 for BE6K
[Diagram: four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) holding the vSphere ESXi image and a VMFS filestore (1.8 TB after VMFS overhead) with UC VMs: PUB, UCCX1, CUP1.]
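The usable-capacity figures in the two DAS examples above follow from standard RAID arithmetic (RAID overhead only; VMFS overhead reduces the figure further). A minimal sketch:

```python
def raid_usable_gb(disks, disk_gb, level):
    """Usable capacity of a RAID volume, before filesystem overhead."""
    if level == "RAID1":
        return disk_gb * disks // 2    # full mirror
    if level == "RAID5":
        return disk_gb * (disks - 1)   # one disk's worth of parity
    if level == "RAID10":
        return disk_gb * disks // 2    # mirrored stripes
    raise ValueError("unsupported RAID level: %s" % level)

print(raid_usable_gb(8, 146, "RAID5"))    # C210 M2 TRC1 data volume: 1022 GB
print(raid_usable_gb(4, 1000, "RAID10"))  # C200 M2 TRC1 (BE6K): 2000 GB
```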
© 2010 Cisco and/or its affiliates. All rights reserved. 51
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on the supported OVA for download
  The OVA reserves cores, RAM, etc. for the VMs
  Basic rule of thumb: fill up the blade until out of capacity
  If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
52 © 2010 Cisco and/or its affiliates. All rights reserved.
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release. For example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes product, product version, VMware hardware version and template version
53 © 2010 Cisco and/or its affiliates. All rights reserved. http://tools.cisco.com/cucst
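The filename convention described above is regular enough to split mechanically. A sketch (the convention is inferred from the three examples; the field names are illustrative, not Cisco's):

```python
def parse_ova_name(filename):
    """Split an OVA template filename like CUCM_8.6_vmv7_v1.5.ova into parts."""
    stem = filename[:-len(".ova")]
    product, version, vmv, template = stem.split("_")
    return {"product": product,            # e.g. CUCM
            "version": version,            # product version, e.g. 8.6
            "vm_hw_version": vmv,          # VMware virtual hardware, e.g. vmv7
            "template_version": template}  # OVA template revision, e.g. v1.5

print(parse_ova_name("CUCM_8.6_vmv7_v1.5.ova"))
# {'product': 'CUCM', 'version': '8.6', 'vm_hw_version': 'vmv7', 'template_version': 'v1.5'}
```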
© 2010 Cisco and/or its affiliates. All rights reserved. 54
• Customer-accessible:
  UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
© 2010 Cisco and/or its affiliates. All rights reserved. 55
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
copy 2010 Cisco andor its affiliates All rights reserved 18
$-
$500
$1000
$1500
$2000
$2500
$3000
2 4 8 10 12 20 50 100
UCS B230 M2 TRC OPEX ($K)
UCS B230 M2 TRC CAPEX ($K)
UCS B200 M2 TRC OPEX ($K)
UCS B200 M2 TRC CAPEX ($K)
UCS C210 M2 TRC OPEX ($K)
UCS C210 M2 TRC CAPEX ($K)
MCS 7845-I3 OPEX ($K)
MCS 7845-I3 CAPEX ($K)
Assumptions
bull UC only no other business applications included ldquoSparerdquo or ldquohot standbyrdquo hosts not included
bull ldquoServerrdquo is either an MCS Appliance or a 2-vcpu-core ldquoVirtual Machinerdquo
bull Dual sites split MCS or UCS TRC servers across sites no single point of failure ndash redundant sites switching blade chassis rackblade servers
bull Using list pricing for MCS-7845-I3-IPC1 UCS-C210M2-VCD2 UCS-B200M2-VCS1 UCS-B230M2-VCDL1 and VMware Enterprise Plus Edition
Appliance or VM Count
PSTN
2104
2104
2104
2104
SANLAN
Dual Site Scenario PSTN
2104
2104
2104
2104
SANLAN
hellip
hellip
hellip
hellip
UC on UCS
B200 B230
UC on UCS
C210
MCS 7845
B230 M2
vs B200 M2
C210 M2
vs MCS 7845
20 copy 2010 Cisco andor its affiliates All rights reserved
Current Offers Technical Overview
21 copy 2010 Cisco andor its affiliates All rights reserved
Eg 4 physical servers
Each MCS 7800 hosts only
one UC app instance
4 virtual servers (VMrsquos) on 1 physical server
Single virtualized server with total 8 physical
cores hosts all UC app instances
Unity
Connection
Unified CM
VM for
Unified
CM
Sub
Unified CCX
VM for
Unity
Cxn
VM for
Unified
CCX
VM for
Unified
CM
Pub
or
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
copy 2010 Cisco andor its affiliates All rights reserved 23
UC app Scale
(ldquousersrdquo)
vCPU (cores) Usually 253+ GHz
per core required
vRAM
(GB)
vDisk
(GB)
Notes
Unified
CM
1000 2 4 1 x 80 UCS C200 or BE6K only
2500 1 225 1 x 80 Not for use with C200BE6K
7500 2 6 2 x 80
10000 4 6 2 x 80
Unity
Connection
500 1 2 1 x 160
1000 1 4 1 x 160
5000 2 4 1 x 200
10000 4 4 2 x 146 Not for use with C200BE6K
20000 7 8 2 x 300
Unified
Presence
1000 1 2 1 x 80
2500 2 4 1 x 80 Not for use with C200BE6K
5000 4 4 2 x 80
Unified CCX 100 2 4 2 x 146 UCS C200 or BE6K only
300 2 4 2 x 146 Not for use with C200BE6K
400 4 8 2 x 146
Not exhaustive subject to change see wwwciscocomgouc-virtualized for latest
ie user count for particular values of BHCA trace level encryption CTI and other factors Actual
supportable user count may vary by deployment
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release | To Unified CM 8.5 | To UC System 8.5
4.x | Multi Hop thru 6.1(x)/7.1(x) | Multi Hop thru 6.1(3)
5.1(2) | Multi Hop thru 6.1(x)/7.1(x) | NA
5.1(3) | 2 Hop thru 7.1(3) | NA
6.1(1) | 2 Hop thru 6.1(x)/7.1(x) | 2 Hop
6.1(2) | 2 Hop thru 6.1(x)/7.1(x) | NA
6.1(3) | 2 Hop thru 6.1(x)/7.1(x) | NA
6.1(4) | Single Hop | NA
6.1(5) | Single Hop | NA
7.0(1) | 2 Hop thru 7.1(x) | 2 Hop
7.1(2) | 2 Hop thru 7.1(x) | 2 Hop
7.1(3) | Single Hop | Single Hop; multi-stage/BWC supported
7.1(5) | Single Hop | NA
8.0(1) | Single Hop | Single Hop; multi-stage/BWC supported
8.0(2), 8.0(3) | Single Hop | NA
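For scripting upgrade planning, a few rows of the table can be expressed as a lookup (a hypothetical helper; values transcribed from the table above):

```python
# Migration paths to Unified CM 8.5, keyed by starting release
# (a subset of the rows shown on this slide).
HOPS_TO_CM85 = {
    "5.1(3)": "2 Hop thru 7.1(3)",
    "6.1(4)": "Single Hop",
    "6.1(5)": "Single Hop",
    "7.0(1)": "2 Hop thru 7.1(x)",
    "7.1(3)": "Single Hop",
    "8.0(1)": "Single Hop",
}

HOPS_TO_CM85.get("6.1(4)")  # 'Single Hop'
```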
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
• Clone Virtual Machine:
"Y (C)" means the VM has to be powered off
• vMotion:
"Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed)
"Partial" means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable
The UC apps' redundancy rules are the same
Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
Primary/secondary on different blades, chassis, sites
On the same blade, mix Subs with TFTP/MoH rather than placing just Subs
Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation
Deployment | Server Hardware | Shared Storage | VMware | Application
UC on UCS Tested Reference Configuration | Cisco (VCE for Vblock) | Cisco, 3rd-party or VCE | Cisco or VCE | Cisco
UC on UCS Specs-based (including Vblock option) | Cisco (VCE for Vblock) | Cisco, 3rd-party or VCE | Cisco or VCE | Cisco
3rd-party VMware Specs-based (HP, IBM) | 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances | Cisco | NA | NA | Cisco
Customer-provided MCS 7800 equivalent | 3rd-party | NA | NA | Cisco
Customer Example – Primary Data Center
OLD → NEW
Hardware nodes: 62 physical servers (EU + HQ clusters) → approx. 14
Software version: 6.1(5) & 8.5(1) → 8.5(1)
UCxn version: 4.2(1) → 8.5(1), 3 pairs, virtualized
CER: 2.0 / 7.0 → 8.6, virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxn CM SUB
Deployment Model – Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxn UCxn
CER UCxn UCxn UCxn
Deployment Model – Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxn UCxn
CER UCxn UCxn UCxn
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
HQ Details
[Diagram: eight blade slots, each blade with two 4-core CPUs]
CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1 and TFTP-2 spread across the blades
Messaging VM OVAs: UCxn-1 (active) and UCxn-2 (active), with cores left idle for UCxn
Presence VM OVAs: CUP-1, CUP-2
Contact Center VM OVAs: UCCX-1, UCCX-2
"Spare" blade slots (7 and 8) available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications
[Diagram: three rack servers, each with two 4-core CPUs]
Rack Server 1: PUB, TFTP-1, SUB-1, CCX-2
Rack Server 2: UCxn-1, TFTP-2, SUB-2 (cores left idle for UCxn)
Rack Server 3: UCxn-2, CCX-1, CUP (cores left idle for UCxn)
Branch Office Details
[Diagram: two rack servers, each with two 4-core CPUs]
Rack Server 1: PUB/TFTP, CCX-1, CUP, UCxn-1 (cores left idle for UCxn)
Rack Server 2: SUB, CCX-2, UCxn-1 (cores left idle for UCxn)
CUCM, Messaging, Contact Center and Presence VM OVAs as labeled
• DAS: rack-mount server (Cisco C-Series)
• Popular DAS protocol: SCSI
• iSCSI: access SCSI storage media using an IP network
• Fibre Channel: the most popular SAN protocol today
Cable distance: ~2 km
Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
[Diagram: storage protocol stacks, from host server through storage transport to storage media]
DAS (SCSI): Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter → storage media (block I/O over a local bus)
iSCSI: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC → IP network → NIC → TCP/IP Stack → iSCSI Layer → Bus Adapter → storage
FC SAN: Application → File System → Volume Manager → SCSI Device Driver → FC HBA → FC → storage
Block I/O travels over IP for iSCSI and over the SAN for FC
HDD 1 – HDD 5: 450 GB, 15K RPM each
Single RAID5 group (1.4 TB usable space)
[Diagram: two 720 GB LUNs; one holds UC VMs 1–3 (PUB, SUB1, UCCX1), the other holds UC VMs 4–6 (UCCX2, CUP1, CUP2)]
4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
NAS/SAN Array Best Practices for UC:
Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
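As a sketch, the best practices above can be encoded as a quick layout check (the function name and message strings are illustrative, not part of any Cisco tooling):

```python
# Check a proposed LUN layout against the slide's guidance:
# <2 TB per LUN (500 GB to 1.5 TB recommended), 4 to 8 UC VMs per LUN,
# with the real maximum bounded by the sum of the vDisks.
def check_lun(lun_gb, vdisk_sizes_gb):
    issues = []
    if lun_gb >= 2000:
        issues.append("LUN must be <2 TB")
    elif not 500 <= lun_gb <= 1500:
        issues.append("recommended LUN size is 500 GB to 1.5 TB")
    if not 4 <= len(vdisk_sizes_gb) <= 8:
        issues.append("plan for 4 to 8 UC VMs per LUN")
    if sum(vdisk_sizes_gb) > lun_gb:
        issues.append("sum of vDisks exceeds LUN capacity")
    return issues

check_lun(720, [80] * 6)  # [] — a 720 GB LUN with six 80 GB vDisks passes
```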
DAS Example UCS C210 M2 TRC1
HDD 1 – HDD 10: 146 GB, 15K RPM each
Single RAID1 volume (2 disks, for ESXi) + single RAID5 volume (8 disks; 1022 GB after RAID overhead)
VMFS filestore (947 GB after VMFS overhead)
[Diagram: VMs on the filestore – UC VM 1 (PUB), UC VM 3 (UCCX1), UC VM 5 (CUP1) – plus the vSphere ESXi image on the RAID1 volume]
Notes:
• VMFS block size limits max vDisk size
• Could have >1 VMFS datastore on the RAID volume
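The usable-capacity figures above follow from simple RAID arithmetic (a sketch, assuming the RAID5 group holds 8 of the 146 GB disks while the other two form the RAID1 pair for ESXi):

```python
# RAID5 loses one disk's worth of capacity to parity.
def raid5_usable_gb(disks, disk_gb):
    return (disks - 1) * disk_gb

raid5_usable_gb(8, 146)  # 1022 GB, matching the slide;
                         # VMFS formatting then leaves ~947 GB
```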
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1 – HDD 4: 1 TB, 7.2K RPM each
Single RAID10 volume (2 TB after RAID overhead)
VMFS filestore (1.8 TB after VMFS overhead)
[Diagram: VMs on the filestore – UC VM 1 (PUB), UC VM 3 (UCCX1), UC VM 5 (CUP1) – plus the vSphere ESXi image]
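The same arithmetic applies to the RAID10 layout above (a sketch; mirroring halves the raw capacity):

```python
# RAID10 mirrors every disk, so usable capacity is half the raw total.
def raid10_usable_tb(disks, disk_tb):
    return disks * disk_tb / 2

raid10_usable_tb(4, 1)  # 2.0 TB, matching the slide (~1.8 TB after VMFS)
```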
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download
An OVA reserves cores, RAM, etc. for its VM
Basic rule of thumb: fill up the blade until it is out of capacity
If the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription is not supported
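The fill-the-blade rule can be sketched as a capacity check (a hedged illustration; core reservations come from each application's OVA, and the function name is invented for this example):

```python
# One core is set aside for ESXi when a messaging (Unity Connection)
# VM is placed on the blade; oversubscription is not allowed.
def fits(blade_cores, vm_core_reservations, has_messaging_vm=False):
    usable = blade_cores - (1 if has_messaging_vm else 0)
    return sum(vm_core_reservations) <= usable

fits(8, [4, 2, 2], has_messaging_vm=True)  # False: only 7 cores usable
fits(8, [4, 2, 1], has_messaging_vm=True)  # True
```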
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template:
vCPU, vRAM, vDisk, vNICs
• Capacity:
• A VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release. For example: CUCM_80_vmv7_v21.ova
CUCM_85_vmv7_v21.ova
CUCM_86_vmv7_v15.ova
The name includes product, product version, VMware hardware version, and template version
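The naming convention above can be unpacked with a minimal parser (the field meanings follow this slide; the helper itself is illustrative):

```python
# Split an OVA filename like CUCM_85_vmv7_v21.ova into its four fields:
# product, product version, VMware hardware version, template version.
def parse_ova(filename):
    product, version, hw, template = filename.rsplit(".", 1)[0].split("_")
    return {"product": product, "version": version,
            "vmware_hw": hw, "template": template}

parse_ova("CUCM_85_vmv7_v21.ova")
# {'product': 'CUCM', 'version': '85', 'vmware_hw': 'vmv7', 'template': 'v21'}
```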
http://tools.cisco.com/cucst
• Customer-accessible:
• UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
• UCS in general: http://www.cisco.com/go/ucs
• Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
• FlexPods: www.cisconetapp.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
• "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization":
http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization":
https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
Current Offers Technical Overview
E.g. 4 physical servers:
each MCS 7800 hosts only one UC app instance
vs. 4 virtual servers (VMs) on 1 physical server:
a single virtualized server with 8 physical cores in total hosts all UC app instances
[Diagram: left – separate MCS servers for Unified CM, Unity Connection, and Unified CCX; right – one virtualized server hosting VMs for Unified CM Pub, Unified CM Sub, Unity Cxn, and Unified CCX]
Server Model | TRC | CPU | RAM | Storage | Adapters
UCS B200 M2 Blade Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (RAID1) for VMware, FC SAN for UC apps | Cisco VIC
UCS B200 M2 Blade Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | Cisco VIC
UCS B230 M2 Blade Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | Diskless | Cisco VIC
UCS B440 M2 Blade Server | TRC 1 | Dual E7-4870 (40 physical cores total) | 256 GB | Diskless | Cisco VIC
UCS C260 M2 Rack-Mount Server | TRC 1 | Dual E7-2870 (20 physical cores total) | 128 GB | DAS (2x RAID5) | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for UC apps | 1GbE NIC
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 2 | Dual E5640 (8 physical cores total) | 48 GB | DAS (2 disks RAID1) for VMware, FC SAN for UC apps | 1GbE NIC and 4G FC HBA
UCS C210 M2 General-Purpose Rack-Mount Server | TRC 3 | Dual E5640 (8 physical cores total) | 48 GB | Diskless | 1GbE NIC and 4G FC HBA
UCS C200 M2 General-Purpose Rack-Mount Server | TRC 1 | Dual E5506 (8 physical cores total) | 24 GB | DAS (4 disks RAID10) for VMware + UC apps | 1GbE NIC
UC app | Scale ("users"*) | vCPU (cores; usually 2.53+ GHz per core required) | vRAM (GB) | vDisk (GB) | Notes
Unified CM | 1000 | 2 | 4 | 1 x 80 | UCS C200 or BE6K only
Unified CM | 2500 | 1 | 2.25 | 1 x 80 | Not for use with C200/BE6K
Unified CM | 7500 | 2 | 6 | 2 x 80 |
Unified CM | 10000 | 4 | 6 | 2 x 80 |
Unity Connection | 500 | 1 | 2 | 1 x 160 |
Unity Connection | 1000 | 1 | 4 | 1 x 160 |
Unity Connection | 5000 | 2 | 4 | 1 x 200 |
Unity Connection | 10000 | 4 | 4 | 2 x 146 | Not for use with C200/BE6K
Unity Connection | 20000 | 7 | 8 | 2 x 300 |
Unified Presence | 1000 | 1 | 2 | 1 x 80 |
Unified Presence | 2500 | 2 | 4 | 1 x 80 | Not for use with C200/BE6K
Unified Presence | 5000 | 4 | 4 | 2 x 80 |
Unified CCX | 100 | 2 | 4 | 2 x 146 | UCS C200 or BE6K only
Unified CCX | 300 | 2 | 4 | 2 x 146 | Not for use with C200/BE6K
Unified CCX | 400 | 4 | 8 | 2 x 146 |
Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for the latest
* i.e. user count for particular values of BHCA, trace level, encryption, CTI and other factors. Actual supportable user count may vary by deployment
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
SAME RULES for TRC vs specs-based, UCS/HP/IBM
2. Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based, due to CPU differences
3. Max number of VMs on the same physical server
SAME RULES for TRC vs specs-based to determine the max, but specs-based might allow more VMs
Note: DAS IO bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages"
NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party. E.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
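The separation rule above can be stated as a tiny predicate (a hedged sketch; the function and the "uc"/"non-uc" labels are invented for illustration):

```python
# A physical server is valid if it hosts only UC workloads or only
# non-UC workloads -- never a mix. Different blades in the same
# chassis are fine; the same blade is not.
def valid_server(workload_kinds):
    kinds = set(workload_kinds)
    return kinds <= {"uc"} or kinds <= {"non-uc"}

valid_server(["uc", "uc"])      # True: UC VMs may share a server
valid_server(["uc", "non-uc"])  # False: 3rd-party needs its own server
```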
[Diagram: four servers, each hosting UC VMs alongside VMs for VMware vCenter, the Nexus 1KV VSM, a Solutions Plus / CTDP app, and an unaffiliated 3rd-party app]
Different blades in the same chassis: OK
Same blade in the same chassis: not OK
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs
See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU
See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA-OVF_Templates)
Why? Usually due to CPU model/speed dependencies
[Diagram: OVA-to-hardware compatibility]
C200 M2 TRC1 (E5506, 2.13 GHz): UCM 1K OVA only; the UCM 2.5K, 7.5K and 10K OVAs are not allowed
C200 M2 specs-based (56xx, 2.53+ GHz): UCM 1K, 2.5K, 7.5K and 10K OVAs
B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based): UCM 1K, 2.5K, 7.5K and 10K OVAs
[Diagram: VM placement options by core count; VM sizes are "Small", "Medium", "Large" and "Jumbo"]
Dual-socket 4-core (e.g. UCS C210 M2 TRC1 with dual E5640):
Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes, or 2 x Large (e.g. UCM 10K), or 4 x Medium (e.g. UCM 7.5K), or 8 x Small (e.g. UCM 2.5K)
Dual-socket 6-core (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed):
Mixed sizes + 1 reserved, or mixed sizes, or 3 x Large (e.g. UCM 10K), or 6 x Medium (e.g. UCM 7.5K), or 12 x Small (e.g. UCM 2.5K)
Virtual Software Switch Options
[Diagram: VM vNIC → software switch → vmNIC → CNA (FCoE) → LAN/SAN, within the ESXi hypervisor on a UCS B200]
Feature | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV
Scope | Host-based (local) | Distributed | Distributed
VLAN tagging | IEEE 802.1Q | IEEE 802.1Q | IEEE 802.1Q
VLAN visibility | Local ESXi host only | All ESXi hosts | All ESXi hosts
Link aggregation | EtherChannel | EtherChannel | EtherChannel
Virtual PortChannel | -- | -- | Yes
QoS marking (DSCP/CoS) | -- | -- | Yes
ACL | -- | -- | Yes
SPAN, RADIUS/TACACS+ | -- | -- | Yes
Management VM | No VM needed | No VM needed | VM needed for VSM
Nexus 1KV is strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series
pSwitch
ESXi
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
The pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed), e.g. for a CTL packet marked CS3 at L3:
dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
[Diagram: CUCM sends frames with L2 CoS 0 / L3 CS3; after the CAT6K mapping they carry L2 CoS 3 / L3 CS3]
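The mapping those two configuration lines install can be pictured as a lookup table (a minimal sketch; the helper name is illustrative):

```python
# DSCP 24 (CS3, UC signaling) -> CoS 3; DSCP 46 (EF, voice media) -> CoS 5.
DSCP_TO_COS = {24: 3, 46: 5}

def l2_cos(dscp, default=0):
    """Return the L2 CoS a frame carries after the pSwitch mapping."""
    return DSCP_TO_COS.get(dscp, default)

l2_cos(24)  # 3
l2_cos(46)  # 5
```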
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
FCoE ("match cos 3") – no-drop policy
Rest ("match any") – best-effort queue
The vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS
[Diagram: CUCM → UCS 6100 → CAT6K, with frames leaving CUCM as L2 CoS 0 / L3 CS3]
• UC blades: network adapters' QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapters' QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without N1Kv:
• All traffic types from a virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
[Diagram: CUCM → N1KV → UCS 6100 → CAT6K, with L2 CoS 0 / L3 CS3 remapped to L2 CoS 3 / L3 CS3]
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
21 copy 2010 Cisco andor its affiliates All rights reserved
Eg 4 physical servers
Each MCS 7800 hosts only
one UC app instance
4 virtual servers (VMrsquos) on 1 physical server
Single virtualized server with total 8 physical
cores hosts all UC app instances
Unity
Connection
Unified CM
VM for
Unified
CM
Sub
Unified CCX
VM for
Unity
Cxn
VM for
Unified
CCX
VM for
Unified
CM
Pub
or
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
Supported VM configurations per UC app (scale in "users" | vCPU cores, usually 2.53+ GHz per core required | vRAM in GB | vDisk in GB | notes):

Unified CM
• 1,000: 2 vCPU, 4 GB vRAM, 1 x 80 GB (UCS C200 or BE6K only)
• 2,500: 1 vCPU, 2.25 GB vRAM, 1 x 80 GB (not for use with C200/BE6K)
• 7,500: 2 vCPU, 6 GB vRAM, 2 x 80 GB
• 10,000: 4 vCPU, 6 GB vRAM, 2 x 80 GB

Unity Connection
• 500: 1 vCPU, 2 GB vRAM, 1 x 160 GB
• 1,000: 1 vCPU, 4 GB vRAM, 1 x 160 GB
• 5,000: 2 vCPU, 4 GB vRAM, 1 x 200 GB
• 10,000: 4 vCPU, 4 GB vRAM, 2 x 146 GB (not for use with C200/BE6K)
• 20,000: 7 vCPU, 8 GB vRAM, 2 x 300 GB

Unified Presence
• 1,000: 1 vCPU, 2 GB vRAM, 1 x 80 GB
• 2,500: 2 vCPU, 4 GB vRAM, 1 x 80 GB (not for use with C200/BE6K)
• 5,000: 4 vCPU, 4 GB vRAM, 2 x 80 GB

Unified CCX
• 100: 2 vCPU, 4 GB vRAM, 2 x 146 GB (UCS C200 or BE6K only)
• 300: 2 vCPU, 4 GB vRAM, 2 x 146 GB (not for use with C200/BE6K)
• 400: 4 vCPU, 8 GB vRAM, 2 x 146 GB

Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for the latest. "Users" means user count for particular values of BHCA, trace level, encryption, CTI and other factors; the actual supportable user count may vary by deployment.
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   SAME RULES for TRC vs. specs-based UCS/HP/IBM
2. Allowed VM OVA choices
   DIFFERENT RULES for TRC vs. specs-based, due to CPU differences
3. Max number of VMs on the same physical server
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS IO bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 (for non-BE6K) no longer has special restrictions on the UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party apps, e.g. N1KV, ARC, SingleWire, vCenter, file/print, directory, CRM/ERP, groupware, non-CUCM TFTP, Nuance, etc.
(Diagram: each physical server may host UC VMs together, but VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus / CTDP apps, or unaffiliated 3rd-party apps must sit on a separate physical server.)
Different blades in the same chassis: OK.
Same blade in the same chassis: not OK.
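The co-residency rule above can be sketched as a small check. This is illustrative only; the app names and set encoding are assumptions, not Cisco policy code.

```python
# Illustrative sketch of the co-residency rule: UC VMs may share a physical
# server only with other supported UC apps; VMware vCenter, Nexus 1KV VSM,
# Solutions Plus / CTDP and unaffiliated 3rd-party apps need a separate
# physical server (a different blade in the same chassis is OK).
UC_APPS = {"CUCM", "CUC", "CUP", "CCX", "CER", "CUOM", "CUSM", "CUSSM", "CUPM"}

def valid_blade_mix(vms):
    """True if every VM on this blade/server is a supported UC app."""
    return all(vm in UC_APPS for vm in vms)

print(valid_blade_mix(["CUCM", "CUC", "CCX"]))   # True: UC with UC is allowed
print(valid_blade_mix(["CUCM", "vCenter"]))      # False: vCenter needs its own server
```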
• App to HW: some apps, e.g. CUCCE, don't allow any of their OVAs on certain TRCs.
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  Why? Usually due to CPU model/speed dependencies.
(Diagram: allowed UCM OVA choices per server type.)
• C200 M2 TRC1 (E5506, 2.13 GHz): UCM 1K only; the UCM 2.5K / 7.5K / 10K OVAs are not allowed.
• C200 M2 specs-based (56xx, 2.53+ GHz): UCM 1K, 2.5K, 7.5K, 10K.
• B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based): UCM 2.5K, 7.5K, 10K (the UCM 1K OVA is for C200/BE6K only).
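A minimal sketch of the CPU-speed dependency described above, with thresholds taken from the slide; the helper itself is hypothetical, not a Cisco tool.

```python
# OVA-to-hardware speed rule (sketch): most CUCM OVAs need 2.53+ GHz per
# core, while the C200 M2 TRC1's E5506 at 2.13 GHz only supports the
# 1000-user OVA. Other placement rules (TRC model, co-residency) are
# deliberately out of scope here.
def ova_allowed(ova_users, cpu_ghz):
    if ova_users <= 1000:          # the "UCM 1K" OVA tolerates the slower E5506
        return cpu_ghz >= 2.13
    return cpu_ghz >= 2.53         # 2.5K / 7.5K / 10K OVAs need 2.53+ GHz cores

print(ova_allowed(1000, 2.13))   # True  on C200 M2 TRC1
print(ova_allowed(7500, 2.13))   # False on C200 M2 TRC1
print(ova_allowed(7500, 2.66))   # True  on B200/C210 M2 TRC (E5640)
```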
(Diagram: VM placement options on physical cores, one vCPU per core, no oversubscription. VM sizes: "Small", "Medium", "Large", "Jumbo".)
Dual-socket 4-core, 8 cores total (e.g. UCS C210 M2 TRC1 with dual E5640):
• Jumbo + 1 core reserved, or
• mixed sizes + 1 core reserved, or
• mixed sizes, or
• 2:1 Large (e.g. UCM 10K), or
• 4:1 Medium (e.g. UCM 7.5K), or
• 8:1 Small (e.g. UCM 2.5K)
Dual-socket 6-core, 12 cores total (e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed):
• mixed sizes + 1 core reserved, or
• mixed sizes, or
• 3:1 Large (e.g. UCM 10K), or
• 6:1 Medium (e.g. UCM 7.5K), or
• 12:1 Small (e.g. UCM 2.5K)
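The packing options above follow from one vCPU per physical core with no oversubscription. A quick sketch of that arithmetic; this is an assumed simplification, since real placement must also respect vRAM, vDisk, and co-residency rules.

```python
# Core-packing math: VMs per server = (cores - reserved cores) // vCPU per VM.
# vCPU counts per OVA come from the sizing table (UCM 10K = 4, 7.5K = 2,
# 2.5K = 1); "reserved" models a core left idle, e.g. for ESXi.
def max_vms(cores, vcpu_per_vm, reserved=0):
    return (cores - reserved) // vcpu_per_vm

print(max_vms(8, 4))             # 2 "Large" (UCM 10K) VMs on a dual E5640 (8 cores)
print(max_vms(8, 2))             # 4 "Medium" (UCM 7.5K) VMs
print(max_vms(8, 1))             # 8 "Small" (UCM 2.5K) VMs
print(max_vms(12, 4))            # 3 Large VMs on a dual-socket 6-core server
print(max_vms(8, 2, reserved=1)) # 3 Medium VMs when one core stays reserved
```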
Virtual Software Switch Options

(Diagram: a VM's vNIC connects through the software switch in the ESXi hypervisor and a CNA carrying FCoE on the UCS B200's vmNIC toward the LAN and SAN.)

VMware vSwitch vs. VMware dvSwitch vs. Cisco Nexus 1KV:
• Scope: host-based (local) / distributed / distributed
• IEEE 802.1Q VLAN tagging: yes / yes / yes
• VLAN visibility: only the local ESXi host / all ESXi hosts / all ESXi hosts
• EtherChannel: yes / yes / yes
• Virtual PortChannel: -- / -- / yes
• QoS marking (DSCP/CoS): -- / -- / yes
• ACL: -- / -- / yes
• SPAN, RADIUS/TACACS+: -- / -- / yes
• VM requirement: no VM needed / no VM needed / VM needed for the VSM

The Nexus 1KV is strongly recommended for UC on UCS B-Series, and not required but recommended for UC on UCS C-Series.
(Diagram: Nexus 1000V VEMs inside each ESXi host, managed by the Nexus 1000V VSM and uplinked to a physical switch.)
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
CUCM marks traffic based on L3 DSCP values; the physical switch maps L3 DSCP to L2 CoS.
The pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed), e.g. for a CTL packet marked L3 CS3:

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

(Diagram: CUCM emits the packet with L2 CoS 0 / L3 CS3; after the CAT6K mapping it carries L2 CoS 3 / L3 CS3.)
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3"): no-drop policy
  Rest ("match any"): best-effort queue
The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS.
(Diagram: CUCM's packet with L2 CoS 0 / L3 CS3 passes through the UCS 6100 to the CAT6K unchanged.)
• UC blades: network adapters' QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapters' QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without the N1Kv:
• All traffic types from a virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
(Diagram: CUCM traffic marked L2 CoS 3 / L3 CS3 by the N1KV, preserved through the UCS 6100 to the CAT6K.)
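The two `mls qos map dscp-cos` commands shown earlier define a DSCP-to-CoS mapping; a toy sketch of that mapping follows. This is illustration only; the mapping is performed by the physical switch or N1Kv, not by application code.

```python
# DSCP-to-CoS mapping mirroring the CAT6K commands above:
# DSCP 24 (CS3 signaling) -> CoS 3, DSCP 46 (EF media) -> CoS 5.
# Unmapped DSCP values fall back to best effort (CoS 0).
DSCP_TO_COS = {24: 3, 46: 5}

def cos_for(dscp, default=0):
    return DSCP_TO_COS.get(dscp, default)

print(cos_for(24))   # 3
print(cos_for(46))   # 5
print(cos_for(0))    # 0 (best effort)
```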
(Diagram: compute layer — UCS 6100 Fabric Interconnects and a UCS 5100 blade chassis running the Nexus 1000V, with 4x10GE uplinks; SAN/storage layer per the Cisco SRND — Cisco SAN switches with FC links to FC storage service processors SP-A/SP-B; 3rd-party storage array layer.)

3rd-party SAN example:
• CUCM VM IOPS ≈ 200; 200 IOPS at 4 KB ≈ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS; 14,000 IOPS per controller; 4 KByte block size
• 14,000 IOPS x 4 KB ≈ 448 Mbps, under the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
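The sizing arithmetic above can be reproduced directly. Figures are approximate, using decimal kilobytes as the slide does.

```python
# IOPS-to-throughput conversion used in the SAN example: N IOPS at a 4 KB
# block size moves N * 4 KB * 8 bits per second.
def iops_to_mbps(iops, block_kb=4):
    return iops * block_kb * 8 / 1000.0   # KB/s -> Mbit/s (decimal units)

print(iops_to_mbps(200))     # ~6.4 Mbps per CUCM VM at ~200 IOPS
print(iops_to_mbps(14000))   # ~448 Mbps per controller at 14,000 IOPS
```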
• All UC deployment models are supported
  No change in the current deployment models; the base deployment models (Single Site, Centralized Call Processing, etc.) are not changing
• VM layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules / latency requirements are the same
  They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS
  Redundancy rules are the same
  Clustering over the WAN / latency numbers
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validate proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM and UCS are supported
  Subject to "common sense" rules, e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
  - MOH live audio stream
  - Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
Migration paths FROM CUCM (System Release | To Unified CM 8.5 | To UC System 8.5):
• 4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
• 5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | NA
• 5.1(3) | 2-hop thru 7.1(3) | NA
• 6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
• 6.1(2) | 2-hop thru 6.1(x)/7.1(x) | NA
• 6.1(3) | 2-hop thru 6.1(x)/7.1(x) | NA
• 6.1(4) | Single hop | NA
• 6.1(5) | Single hop | NA
• 7.0(1) | 2-hop thru 7.1(x) | 2-hop
• 7.1(2) | 2-hop thru 7.1(x) | 2-hop
• 7.1(3) | Single hop | Single hop; multi-stages/BWC supported
• 7.1(5) | Single hop | NA
• 8.0(1) | Single hop | Single hop; multi-stages/BWC supported
• 8.0(2), 8.0(3) | Single hop | NA
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic and calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Features | CUCM | CUC | CUP | CCX
• Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
• VMware vMotion | Y (C) | Partial | Partial | Y (C)
• Resize Virtual Machine | Partial | Partial | Partial | Partial
• VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
• Boot From SAN | Y (C) | Y (C) | Y (C) | Y (C)
• VMware DRS | No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable.
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis and sites to minimize failure impact
  Primary/secondary on different blades, chassis, sites
  On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation
(Columns: Server Hardware | Shared Storage | VMware | Application)
• UC on UCS Tested Reference Configuration: Cisco | 3rd-party | Cisco | Cisco (VCE/Vblock variants supported by VCE)
• UC on UCS Specs-based (including Vblock option): Cisco | 3rd-party | Cisco | Cisco (VCE/Vblock variants supported by VCE)
• 3rd-party VMware (HP, IBM) Specs-based: 3rd-party | 3rd-party | 3rd-party | Cisco
• MCS 7800 Appliances: Cisco | NA | NA | Cisco
• Customer-provided MCS 7800 equivalent: 3rd-party | NA | NA | Cisco
Customer Example – Primary Data Center
OLD vs. NEW:
• Hardware nodes: 62 physical servers (EU + HQ clusters) → approx. 14
• Software version: 6.1(5) & 8.5(1) → 8.5(1)
• UCxn version: 4.2(1) → 8.5(1), 3 pairs, virtualized
• CER: 2.0 / 7.0 → 8.6, virtualized
(Diagram: blades hosting CM PUB, CM SUBs, MOH/TFTP, CER, and UCxn VMs.)
Deployment Model – Data Center 1
(Diagram: blades hosting CM SUB, CM PUB, CM SUB, CM SUB; MOH/TFTP, UCxn, UCxn; CER, UCxn, UCxn, UCxn.)
Deployment Model – Data Center 2
(Diagram: blades hosting CM SUB, CM PUB, CM SUB, CM SUB; MOH/TFTP, UCxn, UCxn; CER, UCxn, UCxn, UCxn.)
Customer Design
(Diagram: PSTN and IP WAN connecting sites of 11K, 3K, and 400 phones through a CUSP SIP proxy to the UC applications — Unified Communications Manager, Unity Connection, Unified Presence, Unified Contact Center Express — running on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch, plus Cisco UCS C210 or C200 general-purpose rack-mount servers.)
HQ Details
(Diagram: six UCS B200 blades, each dual-socket quad-core. CUCM VM OVAs – PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2 – are distributed across the blades; Messaging VM OVAs – UCxn-1 Active and UCxn-2 Active – sit on separate blades with cores left idle for UCxn; Presence VM OVAs – CUP-1, CUP-2 – and Contact Center VM OVAs – UCCX-1, UCCX-2 – are split across blades.)
"Spare" blade slots (slots 7 and 8) are available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
Branch Office Details
(Diagram: rack servers, each dual-socket quad-core. Larger branch – Rack Server 1 hosts SUB-1, CCX-2, PUB, TFTP-1; Rack Server 2 hosts UCxn-1, TFTP-2, SUB-2 with cores left idle for UCxn; Rack Server 3 hosts UCxn-2, CCX-1, CUP with cores left idle for UCxn. Smaller branch – Rack Server 1 hosts PUB/TFTP, CCX-1, CUP; Rack Server 2 hosts SUB, CCX-2, UCxn-1; cores left idle for UCxn. Legend: CUCM VM OVAs, Messaging VM OVAs, Contact Center VM OVAs, Presence VM OVAs.)
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
(Diagram: the storage stacks, block IO over SAN or IP. DAS: Application / File System / Volume Manager / SCSI Device Driver / SCSI Bus Adapter to SCSI media. iSCSI: Application / File System / Volume Manager / SCSI Device Driver / iSCSI Driver / TCP/IP Stack / NIC on the host, to a storage system with NIC / TCP/IP Stack / iSCSI Layer / Bus Adapter. FC SAN: Application / File System / Volume Manager / SCSI Device Driver / FC HBA across the FC SAN to the storage media.)
NAS/SAN Array Best Practices for UC
(Diagram: five 450 GB 15K RPM HDDs form a single RAID5 group with 1.4 TB usable space, carved into LUN 1 (720 GB) holding UC VMs 1–3 (PUB, SUB1, UCCX1) and LUN 2 (720 GB) holding UC VMs 4–6 (UCCX2, CUP1, CUP2).)
• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
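A sanity-check helper mirroring the LUN best practices above. The limits come from the slide; the function itself is an illustrative sketch, not a Cisco tool.

```python
# LUN checks from the best practices: 500 GB - 1.5 TB recommended (always
# < 2 TB), at most 8 UC VMs per LUN, and the VMs' vDisks must fit.
def lun_ok(size_gb, vdisks_gb):
    if not (500 <= size_gb <= 1500):   # recommended size range
        return False
    if len(vdisks_gb) > 8:             # max UC VMs per LUN
        return False
    return sum(vdisks_gb) <= size_gb   # sum of vDisks must fit the LUN

# Like LUN 1 above: PUB (80 GB) + SUB1 (80 GB) + UCCX1 (2 x 146 GB)
print(lun_ok(720, [80, 80, 292]))   # True
print(lun_ok(2500, [80]))           # False: over the 2 TB / 1.5 TB limits
```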
DAS Example: UCS C210 M2 TRC1
(Diagram: HDDs 1–2 (146 GB, 15K RPM) form a single RAID1 volume holding the vSphere ESXi image; HDDs 3–10 (146 GB, 15K RPM each) form a single RAID5 volume of 1022 GB after RAID overhead, carrying a VMFS filestore of 947 GB after VMFS overhead with UC VMs such as PUB, UCCX1, CUP1.)
Notes:
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
DAS Example: UCS C200 M2 TRC1 for BE6K
(Diagram: four 1 TB 7.2K RPM HDDs form a single RAID10 volume of 2 TB after RAID overhead, carrying a VMFS filestore of 1.8 TB after VMFS overhead with the vSphere ESXi image and UC VMs such as PUB, UCCX1, CUP1.)
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download
  The OVA reserves cores, RAM, etc. for the VMs
  Basic rule of thumb: fill up the blade until out of capacity
  If the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription is not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There are usually different VM templates per release, for example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes the product, product version, VMware hardware version, and template version.
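The OVA naming convention above (product, product version, VMware hardware version, template version) can be parsed mechanically; a hypothetical parser as a sketch, assuming the dotted-version form of the filenames:

```python
import re

# Parse names like CUCM_8.5_vmv7_v2.1.ova into their four components.
def parse_ova(name):
    m = re.match(r"(\w+)_(\d+\.\d+)_vmv(\d+)_v(\d+\.\d+)\.ova$", name)
    if not m:
        raise ValueError("unexpected OVA name: " + name)
    product, prod_ver, hw_ver, tmpl_ver = m.groups()
    return {"product": product,
            "product_version": prod_ver,
            "vmware_hw_version": int(hw_ver),
            "template_version": tmpl_ver}

print(parse_ova("CUCM_8.5_vmv7_v2.1.ova"))
# {'product': 'CUCM', 'product_version': '8.5', 'vmware_hw_version': 7, 'template_version': '2.1'}
```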
http://tools.cisco.com/cucst
• Customer-accessible:
  UC on UCS: http://www.cisco.com/go/uconucs and www.cisco.com/go/uc-virtualized and www.cisco.com/go/ucsrnd and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers OS and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers OS and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering Guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
copy 2010 Cisco andor its affiliates All rights reserved 22
Server Model TRC CPU RAM Storage Adapters
UCS B200 M2 Blade Server TRC 1
Dual E5640 (8 physical cores total)
48 GB DAS (RAID1) for
VMware FC SAN for UC apps
Cisco VIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB Diskless Cisco VIC
UCS B230 M2 Blade Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB Diskless Cisco VIC
UCS B440 M2 Blade Server
TRC1 Dual E7-4870 (40
physical cores total)
256 GB Diskless Cisco VIC
UCS C260 M2 Rack-Mount Server
TRC 1 Dual E7-2870 (20
physical cores total) 128 GB DAS (2x RAID5) 1GbE NIC
UCS C210 M2 General-Purpose
Rack-Mount Server TRC 1
Dual E5640 (8 physical cores total)
48 GB
DAS (2 disks RAID1) for VMware + DAS (8 disks RAID5) for
UC apps
1GbE NIC
TRC 2 Dual E5640 (8
physical cores total) 48 GB
DAS (2 disks RAID1) for VMware FC SAN
for UC apps
1GbE NIC and 4G FC HBA
TRC 3 Dual E5640 (8
physical cores total) 48 GB Diskless
1GbE NIC and 4G FC HBA
UCS C200 M2
General-Purpose Rack-Mount Server
TRC 1 Dual E5506 (8
physical cores total) 24 GB
DAS (4 disks RAID10) for VMware
+ UC apps 1GbE NIC
copy 2010 Cisco andor its affiliates All rights reserved 23
UC app Scale
(ldquousersrdquo)
vCPU (cores) Usually 253+ GHz
per core required
vRAM
(GB)
vDisk
(GB)
Notes
Unified
CM
1000 2 4 1 x 80 UCS C200 or BE6K only
2500 1 225 1 x 80 Not for use with C200BE6K
7500 2 6 2 x 80
10000 4 6 2 x 80
Unity
Connection
500 1 2 1 x 160
1000 1 4 1 x 160
5000 2 4 1 x 200
10000 4 4 2 x 146 Not for use with C200BE6K
20000 7 8 2 x 300
Unified
Presence
1000 1 2 1 x 80
2500 2 4 1 x 80 Not for use with C200BE6K
5000 4 4 2 x 80
Unified CCX 100 2 4 2 x 146 UCS C200 or BE6K only
300 2 4 2 x 146 Not for use with C200BE6K
400 4 8 2 x 146
Not exhaustive subject to change see wwwciscocomgouc-virtualized for latest
ie user count for particular values of BHCA trace level encryption CTI and other factors Actual
supportable user count may vary by deployment
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details

(Blade layout diagram; each blade has two 4-core CPUs.)
Blades 1-6 host the UC VMs:
• CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2
• Messaging VM OVAs: UCxn-1 (Active), UCxn-2 (Active) – leave cores idle for UCxn
• Presence VM OVAs: CUP-1, CUP-2
• Contact Center VM OVAs: UCCX-1, UCCX-2
Blade Slots 7 and 8: "spare" blade slots available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
Site Details (rack servers; each with two 4-core CPUs):
• Rack Server 1: PUB, TFTP-1, SUB-1, CCX-2 – leave cores idle for UCxn
• Rack Server 2: UCxn-1, TFTP-2, SUB-2 – leave cores idle for UCxn
• Rack Server 3: UCxn-2, CCX-1, CUP

Branch Office Details (rack servers; each with two 4-core CPUs):
• Rack Server 1: PUB/TFTP, CCX-1, CUP, UCxn-1 – leave cores idle for UCxn
• Rack Server 2: SUB, CCX-2, UCxn-1 – leave cores idle for UCxn

(Legend: CUCM, Messaging, Contact Center, and Presence VM OVAs.)
• DAS: Rack-mount server (Cisco C-Series)
• Popular DAS protocol: SCSI
• iSCSI: access SCSI storage media using an IP network
• Fibre Channel: the most popular SAN protocol today
  Cable distance: ~2 km
  Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP

(Diagram: block I/O protocol stacks.
DAS/SCSI: Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter.
iSCSI: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC on the host; NIC → TCP/IP Stack → iSCSI Layer → Bus Adapter on the storage side; block I/O over the IP network.
FC SAN: Application → File System → Volume Manager → SCSI Device Driver → FC HBA; block I/O over the FC SAN to the storage transport and storage media.)
HDDs 1-5: 450 GB, 15K RPM
Single RAID5 group (1.4 TB usable space), carved into:
• LUN 1 (720 GB): PUB, SUB1, UCCX1 (UC VM 1-3)
• LUN 2 (720 GB): UCCX2, CUP1, CUP2 (UC VM 4-6)
• 4 to 8 UC VMs per LUN (max dependent on sum of vDisks)

NAS/SAN Array Best Practices for UC:
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
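The LUN rules above (under 2 TB, 500 GB to 1.5 TB recommended, a handful of UC VMs per LUN bounded by the sum of their vDisks) lend themselves to a quick sanity check. The helper below is our own illustrative sketch, not a Cisco tool; the thresholds are taken from the slide.

```python
# Hypothetical checker for the NAS/SAN LUN best practices listed above.

def check_lun(lun_gb, vdisks_gb):
    """Return a list of rule violations for one LUN (empty list = OK)."""
    issues = []
    if lun_gb >= 2000:                    # must be < 2 TB per LUN
        issues.append("LUN must be < 2 TB")
    if not (500 <= lun_gb <= 1500):       # recommended 500 GB to 1.5 TB
        issues.append("outside recommended 500 GB to 1.5 TB range")
    if len(vdisks_gb) > 8:                # slide suggests 4 to 8 UC VMs per LUN
        issues.append("more than 8 UC VMs on one LUN")
    if sum(vdisks_gb) > lun_gb:           # max VMs limited by sum of vDisks
        issues.append("sum of vDisks exceeds LUN size")
    return issues

# LUN 1 from the example: 720 GB holding PUB, SUB1, UCCX1 (e.g. 80+80+146 GB vDisks)
print(check_lun(720, [80, 80, 146]))      # no violations -> []
```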
DAS Example: UCS C210 M2 TRC1

HDDs 1-2 (146 GB, 15K RPM): single RAID1 volume, holding the vSphere ESXi image.
HDDs 3-10 (146 GB, 15K RPM): single RAID5 volume (1022 GB after RAID overhead) → VMFS filestore (947 GB after VMFS overhead), holding the UC VMs (e.g. PUB, UCCX1, CUP1 as UC VM 1, 3, 5, ...).

Notes:
• VMFS block size limits max vDisk size
• Could have >1 VMFS datastore on the RAID volume
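The capacity figures in these DAS examples follow directly from the RAID level and disk count. A small arithmetic sketch (our own helper; RAID levels and disk sizes come from the slides):

```python
# Usable capacity before filesystem overhead for the RAID levels used here.

def usable_gb(raid, disks, size_gb):
    if raid == "RAID1":     # mirrored pair: half the raw space
        return size_gb * disks // 2
    if raid == "RAID5":     # one disk's worth of capacity lost to parity
        return size_gb * (disks - 1)
    if raid == "RAID10":    # mirrored stripes: half the raw space
        return size_gb * disks // 2
    raise ValueError(raid)

print(usable_gb("RAID5", 8, 146))    # C210 M2 TRC1, HDDs 3-10 -> 1022 GB
print(usable_gb("RAID1", 2, 146))    # C210 M2 TRC1, HDDs 1-2 (ESXi) -> 146 GB
print(usable_gb("RAID10", 4, 1000))  # C200 M2 TRC1 for BE6K -> 2000 GB (2 TB)
```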
DAS Example: UCS C200 M2 TRC1 for BE6K

HDDs 1-4 (1 TB, 7.2K RPM): single RAID10 volume (2 TB after RAID overhead) → VMFS filestore (1.8 TB after VMFS overhead), holding the vSphere ESXi image and the UC VMs (e.g. PUB, UCCX1, CUP1 as UC VM 1, 3, 5, ...).
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for latest
• Based on supported OVAs for download; the OVA reserves cores, RAM, etc. for the VMs
  Basic rule of thumb: fill up the blade until out of capacity
  If the blade contains a VM for messaging, must reserve a core for ESXi
• Hardware oversubscription not supported
Virtual Machine Sizing
• Virtual machine virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release. For example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  Includes product, product version, VMware hardware version, and template version
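The naming convention above packs all four identifiers into the filename, so it can be pulled apart mechanically. A small parser sketch (our own code, with the dotted filenames assumed to follow the pattern just described):

```python
# Split an OVA filename of the form PRODUCT_APPVER_HWVER_TMPLVER.ova
# into its four components.

def parse_ova(name):
    product, app_ver, hw_ver, tmpl_ver = name.removesuffix(".ova").split("_")
    return {
        "product": product,            # e.g. CUCM
        "app_version": app_ver,        # e.g. 8.6
        "vm_hw_version": hw_ver,       # e.g. vmv7 (VMware virtual hardware 7)
        "template_version": tmpl_ver,  # e.g. v1.5
    }

print(parse_ova("CUCM_8.6_vmv7_v1.5.ova"))
```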
http://tools.cisco.com/cucst
• Customer-accessible:
  • UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  • UCS in general: http://www.cisco.com/go/ucs
  • Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  • FlexPods: www.cisconetapp.com
  • Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  • Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  • "CUCM on Virtual Servers" (summary of "what's new/different when virtualized"): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers OS and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers OS and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
UC app           | Scale ("users"*) | vCPU (cores)** | vRAM (GB) | vDisk (GB) | Notes
Unified CM       | 1,000            | 2              | 4         | 1 x 80     | UCS C200 or BE6K only
Unified CM       | 2,500            | 1              | 2.25      | 1 x 80     | Not for use with C200/BE6K
Unified CM       | 7,500            | 2              | 6         | 2 x 80     |
Unified CM       | 10,000           | 4              | 6         | 2 x 80     |
Unity Connection | 500              | 1              | 2         | 1 x 160    |
Unity Connection | 1,000            | 1              | 4         | 1 x 160    |
Unity Connection | 5,000            | 2              | 4         | 1 x 200    |
Unity Connection | 10,000           | 4              | 4         | 2 x 146    | Not for use with C200/BE6K
Unity Connection | 20,000           | 7              | 8         | 2 x 300    |
Unified Presence | 1,000            | 1              | 2         | 1 x 80     |
Unified Presence | 2,500            | 2              | 4         | 1 x 80     | Not for use with C200/BE6K
Unified Presence | 5,000            | 4              | 4         | 2 x 80     |
Unified CCX      | 100              | 2              | 4         | 2 x 146    | UCS C200 or BE6K only
Unified CCX      | 300              | 2              | 4         | 2 x 146    | Not for use with C200/BE6K
Unified CCX      | 400              | 4              | 8         | 2 x 146    |

Not exhaustive, subject to change; see www.cisco.com/go/uc-virtualized for latest.
* i.e. user count for particular values of BHCA, trace level, encryption, CTI, and other factors. Actual supportable user count may vary by deployment.
** Usually 2.53+ GHz per core required.
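Choosing an OVA from this table amounts to picking the smallest template whose capacity covers the deployment. A lookup sketch over a few rows (values transcribed from the table above; always confirm against www.cisco.com/go/uc-virtualized, and this helper is illustrative, not a Cisco tool):

```python
# Subset of the sizing table: (app, users) -> (vCPU, vRAM_GB, vDisk).
OVA_SIZING = {
    ("Unified CM", 1000):  (2, 4,    "1 x 80 GB"),
    ("Unified CM", 2500):  (1, 2.25, "1 x 80 GB"),
    ("Unified CM", 7500):  (2, 6,    "2 x 80 GB"),
    ("Unified CM", 10000): (4, 6,    "2 x 80 GB"),
    ("Unity Connection", 20000): (7, 8, "2 x 300 GB"),
}

def ova_for(app, users):
    """Smallest template whose user capacity covers the requested scale."""
    for capacity in sorted(u for a, u in OVA_SIZING if a == app):
        if users <= capacity:
            return capacity, OVA_SIZING[(app, capacity)]
    raise ValueError(f"no single {app} OVA covers {users} users")

print(ova_for("Unified CM", 6000))   # 6,000 users -> the 7,500-user template
```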
Policy still lives here: http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines#Application_Co-residency_Support_Policy
Three aspects:
1. Allowed app mix on the same physical server
   SAME RULES for TRC vs. specs-based, UCS/HP/IBM
2. Allowed VM OVA choices
   DIFFERENT RULES for TRC vs. specs-based due to CPU differences
3. Max number of VMs on the same physical server
   SAME RULES for TRC vs. specs-based to determine the max, but specs-based might allow more VMs
Note: DAS I/O bottlenecks may prevent very high VM counts even if CPU/RAM are sufficient.
• Which apps can share the same physical server? In general, any UC with UC, from the apps listed at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
  But note some UC apps restrict this, e.g. BE6K, CUCCE. See their rules on their docwiki "child pages".
  NMTG's UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this.
  Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC app mix.
• SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party. E.g. N1KV, ARC, SingleWire, vCenter, File/Print, Directory, CRM/ERP, Groupware, non-CUCM TFTP, Nuance, etc.
(Diagram: each physical server runs UC VMs only. VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus / CTDP apps, or unaffiliated 3rd-party apps must run on a different physical server.
Different blades in the same chassis: OK.
Same blade in the same chassis: not OK.)
• App to HW: some apps (e.g. CUCCE) don't allow any of their OVAs on certain TRCs.
  See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU.
  See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
  Why? Usually due to CPU model/speed dependencies.

(Diagram: which CUCM OVAs run where – C200 M2 TRC1 (E5506, 2.13 GHz) vs. C200 M2 specs-based (56xx, 2.53+ GHz) vs. B200/C210 M2 TRC or specs-based (E5640, 2.66 GHz on TRC; 56xx/75xx, 2.53+ GHz on specs-based) – for the UCM 1K, 2.5K, 7.5K, and 10K OVAs.)
(Diagram: VM packing examples by core count.
Dual-socket 4-core, e.g. UCS C210 M2 TRC1 with dual E5640 – options per server: Jumbo + 1 reserved; or mixed sizes + 1 reserved; or mixed sizes; or 2:1 Large (e.g. UCM 10K); or 4:1 Medium (e.g. UCM 7.5K); or 8:1 Small (e.g. UCM 2.5K).
Dual-socket 6-core, e.g. UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed – options per server: mixed sizes + 1 reserved; or mixed sizes; or 3:1 Large (e.g. UCM 10K); or 6:1 Medium (e.g. UCM 7.5K); or 12:1 Small (e.g. UCM 2.5K).
VM size classes: "Small", "Medium", "Large", "Jumbo".)
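The packing ratios in these diagrams come down to counting reserved cores against the host's core count. The sketch below models that idea; the per-class core counts are illustrative assumptions for this example, not official Cisco figures, and the ESXi-core reservation rule comes from the co-residency slide.

```python
# Illustrative core counts per VM size class (assumed for this sketch).
CORES = {"Small": 1, "Medium": 2, "Large": 4, "Jumbo": 7}

def fits(host_cores, vm_sizes, has_messaging_vm=False):
    """True if the VMs' reserved cores (plus an ESXi core when a messaging
    VM is present) fit on the host."""
    needed = sum(CORES[s] for s in vm_sizes)
    if has_messaging_vm:
        needed += 1                      # core reserved for ESXi
    return needed <= host_cores

# Dual-socket 4-core host (8 cores), "2:1 Large" packing:
print(fits(8, ["Large", "Large"]))                         # True: 4+4 = 8
# Same host cannot take a Jumbo plus a Small with messaging present:
print(fits(8, ["Jumbo", "Small"], has_messaging_vm=True))  # False: 7+1+1 > 8
```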
Virtual Software Switch Options

(Diagram: VM → vNIC → software switch in the ESXi hypervisor → vmNIC → CNA → FCoE → LAN/SAN, e.g. on a UCS B200.)

                          | VMware vSwitch          | VMware dvSwitch     | Cisco Nexus 1KV
Scope                     | Host-based (local)      | Distributed         | Distributed
VLAN tagging              | IEEE 802.1Q             | IEEE 802.1Q         | IEEE 802.1Q
VLAN visibility           | Local ESXi host only    | All ESXi hosts      | All ESXi hosts
EtherChannel              | Yes                     | Yes                 | Yes
Virtual PortChannel       | --                      | --                  | Yes
QoS marking (DSCP/CoS)    | --                      | --                  | Yes
ACL                       | --                      | --                  | Yes
SPAN                      | --                      | --                  | Yes
RADIUS/TACACS+            | --                      | --                  | Yes
VM requirement            | No VM needed            | No VM needed        | VM needed for VSM

Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
(Diagram: Nexus 1000V VEMs running inside each ESXi host, managed by the Nexus 1000V VSM and uplinked to the physical switch (pSwitch).)

• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
Physical switch maps L3 DSCP to L2 CoS

• CUCM marks traffic based on L3 DSCP values (e.g. CTL packets at L3 CS3)
• The pSwitch (CAT6K etc.) can do the mapping from L3 DSCP to L2 CoS (if needed):

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5

(Diagram: CUCM sends frames at L2 CoS 0 / L3 CS3; after the CAT6K mapping they carry L2 CoS 3 / L3 CS3.)
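The two `mls qos map dscp-cos` lines build a DSCP-to-CoS rewrite table on the access switch. The same mapping can be modeled in a few lines (our own illustrative sketch, not switch code):

```python
# DSCP -> CoS table matching the CAT6K config above:
# CS3 (DSCP 24, call signaling) -> CoS 3; EF (DSCP 46, voice) -> CoS 5.
DSCP_TO_COS = {24: 3, 46: 5}

def l2_cos(dscp, default=0):
    """CoS value the access switch would write for a given L3 DSCP value."""
    return DSCP_TO_COS.get(dscp, default)

print(l2_cos(24))   # CUCM signaling (CS3) leaves the switch at CoS 3
print(l2_cos(46))   # voice bearer (EF) leaves at CoS 5
print(l2_cos(0))    # unmarked traffic stays best effort (CoS 0)
```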
• UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink pEthernet switch
• Default QoS settings on UCS:
  FCoE ("match cos 3") – no-drop policy
  Rest ("match any") – best-effort queue
• vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS

(Diagram: CUCM → UCS 6100 → CAT6K, frames leaving at L2 CoS 0 / L3 CS3.)
• UC blades: network adapters' QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapters' QoS policy set to best effort

N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed

Without N1Kv, caveat:
• All traffic types from the virtual UC app will get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable

(Diagram: CUCM → N1KV (marks L2 CoS 3 / L3 CS3) → UCS 6100 → CAT6K.)
Compute Layer / SAN-Storage Layer – Cisco SRND

(Diagram: UCS 5100 blade server running Nexus 1000V → Cisco UCS 6100 Fabric Interconnects (4x10GE links) → Cisco SAN switches (FC) → FC storage with service processors SP-A and SP-B; the storage array is the 3rd-party layer.)

3rd-Party SAN Example
• CUCM VM IOPS ~ 200; 200 IOPS at 4 KB ≈ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS – 14,000 IOPS per controller, 4-KByte block size
• 14,000 IOPS × 4 KB ≈ 448 Mbps, within the 600 Mbps throughput/controller

Result:
• One 4-Gb/s FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
• All UC deployment models are supported
  No change in the current deployment models
  Base deployment models – Single Site, Centralized Call Processing, etc. – are not changing
• VM machine layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions are in place in the UC apps to check if you are running the primary and sub on the same blade
• Clustering-over-the-WAN rules/latency requirements are the same
  Does not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
  Redundancy rules are the same
  Clustering-over-the-WAN latency numbers
  Mega Cluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validating proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM, and UCS are supported
  Subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
  - MOH live audio stream
  - Tape backup/floppy
• New factors to consider for end-to-end QoS design/configuration
FROM CUCM

System Release   | To Unified CM 8.5                | To UC System 8.5
4.x              | Multi-hop thru 6.1(x)/7.1(x)     | Multi-hop thru 6.1(3)
5.1(2)           | Multi-hop thru 6.1(x)/7.1(x)     | N/A
5.1(3)           | 2-hop thru 7.1(3)                | N/A
6.1(1)           | 2-hop thru 6.1(x)/7.1(x)         | 2-hop
6.1(2)           | 2-hop thru 6.1(x)/7.1(x)         | N/A
6.1(3)           | 2-hop thru 6.1(x)/7.1(x)         | N/A
6.1(4)           | Single hop                       | N/A
6.1(5)           | Single hop                       | N/A
7.0(1)           | 2-hop thru 7.1(x)                | 2-hop
7.1(2)           | 2-hop thru 7.1(x)                | 2-hop
7.1(3)           | Single hop                       | Single hop; multi-stages/BWC supported
7.1(5)           | Single hop                       | N/A
8.0(1)           | Single hop                       | Single hop; multi-stages/BWC supported
8.0(2), 8.0(3)   | Single hop                       | N/A
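A migration table like this is naturally a lookup keyed by the starting release. A sketch over a few rows (paths transcribed from the table above; our own helper, not a Cisco tool, and the UC System column is omitted for brevity):

```python
# Partial "To Unified CM 8.5" column from the migration table.
TO_CM_85 = {
    "4.x":    "Multi-hop thru 6.1(x)/7.1(x)",
    "5.1(2)": "Multi-hop thru 6.1(x)/7.1(x)",
    "5.1(3)": "2-hop thru 7.1(3)",
    "6.1(4)": "Single hop",
    "7.1(3)": "Single hop",
    "8.0(1)": "Single hop",
}

def path_to_85(release):
    """Upgrade path to Unified CM 8.5 for a given starting release."""
    return TO_CM_85.get(release, "see full table / release notes")

print(path_to_85("5.1(3)"))   # -> 2-hop thru 7.1(3)
```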
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Features          | CUCM   | CUC    | CUP    | CCX
Clone Virtual Machine  | Y (C)  | Y (C)  | Y (C)  | Y (C)
VMware vMotion         | Y (C)  | Partial| Partial| Y (C)
Resize Virtual Machine | Partial| Partial| Partial| Partial
VMware HA              | Y (C)  | Y (C)  | Y (C)  | Y (C)
Boot From SAN          | Y (C)  | Y (C)  | Y (C)  | Y (C)
VMware DRS             | No     | No     | No     | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 24
Policy still lives here httpdocwikiciscocomwikiUnified_Communications_Virtualization_Sizing_GuidelinesApplication_Co-residency_Support_Policy
Three aspects
1 Allowed App Mix on same physical server
SAME RULES for TRC vs specs-based UCSHPIBM
2 Allowed VM OVA choices
DIFFERENT RULES for TRC vs specs-based due to CPU differences
3 Max number of VMs on same physical server
SAME RULES for TRC vs specs-based to determine max but specs-based might allow more VMs
Note DAS IO bottlenecks may prevent very high VM counts even if CPURAM are sufficient
copy 2010 Cisco andor its affiliates All rights reserved 25
bull Which apps can share the same physical server In general any UC with UC from apps listed at httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this eg BE6K CUCCE See their rules on their docwiki ldquochild pagesrdquo
NMTGrsquos UC Mgmt Suite (CUOM CUSM CUSSM CUPM) counts as a UC app for this
Note UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on UC App Mix
bull SEPARATE PHYSICAL SERVER required for non-UC or 3rd-party Eg N1KV ARC SingleWire vCenter FilePrint Directory CRMERP Groupware non-CUCM TFTP Nuance etc
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
UC VM UC VMhellipVM for
VMware
vCenter
VM for
Nexus
1KV
VSM
VM for
Solutions
Plus
CTDP
app
VM for
Unaffiliated
3rd-party
app
Different blades in same chassis OK
Same blade same chassis not OK
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
• All UC deployment models are supported
No change to the current deployment models
Base deployment models – Single Site, Centralized Call Processing, etc. – are not changing
• VM layout on a blade and/or chassis
Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
No rules or restrictions are in place in the UC apps to check whether you are running the primary and subscriber on the same blade
• Clustering-over-the-WAN rules and latency requirements are the same
They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as on MCS:
Redundancy rules are the same
Clustering-over-the-WAN latency numbers are the same
Mega Cluster is supported in 8.5
Determine the quantity and role of nodes
For HA: no design checks validate proper placement of primary and secondary servers
The CUCCE private network requirement still applies
• Mixed clusters of HP, IBM, and UCS are supported
Subject to “common sense” rules – e.g., don’t make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
– MOH live audio stream
– Tape backup / floppy
• New factors to consider for end-to-end QoS design and configuration
Upgrade paths from CUCM:
System Release | To Unified CM 8.5 | To UC System 8.5
4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | NA
5.1(3) | 2-hop thru 7.1(3) | NA
6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
6.1(2) | 2-hop thru 6.1(x)/7.1(x) | NA
6.1(3) | 2-hop thru 6.1(x)/7.1(x) | NA
6.1(4) | Single hop | NA
6.1(5) | Single hop | NA
7.0(1) | 2-hop thru 7.1(x) | 2-hop
7.1(2) | 2-hop thru 7.1(x) | 2-hop
7.1(3) | Single hop | Single hop; multi-stages/BWC supported
7.1(5) | Single hop | NA
8.0(1) | Single hop | Single hop; multi-stages/BWC supported
8.0(2), 8.0(3) | Single hop | NA
VMware feature support
• VMware feature support varies by application
• Some features are supported with caveats, some partially. For example:
Clone Virtual Machine: “Y (C)” means the VM has to be powered off
vMotion: “Y (C)” means vMotion is supported for live traffic – calls shouldn’t be dropped (but this is not guaranteed); “Partial” means in maintenance mode only

ESXi Feature | CUCM | CUC | CUP | CCX
Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
VMware vMotion | Y (C) | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
Boot from SAN | Y (C) | Y (C) | Y (C) | Y (C)
VMware DRS | No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable:
• The UC application redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact
Place primary and secondary on different blades, chassis, or sites
On the same blade, mix Subs with TFTP/MoH rather than placing just Subs together
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of “new” network types (10 GbE, SAN multi-pathing, etc.)
TAC Support Demarcation | Server Hardware | Shared Storage | VMware | Application
UC on UCS, Tested Reference Configuration | Cisco / VCE | 3rd-party / VCE | Cisco / VCE | Cisco
UC on UCS, Specs-based (including Vblock option) | Cisco / VCE | 3rd-party / VCE | Cisco / VCE | Cisco
3rd-party VMware specs-based (HP, IBM) | 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances | Cisco | NA | NA | Cisco
Customer-provided MCS 7800 equivalent | 3rd-party | NA | NA | Cisco
Customer Example – Primary Data Center
 | OLD | NEW
Hardware nodes | 62 physical servers (EU + HQ clusters) | approx. 14
Software version | 6.1(5) & 8.5(1) | 8.5(1)
Ucxn version | 4.2(1) | 8.5(1) – 3 pairs – virtualized
CER | 2.0, 7.0 | 8.6 – virtualized
(Blade layout: CM PUB, CM SUBs, MOH/TFTP, CER, and UCxn VMs distributed across the blades.)
Deployment Model – Data Center 1
(Blade layout: CM SUB | CM PUB | CM SUB | CM SUB; MOH/TFTP | UCxn | UCxn; CER | UCxn | UCxn | UCxn.)
Deployment Model – Data Center 2
(Blade layout: CM SUB | CM PUB | CM SUB | CM SUB; MOH/TFTP | UCxn | UCxn; CER | UCxn | UCxn | UCxn.)
Customer Design
(Diagram: PSTN and IP WAN connectivity via a CUSP SIP proxy; Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express hosted on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a UCS 6100 Fabric Interconnect switch, plus Cisco UCS C210 or C200 general-purpose rack-mount servers; sites of 11K, 3K, and 400 phones.)
HQ Details
(Diagram: eight dual-socket, quad-core blade slots. Blades 1–4 host the CUCM VM OVAs – PUB, SUB-1 through SUB-8, TFTP-1/TFTP-2 – plus the Presence VM OVAs CUP-1/CUP-2 and the Contact Center VM OVAs UCCX-1/UCCX-2. Blades 5–6 host the messaging VM OVAs UCxn-1 (active) and UCxn-2 (active), each with cores left idle for UCxn.)
“Spare” blade slots are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
(Diagram: dual-socket, quad-core rack servers. Rack servers 1–3 host UCxn-1, UCxn-2, TFTP-1/TFTP-2, PUB, SUB-1/SUB-2, CCX-1/CCX-2, and CUP, with cores left idle for UCxn.)
Branch Office Details
(Diagram: rack server 1 hosts PUB/TFTP, CCX-1, and CUP; rack server 2 hosts SUB and CCX-2; both leave cores idle for UCxn. CUCM, messaging, contact center, and presence VM OVAs as labeled.)
• DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
Cable distance ~2 km
Popular speed: 4 Gb/s
• NAS (Network-Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
(Diagram: protocol stacks compared. DAS/SCSI: Application → File System → Volume Manager → SCSI device driver → SCSI bus adapter. iSCSI: the same stack with an iSCSI driver, iSCSI layer, and TCP/IP stack over NICs, carrying block I/O over IP. FC SAN: the same stack over an FC HBA, carrying block I/O over the SAN from host server through the storage transport to the storage media.)
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
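These best-practice bullets can be sketched as a simple rule check (the thresholds come from the bullets above; the helper `check_lun` and the sample vDisk sizes are illustrative, not from the slide):

```python
# Sketch of the slide's LUN best-practice checks:
# <2 TB hard limit, 500 GB-1.5 TB recommended, 4 to 8 UC VMs per LUN.

def check_lun(size_gb, vdisk_sizes_gb):
    """Return a list of warnings for a proposed LUN layout."""
    problems = []
    if size_gb >= 2000:
        problems.append("LUN must be < 2 TB")
    if not 500 <= size_gb <= 1500:
        problems.append("recommend 500 GB to 1.5 TB per LUN")
    if not 4 <= len(vdisk_sizes_gb) <= 8:
        problems.append("recommend 4 to 8 UC VMs per LUN")
    if sum(vdisk_sizes_gb) > size_gb:
        problems.append("sum of vDisks exceeds LUN capacity")
    return problems

# LUN 1 from the example: 720 GB holding three VMs (illustrative vDisk sizes)
print(check_lun(720, [160, 160, 146]))  # -> ['recommend 4 to 8 UC VMs per LUN']
```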
DAS Example: UCS C210 M2 TRC1
(Layout: ten 146 GB 15K RPM HDDs. HDDs 1–2 form a single RAID1 volume holding the vSphere ESXi image. HDDs 3–10 form a single RAID5 volume (1022 GB after RAID overhead) with a VMFS filestore (947 GB after VMFS overhead) holding UC VMs such as PUB, UCCX1, and CUP1.)
Notes:
• The VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on a RAID volume
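The capacity figures in these DAS examples follow from standard RAID arithmetic; a quick sketch (the function names are illustrative; it assumes RAID5 gives up one disk of capacity, RAID1/RAID10 give up half):

```python
# Rough check of the RAID arithmetic in the C210 M2 TRC1 and C200 M2 TRC1
# (BE6K) DAS examples.

def raid1_usable(disks, size_gb):
    """RAID1/RAID10 keeps half the raw capacity (mirrored pairs)."""
    return disks // 2 * size_gb

def raid5_usable(disks, size_gb):
    """n-disk RAID5 keeps (n-1) disks of data; one disk goes to parity."""
    return (disks - 1) * size_gb

print(raid1_usable(2, 146))    # C210 ESXi boot volume: 146 GB
print(raid5_usable(8, 146))    # C210 data volume: 1022 GB, matching the slide
print(raid1_usable(4, 1000))   # C200 BE6K RAID10 volume: 2000 GB (~2 TB)
```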
DAS Example: UCS C200 M2 TRC1 for BE6K
(Layout: four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) with a VMFS filestore (1.8 TB after VMFS overhead) holding the vSphere ESXi image and UC VMs such as PUB, UCCX1, and CUP1.)
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download
The OVA reserves cores, RAM, etc. for the VMs
Basic rule of thumb: fill up the blade until it is out of capacity
If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
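The fill-the-blade rule above can be sketched as a capacity check (assumptions: OVA vCPU reservations map 1:1 to physical cores per the no-oversubscription bullet, and one core is held back for ESXi when a messaging VM is present; `blade_fits` is an illustrative helper, not a Cisco tool):

```python
# Sketch of the "fill the blade until out of capacity" rule of thumb.

def blade_fits(blade_cores, vm_vcpus, has_messaging_vm):
    """True if the vCPU reservations fit without oversubscription."""
    # Reserve a core for ESXi when a messaging (e.g. Unity Connection) VM
    # is on the blade, per the co-residency guidance.
    usable = blade_cores - 1 if has_messaging_vm else blade_cores
    return sum(vm_vcpus) <= usable  # hardware oversubscription not supported

# Dual-socket quad-core blade (8 cores) with a 4-vCPU UCxn VM:
print(blade_fits(8, [4, 2, 2, 2], has_messaging_vm=True))  # False: 10 > 7
print(blade_fits(8, [4, 2], has_messaging_vm=True))        # True: 6 <= 7
```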
Virtual Machine Sizing
• A virtual machine’s virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There are usually different VM templates per release. For example:
CUCM_8.0_vmv7_v2.1.ova
CUCM_8.5_vmv7_v2.1.ova
CUCM_8.6_vmv7_v1.5.ova
The name includes the product, product version, VMware hardware version, and template version
http://tools.cisco.com/cucst
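The naming convention just described can be illustrated with a small parser. The pattern is inferred only from the three example file names (e.g. CUCM_8.5_vmv7_v2.1.ova); `parse_ova` is a hypothetical helper, not part of any Cisco tooling:

```python
import re

# product _ product-version _ vmv<HW version> _ v<template version> .ova
OVA_RE = re.compile(
    r"(?P<product>\w+?)_(?P<version>[\d.]+)_vmv(?P<hw>\d+)_v(?P<tmpl>[\d.]+)\.ova"
)

def parse_ova(name):
    """Split an OVA file name into its version fields, or None if it doesn't match."""
    m = OVA_RE.fullmatch(name)
    return m.groupdict() if m else None

print(parse_ova("CUCM_8.5_vmv7_v2.1.ova"))
# -> {'product': 'CUCM', 'version': '8.5', 'hw': '7', 'tmpl': '2.1'}
```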
• Customer-accessible:
UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
UCS in general: http://www.cisco.com/go/ucs
Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
FlexPods: www.cisconetapp.com
Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
“CUCM on Virtual Servers” (summary of what’s new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• “Virtualization and UCS 101”: http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS “tech value”: http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central, “Servers, OS and Virtualization”:
http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community, “Servers, OS and Virtualization”:
https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• “What’s new, what’s different when on VMware” customer doc:
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• Which apps can share the same physical server? In general, any UC app with any other UC app from the list at http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
But note some UC apps restrict this, e.g. BE6K and CUCCE – see their rules on their docwiki “child pages”
NMTG’s UC Mgmt Suite (CUOM, CUSM, CUSSM, CUPM) counts as a UC app for this
Note: UCS C200 M2 TRC1 for non-BE6K no longer has special restrictions on the UC app mix
• A SEPARATE PHYSICAL SERVER is required for non-UC or 3rd-party apps, e.g. N1KV, ARC, SingleWire, vCenter, File/Print, Directory, CRM/ERP, Groupware, non-CUCM TFTP, Nuance, etc.
(Diagram: four servers, each hosting UC VMs alongside VMs for VMware vCenter, the Nexus 1KV VSM, Solutions Plus / CTDP apps, and unaffiliated 3rd-party apps. Different blades in the same chassis: OK. Same blade in the same chassis: not OK.)
• App to HW: some apps, e.g. CUCCE, don’t allow any of their OVAs on certain TRCs
See http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
• OVA to HW: some OVAs are deliberately only for use with a particular TRC or CPU
See the co-res policy page and the Notes column in http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)
Why? Usually due to CPU model/speed dependencies
(Diagram: which UCM OVA sizes – 1K, 2.5K, 7.5K, and 10K users – run on the C200 M2 TRC1 (E5506 2.13 GHz), the C200 M2 specs-based (56xx 2.53+ GHz), and the B200/C210 M2 TRC or specs-based (E5640 2.66 GHz on TRC; 56xx/75xx 2.53+ GHz on specs-based).)
(Diagram: VM packing options per blade, using “Small”, “Medium”, “Large”, and “Jumbo” VM sizes, with some cores left idle.
Dual-socket 4-core – e.g., UCS C210 M2 TRC1 with dual E5640: Jumbo + 1 core reserved, or mixed sizes + 1 reserved, or mixed sizes, or 2:1 Large (e.g., UCM 10K), or 4:1 Medium (e.g., UCM 7.5K), or 8:1 Small (e.g., UCM 2.5K).
Dual-socket 6-core – e.g., UCS C210 M2 specs-based with a UC-supported CPU model and minimum speed: mixed sizes + 1 reserved, or mixed sizes, or 3:1 Large (e.g., UCM 10K), or 6:1 Medium (e.g., UCM 7.5K), or 12:1 Small (e.g., UCM 2.5K).)
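The packing ratios in the diagram reduce to integer division over the available cores. A sketch (the per-size vCPU reservations of 1/2/4/8 are assumed to be consistent with the 8:1 / 4:1 / 2:1 ratios quoted for an 8-core blade, not taken from the slide):

```python
# Sketch of the per-blade VM packing ratios from the diagram.
# Assumed vCPU reservations per OVA size class (illustrative).
VCPU = {"small": 1, "medium": 2, "large": 4, "jumbo": 8}

def max_vms(blade_cores, size, reserve_core=False):
    """How many VMs of one size fit, optionally reserving a core (e.g. for ESXi)."""
    usable = blade_cores - (1 if reserve_core else 0)
    return usable // VCPU[size]

for cores in (8, 12):  # dual-socket 4-core vs. dual-socket 6-core
    ratios = {s: max_vms(cores, s) for s in ("small", "medium", "large")}
    print(cores, ratios)
# 8 cores  -> 8 small, 4 medium, 2 large  (8:1 / 4:1 / 2:1)
# 12 cores -> 12 small, 6 medium, 3 large (12:1 / 6:1 / 3:1)
```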
Virtual Software Switch Options
(Diagram: VM → vNIC → software switch in the ESXi hypervisor → vmNIC → CNA → FCoE to LAN and SAN on a UCS B200.)

Feature | VMware vSwitch | VMware dvSwitch | Cisco Nexus 1KV
Scope | Host-based (local) | Distributed | Distributed
VLAN tagging | IEEE 802.1Q | IEEE 802.1Q | IEEE 802.1Q
VLAN visibility | Local ESXi host only | All ESXi hosts | All ESXi hosts
EtherChannel | Yes | Yes | Yes
Virtual PortChannel | -- | -- | Yes
QoS marking (DSCP/CoS) | -- | -- | Yes
ACL | -- | -- | Yes
SPAN | -- | -- | Yes
RADIUS/TACACS+ | -- | -- | Yes
Management VM | No VM needed | No VM needed | VM needed for VSM

Nexus 1KV: strongly recommended for UC on UCS B-Series; not required but recommended for UC on UCS C-Series.
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
(Diagram: two ESXi hosts, each running a Nexus 1000V VEM, uplinked to a pSwitch and managed by the Nexus 1000V VSM.)
Physical switch maps L3 DSCP to L2 CoS
• CUCM marks traffic based on L3 DSCP values
• The pSwitch (CAT6K, etc.) can map L3 DSCP to L2 CoS if needed – e.g., for the CTL packet (L3 CS3):
dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
(Diagram: CUCM sends L2 CoS 0 / L3 CS3; the CAT6K remarks to L2 CoS 3 / L3 CS3.)
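For these two codepoints the configured mapping matches the conventional default, where the CoS value is the top three bits of the DSCP (CS3 = 24 → CoS 3, EF = 46 → CoS 5). A quick check:

```python
# DSCP-to-CoS for class selectors and EF: CoS is the three
# most-significant bits of the 6-bit DSCP field.

def default_cos(dscp):
    """CoS derived from the top three DSCP bits."""
    return dscp >> 3

print(default_cos(24))  # CS3 -> 3 (matches "mls qos map dscp-cos 24 to 3")
print(default_cos(46))  # EF  -> 5 (matches "mls qos map dscp-cos 46 to 5")
```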
• The UCS 6100 doesn’t look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
FCoE (“match cos 3”) – no-drop policy
Rest (“match any”) – best-effort queue
• The vSwitch & UCS 6100 cannot map L3 DSCP to L2 CoS
(Diagram: CUCM → UCS 6100 → CAT6K; L2 CoS 0 / L3 CS3.)
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 26
bull App to HW some apps eg CUCCE donrsquot allow any of their OVAs on certain TRCs
See httpdocwikiciscocomwikiUnified_Communications_Virtualization_Supported_Applications
bull OVA to HW Some OVAs are deliberately only for use with a particular TRC or CPU
See co-res policy page and Notes column in httpdocwikiciscocomwikiUnified_Communications_Virtualization_Downloads_28including_OVAOVF_Templates29
Why Usually due to CPU modelspeed dependencies
C200 M2 TRC1
(E5506 213 GHz)
UCM
25K
UCM
75K
UCM
10K
C200 M2 Specs-based
(56xx 253+GHz )
B200C210 M2 TRC or Specs-based
(E5640 266 GHz on TRC
56xx75xx 253+ GHz on specs-based)
UCM
25K
UCM
75K
UCM
10K
UCM
1K
UCM
1K
UCM
25K
UCM
75K
UCM
10K
UCM
1K
copy 2010 Cisco andor its affiliates All rights reserved 27
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
Core
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
Dual-socket 4-core
Eg UCS C210 M2 TRC1
with dual E5640
Dual-socket 6-core
Eg UCS C210 M2 Specs-based
with UC-supported CPU model
and min speed
VM VM V
M
V
M
VM VM VM V
M
Idle
VM
VM
VM
Jumbo + 1 reserved
or
Mixed sizes + 1 reserved
or
Mixed sizes
or
21 Large eg UCM 10K
or
41 Med eg UCM 75K
or
81 Small eg UCM 25K
ldquoSmallrdquo VM
ldquoMediumrdquo VM
ldquoLargerdquo VM
ldquoJumbordquo VM
VM
VM
Idle
VM VM VM VM
V
M
V
M
V
M
V
M
V
M
V
M
V
M
V
M
VM VM
VM VM V
M
V
M
VM VM VM V
M
Idle
Mixed sizes + 1 reserved
or
Mixed sizes
or
31 Large eg UCM 10K
or
61 Med eg UCM 75K
or
121 Small eg UCM 25K
VM
Idle
V
M
V
M
V
M
V
M
VM VM
VM
VM VM
VM
VM VM
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveats without N1Kv:
• All traffic types from a virtualized UC application get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
(Diagram: CUCM → N1KV → UCS 6100 → CAT6K.)
(Diagram: compute layer – UCS 5100 blade servers with Nexus 1000V behind UCS 6100 Fabric Interconnects with 4x10GE uplinks, per the Cisco SRND; SAN/storage layer – Cisco SAN switches with FC links to a 3rd-party FC storage array with service processors SP-A and SP-B.)
3rd Party SAN Example:
• CUCM VM IOPS ≈ 200; at a 4 KB block size, 200 IOPS ≈ 6.4 Mbps per VM
• Total array capacity: 28,000 IOPS (14,000 IOPS per controller, 4 KB block size)
• 14,000 IOPS × 4 KB ≈ 448 Mbps, within the 600 Mbps throughput per controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
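The throughput arithmetic above is easy to reproduce. A minimal sketch — the `iops_to_mbps` helper is hypothetical; the 4 KB block size and IOPS figures come from the example:

```python
def iops_to_mbps(iops: int, block_kb: int = 4) -> float:
    """Convert an IOPS rating at a given block size to megabits per second."""
    return iops * block_kb * 8 / 1000.0  # KB/s times 8 bits, decimal units

print(iops_to_mbps(200))    # one CUCM VM at ~200 IOPS: 6.4 Mbps
print(iops_to_mbps(14000))  # one controller: 448 Mbps, under its 600 Mbps limit
```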
• All UC deployment models are supported
  No change to the current deployment models; the base models (Single Site, Centralized Call Processing, etc.) are not changing
• VM layout on a blade and/or chassis
  Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules
  No rules or restrictions in the UC apps check whether you are running the primary and subscriber on the same blade
• Clustering-over-WAN rules and latency requirements are the same
  They do not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
  Redundancy rules are the same
  Clustering-over-the-WAN latency numbers are unchanged
  Mega Cluster is supported in 8.5
  Determine the quantity and role of nodes
  For HA, no design checks validate proper placement of primary and secondary servers
  CUCCE private network requirement
• Mixed clusters of HP, IBM, and UCS are supported
  Subject to "common sense" rules – e.g. don't make the Pub or Primary less powerful than the Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
  – MOH live audio stream
  – Tape backup / floppy
• New factors to consider for end-to-end QoS design and configuration
FROM CUCM

| System Release | To Unified CM 8.5 | To UC System 8.5 |
|---|---|---|
| 4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3) |
| 5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | N/A |
| 5.1(3) | 2-hop thru 7.1(3) | N/A |
| 6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop |
| 6.1(2) | 2-hop thru 6.1(x)/7.1(x) | N/A |
| 6.1(3) | 2-hop thru 6.1(x)/7.1(x) | N/A |
| 6.1(4) | Single hop | N/A |
| 6.1(5) | Single hop | N/A |
| 7.0(1) | 2-hop thru 7.1(x) | 2-hop |
| 7.1(2) | 2-hop thru 7.1(x) | 2-hop |
| 7.1(3) | Single hop | Single hop; multi-stage/BWC supported |
| 7.1(5) | Single hop | N/A |
| 8.0(1) | Single hop | Single hop; multi-stage/BWC supported |
| 8.0(2), 8.0(3) | Single hop | N/A |
VMware feature support:
• VMware feature support varies by application
• Some features are supported with caveats, some partially. For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic – calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

| ESXi Feature | CUCM | CUC | CUP | CCX |
|---|---|---|---|---|
| Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C) |
| VMware vMotion | Y (C) | Partial | Partial | Y (C) |
| Resize Virtual Machine | Partial | Partial | Partial | Partial |
| VMware HA | Y (C) | Y (C) | Y (C) | Y (C) |
| Boot From SAN | Y (C) | Y (C) | Y (C) | Y (C) |
| VMware DRS | No | No | No | No |

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
Current business continuity and disaster recovery strategies are still applicable:
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact
  Place primary and secondary on different blades, chassis, and sites
  On the same blade, mix Subs with TFTP/MoH rather than placing just Subs
• Provide redundancy of UCS components (blades, chassis, FEX links, Interconnect switching)
• Provide redundancy of the "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation ("VCE" applies to Vblock deployments)

| Deployment | Server Hardware | Shared Storage | VMware | Application |
|---|---|---|---|---|
| UC on UCS Tested Reference Configuration | Cisco / VCE | 3rd-party / VCE | Cisco / VCE | Cisco |
| UC on UCS Specs-based (including Vblock option) | Cisco / VCE | 3rd-party / VCE | Cisco / VCE | Cisco |
| 3rd-party VMware Specs-based (HP, IBM) | 3rd-party | 3rd-party | 3rd-party | Cisco |
| MCS 7800 Appliances | Cisco | N/A | N/A | Cisco |
| Customer-provided MCS 7800 equivalent | 3rd-party | N/A | N/A | Cisco |
Customer Example – Primary Data Center

| | OLD | NEW |
|---|---|---|
| Hardware nodes | 62 physical servers (EU/HQ clusters) | ~14 |
| Software version | 6.1(5) & 8.5(1) | 8.5(1) |
| UCxn version | 4.2(1) | 8.5(1) – 3 pairs – virtualized |
| CER | 2.0, 7.0 | 8.6 – virtualized |

(Layout: CM PUB, CM SUBs, MOH/TFTP, CER, and UCxn nodes distributed across the virtualized hardware.)
Deployment Model – Data Center 1
(Layout: CM PUB and CM SUBs, MOH/TFTP, CER, and UCxn nodes across the Data Center 1 blades.)
Deployment Model – Data Center 2
(Layout: CM PUB and CM SUBs, MOH/TFTP, CER, and UCxn nodes across the Data Center 2 blades.)
Customer Design
(Diagram: PSTN and IP WAN feed a CUSP SIP proxy; Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express run on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch at the main site (11K phones); Cisco UCS C210 or C200 general-purpose rack-mount servers serve the smaller sites (3K phones and 400 phones).)
HQ Details
(Blade layout, two 4-core CPUs per blade: CUCM VM OVAs – PUB, SUB-1 through SUB-8, TFTP-1/TFTP-2; Messaging VM OVAs – UCxn-1 and UCxn-2 active, with cores left idle for UCxn; Contact Center VM OVAs – UCCX-1/UCCX-2; Presence VM OVAs – CUP-1/CUP-2; all packed across Blades 1–6, with blade slots 7–8 spare.)
"Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
Branch Office Details
(Rack-server layout, two 4-core CPUs per server. Larger branch, Rack Servers 1–3: PUB, SUB-1/SUB-2, TFTP-1/TFTP-2, UCxn-1/UCxn-2, CCX-1/CCX-2, and CUP, with cores left idle for UCxn. Smaller branch, Rack Servers 1–2: PUB/TFTP, SUB, UCxn-1, CCX-1/CCX-2, and CUP, again with cores left idle for UCxn. CUCM, Messaging, Contact Center, and Presence VM OVAs as at HQ.)
• DAS: direct-attached storage in a rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed: 4 Gb/s
• NAS (Network-Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
(Diagram: host-side software stacks. DAS: application → file system → volume manager → SCSI device driver → SCSI bus adapter. iSCSI: application → file system → volume manager → SCSI device driver → iSCSI driver → TCP/IP stack → NIC. FC SAN: application → file system → volume manager → SCSI device driver → FC HBA. In each case block I/O crosses the storage transport – SCSI bus, IP network, or FC SAN – to reach the storage media.)
NAS/SAN Array Best Practices for UC
(Example: five 450 GB 15K RPM HDDs in a single RAID5 group with ~1.4 TB of usable space, carved into two 720 GB LUNs; LUN 1 holds UC VMs 1–3 – PUB, SUB1, UCCX1 – and LUN 2 holds UC VMs 4–6 – UCCX2, CUP1, CUP2.)
• 4 to 8 UC VMs per LUN (the max depends on the sum of the vDisks)
• LUNs must be <2 TB; 500 GB to 1.5 TB is recommended
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
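A tiny validator makes the LUN rules above concrete. This is a hypothetical sketch — the `check_lun` helper is invented, but the thresholds come straight from the bullets:

```python
def check_lun(size_gb: float, num_vms: int) -> list:
    """Flag deviations from the UC LUN best practices listed above."""
    warnings = []
    if size_gb >= 2000:
        warnings.append("LUN must be smaller than 2 TB")
    elif not 500 <= size_gb <= 1500:
        warnings.append("recommended LUN size is 500 GB to 1.5 TB")
    if not 4 <= num_vms <= 8:
        warnings.append("aim for 4-8 UC VMs per LUN (max depends on sum of vDisks)")
    return warnings

print(check_lun(720, 6))   # a 720 GB LUN with 6 VMs: no warnings
print(check_lun(2048, 3))  # oversized and under-packed: two warnings
```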
DAS Example: UCS C210 M2 TRC1
(Ten 146 GB 15K RPM HDDs: HDDs 1–2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3–10 form a single RAID5 volume of 1022 GB after RAID overhead, carrying a VMFS filestore of 947 GB after VMFS overhead with UC VMs such as PUB, UCCX1, and CUP1.)
Notes:
• The VMFS block size limits the max vDisk size
• You could have >1 VMFS datastore on a RAID volume
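The usable-capacity figures in these DAS examples follow from simple RAID arithmetic. A sketch under the usual assumptions — RAID5 loses one disk's capacity to parity, RAID1/RAID10 lose half to mirroring; the helper name is made up:

```python
def raid_usable_gb(level: str, disks: int, disk_gb: float) -> float:
    """Approximate usable capacity of a RAID volume, before filesystem overhead."""
    if level in ("RAID1", "RAID10"):
        return disks * disk_gb / 2    # every block is mirrored once
    if level == "RAID5":
        return (disks - 1) * disk_gb  # one disk's worth of parity
    raise ValueError("unsupported RAID level: " + level)

print(raid_usable_gb("RAID5", 8, 146))    # C210 M2 TRC1 data volume: 1022 GB
print(raid_usable_gb("RAID10", 4, 1000))  # C200 M2 BE6K volume: 2000 GB (~2 TB)
```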
DAS Example: UCS C200 M2 TRC1 for BE6K
(Four 1 TB 7.2K RPM HDDs form a single RAID10 volume of 2 TB after RAID overhead, carrying a VMFS filestore of 1.8 TB after VMFS overhead with the vSphere ESXi image and UC VMs such as PUB, UCCX1, and CUP1.)
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download
  The OVA reserves cores, RAM, etc. for the VMs
  Basic rule of thumb: fill up the blade until it is out of capacity
  If the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription is not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes the product, product version, VMware hardware version, and template version
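Since the OVA file name encodes product, product version, VMware hardware version, and template version, it can be split mechanically. A hypothetical parser — the dotted file name below is illustrative of the convention, not an exact published name:

```python
def parse_ova_name(filename: str) -> dict:
    """Split an OVA name of the form <product>_<version>_<hw version>_<template>.ova."""
    stem = filename[:-len(".ova")] if filename.endswith(".ova") else filename
    product, product_version, hw_version, template_version = stem.split("_")
    return {
        "product": product,                    # e.g. CUCM
        "product_version": product_version,    # e.g. 8.5
        "vmware_hw_version": hw_version,       # e.g. vmv7
        "template_version": template_version,  # e.g. v2.1
    }

print(parse_ova_name("CUCM_8.5_vmv7_v2.1.ova"))
```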
http://tools.cisco.com/cucst
• Customer-accessible:
  UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
(Diagram: VM-to-core packing options on dual-socket servers, using "Small", "Medium", "Large", and "Jumbo" VM sizes, with some cores left idle or reserved.)
• Dual-socket 4-core (e.g. UCS C210 M2 TRC1 with dual E5640): Jumbo + 1 reserved, or mixed sizes + 1 reserved, or mixed sizes, or 2 × Large (e.g. UCM 10K), or 4 × Medium (e.g. UCM 7.5K), or 8 × Small (e.g. UCM 2.5K)
• Dual-socket 6-core (e.g. UCS C210 M2 Specs-based with a UC-supported CPU model and minimum speed): mixed sizes + 1 reserved, or mixed sizes, or 3 × Large (e.g. UCM 10K), or 6 × Medium (e.g. UCM 7.5K), or 12 × Small (e.g. UCM 2.5K)
copy 2010 Cisco andor its affiliates All rights reserved 28
Virtual Software Switch Options
VM
LAN SAN
ESXi Hypervisor
Software Switch
vNIC
CNA
FCoE
VMware
vSwitch
VMware
dvSwitch
Cisco Nexus
1KV
Host based (local) Distributed Distributed
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
IEEE 8021Q VLAN
tagging
VLANs only visible to
local ESXi host
VLANs visible to all
ESXi hosts
VLANs visible to all
ESXi hosts
EtherChannel EtherChannel EtherChannel
-- -- Virtual PortChannel
-- -- QoS Marking
(DSCPCoS)
-- -- ACL
-- -- SPAN
RADIUSTACACS+
No VM needed No VM needed VM needed for VSM
vmNIC
UCS B200
Strongly recommended for UC on UCS B-Series
Not required but recommended for UC on UCS C-Series
copy 2010 Cisco andor its affiliates All rights reserved 29
pSwitch
ESXi
bull Cisco Software Switch in Hypervisor
bull Familiar network server operations amp management model
bull Enhanced diagnostic amp monitoring capability
bull Visibility direct to VM
ESXi Nexus
1000V
VEM
Nexus
1000V
VEM
Nexus 1000V VSM
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
FROM CUCM
System Release    To Unified CM 8.5                To UC System 8.5
4.x               Multi-hop thru 6.1(x)/7.1(x)     Multi-hop thru 6.1(3)
5.1(2)            Multi-hop thru 6.1(x)/7.1(x)     N/A
5.1(3)            2-hop thru 7.1(3)                N/A
6.1(1)            2-hop thru 6.1(x)/7.1(x)         2-hop
6.1(2)            2-hop thru 6.1(x)/7.1(x)         N/A
6.1(3)            2-hop thru 6.1(x)/7.1(x)         N/A
6.1(4)            Single hop                       N/A
6.1(5)            Single hop                       N/A
7.0(1)            2-hop thru 7.1(x)                2-hop
7.1(2)            2-hop thru 7.1(x)                2-hop
7.1(3)            Single hop                       Single hop; multi-stage/BWC supported
7.1(5)            Single hop                       N/A
8.0(1)            Single hop                       Single hop; multi-stage/BWC supported
8.0(2), 8.0(3)    Single hop                       N/A
VMware Feature Support
• VMware feature support varies by application
• Some features are supported with caveats, some only partially
• For example:
  – Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  – vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Feature             CUCM      CUC       CUP       CCX
Clone Virtual Machine    Y (C)     Y (C)     Y (C)     Y (C)
VMware vMotion           Y (C)     Partial   Partial   Y (C)
Resize Virtual Machine   Partial   Partial   Partial   Partial
VMware HA                Y (C)     Y (C)     Y (C)     Y (C)
Boot From SAN            Y (C)     Y (C)     Y (C)     Y (C)
VMware DRS               No        No        No        No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
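For scripting quick checks against this support matrix, the table can be encoded as a lookup (values transcribed from the matrix above; always confirm against the docwiki for the current release):

```python
# VMware feature support per UC application, transcribed from the table above.
SUPPORT = {
    "Clone Virtual Machine":  {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "VMware vMotion":         {"CUCM": "Y (C)",   "CUC": "Partial", "CUP": "Partial", "CCX": "Y (C)"},
    "Resize Virtual Machine": {"CUCM": "Partial", "CUC": "Partial", "CUP": "Partial", "CCX": "Partial"},
    "VMware HA":              {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "Boot From SAN":          {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "VMware DRS":             {"CUCM": "No",      "CUC": "No",      "CUP": "No",      "CCX": "No"},
}

def supported(feature: str, app: str) -> str:
    """Look up the support level for an ESXi feature on a UC application."""
    return SUPPORT[feature][app]

print(supported("VMware vMotion", "CUC"))  # Partial
```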
Current business continuity and disaster recovery strategies are still applicable.
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact:
  – Primary/secondary on different blades, chassis, or sites
  – On the same blade, mix Subs with TFTP/MoH rather than placing just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
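Since the UC applications perform no built-in placement check, a small script can flag redundancy pairs that share a blade (the node names and blade mapping here are hypothetical, for illustration only):

```python
def placement_violations(node_blade, pairs):
    """Return redundancy pairs whose two nodes land on the same blade."""
    return [(a, b) for a, b in pairs if node_blade[a] == node_blade[b]]

# Hypothetical layout: PUB and SUB-1 form a redundancy pair but share blade1.
layout = {"PUB": "blade1", "SUB-1": "blade1", "SUB-2": "blade2"}
print(placement_violations(layout, [("PUB", "SUB-1"), ("SUB-1", "SUB-2")]))
# [('PUB', 'SUB-1')] -> this pair should be split across blades/chassis/sites
```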
TAC Support Demarcation
Deployment                                Server Hardware   Shared Storage    VMware        Application
UC on UCS – Tested Reference Config       Cisco             3rd-party / VCE   Cisco / VCE   Cisco
UC on UCS – Specs-based (incl. Vblock)    Cisco             3rd-party / VCE   Cisco / VCE   Cisco
3rd-party specs-based VMware (HP, IBM)    3rd-party         3rd-party         3rd-party     Cisco
MCS 7800 appliances                       Cisco             N/A               N/A           Cisco
Customer-provided MCS 7800 equivalent     3rd-party         N/A               N/A           Cisco
Customer Example – Primary Data Center
                    OLD                                       NEW
Hardware nodes      62 physical servers (EU + HQ clusters)    ~14
Software version    6.1.5 & 8.5.1                             8.5.1
UCxn version        4.2.1                                     8.5.1 – 3 pairs – virtualized
CER                 2.0, 7.0                                  8.6 – virtualized

[Blade layout: CM SUB | CM PUB | CM SUB | CM SUB; MOH/TFTP | CER | CM SUB; CER | UCxn | UCxn | CM SUB]
Deployment Model – Data Center 1
[Layout: CM SUB | CM PUB | CM SUB | CM SUB; MOH/TFTP | UCxn | UCxn; CER | UCxn | UCxn | UCxn]
Deployment Model – Data Center 2
[Layout: CM SUB | CM PUB | CM SUB | CM SUB; MOH/TFTP | UCxn | UCxn; CER | UCxn | UCxn | UCxn]
Customer Design
[Diagram: PSTN and IP WAN terminate on a SIP proxy (CUSP) running on a Cisco UCS C210 or C200 general-purpose rack-mount server; Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express run on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch. Phone counts noted in the diagram: 11K, 3K, and 400 phones.]
HQ Details
[Diagram: six UCS blades, each with two 4-core CPUs, plus blade slots 7–8. CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2. Messaging VM OVAs: UCxn-1 (active) and UCxn-2 (active), with neighboring cores left idle for UCxn. Presence VM OVAs: CUP-1, CUP-2. Contact Center VM OVAs: UCCX-1, UCCX-2.]
"Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
[Diagram: three data-center rack servers, each with two 4-core CPUs. Rack Server 1: SUB-1, CCX-2, PUB, TFTP-1. Rack Server 2: UCxn-1, TFTP-2, SUB-2, with cores left idle for UCxn. Rack Server 3: UCxn-2, CCX-1, CUP, with cores left idle for UCxn.]
Branch Office Details
[Diagram: two rack servers, each with two 4-core CPUs. Rack Server 1: PUB/TFTP, CCX-1, CUP. Rack Server 2: SUB, CCX-2. UCxn-1 runs with neighboring cores left idle for UCxn. Legend: CUCM VM OVAs, Messaging VM OVAs, Contact Center VM OVAs, Presence VM OVAs.]
• DAS: direct-attached storage in a rack-mount server (Cisco C-Series)
  – The most popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
  – Cable distance ~2 km
  – Popular speed: 4 Gb/s
• NAS (Network Attached Storage) uses the NFS (Network File System) protocol over TCP/IP

[Diagram: protocol stacks per access method. DAS (SCSI): Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter. iSCSI: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC on the host, then NIC → TCP/IP Stack → iSCSI Layer → Bus Adapter on the storage side. FC SAN: Application → File System → Volume Manager → SCSI Device Driver → FC HBA. Layers shown: Host/Server, Storage Transport (block I/O over SAN or IP), Storage Media.]
NAS/SAN Array Best Practices for UC
[Diagram: five 450 GB 15K RPM HDDs in a single RAID 5 group (1.4 TB usable), split into LUN 1 (720 GB) holding PUB, SUB1, and UCCX1 (UC VMs 1–3) and LUN 2 (720 GB) holding UCCX2, CUP1, and CUP2 (UC VMs 4–6).]
• 4 to 8 UC VMs per LUN (the maximum depends on the sum of the vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K)
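These sizing rules are easy to automate when reviewing a storage design; a sketch with the thresholds taken straight from the best practices above:

```python
def check_lun(size_gb, vm_count):
    """Validate a LUN against the UC NAS/SAN best practices above."""
    problems = []
    if size_gb >= 2000:
        problems.append("LUN must be < 2 TB")
    if not 500 <= size_gb <= 1500:
        problems.append("recommend 500 GB to 1.5 TB per LUN")
    if not 4 <= vm_count <= 8:
        problems.append("aim for 4-8 UC VMs per LUN (max depends on sum of vDisks)")
    return problems

print(check_lun(720, 6))   # []  -> the 720 GB / 6-VM LUN above passes
print(check_lun(2400, 3))  # flags size and VM-count issues
```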
DAS Example: UCS C210 M2 TRC1
[Diagram: ten 146 GB 15K RPM HDDs. HDDs 1–2 form a single RAID 1 volume holding the vSphere ESXi image; HDDs 3–10 form a single RAID 5 volume (1022 GB after RAID overhead) carrying a VMFS filestore (947 GB after VMFS overhead) that holds the UC VMs, e.g. PUB, UCCX1, and CUP1.]
Notes:
• The VMFS block size limits the maximum vDisk size
• You could have more than one VMFS datastore on a RAID volume
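The usable-capacity figures in these DAS examples follow directly from the RAID level; a rough calculator (ignoring the additional filesystem overheads the slides subtract separately):

```python
def usable_gb(disks, disk_gb, level):
    """Approximate usable capacity for common RAID levels."""
    if level == "RAID1":
        return disks // 2 * disk_gb   # mirrored pair(s)
    if level == "RAID5":
        return (disks - 1) * disk_gb  # one disk's worth of parity
    if level == "RAID10":
        return disks // 2 * disk_gb   # striped mirrors: half the raw capacity
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_gb(8, 146, "RAID5"))    # 1022 -> matches the C210 RAID 5 volume
print(usable_gb(4, 1000, "RAID10"))  # 2000 -> matches the C200 BE6K 2 TB volume
```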
DAS Example: UCS C200 M2 TRC1 for BE6K
[Diagram: four 1 TB 7.2K RPM HDDs in a single RAID 10 volume (2 TB after RAID overhead) carrying a VMFS filestore (1.8 TB after VMFS overhead) that holds the vSphere ESXi image and the UC VMs, e.g. PUB, UCCX1, and CUP1.]
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest rules
• Based on the supported OVAs available for download
  – The OVA reserves cores, RAM, etc. for the VMs
  – Basic rule of thumb: fill up the blade until it is out of capacity
  – If the blade contains a VM for messaging, you must reserve a core for ESXi
• Hardware oversubscription is not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template:
  – vCPU, vRAM, vDisk, vNICs
• Capacity:
  – A VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example:
  – CUCM_8.0_vmv7_v2.1.ova
  – CUCM_8.5_vmv7_v2.1.ova
  – CUCM_8.6_vmv7_v1.5.ova
  – The name encodes the product, product version, VMware hardware version, and template version
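That naming convention can be unpacked mechanically; a sketch assuming the `PRODUCT_version_vmvN_vM.ova` pattern of the examples above:

```python
import re

# Pattern assumed from the example names: product, product version,
# VMware hardware version, template version.
OVA = re.compile(
    r"^(?P<product>\w+?)_(?P<version>[\d.]+)_vmv(?P<hw>\d+)_v(?P<template>[\d.]+)\.ova$"
)

def parse_ova(name):
    """Split an OVA file name into its version components, or None if it doesn't match."""
    m = OVA.match(name)
    return m.groupdict() if m else None

print(parse_ova("CUCM_8.5_vmv7_v2.1.ova"))
# {'product': 'CUCM', 'version': '8.5', 'hw': '7', 'template': '2.1'}
```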
http://tools.cisco.com/cucst
• Customer-accessible:
  – UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  – UCS in general: http://www.cisco.com/go/ucs
  – Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  – FlexPods: www.cisconetapp.com
  – Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
  – Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
  – "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS, and Virtualization":
  http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS, and Virtualization":
  https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info:
  http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc:
  http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature:
  http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide:
  http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• Cisco software switch in the hypervisor
• Familiar network server operations & management model
• Enhanced diagnostic & monitoring capability
• Visibility direct to the VM
[Diagram: a physical switch (pSwitch) uplinks ESXi hosts, each running a Nexus 1000V VEM; the VEMs are managed by the Nexus 1000V VSM.]
Physical switch maps L3 DSCP to L2 CoS
• CUCM marks traffic based on L3 DSCP values
• The pSwitch (CAT6K, etc.) can map from L3 DSCP to L2 CoS (if needed)
[Diagram: a CTL packet leaves CUCM marked CS3 at L3 with L2 CoS 0; the CAT6K remarks it to L2 CoS 3, and it stays L2 CoS 3 / L3 CS3 downstream.]

dc1-access-6k(config)# mls qos map dscp-cos 24 to 3
dc1-access-6k(config)# mls qos map dscp-cos 46 to 5
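The mapping behind those two commands can be modeled in a few lines; note that the common default DSCP-to-CoS behavior (taking the top three DSCP bits) already yields the same values for DSCP 24 (CS3) and 46 (EF), which is why the slide qualifies the remapping with "if needed":

```python
def dscp_to_cos(dscp, overrides=None):
    """Map an L3 DSCP value to an L2 CoS: explicit overrides win,
    otherwise fall back to the top three DSCP bits."""
    if overrides and dscp in overrides:
        return overrides[dscp]
    return dscp >> 3

# Mirrors: mls qos map dscp-cos 24 to 3 / mls qos map dscp-cos 46 to 5
table = {24: 3, 46: 5}
print(dscp_to_cos(24, table))  # 3 (CS3 signaling)
print(dscp_to_cos(46, table))  # 5 (EF voice bearer)
```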
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends the packet to the uplink physical Ethernet switch
• Default QoS settings on UCS:
  – FCoE ("match cos 3") – no-drop policy
  – Everything else ("match any") – best-effort queue
• The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS
[Diagram: CUCM → UCS 6100 → CAT6K; the packet leaves still marked L2 CoS 0 / L3 CS3.]
• UC blades: network adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Caveat without N1Kv:
• All traffic types from a virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
[Diagram: CUCM → N1KV → UCS 6100 → CAT6K, with the packet entering at L2 CoS 0 / L3 CS3 and carried as L2 CoS 3 / L3 CS3 after marking.]
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 30
Physical switch maps L3 DSCP to L2 CoS
CUCM marks traffic based on L3 DSCP values
pSwitch (CAT6K etc) can do mapping from L3 DSCP to L2 CoS (if needed)
CTL Packet L3
dc1-access-6k(config)mls qos map dscp-cos 24 to 3
dc1-access-6k(config)mls qos map dscp-cos 46 to 5
CS3
L20 L3CS3
CUCM
CAT6K
L23 L3CS3 L23 L3CS3
copy 2010 Cisco andor its affiliates All rights reserved 31
bull UCS 6100 doesnrsquot look into L3 IP header
bull DSCPToS setting in IP header is not altered by UCS
bull 6100 sends packet to uplink pEthernet switch
bull Default QoS settings on UCS
FCoE (ldquomatch cos 3rdquo) ndash no drop policy
Rest (ldquomatch anyrdquo) ndash Best Effort Queue
vSwitch amp UCS 6100 can not map L3 DSCP to L2 CoS
L20 L3CS3
CUCM
CAT6K
UCS 6100
copy 2010 Cisco andor its affiliates All rights reserved 32
bull UC blades Network Adapters QoS policy set to Platinum (CoS=5 No
Drop)
bull Non-UC blades Network Adapters QoS policy set to best effort
N1Kv Considerations
bull UC sig traffic (CoS3) share queues with FCoE traffic (CoS3)
bull UC sig traffic is given lossless behavior
bull Default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv
Caveat
bull All traffic types from virtual UC App will get CoS value of Platinum
bull Non-UC application gets best-effort class might not be acceptable
L20 L3CS3
L23 L3CS3
L23 L3CS3
CUCM
N1KV
UCS 6100
CAT6K
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack servers (each with two quad-core CPUs):
• Rack Server 1: PUB, SUB-1, TFTP-1, CCX-2
• Rack Server 2: UCxn-1, TFTP-2, SUB-2 – leave the remaining cores idle for UCxn
• Rack Server 3: UCxn-2, CCX-1, CUP – leave the remaining cores idle for UCxn

Branch Office Details
Rack servers (each with two quad-core CPUs):
• Rack Server 1: PUB/TFTP, CCX-1, CUP, UCxn-1 – leave the remaining cores idle for UCxn
• Rack Server 2: SUB, CCX-2, UCxn-1 – leave the remaining cores idle for UCxn
VM OVA types in both layouts: CUCM, Messaging, Contact Center, and Presence.
• DAS: rack-mount server (Cisco C-Series); popular DAS protocol: SCSI
• iSCSI: access SCSI storage media using an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP

Figure: block I/O stacks for DAS, iSCSI, and FC SAN. Each host runs Application → File System → Volume Manager → SCSI Device Driver; DAS then uses a SCSI bus adapter onto the SCSI bus, iSCSI adds an iSCSI driver and iSCSI layer over the TCP/IP stack and NIC (SAN over IP), and FC SAN uses an FC HBA onto the Fibre Channel fabric. The layers divide into host/server, storage transport, and storage media.
NAS/SAN Array Best Practices for UC
Example: five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into two LUNs:
• LUN 1 (720 GB): PUB, SUB1, UCCX1 (UC VMs 1–3)
• LUN 2 (720 GB): UCCX2, CUP1, CUP2 (UC VMs 4–6)
Guidelines:
• 4 to 8 UC VMs per LUN (the max depends on the sum of the vDisks)
• Must be <2 TB per LUN; recommend 500 GB to 1.5 TB
• Use FC-class disks with ~180 IOPS (e.g., 450 GB 15K or 300 GB 15K)
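These array guidelines lend themselves to a mechanical check. A minimal sketch (the helper and the sample vDisk sizes are ours; the thresholds are the ones quoted above):

```python
# Sanity-check one LUN against the UC array guidelines quoted above:
# < 2 TB per LUN (500 GB to 1.5 TB recommended) and 4 to 8 UC VMs per LUN,
# with the VM count capped by the sum of the vDisks.
def check_lun(lun_gb, vdisk_sizes_gb):
    """Return the list of guideline violations for one LUN."""
    problems = []
    if lun_gb >= 2000:
        problems.append("LUN must be < 2 TB")
    elif not 500 <= lun_gb <= 1500:
        problems.append("outside the recommended 500 GB to 1.5 TB range")
    if not 4 <= len(vdisk_sizes_gb) <= 8:
        problems.append("should hold 4 to 8 UC VMs")
    if sum(vdisk_sizes_gb) > lun_gb:
        problems.append("sum of vDisks exceeds the LUN")
    return problems

# A 720 GB LUN holding four hypothetical vDisks passes every rule.
print(check_lun(720, [80, 80, 80, 146]))  # []
```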
DAS Example: UCS C210 M2 TRC1
Ten 146 GB 15K RPM HDDs. HDDs 1–2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3–10 form a single RAID5 volume (1022 GB after RAID overhead) holding a VMFS filestore (947 GB after VMFS overhead) with the UC VMs (e.g., PUB, UCCX1, CUP1 as UC VMs 1, 3, and 5).
Notes:
• VMFS block size limits the max vDisk size
• Could have >1 VMFS datastore on the RAID volume
DAS Example: UCS C200 M2 TRC1 for BE6K
Four 1 TB 7.2K RPM HDDs form a single RAID10 volume (2 TB after RAID overhead); the VMFS filestore (1.8 TB after VMFS overhead) holds the vSphere ESXi image and the UC VMs (e.g., PUB, UCCX1, CUP1 as UC VMs 1, 3, and 5).
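The usable capacities quoted in these two DAS examples follow directly from the RAID arithmetic. A small sketch (decimal GB; disk counts and sizes as above):

```python
# Usable capacity (before filesystem overhead) for the RAID levels used
# in the C210/C200 Tested Reference Configuration examples.
def raid_usable_gb(level, disks, disk_gb):
    if level == "RAID1":    # mirrored pair: half the raw space
        return disks * disk_gb // 2
    if level == "RAID5":    # one disk's worth of parity
        return (disks - 1) * disk_gb
    if level == "RAID10":   # striped mirrors: half the raw space
        return disks * disk_gb // 2
    raise ValueError("unknown RAID level: " + level)

print(raid_usable_gb("RAID5", 8, 146))    # C210 M2 TRC1 data volume: 1022 GB
print(raid_usable_gb("RAID10", 4, 1000))  # C200 M2 TRC1 (BE6K): 2000 GB (~2 TB)
```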
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest
• Based on the supported OVAs for download; each OVA reserves cores, RAM, etc. for its VM
• Basic rule of thumb: fill up the blade until it is out of capacity
• If the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription is not supported
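The "fill up the blade" rule can be sketched as a capacity check (the 8-core blade and the per-VM vCPU reservations below are illustrative assumptions, not values from a Cisco sizing table):

```python
# Check that the vCPU reservations of co-resident UC VMs fit on a blade,
# reserving one core for ESXi when a messaging (Unity Connection) VM is present.
def fits_on_blade(vms, blade_cores=8):
    """vms: list of (name, vcpus, is_messaging) tuples."""
    reserved_for_esxi = 1 if any(m for _, _, m in vms) else 0
    used = sum(v for _, v, _ in vms)
    return used + reserved_for_esxi <= blade_cores

layout = [("PUB", 2, False), ("SUB-1", 2, False), ("UCxn-1", 2, True)]
print(fits_on_blade(layout))  # 2+2+2 vCPUs + 1 ESXi core = 7 <= 8 -> True
```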
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, e.g. CUCM_80_vmv7_v2.1.ova, CUCM_85_vmv7_v2.1.ova, CUCM_86_vmv7_v1.5.ova – the name includes the product, product version, VMware hardware version, and template version
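Given that naming convention, the file name splits cleanly into its fields. A sketch (the field order product_version_vmv_template is inferred from the examples above):

```python
# Split a UC OVA file name like "CUCM_85_vmv7_v2.1.ova" into its parts:
# product, product version, VMware virtual hardware version, template version.
def parse_ova_name(filename):
    stem = filename[:-len(".ova")] if filename.endswith(".ova") else filename
    product, prod_ver, vmv, tmpl_ver = stem.split("_")
    return {
        "product": product,            # e.g. CUCM
        "product_version": prod_ver,   # e.g. 85 (i.e. 8.5)
        "vmware_hw_version": vmv,      # e.g. vmv7
        "template_version": tmpl_ver,  # e.g. v2.1
    }

print(parse_ova_name("CUCM_85_vmv7_v2.1.ova")["vmware_hw_version"])  # vmv7
```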
Cisco Unified Communications Sizing Tool: http://tools.cisco.com/cucst
• Customer-accessible:
• UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
• UCS in general: http://www.cisco.com/go/ucs
• Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
• FlexPods: www.cisconetapp.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• Cisco Unified Service Delivery portal: http://www.cisco.com/go/usd
• "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) portal: http://www.cisco.com/go/iaas
• The UCS 6100 doesn't look into the L3 IP header
• The DSCP/ToS setting in the IP header is not altered by UCS
• The 6100 sends packets to the uplink Ethernet switch
• Default QoS settings on UCS:
FCoE ("match cos 3") – no-drop policy
Rest ("match any") – best-effort queue
• The vSwitch and UCS 6100 cannot map L3 DSCP to L2 CoS
Figure: CUCM → UCS 6100 → CAT6K, with packets marked L2 CoS 0 / L3 CS3 end to end.
• UC blades: network-adapter QoS policy set to Platinum (CoS=5, no drop)
• Non-UC blades: network-adapter QoS policy set to best effort
N1Kv considerations:
• UC signaling traffic (CoS 3) shares queues with FCoE traffic (CoS 3)
• UC signaling traffic is given lossless behavior
• The default CoS value of 3 for FCoE traffic should never be changed
Without N1Kv, the caveats are:
• All traffic types from the virtual UC app get the Platinum CoS value
• Non-UC applications get the best-effort class, which might not be acceptable
Figure: CUCM → N1KV → UCS 6100 → CAT6K, with markings L2 0 / L3 CS3 at the VM and L2 3 / L3 CS3 after the N1Kv.
Compute layer and SAN/storage layer – Cisco SRND
Figure: a UCS 5100 blade server (with Nexus 1000V) connects via 4x10GE links to Cisco UCS 6100 Fabric Interconnects, which connect via FC to Cisco SAN switches and on to FC storage (SP-A / SP-B; 3rd-party layer).

3rd-Party SAN Example:
• CUCM VM IOPS ≈ 200; 200 IOPS × 4 KB ≈ 6.4 Mbps per VM
• Total capacity: 28,000 IOPS
• 14,000 IOPS per controller
• 4 KByte block size
• 14,000 IOPS × 4 KB ≈ 448 Mbps, against 600 Mbps throughput/controller
Result:
• One 4 Gbps FC interface is enough to handle the entire capacity of one storage array
• HA requires four FC interfaces
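The bandwidth figures in this example are just IOPS times block size. A quick sketch of the conversion (decimal units, as on the slide):

```python
# Convert an IOPS rating at a given block size into link bandwidth (Mbit/s),
# using decimal units (1 KB = 1000 bytes, 1 Mbit = 10^6 bits).
def iops_to_mbps(iops, block_kb=4):
    bytes_per_sec = iops * block_kb * 1000   # KB/s -> bytes/s
    return bytes_per_sec * 8 / 1_000_000     # bytes/s -> Mbit/s

print(iops_to_mbps(200))     # one CUCM VM: ~6.4 Mbps
print(iops_to_mbps(14_000))  # one controller: ~448 Mbps, well under a 4 Gbps FC link
```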
• All UC deployment models are supported
No change to the current deployment models; the base deployment models (Single Site, Centralized Call Processing, etc.) are not changing
• VM layout on a blade and/or chassis:
Unity Connection requires one extra CPU core (vCPU) on the blade
• Software checks for design rules:
No rules or restrictions are in place in the UC apps to check whether you are running the primary and sub on the same blade
• Clustering-over-WAN rules and latency requirements are the same
Does not depend on CUCM code or hardware
http://www.cisco.com/go/ucsrnd
• SRND application-layer guidelines are the same as when on MCS:
Redundancy rules are the same
Clustering over the WAN / latency numbers
Megacluster supported in 8.5
Determine the quantity and role of nodes
For HA: no design checks validate proper placement of primary and secondary servers
CUCCE private network requirement
• Mixed clusters of HP, IBM, and UCS are supported
Subject to "common sense" rules – e.g., don't make the Pub or Primary less powerful than a Sub or Secondary
• Direct-attach devices must be on a physical MCS server:
- MOH live audio stream
- Tape backup / floppy
• New factors to consider for end-to-end QoS design and configuration
FROM CUCM
System Release | To Unified CM 8.5 | To UC System 8.5
4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3) | 2-hop thru 7.1(3) | N/A
6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
6.1(2) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(3) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(4) | Single hop | N/A
6.1(5) | Single hop | N/A
7.0(1) | 2-hop thru 7.1(x) | 2-hop
7.1(2) | 2-hop thru 7.1(x) | 2-hop
7.1(3) | Single hop | Single hop; multi-stage/BWC supported
7.1(5) | Single hop | N/A
8.0(1) | Single hop | Single hop; multi-stage/BWC supported
8.0(2), 8.0(3) | Single hop | N/A
VMware Feature Support
• VMware feature support varies by application
• Some features are supported with caveats, some partially
• For example:
Clone Virtual Machine: "Y (C)" means the VM has to be powered off
vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed); "Partial" means in maintenance mode only

ESXi Features | CUCM | CUC | CUP | CCX
Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
VMware vMotion | Y (C) | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
Boot From SAN | Y (C) | Y (C) | Y (C) | Y (C)
VMware DRS | No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
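For scripted compatibility checks, the matrix above restructures naturally as a lookup table (the support codes are copied from the table; the dictionary layout and lookup are ours):

```python
# VMware feature-support matrix from the table above, keyed by ESXi feature.
# "Y (C)" = supported with caveats, "Partial" = maintenance mode only.
SUPPORT = {
    "Clone Virtual Machine":  {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "VMware vMotion":         {"CUCM": "Y (C)",   "CUC": "Partial", "CUP": "Partial", "CCX": "Y (C)"},
    "Resize Virtual Machine": {"CUCM": "Partial", "CUC": "Partial", "CUP": "Partial", "CCX": "Partial"},
    "VMware HA":              {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "Boot From SAN":          {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "VMware DRS":             {"CUCM": "No",      "CUC": "No",      "CUP": "No",      "CCX": "No"},
}

print(SUPPORT["VMware vMotion"]["CUC"])  # Partial
```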
• Current business continuity and disaster recovery strategies are still applicable
• The UC apps' redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact:
Primary/secondary on different blades, chassis, sites
On the same blade, mix Subs with TFTP/MoH rather than just Subs
• Redundancy of UCS components (blade, chassis, FEX links, Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
TAC Support Demarcation

Deployment | Server Hardware | Shared Storage | VMware | Application
UC on UCS, Tested Reference Configuration | Cisco / VCE | 3rd-party / VCE | Cisco / 3rd-party / VCE | Cisco
UC on UCS, Specs-based (including Vblock option) | Cisco / VCE | 3rd-party / VCE | Cisco / 3rd-party / VCE | Cisco
3rd-party Specs-based VMware (HP, IBM) | 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances | Cisco | N/A | N/A | Cisco
Customer-provided MCS 7800 equivalent | 3rd-party | N/A | N/A | Cisco
(VCE entries apply when the hardware is a VCE Vblock.)
Customer Example – Primary Data Center

Item | OLD | NEW
Hardware nodes | 62 physical servers (EU, HQ clusters) | ~14
Software version | 6.1.5 & 8.5.1 | 8.5.1
UCxn version | 4.2.1 | 8.5.1 – 3 pairs – virtualized
CER | 2.0, 7.0 | 8.6 – virtualized

Figure: node layout with CM PUB, CM SUBs, MOH/TFTP, CER, and UCxn servers.
copy 2010 Cisco andor its affiliates All rights reserved 33
Compute Layer
SANStorage
Layer ndash Cisco
SRND
Cisco
UCS 6100
Fabric
Interconnect
UCS 5100
Blade
Server
Cisco SAN
Switch
4x10GE
4x10GE
4x10GE
4x10GE
FC FC
FC FC
Nexus
1000V
FC Storage
SP-A SP-B
3rd party layer
CUCM VM IOPS ~ 200
200 IOPS 4KB ~ 64 Mbps per VM
bull Total capacity 28000 IOPS
bull 14000 IOPS per controller
bull 4 KByte block size
14000 IOPS x (4KB) ~ 428 Mbps
600 Mbps throughputcontroller
3rd Party SAN Example
Result
bull One 4 Gbps FC interface is enough to
handle the entire capacity of one Storage
Array
bull HA requires four FC interfaces
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
- All UC deployment models are supported. There is no change to the current deployment models; the base models (Single Site, Centralized Call Processing, etc.) are not changing.
- VM layout on a blade and/or chassis: Unity Connection requires one extra CPU core (vCPU) on the blade.
- Software checks for design rules: no rules or restrictions in the UC apps check whether you are running the primary and subscriber on the same blade.
- Clustering-over-WAN rules and latency requirements are the same; they do not depend on CUCM code or hardware.
http://www.cisco.com/go/ucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
- SRND application-layer guidelines are the same as when on MCS:
  Redundancy rules are the same.
  Clustering-over-the-WAN latency numbers are unchanged.
  Mega Cluster is supported in 8.5.
  Determine the quantity and role of nodes.
  For HA: there are no design checks validating proper placement of primary and secondary servers.
  The UCCE private-network requirement still applies.
- Mixed clusters of HP, IBM, and UCS are supported, subject to "common sense" rules (e.g., don't make the Publisher or primary less powerful than a Subscriber or secondary).
- Direct-attach devices must be on a physical MCS server: MoH live audio stream; tape backup/floppy.
- New factors to consider for end-to-end QoS design/configuration.
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM

System Release | To Unified CM 8.5 | To UC System 8.5
4.x | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2) | Multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3) | 2-hop thru 7.1(3) | N/A
6.1(1) | 2-hop thru 6.1(x)/7.1(x) | 2-hop
6.1(2) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(3) | 2-hop thru 6.1(x)/7.1(x) | N/A
6.1(4) | Single hop | N/A
6.1(5) | Single hop | N/A
7.0(1) | 2-hop thru 7.1(x) | 2-hop
7.1(2) | 2-hop thru 7.1(x) | 2-hop
7.1(3) | Single hop | Single hop; multi-stage/BWC supported
7.1(5) | Single hop | N/A
8.0(1) | Single hop | Single hop; multi-stage/BWC supported
8.0(2), 8.0(3) | Single hop | N/A
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature Support
- VMware feature support varies by application.
- Some features are supported with caveats, some only partially. For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off.
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed). "Partial" means in maintenance mode only.

ESXi Feature | CUCM | CUC | CUP | CCX
Clone Virtual Machine | Y (C) | Y (C) | Y (C) | Y (C)
VMware vMotion | Y (C) | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA | Y (C) | Y (C) | Y (C) | Y (C)
Boot from SAN | Y (C) | Y (C) | Y (C) | Y (C)
VMware DRS | No | No | No | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
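The feature-support matrix above lends itself to a lookup table, so an operations script could refuse a flatly unsupported action before attempting it. This is a sketch under our own naming (the `SUPPORT` dict simply transcribes the slide's cells):

```python
# Per-application VMware feature support, transcribed from the matrix above.
# "Y (C)" = supported with caveats, "Partial" = maintenance mode only.
SUPPORT = {
    "clone":         {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "vmotion":       {"CUCM": "Y (C)",   "CUC": "Partial", "CUP": "Partial", "CCX": "Y (C)"},
    "resize":        {"CUCM": "Partial", "CUC": "Partial", "CUP": "Partial", "CCX": "Partial"},
    "ha":            {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "boot_from_san": {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "drs":           {"CUCM": "No",      "CUC": "No",      "CUP": "No",      "CCX": "No"},
}

def allowed(feature, app):
    """False only when the matrix says the feature is flatly unsupported ("No")."""
    return SUPPORT[feature][app] != "No"

print(allowed("vmotion", "CUC"), allowed("drs", "CUCM"))  # caveats still pass; DRS never does
```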
copy 2010 Cisco andor its affiliates All rights reserved 39
Current business continuity and disaster recovery strategies are still applicable:
- The UC apps' redundancy rules are the same.
- Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact: place primary/secondary on different blades, chassis, or sites; on the same blade, mix Subscribers with TFTP/MoH rather than placing just Subscribers.
- Provide redundancy for UCS components (blade, chassis, FEX links, Fabric Interconnect switching).
- Provide redundancy for the "new" network types (10GbE, SAN multi-pathing, etc.).
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation

Deployment | Server Hardware | Shared Storage | VMware | Application
UC on UCS, Tested Reference Configuration | Cisco (VCE for Vblock) | 3rd-party (VCE) | Cisco (VCE) | Cisco
UC on UCS, Specs-based (including Vblock option) | Cisco (VCE) | 3rd-party (VCE) | Cisco (VCE) | Cisco
3rd-party VMware (HP, IBM), Specs-based | 3rd-party | 3rd-party | 3rd-party | Cisco
MCS 7800 Appliances | Cisco | N/A | N/A | Cisco
Customer-provided MCS 7800 equivalent | 3rd-party | N/A | N/A | Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example - Primary Data Center

Item | OLD | NEW
Hardware nodes | 62 physical servers (EU + HQ clusters) | approx. 14
Software version | 6.1(5) & 8.5(1) | 8.5(1)
UCxn version | 4.2(1) | 8.5(1), 3 pairs, virtualized
CER | 2.0 / 7.0 | 8.6, virtualized

VM layout:
CM SUB | CM PUB | CM SUB | CM SUB
MOH/TFTP | CER | CM SUB
CER | UCxn | UCxn | CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model - Data Center 1

CM SUB | CM PUB | CM SUB | CM SUB
MOH/TFTP | UCxn | UCxn
CER | UCxn | UCxn | UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model - Data Center 2

CM SUB | CM PUB | CM SUB | CM SUB
MOH/TFTP | UCxn | UCxn
CER | UCxn | UCxn | UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
[Diagram: the PSTN and IP WAN connect through a CUSP SIP proxy to three sites. The main site (11K phones) runs Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express on a Cisco UCS 5108 chassis with UCS B200 blade servers behind UCS 6100 Fabric Interconnect switches; the smaller sites (3K phones and 400 phones) use Cisco UCS C210 or C200 general-purpose rack-mount servers.]
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
[Diagram: VM placement across eight blade slots, each blade with two quad-core CPUs.
- CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2
- Messaging VM OVAs: UCxn-1 (active) and UCxn-2 (active), with the adjacent cores left idle for UCxn
- Presence VM OVAs: CUP-1, CUP-2
- Contact Center VM OVAs: UCCX-1, UCCX-2
"Spare" blade slots (7 and 8) are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.]
copy 2010 Cisco andor its affiliates All rights reserved 46
[Diagram: VM placement across three rack servers, each with two quad-core CPUs.
- Rack Server 1: SUB-1, CCX-2, PUB, TFTP-1
- Rack Server 2: UCxn-1, TFTP-2, SUB-2 (remaining cores left idle for UCxn)
- Rack Server 3: UCxn-2, CCX-1, CUP (remaining cores left idle for UCxn)]
Branch Office Details
[Diagram: VM placement across two rack servers, each with two quad-core CPUs.
- Rack Server 1: PUB/TFTP, CCX-1, CUP, UCxn-1 (remaining cores left idle for UCxn)
- Rack Server 2: SUB, CCX-2, UCxn-1 (remaining cores left idle for UCxn)
Legend: CUCM VM OVAs, Messaging VM OVAs, Contact Center VM OVAs, Presence VM OVAs]
copy 2010 Cisco andor its affiliates All rights reserved 47
- DAS: rack-mount server (Cisco C-Series); the most popular DAS protocol is SCSI.
- iSCSI: access SCSI storage media over an IP network.
- Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; a popular speed is 4 Gb/s.
- NAS (Network-Attached Storage): uses the NFS (Network File System) protocol over TCP/IP.
[Diagram: protocol stacks compared. DAS: Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter. iSCSI: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC, with the storage side terminating the iSCSI layer on its own NIC, TCP/IP stack, and bus adapter (block I/O over an IP SAN). FC SAN: Application → File System → Volume Manager → SCSI Device Driver → FC HBA (block I/O over FC). Each stack spans the host/server, storage transport, and storage media layers.]
copy 2010 Cisco andor its affiliates All rights reserved 48
NAS/SAN Array Best Practices for UC
[Diagram: five 450 GB 15K RPM HDDs in a single RAID5 group (1.4 TB usable space), carved into LUN 1 (720 GB) hosting PUB, SUB1, and UCCX1 (UC VMs 1-3) and LUN 2 (720 GB) hosting UCCX2, CUP1, and CUP2 (UC VMs 4-6).]
- 4 to 8 UC VMs per LUN (the max depends on the sum of the vDisks).
- Must be <2 TB per LUN; recommend 500 GB to 1.5 TB.
- Use FC-class disks with ~180 IOPS (e.g., 450 GB 15K RPM or 300 GB 15K RPM).
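The LUN guidance above can be sketched as a validation helper. The thresholds come from the slide; the function name and the example vDisk sizes are hypothetical:

```python
# Check a proposed LUN against the UC array guidance above:
# recommended size 500 GB to 1.5 TB (well under the 2 TB hard limit),
# at most 8 UC VMs per LUN, and the summed vDisks must fit.
def lun_plan_ok(lun_gb, vdisk_sizes_gb):
    if not 500 <= lun_gb <= 1500:        # recommended LUN size window
        return False
    if len(vdisk_sizes_gb) > 8:          # guideline: 4 to 8 UC VMs per LUN (upper bound)
        return False
    return sum(vdisk_sizes_gb) <= lun_gb # max VMs ultimately limited by sum of vDisks

print(lun_plan_ok(720, [110, 110, 146]))   # a 720 GB LUN with three modest VMs
print(lun_plan_ok(2500, [110, 110, 146]))  # oversized LUN fails the check
```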
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example: UCS C210 M2 TRC1
[Diagram: ten 146 GB 15K RPM HDDs. HDDs 1-2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3-10 form a single RAID5 volume (1,022 GB after RAID overhead) carrying one VMFS filestore (947 GB after VMFS overhead) that hosts PUB, UCCX1, and CUP1 (UC VMs 1, 3, and 5).]
Notes:
- The VMFS block size limits the max vDisk size.
- Could have >1 VMFS datastore on a RAID volume.
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example: UCS C200 M2 TRC1 for BE6K
[Diagram: four 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead) carrying one VMFS filestore (1.8 TB after VMFS overhead) that hosts PUB, UCCX1, and CUP1 (UC VMs 1, 3, and 5) plus the vSphere ESXi image.]
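The usable-capacity figures in both DAS examples follow directly from the RAID level. A minimal sketch of that arithmetic (decimal GB, before VMFS overhead):

```python
# Usable capacity after RAID overhead for the DAS examples above:
# RAID1 keeps one disk's worth, RAID5 loses one disk to parity,
# RAID10 mirrors pairs and so keeps half the disks.
def raid_usable_gb(level, disks, disk_gb):
    if level == "raid1":
        return disk_gb
    if level == "raid5":
        return (disks - 1) * disk_gb
    if level == "raid10":
        return disks // 2 * disk_gb
    raise ValueError(level)

print(raid_usable_gb("raid5", 8, 146))    # C210 M2 TRC1: 1022 GB before VMFS overhead
print(raid_usable_gb("raid10", 4, 1000))  # C200 M2 TRC1 (BE6K): 2000 GB (2 TB)
```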
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
- See www.cisco.com/go/uc-virtualized for the latest rules.
- Sizing is based on the supported OVAs available for download; each OVA reserves cores, RAM, etc. for its VM.
- Basic rule of thumb: fill up the blade until it is out of capacity. If the blade contains a messaging VM, a core must be reserved for ESXi.
- Hardware oversubscription is not supported.
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
- A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs.
- Capacity: a VM template is associated with a specific capacity.
- VM templates are packaged in an OVA file.
- There is usually a different VM template per release, for example: CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova. The name includes the product, product version, VMware hardware version, and template version.
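Since the OVA name encodes product, product version, VMware hardware version, and template version, it can be parsed mechanically. A sketch under that naming convention (the regex and function are ours, not a Cisco tool):

```python
import re

# Matches names like CUCM_8.5_vmv7_v2.1.ova per the convention described above.
OVA_NAME = re.compile(
    r"^(?P<product>\w+)_(?P<version>[\d.]+)_vmv(?P<vm_hw>\d+)_v(?P<template>[\d.]+)\.ova$"
)

def parse_ova(name):
    """Split a UC OVA file name into its labeled components."""
    m = OVA_NAME.match(name)
    if not m:
        raise ValueError("unrecognized OVA name: " + name)
    return m.groupdict()

print(parse_ova("CUCM_8.5_vmv7_v2.1.ova"))
# -> {'product': 'CUCM', 'version': '8.5', 'vm_hw': '7', 'template': '2.1'}
```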
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
- Customer-accessible:
  UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
  "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
  UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
copy 2010 Cisco andor its affiliates All rights reserved 55
- Partner Central "Servers, OS, and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
- Partner Community "Servers, OS, and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
- Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
- "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
- Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
- Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
- Virtual Computing Environment Portal: http://www.vceportal.com
- Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
copy 2010 Cisco andor its affiliates All rights reserved 34
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 35
bull All UC Deployment Models are supported
No change in the current deployment models
Base deployment model ndash Single Site Centralized Call Processing etc are not changing
bull VM machine layout on a blade andor chassis
Unity Connection requires one extra CPU core (vCPU) on blade
bull Software checks for design rules
No rules or restrictions are in place in UC Apps to check if you are running the primary and sub on the same blade
bull CoWAN ruleslatency requirement are same
Does not depend on CUCM code or hardware
httpwwwciscocomgoucsrnd
copy 2010 Cisco andor its affiliates All rights reserved 36
bull SRND application-layer guidelines are same as when on MCS
Redundancy Rules are the same
Clustering over the WANlatency numbers
Mega Cluster supported in 85
Determine quantityrole of nodes
For HA No design checks validating proper placement of primary and secondary servers
CUCCE private network requirement
bull Mixed clusters of HP IBM UCS are supported
Subject to ldquocommon senserdquo rules ndash eg donrsquot make Pub or Primary less powerful than Sub or Secondary
bull Direct attach devices must be on physical MCS server
-MOH Live audio stream
-Tape BackupFloppy
bull New factors to consider for end-to-end QoS designconfiguration
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
Virtual Machine Sizing
• A virtual machine's virtual hardware is defined by a VM template: vCPU, vRAM, vDisk, vNICs
• Capacity: a VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example:
  CUCM_8.0_vmv7_v2.1.ova
  CUCM_8.5_vmv7_v2.1.ova
  CUCM_8.6_vmv7_v1.5.ova
  The name includes the product, product version, VMware hardware version, and template version
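Since the OVA file name encodes the product, product version, VMware hardware version, and template version, it can be split apart mechanically. A hypothetical parser, assuming the naming pattern shown above holds:

```python
import re

# Pattern: <product>_<product version>_vmv<VMware HW version>_v<template version>.ova
# (Regex and field names are our own; the convention is inferred from the examples.)
OVA_RE = re.compile(
    r"^(?P<product>[A-Za-z]+)"
    r"_(?P<version>[\d.]+)"
    r"_vmv(?P<hw_version>\d+)"
    r"_v(?P<template_version>[\d.]+)\.ova$"
)


def parse_ova(name: str) -> dict:
    """Split an OVA file name into its version components."""
    m = OVA_RE.match(name)
    if m is None:
        raise ValueError("unrecognized OVA name: " + name)
    return m.groupdict()


info = parse_ova("CUCM_8.5_vmv7_v2.1.ova")
print(info["product"], info["version"], info["hw_version"], info["template_version"])
# CUCM 8.5 7 2.1
```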
Unified Communications Sizing Tool: http://tools.cisco.com/cucst

• Customer-accessible resources:
  UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  UCS in general: http://www.cisco.com/go/ucs
  Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  FlexPods: www.cisconetapp.com
  Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
• UC on UCS "tech value" (TechWiseTV): http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
• SRND application-layer guidelines are the same as when on MCS:
  Redundancy rules are the same
  Clustering-over-the-WAN latency numbers apply
  Megacluster supported in 8.5
  Determine quantity/role of nodes
  For HA: no design checks validate proper placement of primary and secondary servers
  CUC/CCE private network requirement
• Mixed clusters of HP, IBM, and UCS are supported, subject to "common sense" rules (e.g. don't make the Pub or Primary less powerful than the Sub or Secondary)
• Direct-attach devices must be on a physical MCS server:
  MoH live audio stream
  Tape backup / floppy
• New factors to consider for end-to-end QoS design/configuration
Migration paths FROM CUCM:

System Release  | To Unified CM 8.5            | To UC System 8.5
4.x             | Multi-hop thru 6.1(x)/7.1(x) | Multi-hop thru 6.1(3)
5.1(2)          | Multi-hop thru 6.1(x)/7.1(x) | N/A
5.1(3)          | 2-hop thru 7.1(3)            | N/A
6.1(1)          | 2-hop thru 6.1(x)/7.1(x)     | 2-hop
6.1(2)          | 2-hop thru 6.1(x)/7.1(x)     | N/A
6.1(3)          | 2-hop thru 6.1(x)/7.1(x)     | N/A
6.1(4)          | Single hop                   | N/A
6.1(5)          | Single hop                   | N/A
7.0(1)          | 2-hop thru 7.1(x)            | 2-hop
7.1(2)          | 2-hop thru 7.1(x)            | 2-hop
7.1(3)          | Single hop                   | Single hop; multi-stage/BWC supported
7.1(5)          | Single hop                   | N/A
8.0(1)          | Single hop                   | Single hop; multi-stage/BWC supported
8.0(2), 8.0(3)  | Single hop                   | N/A
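When checking many clusters, the hop table is easier to consult as a lookup. A sketch encoding just the "to Unified CM 8.5" column (the set and function names are our own):

```python
# Releases that can upgrade to Unified CM 8.5 in a single hop,
# per the migration table above.
SINGLE_HOP_TO_CM85 = {
    "6.1(4)", "6.1(5)",
    "7.1(3)", "7.1(5)",
    "8.0(1)", "8.0(2)", "8.0(3)",
}


def direct_upgrade_ok(release: str) -> bool:
    """True if this CUCM release reaches 8.5 without an intermediate hop."""
    return release in SINGLE_HOP_TO_CM85


print(direct_upgrade_ok("8.0(1)"))  # True
print(direct_upgrade_ok("4.x"))     # False  (multi-hop thru 6.1(x)/7.1(x))
```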
VMware Feature Support
• VMware feature support varies by application
• Some features are supported with caveats, some partially. For example:
  Clone Virtual Machine: "Y (C)" means the VM has to be powered off
  vMotion: "Y (C)" means vMotion is supported for live traffic; calls shouldn't be dropped (but this is not guaranteed). "Partial" means in maintenance mode only

ESXi Feature           | CUCM    | CUC     | CUP     | CCX
Clone Virtual Machine  | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware vMotion         | Y (C)   | Partial | Partial | Y (C)
Resize Virtual Machine | Partial | Partial | Partial | Partial
VMware HA              | Y (C)   | Y (C)   | Y (C)   | Y (C)
Boot from SAN          | Y (C)   | Y (C)   | Y (C)   | Y (C)
VMware DRS             | No      | No      | No      | No

Docwiki: http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements
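For automation that gates vSphere operations per application, the matrix above can be encoded directly. A sketch (the dictionary mirrors the table; the helper name is our own):

```python
# VMware feature support matrix from the slide.
# "Y (C)" = supported with caveats; "Partial" = maintenance mode only.
SUPPORT = {
    "Clone Virtual Machine":  {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "VMware vMotion":         {"CUCM": "Y (C)",   "CUC": "Partial", "CUP": "Partial", "CCX": "Y (C)"},
    "Resize Virtual Machine": {"CUCM": "Partial", "CUC": "Partial", "CUP": "Partial", "CCX": "Partial"},
    "VMware HA":              {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "Boot From SAN":          {"CUCM": "Y (C)",   "CUC": "Y (C)",   "CUP": "Y (C)",   "CCX": "Y (C)"},
    "VMware DRS":             {"CUCM": "No",      "CUC": "No",      "CUP": "No",      "CCX": "No"},
}


def usable(feature: str, app: str) -> bool:
    """True unless the matrix says the feature is flatly unsupported."""
    return SUPPORT[feature][app] != "No"


print(usable("VMware vMotion", "CUC"))  # True (but maintenance mode only)
print(usable("VMware DRS", "CUCM"))     # False
```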
• Current Business Continuity and Disaster Recovery strategies are still applicable
• The UC application redundancy rules are the same
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact:
  Primary/secondary on different blades, chassis, or sites
  On the same blade, mix Subs with TFTP/MoH rather than placing just Subs
• Redundancy of UCS components (blades, chassis, FEX links, Fabric Interconnect switching)
• Redundancy of "new" network types (10GbE, SAN multi-pathing, etc.)
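Since no design check validates primary/secondary placement for you, a simple anti-affinity audit is worth scripting. A sketch, assuming a hypothetical node-to-blade map (the data model and names are illustrative only):

```python
# Flag HA pairs whose two nodes land on the same blade, violating the
# "primary/secondary on different blades" rule above.

def placement_violations(placement: dict, ha_pairs: list) -> list:
    """placement maps node name -> blade id; returns the co-located pairs."""
    return [pair for pair in ha_pairs if placement[pair[0]] == placement[pair[1]]]


# Hypothetical layout: TFTP-1 and TFTP-2 were accidentally put on one blade.
layout = {"PUB": "blade-1", "SUB-1": "blade-2", "TFTP-1": "blade-3", "TFTP-2": "blade-3"}
pairs = [("PUB", "SUB-1"), ("TFTP-1", "TFTP-2")]

print(placement_violations(layout, pairs))  # [('TFTP-1', 'TFTP-2')]
```

The same check extends naturally to chassis or site identifiers instead of blade ids.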
TAC Support Demarcation

Deployment                                   | Server Hardware | Shared Storage   | VMware           | Application
UC on UCS Tested Reference Configuration     | Cisco           | 3rd-party or VCE | Cisco or VCE     | Cisco
UC on UCS Specs-based (incl. Vblock option)  | Cisco           | 3rd-party or VCE | Cisco or VCE     | Cisco
3rd-party Specs-based (e.g. HP, IBM)         | 3rd-party       | 3rd-party        | 3rd-party/VMware | Cisco
MCS 7800 Appliances                          | Cisco           | N/A              | N/A              | Cisco
Customer-provided MCS 7800 equivalent        | 3rd-party       | N/A              | N/A              | Cisco
Customer Example – Primary Data Center

                 | OLD                                    | NEW
Hardware Nodes   | 62 physical servers (EU/HQ clusters)   | ~14
Software Version | 6.1(5) & 8.5(1)                        | 8.5(1)
UCxn Version     | 4.2(1)                                 | 8.5(1), 3 pairs, virtualized
CER              | 2.0, 7.0                               | 8.6, virtualized

[Diagram: node placement showing CM PUB, CM SUBs, MoH/TFTP, CER, and UCxn VMs]
Deployment Model – Data Center 1
[Diagram: CM PUB, CM SUBs, MoH/TFTP, CER, and UCxn node placement]

Deployment Model – Data Center 2
[Diagram: CM PUB, CM SUBs, MoH/TFTP, CER, and UCxn node placement]
Customer Design
[Diagram: PSTN and IP WAN connectivity through a CUSP SIP proxy to Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express, serving sites of 11K, 3K, and 400 phones; UC applications hosted on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a UCS 6100 Fabric Interconnect switch, plus Cisco UCS C210 or C200 general-purpose rack-mount servers]
HQ Details
[Diagram: eight dual-CPU, quad-core blade slots. CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1/TFTP-2), Messaging VM OVAs (UCxn-1 Active and UCxn-2 Active, with adjacent cores left idle for UCxn), Presence VM OVAs (CUP-1, CUP-2), and Contact Center VM OVAs (UCCX-1, UCCX-2) distributed across Blades 1–6; Blade Slots 7 and 8 left spare]
"Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V or VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
Branch Office Details
[Diagram: dual-CPU, quad-core rack servers. A three-server site places the CUCM VM OVAs (PUB, SUB-1, SUB-2, TFTP-1, TFTP-2), Messaging VM OVAs (UCxn-1, UCxn-2, with cores left idle for UCxn), Contact Center VM OVAs (CCX-1, CCX-2), and Presence VM OVA (CUP) across Rack Servers 1–3. A two-server site places PUB/TFTP, SUB, CCX-1, CCX-2, CUP, and UCxn-1 (with cores left idle for UCxn) across Rack Servers 1–2]
• DAS: rack-mount server (Cisco C-Series); the popular DAS protocol is SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP
[Diagram: block I/O stacks by transport. DAS: Application / File System / Volume Manager / SCSI Device Driver / SCSI Bus Adapter. iSCSI: Application / File System / Volume Manager / SCSI Device Driver + iSCSI Driver / TCP/IP Stack / NIC on the host, with NIC / TCP/IP Stack / iSCSI Layer / Bus Adapter on the storage side. FC SAN: Application / File System / Volume Manager / SCSI Device Driver / FC HBA. Each stack spans host server, storage transport, and storage media]
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
37 copy 2010 Cisco andor its affiliates All rights reserved
FROM CUCM
System Release To Unified CM 85 To UC System 85
4x Multi Hop thru 61(x)71(x) Multi Hop thru 61(3)
51(2) Multi Hop thru 61(x)71(x) NA
51(3) 2 Hop thru 71(3) NA
61(1) 2 Hop thru 61(x)71(x) 2 Hop
61(2) 2 Hop thru 61(x)71(x) NA
61(3) 2 Hop thru 61(x)71(x) NA
61(4) Single Hop NA
61(5) Single Hop NA
70(1) 2 Hop thru 71(x) 2 Hop
71(2) 2 Hop thru 71(x) 2 Hop
71(3) Single Hop Single Hop Multi stagesBWC supported
71(5) Single Hop NA
80(1) Single Hop Single Hop Multi stagesBWC supported
80(2) 80(3) Single Hop NA
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
38 copy 2010 Cisco andor its affiliates All rights reserved
VMware Feature support bull VMware feature support varies by application
bull Some features are supported with Caveats some Partially
bull For example
bull Clone Virtual Machine
ldquoY (C)rdquo means VM has to be powered off
bull vMotion
ldquoY (C)rdquo means vMotion supported for live traffic calls shouldnrsquot be dropped (but not guaranteed)
ldquoPartialrdquo means in maintenance mode only
ESXi Features CUCM CUC CUP CCX
Clone Virtual Machine Y (C) Y (C) Y (C) Y (C)
VMware vMotion Y (C) Partial Partial Y (C)
Resize Virtual Machine Partial Partial Partial Partial
VMware HA Y (C) Y (C) Y (C) Y (C)
Boot From SAN Y (C) Y (C) Y (C) Y (C)
VMware DRS No No No No
Docwiki httpdocwikiciscocomwikiUnified_Communications_VMware_Requirements
copy 2010 Cisco andor its affiliates All rights reserved 39
Current Business Continuity and Disaster Recovery strategies are still applicable
The UC apps redundancy rules are same
Distribute UC application nodes across UCS blades chassis and sites to minimize failure impact
Primarysecondary on different blade chassis sites
On same blade mix Subs with TFTPMoH vs just Subs
Redundancy of UCS components (blade chassis FEX links Interconnect switching)
Redundancy of ldquonewrdquo network types (10GbE SAN multi-pathing etc)
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/ucservers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html~7
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
Current Business Continuity and Disaster Recovery strategies are still applicable:
• The UC application redundancy rules are the same.
• Distribute UC application nodes across UCS blades, chassis, and sites to minimize failure impact:
  - Place primary/secondary nodes on different blades, chassis, or sites.
  - On the same blade, mix Subscribers with TFTP/MoH rather than placing Subscribers only.
• Provide redundancy for UCS components (blades, chassis, FEX links, Fabric Interconnect switching).
• Provide redundancy for the "new" network types (10GbE, SAN multi-pathing, etc.).
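The distribution rule above (primary and secondary nodes on different blades, chassis, or sites) can be linted automatically. A rough sketch with a made-up placement data model — the node names and field names are illustrative assumptions, not Cisco tooling:

```python
# Hypothetical placement records: where each UC node runs.
placements = {
    "CUCM-PUB":   {"site": "DC1", "chassis": "ch1", "blade": "b1"},
    "CUCM-SUB-1": {"site": "DC1", "chassis": "ch1", "blade": "b1"},  # primary
    "CUCM-SUB-2": {"site": "DC1", "chassis": "ch2", "blade": "b3"},  # secondary
}

def shared_failure_domains(a: dict, b: dict) -> list:
    """Return the failure domains (blade, chassis, site) two nodes share."""
    return [level for level in ("blade", "chassis", "site") if a[level] == b[level]]

# Primary/secondary on different blade AND chassis: only the site is shared.
print(shared_failure_domains(placements["CUCM-SUB-1"], placements["CUCM-SUB-2"]))  # ['site']
# A pair on the same blade shares every domain and should be flagged.
print(shared_failure_domains(placements["CUCM-PUB"], placements["CUCM-SUB-1"]))
```

The fewer domains a redundant pair shares, the smaller the blast radius of any single hardware failure.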
TAC Support Demarcation (who supports each layer):
• UC on UCS Tested Reference Configuration — Server hardware: Cisco; Shared storage: 3rd-party (VCE for Vblock); VMware: Cisco (VCE for Vblock, or 3rd-party); Application: Cisco
• UC on UCS Specs-based (including Vblock option) — Server hardware: Cisco; Shared storage: 3rd-party (VCE for Vblock); VMware: Cisco (VCE for Vblock, or 3rd-party); Application: Cisco
• 3rd-party VMware Specs-based (HP, IBM) — Server hardware: 3rd-party; Shared storage: 3rd-party; VMware: 3rd-party; Application: Cisco
• MCS 7800 Appliances — Server hardware: Cisco; Shared storage: N/A; VMware: N/A; Application: Cisco
• Customer-provided MCS 7800 equivalent — Server hardware: 3rd-party; Shared storage: N/A; VMware: N/A; Application: Cisco
Customer Example – Primary Data Center (OLD → NEW)
• Hardware nodes: 62 physical servers (EU + HQ clusters) → approx. 14
• Software version: 6.1.5 & 8.5.1 → 8.5.1
• UCxn version: 4.2.1 → 8.5.1 (3 pairs, virtualized)
• CER: 2.0 / 7.0 → 8.6 (virtualized)
[VM placement grid: CM PUB, CM SUBs, MoH, TFTP, CER, and UCxn nodes arranged across the virtualized hosts]
Deployment Model – Data Center 1
[VM placement grid: CM PUB, CM SUBs, MoH, TFTP, CER, and UCxn nodes distributed across the Data Center 1 hosts]
Deployment Model – Data Center 2
[VM placement grid: CM PUB, CM SUBs, MoH, TFTP, CER, and UCxn nodes distributed across the Data Center 2 hosts]
Customer Design
[Topology diagram: PSTN and IP WAN link three sites. The main site (11K phones) runs CUSP (SIP Proxy), Unified Communications Manager, Unity Connection, Unified Presence, and Unified Contact Center Express on a Cisco UCS 5108 chassis with UCS B200 blade servers behind a Cisco UCS 6100 Fabric Interconnect switch. The smaller sites (3K and 400 phones) use Cisco UCS C210 or C200 general-purpose rack-mount servers.]
HQ Details
[Blade layout diagram: six populated blades plus Blade Slots 7–8, each blade with two quad-core CPUs. Across the blades sit the CUCM VM OVAs (PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2), the Messaging VM OVAs (UCxn-1 and UCxn-2, both active, with adjacent cores left idle for UCxn), the Presence VM OVAs (CUP-1, CUP-2), and the Contact Center VM OVAs (UCCX-1, UCCX-2). "Spare" blade slots are available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.]
[Rack-server layout diagram (3K-phone site): three rack servers, each with two quad-core CPUs, hosting UCxn-1 and UCxn-2 (with cores left idle for UCxn), PUB, SUB-1, SUB-2, TFTP-1, TFTP-2, CCX-1, CCX-2, and CUP.]

Branch Office Details
[Rack-server layout diagram: two rack servers, each with two quad-core CPUs, hosting the CUCM VM OVAs (PUB/TFTP and SUB), the Messaging VM OVAs (UCxn-1, with cores left idle for UCxn), the Contact Center VM OVAs (CCX-1, CCX-2), and the Presence VM OVA (CUP).]
• DAS: rack-mount server (Cisco C-Series).
• Most popular DAS protocol: SCSI.
• iSCSI: access SCSI storage media over an IP network.
• Fibre Channel: the most popular SAN protocol today; cable distance ~2 km; popular speed: 4 Gb/s.
• NAS (Network-Attached Storage): uses the NFS (Network File System) protocol over TCP/IP.
[Storage stack comparison diagram — block I/O paths from host server through storage transport to storage media:
- DAS: Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter → SCSI device
- iSCSI: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC, then across the IP network to NIC → TCP/IP Stack → iSCSI Layer → Bus Adapter on the storage side
- FC SAN: Application → File System → Volume Manager → SCSI Device Driver → FC HBA → FC fabric]
NAS/SAN Array Best Practices for UC
[Diagram: five 450 GB 15K RPM HDDs form a single RAID5 group (1.4 TB usable space), carved into LUN 1 and LUN 2 (720 GB each). LUN 1 holds UC VMs 1–3 (PUB, SUB1, UCCX1); LUN 2 holds UC VMs 4–6 (UCCX2, CUP1, CUP2).]
• 4 to 8 UC VMs per LUN (the max depends on the sum of the vDisks).
• Must be <2 TB per LUN; 500 GB to 1.5 TB recommended.
• Use FC-class disks with ~180 IOPS (e.g., 450 GB 15K RPM or 300 GB 15K RPM).
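The LUN guidance above (4 to 8 UC VMs per LUN, under 2 TB, 500 GB to 1.5 TB recommended, total vDisks within the LUN) can be expressed as a small admission check. A sketch assuming sizes in GB; the function and the example vDisk sizes are illustrative, not from a Cisco tool:

```python
def check_lun(vm_vdisks_gb: list, lun_size_gb: float) -> list:
    """Flag violations of the UC LUN best practices described above."""
    warnings = []
    if lun_size_gb >= 2000:
        warnings.append("LUN must be < 2 TB")
    if not (500 <= lun_size_gb <= 1500):
        warnings.append("recommended LUN size is 500 GB to 1.5 TB")
    if not (4 <= len(vm_vdisks_gb) <= 8):
        warnings.append("aim for 4 to 8 UC VMs per LUN")
    if sum(vm_vdisks_gb) > lun_size_gb:
        warnings.append("sum of vDisks exceeds LUN capacity")
    return warnings

# Four UC VMs with hypothetical vDisk sizes on one 720 GB LUN: no warnings.
print(check_lun([80, 110, 110, 146], lun_size_gb=720))
```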
DAS Example: UCS C210 M2 TRC1
[Diagram: ten 146 GB 15K RPM HDDs. HDDs 1–2 form a single RAID1 volume holding the vSphere ESXi image; HDDs 3–10 form a single RAID5 volume (1022 GB after RAID overhead) carrying a VMFS filestore (947 GB after VMFS overhead) that hosts UC VM 1 (PUB), UC VM 3 (UCCX1), and UC VM 5 (CUP1).]
Notes:
• VMFS block size limits the max vDisk size.
• Could have >1 VMFS datastore on a RAID volume.
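The capacities in these DAS examples follow directly from the RAID arithmetic: RAID1 and RAID10 mirror half the disks, while RAID5 spends one disk's worth of capacity on parity. A quick check against the slide figures:

```python
def usable_gb(level: str, disks: int, size_gb: float) -> float:
    """Usable capacity after RAID overhead for common RAID levels."""
    if level in ("raid1", "raid10"):
        return disks // 2 * size_gb   # half the disks hold mirror copies
    if level == "raid5":
        return (disks - 1) * size_gb  # one disk's worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# UCS C210 M2 TRC1: 8 x 146 GB in RAID5 -> 1022 GB, matching the slide.
print(usable_gb("raid5", 8, 146))
# UCS C200 M2 TRC1 (BE6K): 4 x 1 TB in RAID10 -> 2000 GB (~2 TB), matching the slide.
print(usable_gb("raid10", 4, 1000))
```

The further reduction to 947 GB (C210) and 1.8 TB (C200) comes from VMFS filesystem overhead on top of the RAID volume.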
DAS Example: UCS C200 M2 TRC1 for BE6K
[Diagram: four 1 TB 7.2K RPM HDDs form a single RAID10 volume (2 TB after RAID overhead) carrying a VMFS filestore (1.8 TB after VMFS overhead) that holds the vSphere ESXi image plus UC VM 1 (PUB), UC VM 3 (UCCX1), and UC VM 5 (CUP1).]
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest.
• Based on the supported OVAs for download; each OVA reserves cores, RAM, etc. for its VMs.
• Basic rule of thumb: fill up the blade until it is out of capacity.
• If the blade contains a VM for messaging, a core must be reserved for ESXi.
• Hardware oversubscription is not supported.
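The co-residency rules above (fill the blade until the OVA-reserved resources are exhausted, no oversubscription, and reserve a core for ESXi when a messaging VM is present) amount to a simple admission check. A sketch — the blade size and the per-VM reservations are illustrative assumptions, not figures from a Cisco OVA:

```python
def fits_on_blade(vms, blade_cores=8, blade_ram_gb=48):
    """Check whether a set of OVA reservations fits without oversubscription.

    Each VM is (name, vcpu_cores, vram_gb, is_messaging). The assumed blade
    size (8 cores, 48 GB RAM) is for illustration only.
    """
    cores = sum(v[1] for v in vms)
    ram = sum(v[2] for v in vms)
    if any(v[3] for v in vms):
        cores += 1  # reserve a core for ESXi when a messaging (UCxn) VM is present
    return cores <= blade_cores and ram <= blade_ram_gb

vms = [("UCxn-1", 4, 6, True), ("CUP-1", 2, 4, False)]
print(fits_on_blade(vms))                              # 4 + 2 + 1 = 7 cores -> True
print(fits_on_blade(vms + [("UCCX-1", 2, 4, False)]))  # 9 cores -> False
```

Because oversubscription is not supported, the check is a strict sum of reservations rather than an estimate of average utilization.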
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 40
TAC Support Demarcation
Server Hardware
Shared Storage
VMware Cisco
Application
UC on UCS
Tested Reference Configuration
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
UC on UCS
Specs-based (including Vblock option)
Cisco 3rd-party Cisco
Cisco VCE VCE
3rd-party
VCE
3rd-party VMware (HP IBM)
Specs-based 3rd-party 3rd-party 3rd-party Cisco
MCS 7800 Appliances Cisco NA NA Cisco
Customer-provided MCS 7800 equivalent 3rd party NA NA Cisco
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
41 copy 2010 Cisco andor its affiliates All rights reserved
Customer Example ndash Primary Data Center
OLD NEW
Hardware Nodes 62 Physical Servers ( EU HQ clusters) 14 Approx
Software Version 615 amp 851 851
Ucxn Version 421 851 ndash 3 pairs -- Virtualized
CER 20 70 86 -- Virtualized
CM SUB CM PUB CM SUB CM SUB
MOH TFTP CER CM SUB
CER UCxn UCxN CM SUB
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
42 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 1
CM SUB CM PUB CM SUB CM SUB
MOH TFTP UCxN UCxn
CER UCxn UCxn UCxn
43 copy 2010 Cisco andor its affiliates All rights reserved
Deployment Model ndash Data Center 2
CM SUB CM PUB CM SUB CM SUB
MOH TFTP Ucxn UCxn
CER UCxn UCxn UCxn
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
Deployment Model – Data Center 2 (diagram)
• CUCM: CM PUB, CM SUB ×3, MOH, TFTP
• Unity Connection: UCxn ×4
• CER
Customer Design (diagram)
• PSTN and IP WAN connectivity through a SIP proxy (CUSP)
• UC applications: Unified Communications Manager, Unity Connection, Unified Presence, Unified Contact Center Express
• Cisco UCS 5108 Chassis with UCS B200 Blade Servers, behind a Cisco UCS 6100 Fabric Interconnect Switch
• Cisco UCS C210 or C200 General-Purpose Rack-mount Servers
• Site sizes shown: 11K phones, 3K phones, 400 phones
HQ Details (blade layout)
• Blades 1–6: two 4-core CPUs each.
• CUCM VM OVAs: PUB, SUB-1 through SUB-8, TFTP-1, TFTP-2.
• Messaging VM OVAs: UCxn-1 (Active) and UCxn-2 (Active); leave cores idle for UCxn.
• Presence VM OVAs: CUP-1, CUP-2.
• Contact Center VM OVAs: UCCX-1, UCCX-2.
• Blade Slots 7 and 8: "spare" blade slots available for non-UC workloads such as Cisco Nexus 1000V, VMware vCenter, or 3rd-party workloads such as directories, email, groupware, or other business applications.
HQ Details (rack-mount layout)
• Rack Server 1 (two 4-core CPUs): PUB, TFTP-1, SUB-1, CCX-2
• Rack Server 2 (two 4-core CPUs): UCxn-1, TFTP-2, SUB-2 — leave cores idle for UCxn
• Rack Server 3 (two 4-core CPUs): UCxn-2, CCX-1, CUP — leave cores idle for UCxn

Branch Office Details
• Rack Server 1 (two 4-core CPUs): PUB/TFTP, CCX-1, CUP, UCxn-1 — leave cores idle for UCxn
• Rack Server 2 (two 4-core CPUs): SUB, CCX-2, UCxn-1 — leave cores idle for UCxn
• VM OVA types shown: CUCM, Messaging, Contact Center, Presence
• DAS: rack-mount server (Cisco C-Series)
• Popular DAS protocol: SCSI
• iSCSI: access SCSI storage media over an IP network
• Fibre Channel: the most popular SAN protocol today
  – Cable distance: ~2 km
  – Popular speed: 4 Gb/s
• NAS (Network Attached Storage): uses the NFS (Network File System) protocol over TCP/IP

(Diagram: block-I/O paths from host/server through the storage transport to the storage media. DAS/SCSI stack: Application → File System → Volume Manager → SCSI Device Driver → SCSI Bus Adapter. iSCSI stack: Application → File System → Volume Manager → SCSI Device Driver → iSCSI Driver → TCP/IP Stack → NIC, crossing the IP network to a NIC, TCP/IP stack, iSCSI layer, and bus adapter on the storage side. FC SAN stack: Application → File System → Volume Manager → SCSI Device Driver → FC HBA, crossing the Fibre Channel SAN.)
NAS/SAN Array Best Practices for UC (example)
• Five 450 GB 15K RPM HDDs in a single RAID 5 group: 1.4 TB usable space.
• Two 720 GB LUNs carved from the group: one holds PUB, SUB1, UCCX1 (UC VMs 1–3), the other UCCX2, CUP1, CUP2 (UC VMs 4–6).
• 4 to 8 UC VMs per LUN (max dependent on the sum of vDisks).
• Must be <2 TB per LUN; 500 GB to 1.5 TB recommended.
• Use FC-class disks with ~180 IOPS (e.g. 450 GB 15K or 300 GB 15K).
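The LUN guidance above can be sketched as a small validation helper. The thresholds come from the best-practice figures; the function name and GB-based units are illustrative, not from any Cisco tool:

```python
def check_lun_plan(lun_size_gb, vm_vdisk_gb):
    """Flag violations of the NAS/SAN LUN guidance for UC VMs."""
    problems = []
    if lun_size_gb >= 2000:                 # must be < 2 TB per LUN
        problems.append("LUN must be smaller than 2 TB")
    elif not 500 <= lun_size_gb <= 1500:    # recommended size range
        problems.append("recommended LUN size is 500 GB to 1.5 TB")
    if not 4 <= len(vm_vdisk_gb) <= 8:      # 4 to 8 UC VMs per LUN
        problems.append("plan 4 to 8 UC VMs per LUN")
    if sum(vm_vdisk_gb) > lun_size_gb:      # max depends on sum of vDisks
        problems.append("sum of vDisks exceeds the LUN")
    return problems
```

For example, a 720 GB LUN carrying six 110 GB vDisks passes every check (`check_lun_plan(720, [110] * 6)` returns an empty list), while a 2.5 TB LUN is flagged immediately.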
DAS Example: UCS C210 M2 TRC1
• Ten 146 GB 15K RPM HDDs.
• HDDs 1–2: single RAID 1 volume holding the vSphere ESXi image.
• HDDs 3–10: single RAID 5 volume (1022 GB after RAID overhead), formatted as a VMFS filestore (947 GB after VMFS overhead) holding the UC VMs (e.g. PUB, UCCX1, CUP1).
Notes:
• VMFS block size limits max vDisk size.
• Could have >1 VMFS datastore on a RAID volume.
DAS Example: UCS C200 M2 TRC1 for BE6K
• Four 1 TB 7.2K RPM HDDs in a single RAID 10 volume (2 TB after RAID overhead).
• VMFS filestore (1.8 TB after VMFS overhead) holding the vSphere ESXi image and the UC VMs (e.g. PUB, UCCX1, CUP1).
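The capacity arithmetic behind both DAS examples can be sketched as follows. This is a simplification that ignores base-2 vs. base-10 rounding and filesystem overhead, and the function name is illustrative:

```python
def raid_usable_gb(level, disks, disk_gb):
    """Usable capacity before filesystem overhead, per RAID level:
    RAID 1 mirrors two disks, RAID 5 spends one disk on parity,
    and RAID 10 mirrors striped pairs (half the raw capacity)."""
    if level == "RAID1":
        return disk_gb                 # one disk's worth, mirrored
    if level == "RAID5":
        return (disks - 1) * disk_gb   # one disk lost to parity
    if level == "RAID10":
        return disks * disk_gb // 2    # half lost to mirroring
    raise ValueError(f"unsupported RAID level: {level}")

# C210 M2 TRC1: eight of the ten 146 GB disks in RAID 5 -> (8 - 1) * 146 = 1022 GB,
# which VMFS formatting then reduces to roughly 947 GB.
# C200 M2 TRC1 (BE6K): four 1 TB disks in RAID 10 -> 2 TB before VMFS overhead.
```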
UC Application Co-residency
• See www.cisco.com/go/uc-virtualized for the latest.
• Based on the supported OVAs for download:
  – The OVA reserves cores, RAM, etc. for the VMs.
  – Basic rule of thumb: fill up the blade until it is out of capacity.
  – If the blade contains a VM for messaging, a core must be reserved for ESXi.
• Hardware oversubscription is not supported.
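The co-residency rule of thumb above can be sketched as a simple fit check. The function name and core counts are illustrative assumptions, not Cisco sizing logic:

```python
def fits_on_blade(blade_cores, vm_vcpus, has_messaging_vm):
    """Rule-of-thumb co-residency check: no hardware oversubscription,
    and one physical core held back for ESXi whenever a messaging
    (Unity Connection) VM runs on the blade."""
    available = blade_cores - (1 if has_messaging_vm else 0)
    return sum(vm_vcpus) <= available
```

On an 8-core blade, for example, two 4-vCPU VMs fit only if neither is a messaging VM; adding a messaging VM leaves 7 usable cores, so the same pair would oversubscribe.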
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 44
Customer Design
PSTN
IP WAN
SIP Proxy
Unified Communications Manager
Unity Connection
Unified Presence
Unified Contact Center Express
Cisco UCS 5108 Chassis with UCS B200 Blade Servers
Cisco UCS 6100 Fabric Interconnect Switch
Cisco UCS C210 or C200 General-Purpose Rack-mount Server
CUSP
(11K phones)
(3K phones)
(400 phones)
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 45
HQ Details
Blade 1 Blade 2
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Blade 3 Blade 4
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUP-1 UCCX-1 CUP-2
Blade 5 Blade 6
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCCX-1
UCxn-1 Active
Leave
idle for
UCxn
UCxn-2
Active
TFTP-2
SUB-2
SUB-4
SUB-6
SUB-8 CUP-2 UCCX-2
SUB-3 PUB SUB-1
CUP-1
TFTP-1 SUB-5
SUB-7
Blade Slot 7 Blade Slot 8
CPU-1 CPU-2 CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
CUCM VM OVAs
Messaging VM OVAs
Leave
idle for
UCxn
ldquoSparerdquo blade slots available for non-UC workloads such as Cisco Nexus 1000V VMware
vCenter or 3rd-party workloads such as directories email groupware or other business
applications
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 46
Rack Server 3
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
UCxn-2 CCX-1 CUP
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Leave
idle for
UCxn
UCxn-1 TFTP-2 SUB-2
Leave
idle for
UCxn
SUB-1 CCX-2 PUB TFTP-1
Branch Office Details
Rack Server 2
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
Rack Server 1
CPU-1 CPU-2
Core1 Core2 Core3 Core4 Core1 Core2 Core3 Core4
PUBTFTP CCX-1 CUP
SUB CCX-2
Leave
idle for
UCxn
Leave
idle for
UCxn
UC
xn
-1
UC
xn
-1
CUCM VM OVAs
Messaging VM OVAs
Contact Center VM OVAs
Presence VM OVAs
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 47
bull DAS Rack Mount Server (Cisco C-Series)
bull Popular DAS Protocol SCSI
bull iSCSI Access SCSI storage media using IP network
bull Fibre Channel The most popular SAN protocol today
Cable distance ~ 2 km
Popular speed - 4 Gbs
bull NAS (Network Attached Storage) uses NFS (Network File System) protocol over TCPIP
DAS
SCSI
Computer System
SCSI Bus Adapter
SCSI Device Driver
Volume Manager
File System
Application
iSCSI
File System
Application
SCSI Device Driver iSCSI Driver
TCPIP Stack
NIC
Volume Manager
NIC
TCPIP Stack
iSCSI Layer
Bus Adapter
Host
Server
Storage
Transport
Storage
Media
FC SAN
FC
FC HBA
SCSI Device Driver
Volume Manager
File System
Application
Computer System Computer System
Block IO
SAN IP
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
copy 2010 Cisco andor its affiliates All rights reserved 49
DAS Example UCS C210 M2 TRC1
HDD 1
146 GB
15K RPM
HDD 2
146 GB
15K RPM
HDD 3
146 GB
15K RPM
HDD 4
146 GB
15K RPM
HDD 5
146 GB
15K RPM
HDD 6
146 GB
15K RPM
HDD 7
146 GB
15K RPM
HDD 8
146 GB
15K RPM
HDD 9
146 GB
15K RPM
HDD 10
146 GB
15K RPM
Single RAID1
Volume Single RAID5 Volume (1022 GB after RAID overhead)
VMFS Filestore (947 GB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5
vSphere ESXi
image
Notes
bull VMFS block size limits
max vDisk size
bull Could have gt1 VMFS
datastore on RAID
volume
copy 2010 Cisco andor its affiliates All rights reserved 50
DAS Example UCS C200 M2 TRC1 for BE6K
HDD 1
1 TB
72K RPM
HDD 2
1 TB
72K RPM
HDD 3
1 TB
72K RPM
HDD 4 1
TB
72K RPM
Single RAID10 Volume (2 TB after RAID
overhead)
VMFS Filestore (18 TB after VMFS overhead)
PUB UCCX1
UC
VM 1
UC
VM 3
CUP1
UC
VM 5 vSphere ESXi
image
copy 2010 Cisco andor its affiliates All rights reserved 51
UC Application Co-residency
bull See wwwciscocomgouc-virtualized for latest
bull Based on supported OVA for download
OVA reserves cores RAM etc to VMs
Basic rule of thumb fill up blade until out of capacity
If blade contains VM for messaging must reserve core for ESXi
bull Hardware oversubscription not supported
52 copy 2010 Cisco andor its affiliates All rights reserved
Virtual Machine Sizing
bull Virtual Machine virtual hardware defined by an VM template
vCPU vRAM vDisk vNICs
bull Capacity
bullAn VM template is associated with a specific capacity
bull VM templates are packaged in a OVA file
bull There are usually different VM template per release For example CUCM_80_vmv7_v21ova
CUCM_85_vmv7_v21ova
CUCM_86_vmv7_v15ova
Includes product product version VMware hardware version template version
53 copy 2010 Cisco andor its affiliates All rights reserved httptoolsciscocomcucst
copy 2010 Cisco andor its affiliates All rights reserved 54
bull Customer-accessible
bullUC on UCS httpwwwciscocomgouconucs and wwwciscocomgouc-virtualized and wwwciscocomgoucsrnd and wwwciscocomgoswonly (UCS page)
bullUCS in general httpwwwciscocomgoucs
bullVblocks amp Virtual Computing Environment wwwvceportalcom and wwwvcecom
bullFlexPods wwwcisconetappcom
bullCisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
bullCisco Unified Service Delivery Portal httpwwwciscocomgousd
bullldquoCUCM on Virtual Serversrdquo (summary of ldquowhatrsquos newdifferent when virtualizedrdquo) httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull ldquoVirtualization and UCS 101rdquo httpnewsroomciscocomdlls2010ts_102810html
bull UC on UCS ldquotech valuerdquo httpwwwciscocomenUSsolutionsns340ns339ns638ns914html_TWTVtwtv_episode_74html
copy 2010 Cisco andor its affiliates All rights reserved 55
bull Partner Central ldquoServers OS and Virtualizationrdquo
httpwwwciscocomwebpartnersselltechnologyipcservers_os_virt_for_uchtml
bull Partner Community ldquoServers OS and Virtualizationrdquo
httpswwwmyciscocommunitycomcommunitypartnercollaborationucservers
bull Design info
httpwwwciscocomgouc-virtualized and httpwwwciscocomgoucsrnd
bull ldquoWhatrsquos new whatrsquos different when on VMwarerdquo customer doc
httpwwwciscocomenUSdocsvoice_ip_commcucmvirtualservershtml
bull Product Literature
httpwwwciscocomgoucs and httpwwwciscocomgouconucs
bull Ordering Guide
httpwwwciscocomwebpartnersselltechnologyipcuc_tech_readinesshtml~7
bull Virtual Computing Environment Portal httpwwwvceportalcom
bull Cisco Infrastructure as a Service (IaaS) Portal httpwwwciscocomgoiaas
copy 2010 Cisco andor its affiliates All rights reserved 48
HDD 1
450gig
15K RPM
HDD 2
450gig
15K RPM
HDD 3
450gig
15K RPM
HDD 4
450gig
15K RPM
HDD 5
450gig
15K RPM
Single RAID5 Group (14 TB Usable Space)
PUB SUB1 UCCX1
UC
VM 1
UC
VM 2
UC
VM 3
LUN 2 (720 GB) LUN 1 (720 GB)
UCCX2 CUP1 CUP2
UC
VM 4
UC
VM 5 UC
VM 6
4 to 8 UC VMs per
LUN
(max dependent on
sumvDisks)
NASSAN Array Best Practices for UC
Must be lt2TB per
LUN Recommend
500GB to 15 TB
Use FC class disks
with ~180 IOPS (eg
450 GB 15K or
300 GB 15K)
DAS Example: UCS C210 M2 TRC1

• 10 × 146 GB 15K RPM HDDs
• HDDs 1–2: single RAID1 volume holding the vSphere ESXi image
• HDDs 3–10: single RAID5 volume (1022 GB after RAID overhead)
• VMFS filestore on the RAID5 volume (947 GB after VMFS overhead), hosting PUB, UCCX1, and CUP1 (UC VMs 1, 3, 5)

Notes:
• VMFS block size limits the maximum vDisk size
• Could have >1 VMFS datastore on the RAID volume
DAS Example: UCS C200 M2 TRC1 for BE6K

• 4 × 1 TB 7.2K RPM HDDs in a single RAID10 volume (2 TB after RAID overhead)
• VMFS filestore (1.8 TB after VMFS overhead), hosting the vSphere ESXi image plus PUB, UCCX1, and CUP1 (UC VMs 1, 3, 5)
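The usable-capacity figures in these two DAS examples follow directly from the RAID level. A small sketch, assuming the standard capacity formulas for each level (and, for the C210, that two disks form the RAID1 pair and eight the RAID5 set, which is what the 1022 GB figure implies):

```python
# Hedged sketch: usable capacity by RAID level, before VMFS overhead.

def raid_usable_gb(level, n_disks, disk_gb):
    if level == "RAID1":     # mirrored pair: half the raw space
        return n_disks * disk_gb // 2
    if level == "RAID5":     # one disk's worth of capacity lost to parity
        return (n_disks - 1) * disk_gb
    if level == "RAID10":    # striped mirrors: half the raw space
        return n_disks * disk_gb // 2
    raise ValueError(f"unknown RAID level: {level}")

# C210 M2 TRC1: 8 of the 10 x 146 GB disks in RAID5 -> 1022 GB
print(raid_usable_gb("RAID5", 8, 146))    # 1022
# C200 M2 TRC1 (BE6K): 4 x 1 TB in RAID10 -> 2 TB
print(raid_usable_gb("RAID10", 4, 1000))  # 2000
```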
UC Application Co-residency

• See www.cisco.com/go/uc-virtualized for the latest rules
• Based on the supported OVAs for download; the OVA reserves cores, RAM, etc. for the VMs
• Basic rule of thumb: fill up the blade until it is out of capacity
• If the blade contains a VM for messaging, a core must be reserved for ESXi
• Hardware oversubscription is not supported
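The rule of thumb above can be sketched as a simple capacity check. This is an illustrative sketch only: the blade core counts and per-VM reservations below are made-up example numbers, not official Cisco co-residency values.

```python
# Hedged sketch of the co-residency rule of thumb: a set of VMs fits on a
# blade only if their OVA-reserved vCPUs and RAM fit within the physical
# resources, with no oversubscription, and with one core held back for
# ESXi when a messaging VM is present.

def fits(blade_cores, blade_ram_gb, vms):
    """vms: list of (name, vcpus, ram_gb, is_messaging) tuples."""
    cores = sum(v[1] for v in vms)
    ram = sum(v[2] for v in vms)
    if any(v[3] for v in vms):  # messaging VM on this blade
        cores += 1              # reserve a core for ESXi itself
    return cores <= blade_cores and ram <= blade_ram_gb

# Example placement (VM specs assumed for illustration):
vms = [("CUCM-PUB", 2, 6, False), ("CUC", 4, 6, True), ("CUP", 2, 4, False)]
print(fits(12, 32, vms))  # fits on a 12-core blade
print(fits(8, 32, vms))   # does not: 8 vCPUs + 1 ESXi core > 8 cores
```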
Virtual Machine Sizing

• A virtual machine's virtual hardware (vCPU, vRAM, vDisk, vNICs) is defined by a VM template
• Capacity: each VM template is associated with a specific capacity
• VM templates are packaged in an OVA file
• There is usually a different VM template per release, for example: CUCM_8.0_vmv7_v2.1.ova, CUCM_8.5_vmv7_v2.1.ova, CUCM_8.6_vmv7_v1.5.ova
• The filename includes the product, product version, VMware hardware version, and template version
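Since the template filename encodes those four fields, it can be split apart mechanically. A minimal sketch, assuming the naming convention shown in the examples above (the regex and function name are illustrative, not a Cisco tool):

```python
import re

# Hedged sketch: extract the fields from a UC OVA template filename of
# the form <PRODUCT>_<version>_vmv<hw>_v<template>.ova.
PATTERN = re.compile(
    r"^(?P<product>[A-Za-z]+)_(?P<ver>\d+\.\d+)"
    r"_vmv(?P<hw>\d+)_v(?P<tmpl>\d+\.\d+)\.ova$"
)

def parse_ova(name):
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"unrecognized template name: {name}")
    return m.groupdict()

print(parse_ova("CUCM_8.6_vmv7_v1.5.ova"))
# {'product': 'CUCM', 'ver': '8.6', 'hw': '7', 'tmpl': '1.5'}
```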
http://tools.cisco.com/cucst
• Customer-accessible:
  • UC on UCS: http://www.cisco.com/go/uconucs, www.cisco.com/go/uc-virtualized, www.cisco.com/go/ucsrnd, and www.cisco.com/go/swonly (UCS page)
  • UCS in general: http://www.cisco.com/go/ucs
  • Vblocks & Virtual Computing Environment: www.vceportal.com and www.vce.com
  • FlexPods: www.cisconetapp.com
  • Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
  • Cisco Unified Service Delivery Portal: http://www.cisco.com/go/usd
  • "CUCM on Virtual Servers" (summary of what's new/different when virtualized): http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
  • "Virtualization and UCS 101": http://newsroom.cisco.com/dlls/2010/ts_102810.html
  • UC on UCS "tech value": http://www.cisco.com/en/US/solutions/ns340/ns339/ns638/ns914/html_TWTV/twtv_episode_74.html
• Partner Central "Servers, OS and Virtualization": http://www.cisco.com/web/partners/sell/technology/ipc/servers_os_virt_for_uc.html
• Partner Community "Servers, OS and Virtualization": https://www.myciscocommunity.com/community/partner/collaboration/uc/servers
• Design info: http://www.cisco.com/go/uc-virtualized and http://www.cisco.com/go/ucsrnd
• "What's new, what's different when on VMware" customer doc: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html
• Product literature: http://www.cisco.com/go/ucs and http://www.cisco.com/go/uconucs
• Ordering guide: http://www.cisco.com/web/partners/sell/technology/ipc/uc_tech_readiness.html
• Virtual Computing Environment Portal: http://www.vceportal.com
• Cisco Infrastructure as a Service (IaaS) Portal: http://www.cisco.com/go/iaas
Recommended