Published: 4th September, 2012
Windows Server 2012: Server Virtualization
Module 1A: VM Scale.
Module Manual Author: David Coombes, Content Master
Microsoft Virtual Academy Student Manual ii
Information in this document, including URLs and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. © 2012 Microsoft Corporation. All rights reserved. Microsoft is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
Contents
CONTENTS .................................................................................................................................................................................................................. III
MODULE 1A: VM SCALE. ........................................................................................................................................................................................ 4
Module Overview ................................................................................................................................................................................................ 4
LESSON 1: SCALE UP OVERVIEW ........................................................................................................................................................................ 5
SCALE UP PREREQUISITES ..................................................................................................................................................................................... 6
SCALE UP TECHNOLOGIES .................................................................................................................................................................................... 7
NUMA ...................................................................................................................................................................................................... 7
Dynamic Memory ................................................................................................................................................................................... 7
Resource Metering ................................................................................................................................................................................. 8
SR-IOV ....................................................................................................................................................................................................... 8
LESSON 2: NUMA ..................................................................................................................................................................................................... 9
INTRODUCTION TO NUMA ................................................................................................................................................................................ 10
PHYSICAL NUMA .................................................................................................................................................................................................... 11
OPTIMAL PHYSICAL NUMA ................................................................................................................................................................................ 12
NON-OPTIMAL PHYSICAL NUMA .................................................................................................................................................................... 13
GUEST NUMA ........................................................................................................................................................................................................... 14
Using Guest NUMA ................................................................................................................................................................................ 15
Guest NUMA and Failover Clustering ............................................................................................................................................. 15
LESSON 3: HYPER-V SCALE COMPARISON .................................................................................................................................................. 16
HYPER-V SCALE COMPARISON ........................................................................................................................................................................ 17
Module 1A: VM Scale.
Module Overview
This module explains the scale up technologies in Windows Server® 2012 for virtual machine (VM)
deployments. The module provides details about non-uniform memory access (NUMA), which is the
key scale up technology. It also compares the scale up options in Windows Server 2012 with the
options that were available in previous versions of Windows Server.
Lesson 1: Scale Up Overview
This lesson explains the design prerequisites for VM scale up in Windows Server 2012. It also
describes the key technologies implemented in Windows Server 2012 that enable VM scale up.
Scale Up Prerequisites
Hyper-V in Windows Server 2012 meets several key scale up prerequisites:
Scale. The virtualization platform must be able to scale up more than just virtual processors;
this includes memory support, performance, networking and communications, and access to
storage.
Live Migration. Scale up technologies must not have any negative impact on Live Migration
capabilities.
Performance. There must be clear and demonstrable increases in performance as the
number of host processor cores is increased. For example, it is not acceptable to obtain only a
75 percent increase in performance for a 100 percent increase in the number of processor
cores.
Virtualized workloads. The virtualization platform must be able to support the virtualization
of all workloads and must be able to scale up those workloads as required. This should include
all workloads, such as email and messaging, databases, and large-scale web applications.
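The performance prerequisite above can be expressed as a simple scaling-efficiency calculation. The sketch below is illustrative only (the function name and the idea of a single efficiency ratio are my own, not part of the manual); it reproduces the text's example of a 75 percent performance gain for a 100 percent increase in cores.

```python
def scale_up_efficiency(core_increase_pct, perf_increase_pct):
    """Performance gained per unit of added cores (1.0 = linear scaling)."""
    return perf_increase_pct / core_increase_pct

# The manual's example: doubling cores (+100%) for only +75% performance.
efficiency = scale_up_efficiency(100, 75)
print(efficiency)  # 0.75, i.e. well below linear scaling
```

By the manual's criterion, a ratio of 0.75 would not be an acceptable scale up result; the closer the ratio is to 1.0, the better the platform scales.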
Scale Up Technologies
Hyper-V in Windows Server 2012 uses a range of technologies to help enable scale up for VMs.
NUMA
NUMA is the key technology used to scale up VM deployments in Windows Server 2012. It is described in Lesson 2 of this manual.
Dynamic Memory
Dynamic memory enables Hyper-V to assign increased memory capacity to VMs on the fly, with no downtime. In Windows Server 2012, dynamic memory has been improved with new minimum memory and Hyper-V smart paging features:
Minimum memory. This enables Hyper-V to reclaim unused memory from VMs by allowing a VM's memory allocation to drop below its startup memory value.
Hyper-V smart paging. This is a memory management technique that uses disk resources
as additional, temporary memory when more physical memory is required to restart a VM
than is currently available. To minimize the performance impact of Hyper-V smart paging, it is
only used when all of the following conditions are true:
o The VM is being restarted.
o There is no physical memory available.
o No memory can be reclaimed from other VMs running on the host.
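The three conditions above must all hold at once. The sketch below models that decision; it is an illustrative simplification (the function and its parameters are invented for this example, not an actual Hyper-V interface).

```python
def smart_paging_engaged(vm_restarting, free_host_memory_mb, reclaimable_memory_mb):
    """Illustrative model of when Hyper-V smart paging is used: only when
    the VM is restarting, no physical memory is free on the host, and no
    memory can be reclaimed from other running VMs."""
    return (vm_restarting
            and free_host_memory_mb == 0
            and reclaimable_memory_mb == 0)

# A restart on a fully committed host with nothing reclaimable:
print(smart_paging_engaged(True, 0, 0))    # True  -> disk used as temporary memory
# If any memory is free or reclaimable, smart paging is not used:
print(smart_paging_engaged(True, 512, 0))  # False
```

Because all three conditions must be true, smart paging is a last resort, which is how its performance impact is kept to a minimum.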
Resource Metering
Resource metering tracks historical data on VM resource usage. You can use this data in capacity planning to help determine appropriate resource allocations when scaling up VM deployments.
SR-IOV
Single Root I/O Virtualization (SR-IOV) support in Windows Server 2012 enables Hyper-V to assign a virtual function of an SR-IOV-capable physical network adapter directly to a VM. This increases network throughput and reduces network latency, while also reducing the host CPU overhead required to process network traffic.
Lesson 2: NUMA
This lesson introduces NUMA, which is the key technology for scaling up VMs in Windows Server
2012. The lesson explains physical NUMA on the Hyper-V host server and how you can optimize it.
The lesson then describes how to use guest NUMA on VMs.
Introduction to NUMA
NUMA is a technology that helps to manage the potential contention that might occur when
multiprocessor computers attempt to access memory through the system bus.
With NUMA, memory and processors are grouped into nodes:
Local memory is attached directly to the processor.
Remote memory is local to another processor in the system.
Processors can access local memory faster than they can access remote memory, and in an optimal
NUMA architecture, memory access across nodes is minimized or eliminated.
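The local-versus-remote distinction can be sketched as a tiny cost model. The node layout and the relative latency figures below are invented purely for illustration; the point is only that a processor touching memory on another node pays a penalty.

```python
# Hypothetical 2-node system: CPUs 0-1 on node 0, CPUs 2-3 on node 1.
NODE_OF_CPU = {0: 0, 1: 0, 2: 1, 3: 1}
LOCAL_COST, REMOTE_COST = 1.0, 1.6  # relative access latency (illustrative)

def access_cost(cpu, memory_node):
    """Relative cost for a CPU to access memory on a given NUMA node."""
    return LOCAL_COST if NODE_OF_CPU[cpu] == memory_node else REMOTE_COST

print(access_cost(0, 0))  # 1.0 -> local access
print(access_cost(0, 1))  # 1.6 -> remote access ("node hop")
```

An optimal NUMA configuration keeps threads and their memory on the same node, so that almost every access pays only the local cost.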
Physical NUMA
Physical NUMA refers to the use of NUMA technologies to help any server workload make efficient
use of processor cores and memory.
With memory and processors grouped into nodes, CPU and memory resources are allocated with best locality: the system always attempts to use memory in the same node as the processor.
High-performance applications, such as Microsoft® SQL Server® 2012 and Internet Information Services (IIS) 8 in Windows Server 2012, are NUMA-aware, which enables significant performance gains over applications that are not NUMA-aware. With Windows Server 2012 Hyper-V, virtualization is now also a NUMA-aware workload. For example, when SQL Server starts up, it checks the underlying topology and determines how best to allocate threads and memory so that it avoids hopping across NUMA nodes.
Optimal Physical NUMA
With optimal NUMA, memory is populated in each NUMA node, and all memory and thread allocations occur within the same node as the processor that uses them. In other words, all NUMA transactions and all memory and CPU allocations happen within a single node.
Non-Optimal Physical NUMA
When NUMA is not in an optimal state, the system is imbalanced. In the example shown in the figure,
there are several non-optimal configuration issues:
Memory allocation and thread allocations occur across different NUMA nodes.
There are multiple node hops.
NUMA Node 2 has an odd number of memory modules; an odd number of modules may
prevent memory interleaving, depending on system configuration.
NUMA Node 3 does not have enough memory.
NUMA Node 4 has no local memory; this is the most significant issue because all access to
memory is going to be remote, impacting performance and limiting scalability.
Note that remote memory access was a more significant issue when systems relied on the front-side bus for processor-memory communication; however, even now that memory controllers are integrated into the processor, "node hopping" should still be avoided if at all possible.
Guest NUMA
Windows Server 2012 provides guest NUMA support within the VM. Guest NUMA presents a NUMA
topology within the VM that is consistent with the physical NUMA topology; specifically, the default
virtual NUMA topology is optimized to match the host’s NUMA topology, as shown in the figure.
With the host NUMA topology projected into the VM, the guest operating system can interrogate the NUMA topology by using industry-standard calls. This means that any supported guest operating system in Hyper-V (including Linux) can automatically tune itself for that NUMA topology, and scale up applications installed in the VM can also take advantage of NUMA.
Hyper-V uses the Advanced Configuration and Power Interface (ACPI) Static Resource Affinity Table (SRAT) to present this topology information, describing the physical locations of all processors and memory in the system.
Important: Guest NUMA support for VMs running on Windows Server 2012 only works when dynamic memory has not been enabled for the VM; a VM with dynamic memory enabled is presented with a single virtual NUMA node.
Using Guest NUMA
When a new VM is created in Windows Server 2012 Hyper-V, Hyper-V determines the underlying host NUMA topology and automatically creates an optimal guest NUMA topology. Using advanced options, administrators can manually configure the guest NUMA topology and reconfigure NUMA nodes. There is also a "reset" option that returns manual settings to the system-created automatic configuration.
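To make the automatic sizing concrete, the sketch below shows one simplified way a default guest topology could mirror the host: cap each virtual node at the host node's core and memory size, then divide the VM's resources evenly. The formula, function, and figures are my own illustration, not Hyper-V's actual algorithm.

```python
import math

def default_guest_topology(vcpus, vm_memory_gb, host_cores_per_node, host_gb_per_node):
    """Simplified sketch: size virtual NUMA nodes so that no virtual node
    exceeds the host node's core count or memory size."""
    nodes = max(math.ceil(vcpus / host_cores_per_node),
                math.ceil(vm_memory_gb / host_gb_per_node))
    return {"virtual_nodes": nodes,
            "vcpus_per_node": math.ceil(vcpus / nodes),
            "gb_per_node": vm_memory_gb / nodes}

# A 16-vCPU, 64 GB VM on a host whose NUMA nodes have 8 cores and 32 GB each:
print(default_guest_topology(16, 64, 8, 32))
# {'virtual_nodes': 2, 'vcpus_per_node': 8, 'gb_per_node': 32.0}
```

Under this model the guest sees two virtual NUMA nodes, each matching the size of a host node, which is the kind of host-consistent topology the lesson describes.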
Guest NUMA and Failover Clustering
Guest NUMA support also works for high-availability solutions that use Windows Server 2012 failover clustering. Failover clusters evaluate the NUMA configuration of a node before moving a VM, which ensures that the target node can support the VM's workload. This NUMA awareness helps reduce the number of failover operations and therefore increases VM uptime.
Lesson 3: Hyper-V Scale Comparison
This lesson explains the new capabilities in Windows Server 2012 Hyper-V that enable significant
improvements in VM scale up compared with previous releases of Hyper-V.
Hyper-V Scale Comparison
New and improved capabilities in Windows Server 2012 Hyper-V enable significant improvements in VM scale up compared with previous releases of Hyper-V.
Processors and Memory
Hyper-V in Windows Server 2008 R2 supported configuring VMs with a maximum of four virtual processors and up to 64 gigabytes (GB) of memory. To support large, demanding workloads such as online transaction processing (OLTP) databases and online transaction analysis (OLTA) solutions, Hyper-V in Windows Server 2012 expands support for host processors and memory, and supports VMs with up to 64 virtual processors and 1 terabyte (TB) of memory. On the Hyper-V host, logical processor support has increased from 64 in Windows Server 2008 R2 to 320 in Windows Server 2012, and host memory support has increased to 4 TB.
In Windows Server 2008 R2, the recommended ratio for virtual to host processors was 8:1 for
servers, and 12:1 for client Virtual Desktop Infrastructure (VDI) deployments. With Hyper-V in
Windows Server 2012, these limits do not apply.
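The limits quoted above can be summarized as data. The figures below are taken directly from this module; the ratio calculations are simple arithmetic added for illustration.

```python
# Per-VM and per-host maximums quoted in this module.
LIMITS = {
    "Windows Server 2008 R2": {"vm_vcpus": 4, "vm_memory_gb": 64,
                               "host_logical_processors": 64},
    "Windows Server 2012": {"vm_vcpus": 64, "vm_memory_gb": 1024,
                            "host_logical_processors": 320,
                            "host_memory_tb": 4},
}

old = LIMITS["Windows Server 2008 R2"]
new = LIMITS["Windows Server 2012"]
print(new["vm_vcpus"] // old["vm_vcpus"])          # 16x more virtual processors per VM
print(new["vm_memory_gb"] // old["vm_memory_gb"])  # 16x more memory per VM
print(new["host_logical_processors"] // old["host_logical_processors"])  # 5x host LPs
```

Both per-VM limits grew by a factor of 16, which is why workloads such as large OLTP databases become practical to virtualize on Windows Server 2012.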
Clustering
The number of servers in a cluster has increased from 16 in Windows Server 2008 R2 to 64 in Windows Server 2012; this applies to both physical machines and VMs, so you can now cluster up to 64 VMs.
Live Migrations
Windows Server 2012 introduces support for Live Storage Migration. For both Live Migration and Live Storage Migration, there are no built-in limits to the number of simultaneous migrations; you can migrate as many machines as the host hardware can support.