
Overview of VMware Infrastructure 3


VMware® Education Services
VMware, Inc.

[email protected]

Overview of VMware Infrastructure 3
Instructor Manual
VMware ESX 3.5 and VirtualCenter 2.5

Overview_A.book Page 1 Friday, September 5, 2008 12:40 PM


[email protected]

Copyright/Trademark

All rights reserved. This work and the computer programs to which it relates are the property of, and embody trade secrets and confidential information proprietary to, VMware, Inc., and may not be reproduced, copied, disclosed, transferred, adapted or modified without the express written approval of VMware, Inc.

This manual and its accompanying materials copyright © 2008 VMware, Inc. All rights reserved. Printed in U.S.A. This document may not, in whole or in part, be copied, photocopied, reproduced, translated, transmitted, or reduced to any electronic medium or machine-readable form without prior consent, in writing, from VMware, Inc.

The training material is provided “as is,” and all express or implied conditions, representations, and warranties, including any implied warranty of merchantability, fitness for a particular purpose or non-infringement, are disclaimed, even if VMware, Inc., has been advised of the possibility of such claims.

This training material is designed to support an instructor-led training course and is intended to be used for reference purposes in conjunction with the instructor-led training course. The training material is not a standalone training tool. Use of the training material for self-study without class attendance is not recommended.

Copyright © 2008 VMware, Inc. All rights reserved. VMware and the VMware boxes logo are registered trademarks of VMware, Inc. MultipleWorlds, GSX Server, ESX Server, VMware ESX, and VMware ESXi are trademarks of VMware, Inc. Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein may be trademarks of their respective owners.

Overview of VMware Infrastructure 3
VMware ESX 3.5 and VirtualCenter 2.5
Part Number EDU-ENG-A-OVW35-LECT-INST
Instructor Manual
Revision A


C O N T E N T S

M O D U L E 1 Virtual Infrastructure Overview

What Is Virtual Infrastructure?
How Does Virtualization Work?
ESX Uses a Hypervisor Architecture
VMware Infrastructure 3
VMware Infrastructure Components
Management Made Easy
View VirtualCenter Inventory
ESX Storage Choices
VMFS and NFS Datastores
Virtual Network Components
Flexible Network Connectivity
Virtual Switches Support VLANs
Lab for Module 1

M O D U L E 2 Create a Virtual Machine

What Is a Virtual Machine?
ESX Virtual Machine Hardware
Centralized Virtual Machine Management
Fast, Flexible Guest OS Installations
Reducing Virtual Machine Deployment Time
Create a Template
Deploy a Virtual Machine from a Template
Automating Guest OS Customization
Lab for Module 2

M O D U L E 3 CPU and Memory Resource Pools

CPU Management Supports Server Consolidation
Flexible Resource Allocation
Virtual Machine CPU Resource Controls
Supporting Higher Consolidation Ratios (1)
Supporting Higher Consolidation Ratios (2)
Virtual Machine Memory Resource Controls
Using Resource Pools to Meet Business Needs


Configuring a Pool's Resources
Viewing Resource Pool Information
Resource Pools Example: CPU Contention
Admission Control for CPU and Memory Reservations

M O D U L E 4 Migrate Virtual Machines Using VMotion

VMotion Migration
How VMotion Works (1)
How VMotion Works (2)
How VMotion Works (3)
How VMotion Works (4)
How VMotion Works (5)
How VMotion Works (6)
ESX Host Requirements for VMotion
Virtual Machine Requirements for VMotion
Lab for Module 4

M O D U L E 5 VMware DRS Clusters

What Is a DRS Cluster?
Create a DRS Cluster
Automating Workload Balance
Adding ESX Hosts to a DRS Cluster (1)
Adding ESX Hosts to a DRS Cluster (2)
Automating Workload Balance per VM
Adjusting DRS Operation for Performance or HA
Lab for Module 5

M O D U L E 6 Monitoring Virtual Machine Performance

VirtualCenter Performance Graphs
Example CPU Performance Issue Indicator
Are VMs Being CPU Constrained?
Supporting Higher Consolidation Ratios
Are VMs Being Memory Constrained?
Are VMs Being Disk Constrained?
Are VMs Being Network Constrained?
Lab for Module 6


M O D U L E 7 VirtualCenter Alarms

Proactive Datacenter Management
Preconfigured VirtualCenter Alarms
Creating a Virtual Machine-Based Alarm
Creating a Host-Based Alarm
Actions to Take When an Alarm Is Triggered
Alarm Reporting Options
Configure VirtualCenter Notifications
Lab for Module 7

M O D U L E 8 VMware HA

What Is VMware HA?
Architecture of a VMware HA Cluster
VMware HA Prerequisites
Create a VMware HA Cluster
Add an ESX Host to the Cluster
Configure Cluster-Wide Admission Control
Failover Capacity Examples
Maintain Business Continuity if ESX Hosts Become Isolated
Lab for Module 8

M O D U L E 9 VI3 Product and Feature Overview

Customers Move Rapidly Along the Adoption Curve
Standardizing on VMware Infrastructure
Functional Layers in a Virtual Infrastructure
New Virtualization Platform Layer Product
ESXi Hypervisor
From Server Boot to Virtual Machines in Minutes
Additional VI Layer Products and Features
VMware Consolidated Backup (VCB)
VMware Consolidated Backup Operation
Distributed Power Management (Experimental)
Storage VMotion
Management and Automation Layer Products
VMware Update Manager (VUM)
Update Manager and DRS
VDI - Virtual Desktop Manager (VDM)


Guided Consolidation
VMware Site Recovery Manager
Site Recovery Manager Key Components
VMware Converter Enterprise Capabilities
Using Lab Manager with VMware Infrastructure
VMware Lifecycle Manager
Lifecycle Workflow Management
Summary of VI3 Products and Features


M O D U L E 1

Virtual Infrastructure Overview


Importance

Virtualization is a technology that is revolutionizing the computer industry. It is the foundational technology for virtual infrastructure. This module introduces virtualization and VMware® Infrastructure 3.

Objectives for the Learner
• Understand the concept of virtualization
• Understand the basic components of VMware Infrastructure 3
• Understand virtual network components
• Understand storage and datastores

Lesson Topics
• Define virtual infrastructure
• Describe how virtualization works
• Introduce VMware Infrastructure components
• Describe virtual network components
• Introduce storage, datastores, and VMFS

Course Presentation Guidance

Remember that this course is just an “overview” of VMware Infrastructure 3. It is not a system administration course. This course does not aim to provide detailed technical information and explanations. Customers who actually buy the product will have to send their administrators to one of the system administration courses later on. The goal of this course is to provide a potential customer with just enough technical background to do the following:

• Excite them about the product

• Give them some hands-on experience with the product

• Allow them to make informed decisions about purchasing the product

• Provide them with enough core concepts that they can understand the more advanced products presented by sales, including Update Manager, Lab Manager, and Site Recovery Manager.

Based on the above points, you can see that this course is primarily a sales presentation with some technical depth added. The course includes nearly three hours of hands-on labs. This means you will have about three and a half hours to lecture from the material. There is not enough time to delve into technical details.


What Is Virtual Infrastructure?
Slide 1-5

Virtualization is an important concept to understand because it has become widely deployed across both large, multinational corporations and small-to-medium-sized businesses. Virtualization is also the foundational technology of the VMware Infrastructure. Understanding VMware Infrastructure well means understanding the basic concepts of virtualization.

This module introduces both general virtualization concepts and information specific to VMware Infrastructure.

The pages that follow introduce the basic hardware and software components that comprise a VMware Infrastructure. All the hardware and software components introduced here will be used throughout the remainder of the course.

In traditional physical datacenters, there is a tight relationship between servers, disk drives, and network ports, and the operating systems and applications they support. A virtual infrastructure, like VMware Infrastructure, allows us to break this tight relationship. VMware Infrastructure allows the dynamic mapping of compute, storage, and network resources to business applications.

No longer is there a one-to-one relationship between an operating system and the physical hardware. In a VMware Infrastructure, multiple operating systems and their applications share the hardware resources of a physical host, physical LUN, or physical storage and network ports. Sharing hardware resources provides the basis for server consolidation now and server containment later.

Reducing the number of required servers, storage LUNs, storage ports, and network ports reduces both capital and operational costs. Many companies have now standardized on virtual infrastructures and have “virtualize first” policies.

Course Flow

Modules 1-8 introduce only what might be considered the more “core” and “common” VI3 features like encapsulation, virtualized network and storage components, VMotion, DRS, and HA. The products and features built on these core features, like Storage VMotion, Update Manager, VCB, etc., are mentioned only via a quick overview in Module 9.

Virtual Infrastructure allows dynamic mapping of compute, storage, and network resources to business applications.

Virtual Infrastructure provides the opportunity for both current server consolidation and future server containment.


How Does Virtualization Work?
Slide 1-6

Virtualization is based on the concept of a virtual machine. Virtual machines are created by a virtualization software layer installed above the hardware. Virtual hardware in the virtual machines is created by the virtualization software. Virtual machines share the actual physical hardware.

The virtualization software includes a hypervisor. The hypervisor is a supervisory layer that schedules virtual machine access to the hardware and prevents conflicts, so multiple virtual machines share hardware resources without interfering with each other. Apart from standard network connections, virtual machines have no knowledge of each other and run in entirely separate partitions for security purposes.

The hypervisor can run as an application on an existing operating system or be implemented as its own operating system. A hypervisor running as an application is referred to as a hosted hypervisor, while a standalone hypervisor is sometimes referred to as a bare-metal hypervisor. Bare-metal hypervisors incur less overhead. This typically translates to higher virtual machine performance.

• It allows multiple operating system instances to run concurrently on a single computer within virtual machines.

• A virtualization layer creates the virtual machines.

• The virtualization layer is implemented using either a hosted or bare-metal hypervisor architecture.
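The scheduling role described in the bullets above can be sketched as a toy model (all names invented here; a real hypervisor's CPU scheduler is far more sophisticated than round-robin):

```python
from collections import deque

class ToyHypervisor:
    """Illustrative only: round-robin scheduling of VMs onto physical CPUs."""
    def __init__(self, physical_cpus):
        self.physical_cpus = physical_cpus
        self.run_queue = deque()

    def register(self, vm_name):
        # Each VM is isolated; the hypervisor is the only shared layer.
        self.run_queue.append(vm_name)

    def next_timeslice(self):
        # Give the next batch of VMs a turn on the physical CPUs.
        scheduled = []
        for _ in range(min(self.physical_cpus, len(self.run_queue))):
            vm = self.run_queue.popleft()
            scheduled.append(vm)
            self.run_queue.append(vm)  # back of the queue for its next turn
        return scheduled

hv = ToyHypervisor(physical_cpus=2)
for name in ["vm1", "vm2", "vm3"]:
    hv.register(name)
print(hv.next_timeslice())  # vm1 and vm2 get the two physical CPUs
print(hv.next_timeslice())  # vm3's turn; vm1 cycles back
```

The point is only that the hypervisor, not the guests, decides which virtual machine runs on which physical CPU at any instant.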


ESX Uses a Hypervisor Architecture
Slide 1-7

VMware® ESX uses a bare-metal hypervisor that allows multiple virtual machines to run simultaneously on the same physical server. ESX does not require an operating system; ESX is the operating system.

The ESX hypervisor is called the VMkernel. The VMkernel is a proprietary, high-performance kernel optimized for running virtual machines.

The ESX service console is used only to provide a management interface to the VMkernel. The service console is a cut-down, secure version of Linux. The service console is scheduled to use hardware resources in a manner similar to normal virtual machines.

ESX 3 and ESXi are both bare-metal hypervisors that partition physical servers into multiple virtual machines. The difference is that ESXi does not use a Linux service console as a management interface. ESXi is managed by VirtualCenter, using an embedded hardware agent.

ESXi is discussed later in the course.

• A bare-metal hypervisor system does not require an operating system. The VMkernel is the hypervisor on an ESX host.

• The service console provides an extensible, secure management interface to the VMkernel.


VMware Infrastructure 3
Slide 1-8

VMware Infrastructure 3 is VMware’s premier product family designed for building and managing virtual infrastructures. It is a suite of software that provides virtualization, management, resource optimization, application availability, and migration capabilities.

VMware Infrastructure 3 consists of products and features that include the following:

• A software suite for optimizing and managing IT environments through virtualization
• VMware ESX or ESXi
• VMware Virtual SMP
• VMware High Availability (HA)
• VMware VMotion
• VMware Distributed Resource Scheduler (DRS)
• VMware VMFS
• VMware Consolidated Backup (VCB)
• VMware Update Manager
• VMware Storage VMotion

• VMware VirtualCenter – Provisions, monitors, and manages a virtualized IT environment

VMware ESX 3 – Platform on which virtual machines run

VMware ESXi – Alternate platform on which virtual machines run

VirtualCenter – Centralized management tool for ESX hosts and virtual machines

Virtual SMP – Multiprocessor support (up to four processors) for virtual machines

VMware HA – VirtualCenter high availability feature for virtual machines

VMware VMotion – VirtualCenter feature that permits migrating powered-on virtual machines with no disruption

VMware DRS – VirtualCenter feature that provides dynamic workload balancing across ESX hosts


Each of these products and features is discussed in more detail throughout the course.

VMware VMFS – Cluster-aware file system optimized to hold virtual machine, virtual machine template, and ISO files

VMware Consolidated Backup – Centralized, online backup framework for virtual machines

VMware Update Manager – Automatic patch management of ESX hosts and select guest operating systems

Storage VMotion – Migration of powered-on virtual machine files with no disruption


VMware Infrastructure Components
Slide 1-9

VMware Infrastructure consists of a number of hardware and software components.

Servers running ESX provide hardware resources to multiple virtual machines. ESX hosts use one or more datastores as repositories for virtual machine, virtual machine template, and ISO files.

VirtualCenter is a Windows-based application that provides centralized management of ESX hosts and virtual machines. VirtualCenter management information is stored in a dedicated VirtualCenter database; both Oracle and Microsoft SQL Server databases are supported. VirtualCenter agents are configured on the ESX hosts. VirtualCenter tasks sent to an ESX host are received by the VirtualCenter agent. Connections between the VirtualCenter Server and the VirtualCenter agents are secured by Secure Sockets Layer (SSL).

Purchased VMware Infrastructure products and features are licensed through one or more license files. License files are obtainable using a self-service Web portal. Downloaded license files are typically stored in a directory on a centralized license server. A centralized license server simplifies license management. It is possible to store license files on individual ESX hosts, but not all VMware Infrastructure products and features are supported using this option.

Different licensing packages are available, depending on business requirements and cost constraints. A 60-day evaluation license that enables all features is available during installation.


One or more VMware Infrastructure Clients (VI Clients) provide graphical user interface–based management of the VMware Infrastructure. The VI Client is a Windows application. Connections between the VI Clients and the VirtualCenter Server are secured by SSL.


Management Made Easy
Slide 1-10

VirtualCenter and the VI Client interface make centralized management of the VMware Infrastructure easy. During a default VI Client installation, a VI Client icon is added to the user desktop.

You use this icon to launch the VI Client interface. When prompted, you enter a Windows-based user account name and password to log in to the VirtualCenter Server.

The VI Client interface provides graphical management of VMware Infrastructure components, including ESX hosts, virtual machines, and virtual machine templates.

• Use the VI Client to log in to VirtualCenter.

• VI Client and VirtualCenter provide easy, centralized, graphical management of the VMware Infrastructure.
  • ESX hosts
  • Virtual machines
  • Templates


View VirtualCenter Inventory
Slide 1-11

The VI Client displays VMware Infrastructure management objects in a hierarchical format called an inventory. Four different inventory views are available to the user. This slide illustrates the two most common views used to display the VirtualCenter inventory: the Hosts & Clusters view and the Virtual Machines & Templates view. The other two views are the Networks view and the Datastores view.

On the slide, Los Angeles is an example of a datacenter. ESX hosts and their virtual machines are added to a datacenter that often corresponds to a specific geographic location. Migration of running virtual machines is permitted only between ESX hosts within the same datacenter.

Hosts and Clusters, Americas, and Discovered Virtual Machines are all examples of folders. Folders are used to group objects in the inventory. Folders beneath the datacenter in the Virtual Machines and Templates view are rendered in blue.

The objects called kentfield03.priv.vmeduc.com and kentfield04.priv.vmeduc.com are ESX hosts. ESX hosts do not appear in the Virtual Machines and Templates view. ESX hosts are added to the VirtualCenter inventory after they are installed.

Database01 and Database02 are virtual machines. The icons indicate that Database01 is powered on, while Database02 is not. Virtual machines can be organized differently in the different inventory views.


DatabaseTemplate and FilePrint Template are virtual machine templates. Virtual machine templates can be used to quickly deploy additional virtual machines. Templates do not appear in the Hosts and Clusters view.

The VirtualCenter security model allows permissions to be applied to any object in the inventory. Permissions are beyond the scope of this course.
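As a rough illustration, the inventory hierarchy described above can be modeled as a nested structure. Object names are taken from the slide, but the assignment of virtual machines to specific hosts is invented here, and the real inventory model also tracks clusters, resource pools, templates, and permissions:

```python
# A minimal sketch of the Hosts & Clusters inventory hierarchy:
# folder -> datacenter -> ESX hosts -> virtual machines.
inventory = {
    "Hosts and Clusters": {                                # folder
        "Los Angeles": {                                   # datacenter
            "kentfield03.priv.vmeduc.com": ["Database01"], # host -> VMs
            "kentfield04.priv.vmeduc.com": ["Database02"],
        }
    }
}

def list_vms(node):
    """Walk the nested inventory and collect every virtual machine."""
    if isinstance(node, list):
        return list(node)
    vms = []
    for child in node.values():
        vms.extend(list_vms(child))
    return vms

print(sorted(list_vms(inventory)))  # ['Database01', 'Database02']
```

The different inventory views (Hosts & Clusters, Virtual Machines & Templates, Networks, Datastores) are simply different hierarchies built over the same set of managed objects.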


ESX Storage Choices
Slide 1-12

ESX storage support is flexible enough to meet most cost, availability, and performance requirements. ESX supports the following kinds of storage:

• Direct-attached SCSI and SATA storage
• Fibre Channel storage
• iSCSI storage
• NFS storage

Storage accessed by ESX is called a datastore.

Multiple LUNs from different storage types can be combined to form a single datastore. LUNs can be dynamically added to an existing datastore if more storage space is required.
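A minimal sketch of that extent behavior, with invented LUN names and sizes (real VMFS extent management is done through the VI Client, not code like this):

```python
# Illustrative sketch: a datastore's capacity grows as LUN extents are added.
datastore_extents = {"datastore1": [("lun0", 200)]}   # sizes in GB

def add_extent(datastore, lun, size_gb):
    # Dynamically grow an existing datastore by appending another LUN.
    datastore_extents[datastore].append((lun, size_gb))

def capacity_gb(datastore):
    # Total capacity is the sum of all extents backing the datastore.
    return sum(size for _, size in datastore_extents[datastore])

add_extent("datastore1", "lun1", 300)
print(capacity_gb("datastore1"))  # 500
```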

• ESX supports multiple types of storage:
  • Local SCSI, SATA, Fibre Channel, iSCSI, NFS
• Flexibility to meet cost, availability, and performance requirements


VMFS and NFS Datastores
Slide 1-13

Storage LUNs are typically formatted with the Virtual Machine File System (VMFS). VMFS is a high-performance, cluster-aware file system designed by VMware to hold virtual machine, virtual machine template, and ISO files.

An alternative file system type is NFS. Like VMFS, NFS is a shared file system and it also holds virtual machine, virtual machine template, and ISO files.

Multiple ESX servers can simultaneously access the same VMFS or NFS datastore. Simultaneous access is designed to support high availability and resource-balancing features like VMotion or VMware DRS and HA clusters, which are covered later in the course.
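The shared-access requirement can be sketched as a simple check (host and datastore names are invented; this is the idea, not the actual migration logic):

```python
# Sketch: a powered-on VM can move between hosts only if both hosts
# can access the datastore holding the VM's files.
host_datastores = {
    "esx01": {"shared-vmfs1", "local-esx01"},
    "esx02": {"shared-vmfs1", "local-esx02"},
}

def can_migrate(vm_datastore, src, dst):
    # Both source and destination hosts must see the VM's datastore.
    return (vm_datastore in host_datastores[src]
            and vm_datastore in host_datastores[dst])

print(can_migrate("shared-vmfs1", "esx01", "esx02"))  # True
print(can_migrate("local-esx01", "esx01", "esx02"))   # False
```

This is why VMotion, DRS, and HA all list shared storage among their prerequisites.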

• Direct-attached, Fibre Channel, and iSCSI LUNs use the cluster-aware VMFS file system.
• NFS storage is not formatted with VMFS.
• Both VMFS and NFS datastores provide shared storage for virtual machine files, virtual machine template files, and ISO files.
• Shared datastores support high availability and resource management features:
  • VMotion
  • VMware DRS and HA clusters


Virtual Network Components
Slide 1-14

ESX uses physical and virtual network components to provide connectivity to virtual machines, the VMkernel, and the service console. One of the key virtual components is the virtual switch.

A virtual switch is a software construct implemented in the VMkernel. A virtual switch might be connected to one or more physical NICs, or none at all. Physical NICs provide virtual switches with connectivity to external devices.

The ability to connect multiple physical NICs to a single virtual switch is called NIC teaming. NIC teams provide high availability through automatic failover and failback. NIC teams also provide automatic load distribution for better performance.

The VMkernel creates virtual NICs for virtual machines, the VMkernel, and the service console. The VMkernel assigns a unique MAC address to each virtual NIC.

The ability to connect multiple virtual machines, the VMkernel, and the service console through a small number of physical NICs and physical switch ports reduces both capital and operational costs.
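A toy sketch of the teaming behavior described above, with invented NIC names (real ESX teaming policies offer several load-balancing modes; this simple hash stands in for all of them):

```python
# Sketch of NIC-team failover: traffic is spread across the team's
# active physical NICs, and a failed NIC is skipped automatically.
team = {"vmnic0": "up", "vmnic1": "up"}

def active_nics(team):
    return [nic for nic, state in team.items() if state == "up"]

def pick_uplink(team, port_id):
    # Simple load distribution: hash the virtual port onto an active NIC.
    nics = active_nics(team)
    if not nics:
        raise RuntimeError("no uplink available")
    return nics[port_id % len(nics)]

print(pick_uplink(team, port_id=7))   # spread across vmnic0/vmnic1
team["vmnic0"] = "down"               # failover: vmnic0 fails
print(pick_uplink(team, port_id=7))   # all traffic now uses vmnic1
```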


Flexible Network Connectivity
Slide 1-15

Virtual switch architecture provides the IT architect the flexibility to meet datacenter connection requirements in cost-effective ways.

Virtual switches include connections. There are three connection types:

• A service console port
• A VMkernel port
• A virtual machine port group

Connections must be defined before a virtual switch can be used. More than one connection type can exist on a single virtual switch, or each connection type can exist on its own virtual switch.

As each connection is configured, it is assigned a unique name by the administrator. This name is displayed in the VI Client interface and is used when connecting virtual machines, the VMkernel, or the service console to a network.

A service console port connection provides the access point for the service console to connect to an external management network. Multiple service console connections on different virtual switches and networks can be created to support high availability.

A VMkernel port connection provides the access point for the VMkernel to connect to external IP storage or VMotion networks. Multiple VMkernel connections on different virtual switches and networks can be created to support different types of network traffic. For example, one VMkernel port could be configured to support iSCSI storage traffic while another VMkernel port could be configured to support VMotion traffic. Different types of network traffic should be segregated for security and performance reasons.

• There are three types of network connections:
  • Service console port – Access to ESX host management network
  • VMkernel port – Access to VMotion, iSCSI, and/or NFS/NAS networks
  • Virtual machine port group – Access to VM networks
• More than one connection type can exist on a single virtual switch. Or each connection type can exist on its own virtual switch.

(Slide graphic labels: virtual machine port groups, uplink ports, Service Console port, VMkernel port)

A virtual machine port group connection provides an access point for a virtual machine to connect to either internal or external virtual machines, or to other external devices. Each virtual machine port group has its own network configuration. For example, on the same virtual switch or on different virtual switches, virtual machine port group Alpha can be assigned to one network, while virtual machine port group Beta is assigned to another network.

The ability to configure multiple virtual machine port group connections on the same or different virtual switches provides the flexibility to meet most business requirements.


Virtual Switches Support VLANs (Slide 1-16)

ESX supports virtual LANs with VLAN IDs between 1 and 4095 on VMkernel, service console, and virtual machine connections. VLAN functionality provides additional flexibility and cost savings in network configuration.

• VMkernel, service console, and virtual machine port groups support IEEE 802.1Q VLAN tagging.
• Example:
  • Packets from a VM are tagged as they exit the virtual switch.
  • Packets are untagged as they return to the VM.


Lab for Module 1 (Slide 1-17)

• Using VirtualCenter
• In this lab, you perform the following tasks:
  • Use the VI Client to log in to VirtualCenter
  • View the VirtualCenter inventory
  • View virtual network components
  • View storage components


M O D U L E 2


Create a Virtual Machine 2

Importance
Virtual infrastructure is based on virtual machines. The ability to quickly provision virtual machines is critical. To quickly provision multiple virtual machines, you create a base image virtual machine. Once you have a base image virtual machine, you can convert it to a template and provision additional virtual machines from the template. This dramatically decreases deployment time and reduces costly mistakes.

Objectives for the Learner
• Understand virtual machines and virtual machine hardware choices
• Create a template and deploy a virtual machine from a template

Lesson Topics
• Define a virtual machine
• Virtual machine hardware
• Installing a guest operating system into a virtual machine
• Creating templates
• Deploying virtual machines from a template
• Guest operating system customization


What Is a Virtual Machine? (Slide 2-5)

A virtual machine is a software construct controlled by the VMkernel. A virtual machine has virtual hardware that appears as physical hardware to an installed guest operating system and its applications. All virtual machine configuration, state information, and data are encapsulated in a set of files stored on a datastore. This encapsulation means virtual machines are portable and can easily be backed up or cloned.

There are multiple methods to create virtual machines. One way to create a virtual machine is by launching an easy-to-use wizard in the VI Client and answering a few simple questions. A second—and faster—method is to use the VI Client to deploy the new virtual machine from a virtual machine template.

• A software platform that, like a physical computer, runs an operating system and applications
• Encapsulated in a discrete set of files. The main files are these:
  • Configuration file (.vmx)
  • Virtual disk file (.vmdk)
• Encapsulation supports disaster recovery and high availability features and products.
• Individually created using a VI Client wizard or deployed from a template
(Slide graphic label: Virtual Machine)


ESX Virtual Machine Hardware (Slide 2-6)

Each guest operating system sees ordinary hardware devices. It does not know these devices are virtual. Furthermore, all VMware® ESX 3 virtual machines have uniform hardware, except for a small number of variations the system administrator can apply. This makes virtual machines uniform and portable across ESX hosts.

Each virtual machine has a total of six virtual PCI slots. One of these is used for the virtual video adapter. Therefore, the total number of virtual Ethernet and SCSI host adapters cannot exceed five. The virtual chipset is an Intel 440BX–based motherboard with an NS338 SIO chip. This chipset ensures compatibility for a wide range of supported guest operating systems, including legacy operating systems like Windows NT.

(Slide graphic labels: VM chipset; 1 CPU, or 2 or 4 CPUs with VMware SMP; up to 64GB RAM; up to 4 CD-ROMs; up to 2 ports; up to 2 ports; 1-4 adapters; 1-4 adapters with 1-15 devices each; 1-2 drives.)

ESX virtual machine hardware is scalable enough to meet most business and application needs.


Centralized Virtual Machine Management (Slide 2-7)

Virtual machine management access is provided through a virtual machine console window available in the VI Client, or in an optional Web-based interface (not shown here).

The virtual machine’s console provides the mouse, keyboard, and console screen functionality. You can use the virtual machine console to access the virtual machine BIOS, power-cycle the virtual machine, modify virtual hardware, or install an operating system.

The virtual machine console is not normally used to access the virtual machine’s applications. Tools such as VMware’s Virtual Desktop Manager, RDP, VNC, X Windows, or Web browsers are typically used instead.

• Send power changes to VM.
• Access VM’s guest OS.
• Send Ctrl+Alt+Del to guest OS: press Ctrl+Alt+Ins in the VM console.
• Modify virtual machine hardware.
(Slide graphic label: VM console icon)


Fast, Flexible Guest OS Installations (Slide 2-8)

It is easy to install a guest operating system or an application by using ISO files and virtual CD-ROM devices. To install software, you connect an ISO image loaded on an accessible datastore to the virtual CD-ROM device. To simplify management, a library of ISO images can be written to a datastore accessible to all ESX hosts. Although it is possible to map a physical CD in a physical CD-ROM device to the virtual CD-ROM device, using ISO images frees administration staff from having to be physically present in the datacenter. This saves time and reduces costs.

VMware® Infrastructure 3 supports a large variety of operating systems including Windows, Linux, Solaris, and Novell. For a complete list of supported operating systems see the Guest Operating System Installation Guide available on the VMware Web site at http://www.vmware.com/pdf/GuestOS_guide.pdf.

• Install from ISO image (mounted on virtual CD-ROM drive) to virtual disk.
• Configure VMFS or NFS datastores with a library of ISO images for easy VM deployment and application installation.
(Slide graphic labels: Local, VM Console)


Reducing Virtual Machine Deployment Time (Slide 2-9)

Templates are a VirtualCenter feature. A template is a master image of a virtual machine that is marked as “never to be powered on.” Templates appear in the inventory only while you are using the Virtual Machines and Templates view.

A new virtual machine can be quickly provisioned using a template. The new virtual machine is essentially a clone of the template, although VirtualCenter has the ability to apply guest operating system customization to the clone. Creating a library of templates can dramatically decrease provisioning time and reduce costly mistakes. Templates can be stored in compact disk format to reduce storage costs.

• Templates are a VirtualCenter feature used to create commonly deployed VMs.
• A template is a VM marked as “never to be powered on.”
• Disk files are stored in either normal or compact disk format.
• Templates can be stored in a VMFS or NFS datastore.


Create a Template (Slide 2-10)

There are two ways to create a template using the VI Client: Clone to Template and Convert to Template. Which method is chosen depends on whether the original virtual machine is still needed. The original virtual machine is no longer available when converted to a template.

• Create a base image VM, power it off, then …
• Two methods:
  • Clone VM to Template
  • Convert VM to Template
• Choose Clone to Template if the original VM is still needed.


Deploy a Virtual Machine from a Template (Slide 2-11)

You use the VI Client to provision a new virtual machine from a template. You launch the Deploy Template wizard by right-clicking the template, then answer a few simple questions. The datacenter administrator can choose the new virtual machine display name, its location in the VirtualCenter inventory, its ESX host, and the datastore to use.

To deploy a VM, provide the following:
• Virtual machine name
• Inventory location
• ESX host, datastore
• Guest operating system customization data


Automating Guest OS Customization (Slide 2-12)

VirtualCenter can automatically apply unique system information to a virtual machine deployed from a template. This saves time and prevents costly human error. Customization exists for both Windows and Linux guest operating systems. Linux guest customization is automatically enabled when VirtualCenter is installed. Windows guest customization must be manually enabled by installing the Windows sysprep software on the VirtualCenter Server.

• VirtualCenter can automatically apply unique system information to a virtual machine when it is deployed from a template.
• For guest operating system customization to work, it must be enabled in VirtualCenter.
  • To enable for Windows VMs, install sysprep files on the VirtualCenter Server.
  • Already enabled for Linux VMs (open-source components are installed on the VirtualCenter Server)


Lab for Module 2 (Slide 2-13)

Template Provisioning
In this lab, you perform the following tasks:
• Convert a virtual machine to a template
• Convert a template back to a virtual machine
• Deploy a virtual machine from a template


M O D U L E 3


CPU and Memory Resource Pools 3

Importance
Resource pools allow CPU and memory resources to be hierarchically assigned to meet the business requirements of your enterprise. Virtual machine CPU and memory resource controls provide finer-grained tuning options to meet the business requirements of your applications.

Objectives for the Learner
• Understand available virtual machine CPU and memory resource controls
• Use resource pools for resource policy control

Lesson Topics
• How are virtual machines’ CPU and memory resources managed?
• What is a resource pool?
• Managing a pool’s resources
• A resource pool example
• Admission control


CPU Management Supports Server Consolidation (Slide 3-5)

Physical servers in a typical datacenter use on average less than 10 percent of their available CPU resources. Higher CPU utilization can be achieved by combining multiple virtual machines on one physical server. This efficient use of CPU resources reduces datacenter capital and operating costs. VMware® ESX hosts often achieve and maintain 80–90 percent CPU utilization.

A virtual machine is configured with at least one virtual CPU (VCPU). When a VCPU needs to run, the VMkernel maps the VCPU to an available “hardware execution context.” A hardware execution context is a processor’s capability to schedule one thread of execution. A hardware execution context is a CPU core or a hyperthread, if the CPU supports hyperthreading. Hyperthreaded or multicore CPUs provide two or more hardware execution contexts on which VCPUs can be scheduled to run.

With ESX’s virtual symmetric multiprocessor (VSMP) feature, virtual machines can be configured with one, two, or four VCPUs. A single-VCPU virtual machine gets scheduled on one hardware execution context at a time. A two-VCPU virtual machine gets scheduled on two hardware execution contexts at a time, or not at all. A four-VCPU virtual machine gets scheduled on four hardware execution contexts at a time, or not at all.

• A virtual machine can have 1, 2, or 4 virtual CPUs (VCPUs).
• When a VCPU needs to be scheduled, the VMkernel maps a VCPU to a “hardware execution context.”
• A hardware execution context is a processor’s capability to schedule one thread of execution.
  • A core or a hyperthread
  • VMkernel load balances
• All the VCPUs in a VM must be simultaneously scheduled.
(Slide graphic: VCPUs mapped onto hardware execution contexts, H.E.C.)
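The all-or-nothing placement rule can be illustrated with a short sketch. The greedy, in-order policy below is an assumption made purely for illustration; the real VMkernel scheduler is far more sophisticated about fairness and placement.

```python
def schedule(vms, num_hecs):
    """Co-schedule VMs for one time slice: a VM runs only if ALL of its
    VCPUs can be placed on free hardware execution contexts at once.

    `vms` is a list of (name, vcpu_count) pairs; returns the names of the
    VMs scheduled this slice, placed greedily in list order.
    """
    free = num_hecs
    running = []
    for name, vcpus in vms:
        if vcpus <= free:        # all-or-nothing placement
            free -= vcpus
            running.append(name)
    return running
```

On a host with four hardware execution contexts, a 4-VCPU VM cannot run alongside anything else in the same slice, which is why oversizing VCPU counts can hurt rather than help.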


Flexible Resource Allocation (Slide 3-6)

CPU and memory shares determine the relative amount of CPU and memory resources a virtual machine is allocated during periods of contention. Shares have no effect on resource allocation until contention occurs.

Consider VM B in the graphic. On the first line, VM B is assigned 1,000 of the total 3,000 shares of a resource. All other factors being equal, VM B will be allocated one-third of the resource during periods of contention.

On the second line, VM B’s share allocation has been increased to 3,000. At this time, VM B would be allocated three-fifths of the resource during periods of contention.

On the last two lines, VM B still has 3,000 assigned shares, although the total number of assigned shares changes as VM C is powered off or VM D is powered on. When VM C is powered off, VM B is entitled to three-quarters of the resource. When VM D is then powered on, VM B is again entitled to three-fifths of the resource.
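The proportional-share arithmetic in the example can be checked with a small helper. The slide gives only VM B’s shares and the running totals, so the individual counts for VM A and VM C (1,000 each) are assumed here for illustration.

```python
def entitlement(shares: dict) -> dict:
    """Under a proportional-share policy, each powered-on VM's entitlement
    during contention is its own shares divided by the total assigned shares."""
    total = sum(shares.values())
    return {vm: s / total for vm, s in shares.items()}

# First line of the slide: VM B holds 1,000 of 3,000 total shares.
line1 = entitlement({"VM A": 1000, "VM B": 1000, "VM C": 1000})
# Second line: VM B raised to 3,000 shares (5,000 total), so three-fifths.
line2 = entitlement({"VM A": 1000, "VM B": 3000, "VM C": 1000})
```

The same function reproduces any of the slide's rows: changing any VM's share count, or powering a VM on or off, simply changes the denominator for everyone.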

• Proportional-share system for relative resource management
  • Used to grant resources according to business requirements
  • Applied during resource contention
• Prevents virtual machines from monopolizing resources
(Slide graphic: the number of shares, changing as shares are reassigned and as VMs power on or off.)


Virtual Machine CPU Resource Controls (Slide 3-7)

ESX features three virtual machine CPU resource controls that are used to tune virtual machine behavior. These CPU resource controls are dynamic in that they can be modified while the virtual machine is powered on or off.

CPU “limit” defines the maximum amount of physical CPU, measured in MHz, that a virtual machine is allowed.

CPU “reservation” defines the amount of physical CPU, measured in MHz, reserved for this virtual machine at power up. As long as a virtual machine is not using its total reservation, the unused portion is available for use by other virtual machines. The VMkernel will not allow a virtual machine to power on unless it can guarantee its CPU reservation.

Each virtual machine is assigned a number of CPU “shares.” The more shares a virtual machine has relative to other virtual machines, the more often it gets CPU time above its reservation when there is contention for CPU resources.

Limit
• A cap on the consumption of physical CPU time by this VM, measured in MHz

Reservation
• A certain number of physical CPU cycles reserved for this VM, measured in MHz
• The VMkernel chooses which CPUs, and may migrate.
• A VM will power on only if the VMkernel can guarantee the reservation.
• A reservation of 1,000 MHz might be generous for a 1-VCPU VM, but not for a 4-VCPU VM.

Shares
• More shares means this VM will win competitions for physical CPU time more often.
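A toy allocator shows how the three controls interact during contention: reservations are honored first, spare megahertz is dealt out in proportion to shares, and limits cap the result. This is a simplified model for teaching purposes, not the VMkernel's actual scheduling algorithm.

```python
def allocate_mhz(capacity, vms):
    """Divide physical CPU capacity (in MHz) among VMs under contention.

    `vms` maps a name to {"reservation": ..., "limit": ..., "shares": ...}.
    Every VM first receives its reservation; the remainder is granted by
    share ratio, and no VM is pushed past its limit.
    """
    alloc = {name: vm["reservation"] for name, vm in vms.items()}
    spare = capacity - sum(alloc.values())
    assert spare >= 0, "admission control would refuse these reservations"
    active = {name for name, vm in vms.items() if alloc[name] < vm["limit"]}
    while spare > 1e-6 and active:
        total_shares = sum(vms[n]["shares"] for n in active)
        handed_out = 0.0
        for name in list(active):
            grant = spare * vms[name]["shares"] / total_shares
            take = min(grant, vms[name]["limit"] - alloc[name])
            alloc[name] += take
            handed_out += take
            if alloc[name] >= vms[name]["limit"] - 1e-9:
                active.discard(name)   # limit reached: stop granting
        spare -= handed_out
    return alloc
```

Note how capacity freed by a VM that hits its limit is redistributed to the remaining VMs by their shares, mirroring the statement above that unused entitlement is available to others.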


Supporting Higher Consolidation Ratios (1) (Slide 3-8)

Although physical servers in a datacenter are often configured with large amounts of memory, only a small portion is typically active at any one time. Higher active memory utilization is achieved by combining multiple virtual machines on one physical server. This efficient use of memory resources reduces datacenter capital and operating costs.

The ESX VMkernel manages the server’s RAM. RAM in an ESX host is called “machine memory.” Various pages of machine memory are collected and presented as contiguous memory to each virtual machine. Virtual machines may actually be sharing identical pages of read-only machine memory. This is called “transparent page sharing” and is covered later in this module.

Each virtual machine guest operating system manages the machine memory presented to it by the VMkernel. Memory managed by the virtual machine is called “physical memory” and is analogous to the memory available in a physical server.

The definition of “virtual memory” remains unchanged in a virtual datacenter. Virtual memory is managed by the virtual machine’s guest operating system and is comprised of both real memory and disk space.

Virtual memory
• Memory mapped by an application inside the guest operating system

Physical memory
• ESX presents the virtual machines and the service console with physical pages.
• Identical pages might be shared by multiple virtual machines (transparent page sharing).

Machine memory
• Actual pages allocated by ESX from RAM
• Multiple guest operating system pages might map to the same machine page (transparent page sharing).

(Slide graphic labels: Application, Guest OS, Hypervisor)


Supporting Higher Consolidation Ratios (2) (Slide 3-9)

ESX uses several features designed by VMware to support efficient use of RAM and higher consolidation ratios. Transparent page sharing is one of these features.

Transparent page sharing helps to reduce the total amount of required RAM by allowing virtual machines to share identical memory pages. The VMkernel dynamically scans ESX memory for read-only pages with identical content.

When pages are found, the duplicates are released and the virtual machines are mapped to the single remaining page. If any virtual machine attempts to modify a shared page, the VMkernel will create a new, private page for that virtual machine to use.
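The scan, share, and copy-on-write cycle just described can be modeled with a small class. Page contents stand in for the hash-based matching ESX actually performs, and the bookkeeping is deliberately simplified.

```python
class PageSharingHost:
    """Toy model of transparent page sharing with copy-on-write.

    Guest pages with identical content map to a single machine page;
    a VM's first write to a shared page gives it a private copy.
    """
    def __init__(self):
        self.by_content = {}   # page content -> machine page number
        self.mapping = {}      # (vm, guest_page) -> machine page number
        self.refcount = {}     # machine page number -> number of mappings
        self.next_page = 0

    def _alloc(self):
        page = self.next_page
        self.next_page += 1
        self.refcount[page] = 0
        return page

    def map_page(self, vm, guest_page, content):
        page = self.by_content.get(content)
        if page is None:                 # no identical page exists yet
            page = self._alloc()
            self.by_content[content] = page
        self.refcount[page] += 1
        self.mapping[(vm, guest_page)] = page

    def write_page(self, vm, guest_page):
        page = self.mapping[(vm, guest_page)]
        if self.refcount[page] > 1:      # shared: break the sharing (COW)
            self.refcount[page] -= 1
            private = self._alloc()
            self.refcount[private] = 1
            self.mapping[(vm, guest_page)] = private
```

Two VMs whose guests both hold an all-zero page end up backed by one machine page; the first write by either VM transparently splits them apart again.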

Transparent page sharing
• Supports higher server-consolidation ratios
• VMkernel detects identical pages in VMs’ memory and maps them to the same underlying machine page.
• No changes to guest operating system required
• VMkernel treats the shared pages as copy-on-write.
  • Read-only when shared
  • Private copies after write


Virtual Machine Memory Resource Controls (Slide 3-10)

ESX features four virtual machine memory resource controls that are used to tune a virtual machine’s behavior. Three of these memory resource controls are dynamic and can be modified while the virtual machine is powered on.

“Available memory,” measured in megabytes, is assigned to the virtual machine when it is created. It is the total amount of memory presented by the virtual machine to the guest operating system at boot-up. Available memory cannot be changed while the virtual machine is powered on.

Memory “limit,” measured in megabytes, defines the maximum amount of virtual machine memory that can reside in RAM. It never exceeds available memory. By default, available memory and memory limit are set to the same value.

Memory “reservation,” measured in megabytes, is the amount of RAM reserved by the VMkernel for the virtual machine at power-on. As long as a virtual machine has not used its total reservation, the unused portion is available for use by other virtual machines. The VMkernel will not allow a virtual machine to power on, unless it can guarantee the memory reservation.

Each virtual machine is assigned a number of memory “shares.” The more shares a virtual machine has relative to other virtual machines, the more often it is allocated RAM above its reservation when there is memory contention.

Available memory
• Memory size defined when the VM was created

Limit
• A cap on the consumption of physical memory by this VM, measured in MB
• Equal to available memory by default

Reservation
• A certain amount of physical memory reserved for this VM, measured in MB

Shares
• More shares means this VM will win competitions for physical memory more often

The VMkernel allocates a per-VM swap file to cover each VM’s range between available memory and reservation.


The VMkernel might use disk space as virtual machine virtual memory in unusual circumstances. The reserved disk space is calculated per virtual machine, using the difference between the memory limit and the memory reservation.
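The sizing rule reduces to one subtraction, sketched below using the available-memory figure (which, as the slide notes, equals the limit by default).

```python
def swap_file_mb(available_mb: int, reservation_mb: int) -> int:
    """Per-VM swap file size: the slice of the VM's memory that the
    VMkernel has NOT guaranteed in RAM and may have to page to disk."""
    if reservation_mb > available_mb:
        raise ValueError("reservation cannot exceed the VM's memory size")
    return available_mb - reservation_mb
```

For example, a 2,048 MB virtual machine with a 512 MB reservation needs a 1,536 MB swap file, while a fully reserved VM needs none.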


Using Resource Pools to Meet Business Needs (Slide 3-11)

Resource pools provide a business with the ability to divide and allocate CPU and memory resources hierarchically as required by business need. Reasons to divide and allocate CPU and memory resources include such things as maintaining administrative boundaries, enforcing charge-back policies, or accommodating geographic locations or departmental divisions. It is possible to further divide and allocate resources by creating child resource pools.

Configuring CPU and memory resource pools is possible only on nonclustered ESX hosts or on VMware® DRS-enabled clusters.

Clusters are indicated in the inventory with pie chart icons.

• A resource pool is a logical abstraction for hierarchically managing CPU and memory resources.
• Configurable on a standalone host or a VMware DRS-enabled cluster
• Provides resources for VMs and child resource pools
(Slide graphic: a root resource pool divided into child resource pools by geography, department, function, or hardware.)


Configuring a Pool's Resources (Slide 3-12)

Resource pools have CPU and memory resource controls that behave like virtual machine CPU and memory controls. Resource pool resource controls can be modified while virtual machines are running.

CPU and memory limits define the maximum amount of CPU or RAM a resource pool is allowed.

CPU and memory reservations define the amount of CPU or RAM reserved for the resource pool when it is created. The VI Client interface will not allow resource pool creation unless the reservation can be guaranteed.

Each resource pool is assigned a number of CPU and memory shares. The more shares a resource pool has relative to other resource pools (and, possibly, virtual machines), the more often it is allocated CPU and memory resources above its reservations during periods of contention.

Expandable reservations allow a resource pool with insufficient capacity to borrow CPU or memory resources from a parent pool to satisfy reservation requests from a child resource pool or virtual machine. Requests to borrow resources proceed up the pool hierarchy until the top level is reached or a pool with no expandable reservations is encountered. Expandable reservations provide great flexibility but have the potential to be abused.

• Resource pools have the following attributes:
  • Shares: Low, Normal, High, Custom
  • Reservations, in MHz and MB
  • Limits, in MHz and MB
    • Unlimited access by default (up to the maximum amount of resource accessible)
  • Expandable reservation?
    • Yes: VMs and subpools can draw from this pool’s parent.
    • No: VMs and subpools can only draw from this pool, even if its parent has free resources.


Viewing Resource Pool Information (Slide 3-13)

Use the resource pool’s Resource Allocation tab to view configuration and current usage information for virtual machines and child pools.

• Display the resource pool’s Resource Allocation tab.


Resource Pools Example: CPU Contention (Slide 3-14)

In this example, Finance has been assigned twice as many shares as Engineering because Finance supplies two-thirds of the ESX host’s budget. In this scenario, Engineering virtual machines could actually use more than one-third of the CPU resources as long as Finance does not use its two-thirds.

Although this example focuses only on CPU allocation, memory is allocated using similar methods.

(Slide graphic: Svr001, where all VMs shown are running on the same physical CPU (PCPU).)

• Engineering pool: 1,000 CPU shares, ~33% of the PCPU
  • Eng-Test: 1,000 CPU shares
  • Eng-Prod: 2,000 CPU shares
• Finance pool: 2,000 CPU shares, ~67% of the PCPU
  • Fin-Test: 1,000 CPU shares
  • Fin-Prod: 2,000 CPU shares

Resulting percentage of PCPU allocation: Eng-Test ~11%, Eng-Prod ~22%, Fin-Test ~22%, Fin-Prod ~45%.

Eng-Test gets ~33% of Engineering’s CPU allocation, or approximately 11% of the PCPU.
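The slide's percentages fall out of applying the share ratio level by level down the hierarchy, which a short recursive function can reproduce (the slide rounds Fin-Prod's 44.4% up to ~45%).

```python
def split_by_shares(children, amount):
    """Recursively divide `amount` of a resource among siblings in
    proportion to their shares. Each child is (shares, sub), where `sub`
    is None for a VM or another dict for a child resource pool."""
    total = sum(shares for shares, _ in children.values())
    result = {}
    for name, (shares, sub) in children.items():
        portion = amount * shares / total
        if sub is None:
            result[name] = portion          # a VM: record its slice
        else:
            result.update(split_by_shares(sub, portion))  # recurse into pool
    return result

# The hierarchy from the slide, expressed as fractions of the one PCPU:
pcpu = split_by_shares({
    "Engineering": (1000, {"Eng-Test": (1000, None), "Eng-Prod": (2000, None)}),
    "Finance":     (2000, {"Fin-Test": (1000, None), "Fin-Prod": (2000, None)}),
}, amount=1.0)
```

Because the split is relative at every level, doubling Engineering's shares would raise all of its VMs' allocations without touching the ratios inside the Finance pool.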


Admission Control for CPU and Memory Reservations (Slide 3-15)

Admission control affects whether a virtual machine is allowed to power on or a resource pool can be created. Admission control is enforced by the VMkernel and is based on reservations. If the VMkernel can guarantee the reservation, the virtual machine will power on or the pool can be created. Admission control, along with proper reservation settings, ensures applications can meet predetermined service-level agreements.

(Slide graphic: admission-control flowchart.)
The check runs whenever you power on a VM, create a new subpool with its own reservation, or increase a pool’s reservation:
• Can this pool satisfy the reservation? If yes, the operation succeeds.
• If not, is the pool’s reservation expandable? If yes, repeat the check at the parent pool; if no, the operation fails.
Minimizes operating system and application CPU and memory starvation
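The admission-control check translates into a short recursive function. This is a toy model: a real pool's reservation interacts with its parent in more ways than shown here, so treat it as a sketch of the flowchart logic only.

```python
class Pool:
    """Toy model of reservation-based admission control with
    expandable reservations."""
    def __init__(self, reservation, expandable=False, parent=None):
        self.reservation = reservation  # MHz or MB guaranteed to this pool
        self.reserved = 0               # already promised to VMs/subpools
        self.expandable = expandable
        self.parent = parent

    def admit(self, request):
        """Satisfy the request from this pool's unreserved capacity, or,
        if the reservation is expandable, borrow the shortfall from the
        parent pool; otherwise the power-on (or pool creation) fails."""
        unreserved = self.reservation - self.reserved
        if request <= unreserved:
            self.reserved += request
            return True                 # succeed
        shortfall = request - unreserved
        if self.expandable and self.parent and self.parent.admit(shortfall):
            self.reservation += shortfall   # capacity borrowed from parent
            self.reserved += request
            return True
        return False                    # fail
```

The recursion mirrors the flowchart: the request walks up the hierarchy until some ancestor can cover the shortfall, or a non-expandable pool stops it.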


M O D U L E 4


Migrate Virtual Machines Using VMotion 4

Importance
VMotion is a valuable tool for delivering higher service levels and improving overall hardware utilization.

Objectives for the Learner
• Understand the VMotion migration process
• Migrate virtual machines with VMotion
• Understand VMotion requirements

Lesson Topics
• VMotion migration
• VMotion compatibility requirements


VMotion Migration (Slide 4-5)

VMotion migration is a VirtualCenter feature that moves a running virtual machine from one VMware® ESX host to another with no virtual machine downtime.

VMotion capitalizes on the fact that the entire state of a running virtual machine is encapsulated in memory and in a set of files on a datastore. VMotion uses a dedicated Gigabit Ethernet network to move the memory from one ESX host to another. Virtual machine files do not have to be moved, because both the source and target ESX hosts have access to the datastore containing the virtual machine’s files. Migrated virtual machines maintain their unique host name, IP address, and MAC address.

VMotion enables higher service levels. First, virtual machines can be moved from one ESX host to another to accommodate planned downtime for hardware maintenance. Second, virtual machines can be moved to balance workloads across multiple ESX hosts.

• A VMotion migration moves a powered-on virtual machine from one ESX host to another.
• Why migrate using VMotion?
  • Higher service levels by allowing continued VM operation during scheduled hardware downtime
  • To balance overall hardware utilization


How VMotion Works (1) (Slide 4-6)

You initiate a VMotion migration using the VI Client. In the example above, the source ESX host is esx01, and the target ESX host is esx02. Both the source and target servers share access to a common datastore containing the virtual machine’s files. VMotion is fully supported on NFS datastores and VMFS datastores residing on either Fibre Channel or iSCSI storage networks. Both servers also share access to the dedicated VMotion Gigabit Ethernet network.

• Users currently accessing VM A on esx01
• Initiate migration of VM A from esx01 to esx02 while VM A is up and running.

How VMotion Works (2) (Slide 4-7)

The virtual machine’s memory state is copied over the VMotion network from the source to the target ESX host. While the virtual machine’s memory is being copied, users continue to access the virtual machine and can update pages in the source ESX host memory. A list of any modified memory pages is kept in a memory bitmap on the source ESX host.
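The pre-copy rounds described above can be sketched in a few lines of Python. This is an illustrative toy model, not VMware's implementation; the function name, the `dirty_log` parameter, and the convergence threshold are all invented for the sketch.

```python
def precopy(memory, dirty_log, threshold=16, max_rounds=10):
    """Illustrative VMotion-style iterative pre-copy.

    memory    : dict of page_id -> contents on the source host
    dirty_log : list of page-id sets, one per round, simulating the pages
                the guest writes while each copy round is in flight
    Returns the target-side copy plus the residual dirty set that must be
    shipped while the virtual machine is briefly quiesced."""
    target = {}
    to_send = set(memory)                       # round 1 ships every page
    for round_no in range(max_rounds):
        for page in to_send:
            target[page] = memory[page]         # copy over the VMotion network
        dirtied = dirty_log[round_no] if round_no < len(dirty_log) else set()
        if len(dirtied) <= threshold:           # small enough to finish quiesced
            return target, dirtied
        to_send = dirtied                       # re-ship pages dirtied meanwhile
    return target, to_send
```

The loop converges as long as the guest dirties pages more slowly than the VMotion network can copy them; the residual set is what the memory bitmap transfer in the next steps accounts for.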

• Pre-copy memory from esx01 to esx02.
• Log ongoing memory changes into a memory bitmap on esx01.

How VMotion Works (3) (Slide 4-8)

After most of the virtual machine’s memory is copied from the source to the target ESX host, the virtual machine is quiesced: this means that the virtual machine is temporarily placed in a state where no additional activity will occur. This is the only time in the VMotion procedure in which the virtual machine is unavailable. Quiescence typically lasts approximately one second. During this period, VMotion begins to transfer the virtual machine state to the target ESX host. The virtual machine device state and the memory bitmap containing the list of pages that have changed are also transferred during this time.

If a failure occurs during the VMotion migration, the virtual machine being migrated is failed back to the source ESX host. For this reason, the source virtual machine is maintained until the virtual machine on the target ESX host starts running.

• Quiesce virtual machine on esx01.
• Copy memory bitmap to esx02.

How VMotion Works (4) (Slide 4-9)

The remaining memory identified in the bitmap is copied from the source to the target ESX host.

• Copy virtual machine’s remaining memory (as listed in memory bitmap) from esx01.

How VMotion Works (5) (Slide 4-10)

Immediately after the virtual machine is quiesced on the source ESX host, the virtual machine on the target ESX host is initialized and starts running.

A virtual machine’s entire network identity, including MAC and IP address, is preserved during a VMotion.

To update the physical switch port, the VMkernel sends a Reverse Address Resolution Protocol (RARP) request with the virtual machine’s MAC address to the physical network.
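For illustration, a RARP request of the kind described can be assembled by hand. The field layout follows RFC 903; the helper name and example MAC address are invented, and this is in no way VMware's actual code.

```python
import struct

def rarp_announce_frame(vm_mac: bytes) -> bytes:
    """Build an Ethernet RARP request (RFC 903) carrying the VM's MAC,
    the kind of frame broadcast so physical switches relearn the MAC
    address on the target host's port. Illustrative sketch only."""
    # Ethernet header: broadcast destination, VM MAC as source, RARP EtherType
    eth = b"\xff" * 6 + vm_mac + struct.pack("!H", 0x8035)
    # ARP body: hardware type 1 (Ethernet), protocol 0x0800 (IPv4),
    # address lengths 6 and 4, opcode 3 (request reverse)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    arp += vm_mac + b"\x00" * 4      # sender hardware address, IP unknown
    arp += vm_mac + b"\x00" * 4      # target hardware address, IP unknown
    return eth + arp

frame = rarp_announce_frame(bytes.fromhex("005056abcdef"))  # example MAC
```

Because only the MAC address matters for switch learning, the IP fields can stay zeroed; the broadcast destination ensures every switch on the segment sees the frame.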

• Start VM A on esx02.

How VMotion Works (6) (Slide 4-11)

The original virtual machine is finally deleted from the source ESX host. Users now access the virtual machine on the target ESX host.

• Users now access VM A on esx02.
• Delete VM A from esx01.

ESX Host Requirements for VMotion (Slide 4-12)

Listed here are several important ESX host requirements for successful VMotion migration. Groups of identical servers should be purchased at the same time to better ensure VMotion compatibility.

• Source and destination ESX hosts must have the following:
  • Visibility to all SAN LUNs (either FC or iSCSI) and NAS datastores used by the VM
  • A Gigabit Ethernet VMotion network
  • Access to the same virtual machine networks
  • Compatible CPUs
    • Same vendor and features
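These requirements amount to a simple pre-flight check, sketched below. The dictionary keys and message strings are assumptions invented for the sketch, not an actual VMware data model.

```python
def vmotion_compatible(src, dst, vm):
    """Illustrative pre-flight check of the host requirements above.
    Hosts and VM are plain dicts; every key is an assumption of this sketch."""
    problems = []
    if not set(vm["datastores"]) <= set(dst["datastores"]):
        problems.append("target cannot see all datastores used by the VM")
    if dst["vmotion_nic_gbps"] < 1:
        problems.append("no Gigabit Ethernet VMotion network on the target")
    if not set(vm["networks"]) <= set(dst["networks"]):
        problems.append("target lacks one of the VM's virtual machine networks")
    if (src["cpu_vendor"], src["cpu_features"]) != (dst["cpu_vendor"], dst["cpu_features"]):
        problems.append("CPU vendor or features differ between the hosts")
    return problems  # an empty list means these checks pass
```

Identical servers purchased at the same time tend to satisfy the CPU check automatically, which is why the guidance above recommends buying hosts in matched groups.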

Virtual Machine Requirements for VMotion (Slide 4-13)

The VI Client has an easy-to-use VMotion wizard. A series of checks are performed when you select a virtual machine and a destination ESX host for VMotion. The wizard provides validation messages for both the source and the destination ESX hosts once they pass the automatic VMotion requirement checks.

The VMotion wizard also features user-friendly error and warning messages. When an error is encountered, it must be fixed before proceeding. When a warning is encountered, VMotion migration can proceed.

Migrating a virtual machine with the following conditions produces an error (red error icon and message in VMotion wizard):

• Virtual machine has an active connection to a local-only ESX resource:
  • An internal-only virtual switch
  • A CD-ROM or floppy device with a local image

• Virtual machine is in a cluster relationship (for example, using MSCS) with another VM.

Migrating a virtual machine with the following conditions produces a warning (yellow warning icon and message in VMotion wizard):

• Virtual machine has a configured but inactive connection to a local-only ESX resource:
  • An internal-only virtual switch
  • A local CD-ROM or floppy image

• Virtual machine has one or more snapshots.
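The error/warning split above can be sketched as a validation function. The `vm` dictionary keys are invented for illustration and do not correspond to the wizard's real data structures.

```python
def validate_vmotion(vm):
    """Illustrative version of the wizard's checks: errors block the
    migration, warnings let it proceed. All keys are assumptions."""
    errors, warnings = [], []
    # A local-only resource (internal vSwitch, local CD-ROM/floppy image)
    # is an error if actively connected, a warning if merely configured.
    for device, active in vm["local_devices"].items():
        (errors if active else warnings).append(f"local-only resource: {device}")
    if vm.get("ms_cluster_member"):
        errors.append("VM is in a cluster relationship (e.g. MSCS) with another VM")
    if vm.get("snapshots", 0) > 0:
        warnings.append("VM has one or more snapshots")
    return errors, warnings
```

A migration is allowed to start only when the error list is empty; warnings are shown but do not stop the wizard.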

Lab for Module 4 (Slide 4-14)

• Migrate Virtual Machines Using VMotion

• In this lab, you perform the following task:
  • Migrate a virtual machine using VMotion

[Lab diagram: a VirtualCenter Server managing ESX Host #1 and ESX Host #2, shared by student teams 01a/01b and 02a/02b]

M O D U L E 5

VMware DRS Clusters

Importance

VMware® DRS-enabled clusters assist your system administration staff by providing automated resource management for multiple ESX hosts. Less management and more efficient use of existing hardware resources reduce costs.

Objectives for the Learner

• To understand the functionality and benefits of a DRS cluster
• To create and configure a DRS cluster
• To create resource pools in a DRS cluster for multi-ESX host resource policy control

Lesson Topics

• What is a DRS cluster?
• Creating a DRS cluster
• DRS cluster settings, including the following:
  • Automation level
  • Migration threshold
  • Placement constraints

What Is a DRS Cluster? (Slide 5-5)

A cluster is a collection of ESX hosts that have their resources managed as a single unit. The goal of a DRS-enabled cluster is to balance the workload generated by the virtual machines across the ESX hosts in the cluster. The workload-balance computations are performed automatically by DRS. VMware DRS considers user-defined resource policy settings and placement constraints, along with VMotion compatibility, when deciding how to balance ESX host workloads.

Once imbalance is detected and a solution is calculated, DRS can either recommend or perform specific VMotion migrations, depending on the DRS automation level settings.

• Cluster
  • A collection of ESX hosts and associated virtual machines
• DRS-enabled cluster
  • Uses VMotion to balance workloads across ESX hosts
  • Enforces resource policies accurately (reservations, limits, shares)
  • Respects placement constraints
    • Affinity rules and VMotion compatibility
  • Managed by VirtualCenter
  • Experimental support for automatic power management
    • Powers off ESX hosts when not needed

Create a DRS Cluster (Slide 5-6)

A DRS cluster is configured using a wizard in the VI Client. The user is prompted to give the cluster a unique name and to enable DRS.

1. Right-click your datacenter.
2. Choose New Cluster.

Name your cluster, then enable VMware DRS by selecting the check box.

Automating Workload Balance (Slide 5-7)

Once DRS has been enabled on a cluster, new configuration choices appear in the wizard’s left-side menu. These choices include VMware DRS, Rules, Virtual Machine Options, and Power Management.

The VMware DRS menu option is where the cluster-wide automation level is configured. The cluster-wide automation level affects how DRS performs its two main functions: initial placement and dynamic balancing.

DRS features three different cluster-wide automation levels. The automation level determines how much of the decision-making process is granted to VMware DRS when it needs to perform initial placement and dynamic balancing.

Configure the cluster-wide automation level for initial placement and dynamic workload balancing while VMs are running.

Automation level      Initial VM placement   Dynamic balancing
Manual                Manual                 Manual
Partially automated   Automatic              Manual
Fully automated       Automatic              Automatic

Initial placement When a virtual machine is powered on, it must be initially placed on an ESX host.

Dynamic balancing The workloads created by running virtual machines must be balanced across the ESX hosts in the cluster.

DRS migration recommendations are ranked on a one-to-five–star scale. Applying a four-star migration recommendation restores more balance than applying a one-star recommendation. Five-star recommendations occur when a DRS affinity rule has been broken. Affinity rules are covered later in this module.

The migration threshold determines how DRS responds to migration recommendations when DRS is configured in fully automated mode. The slider bar has five distinct positions, which correspond to the five-star ranking system. For example, moving the slider all the way to the left configures DRS to VMotion virtual machines only in response to a five-star recommendation. Moving the slider all the way to the right configures DRS to VMotion virtual machines in response to one-, two-, three-, four-, or five-star recommendations.

Manual When a virtual machine is powered on, DRS displays a star-ranked list of the ESX servers based on their current CPU and memory utilization. The user selects which ESX server to use. When the workloads across the ESX servers in the DRS cluster become unbalanced, DRS displays a ranked list of VMotion recommendations.

Partially automated When a virtual machine is powered on, DRS automatically places it on the best-suited ESX server. When the workloads across the ESX servers in the DRS cluster become unbalanced, DRS displays a ranked list of VMotion recommendations.

Fully automated When a virtual machine is powered on, DRS automatically places it on the best-suited ESX server. When the workloads across the ESX servers in the DRS cluster become unbalanced, DRS automatically VMotions virtual machines to restore balance.
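The interaction between the automation level and the star-ranked recommendations might be sketched like this. It is a toy model, not DRS's actual decision engine; the mode strings and return shape are invented for illustration.

```python
def act_on(recommendations, automation, threshold_stars):
    """Toy model of DRS's dynamic-balancing decision.

    recommendations : list of (stars, migration) pairs, stars 1..5
    automation      : "manual", "partially automated", or "fully automated"
    threshold_stars : slider position, 5 (conservative) down to 1 (aggressive)
    """
    if automation == "fully automated":
        # Apply every recommendation at or above the slider's star level.
        return {"apply": [m for s, m in recommendations if s >= threshold_stars],
                "show": []}
    # Manual and partially automated modes only display the ranked list
    # for dynamic balancing; the administrator chooses what to apply.
    return {"apply": [], "show": [m for _, m in recommendations]}
```

Note how the slider only matters in fully automated mode, which matches the description above: in the other two modes every recommendation is simply displayed.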

Adding ESX Hosts to a DRS Cluster (1) (Slide 5-8)

To add an ESX host to a DRS cluster, you drag and drop the ESX host onto the cluster icon in the VirtualCenter inventory. Supply the requested information when prompted by the Add Host wizard.

• Drag and drop the ESX host onto the cluster and …
• Use the Add Host wizard to complete the process.

Adding ESX Hosts to a DRS Cluster (2) (Slide 5-9)

The Add Host wizard will prompt the user to choose how to handle any ESX host resource pools. There are two choices. Existing ESX host resource pools can be maintained as depicted in the graphic above. As an alternative, existing ESX host resource pools can be removed. If the resource pools are removed, the CPU and memory resources of the ESX host are added to the cluster and become available for redistribution to child objects.

When finished with the Add Host wizard, you monitor the DRS configuration progress using the Recent Tasks pane, located at the bottom of the VI Client window.

• When adding a new ESX host or moving an existing ESX host into the DRS cluster, you have the option of keeping the resource pool hierarchy, if there is one, of the existing ESX host.

• For example, add kentfield04 to Lab Cluster.

When adding the host, choose to create a new resource pool for this host’s virtual machines and resource pools.

Automating Workload Balance per VM (Slide 5-10)

Virtual machines added to ESX hosts in a DRS cluster are automatically added to the DRS cluster as well. By default, each virtual machine is initially placed and dynamically balanced using the cluster-wide automation level. However, the cluster-wide automation level can be overridden per virtual machine. This provides additional flexibility to meet business needs. For example, a virtual machine running a business-critical application could be configured for more manual migration control.

• (Optional) Set automation level per virtual machine.
  • Fine-grained workload control

Adjusting DRS Operation for Performance or HA (Slide 5-11)

DRS cluster operation can be configured to support better application performance or higher application availability through the use of virtual machine affinity rules. There are two types of virtual machine affinity rules: an affinity rule and an anti-affinity rule.

An affinity rule will try to keep the listed virtual machines together on the same ESX host. This is typically done to enhance performance. For example, two virtual machines that pass a lot of network traffic might benefit from using a virtual switch implemented in fast memory, rather than using slower, external physical network components.

An anti-affinity rule will try to keep the listed virtual machines on separate ESX hosts. This is typically done to enhance availability. For example, two virtual machines running a business-critical application can be kept on separate ESX hosts to reduce the possibility of a service outage due to hardware failure.

• Affinity rules
  • Run virtual machines on the same ESX host.
  • Use for multi-VM systems where performance benefits.
• Anti-affinity rules
  • Run virtual machines on different ESX hosts.
  • Use for multi-VM systems that load-balance or require high availability.
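A placement checker in the spirit of these two rule types could look like the following sketch; the data shapes (a placement dict and `(kind, vms)` rule tuples) are assumptions for illustration.

```python
def rule_violations(placement, rules):
    """Illustrative check of affinity/anti-affinity rules.

    placement : dict mapping VM name -> ESX host name
    rules     : list of ("affinity" | "anti-affinity", [vm names]) pairs
    Returns the rules currently being broken."""
    broken = []
    for kind, vms in rules:
        hosts = {placement[vm] for vm in vms}
        if kind == "affinity" and len(hosts) > 1:
            broken.append((kind, vms))          # should share one host
        if kind == "anti-affinity" and len(hosts) < len(vms):
            broken.append((kind, vms))          # should all be on separate hosts
    return broken
```

A broken rule corresponds to the five-star recommendation case described earlier in the module: DRS treats restoring rule compliance as the highest-priority migration.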

Lab for Module 5 (Slide 5-12)

• Create a DRS Cluster
• In this lab, you perform the following tasks:
  • Create a DRS cluster
  • Add ESX hosts to the DRS cluster
  • Add resource pools to a DRS cluster
  • Test the functionality of the resource pools

Two ESX host teams belong to one cluster team.

[Lab diagram: a VirtualCenter Server managing ESX Host 1 and ESX Host 2, with student teams 01a/01b and 02a/02b forming one cluster team]

M O D U L E 6

Monitoring Virtual Machine Performance

Importance

Although the VMkernel and VirtualCenter work proactively to avoid resource contention, maximizing and verifying performance levels requires both analysis and ongoing monitoring.

Objectives for the Learner

• Understand VirtualCenter’s user-friendly performance monitoring capabilities
• Monitor a virtual machine’s performance
• Determine whether a virtual machine is constrained by a resource, and solve the problem if one exists

Lesson Topics

• Virtual machine performance graphs
• Monitoring a virtual machine’s usage of the following:
  • CPU
  • Memory
  • Disk
  • Network

VirtualCenter Performance Graphs (Slide 6-5)

Provide students with a brief overview of the features of VirtualCenter’s performance graphs. Specifically, point out:

- The graph
- The legend
- The Options link
- Save as .csv
- The tear-off chart

Be sure to explain the relationship between the graph and the legend.

VirtualCenter features performance graphs for VMware® ESX hosts, virtual machines, clusters, and resource pools. ESX host and virtual machine performance graphs display information about CPU, memory, disk I/O, and network I/O usage. Cluster and resource pool performance graphs display only CPU and memory usage.

Performance graphs provide an easy method to quickly display numerous performance data points. To view a performance graph, select an object in the VirtualCenter inventory and use its Performance tab. Graphs that display real-time data, or historical data for the past day, week, month, or year, are available.

It is possible to export a comma-separated-value file (.csv) using the performance graph interface. The .csv file may be imported into programs such as Microsoft Excel.
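Besides spreadsheet programs, an exported .csv file can be post-processed with a few lines of Python's standard `csv` module. The column names below are invented for the sketch; the real headers depend on the counters chosen when the chart was saved.

```python
import csv
import io

# Illustrative post-processing of a VirtualCenter performance export.
# io.StringIO stands in for an exported file; column names are assumptions.
export = io.StringIO(
    "Time,CPU Usage (%)\n"
    "12:00,25\n"
    "12:05,90\n"
    "12:10,85\n"
)
rows = list(csv.DictReader(export))
# Collect the sample times where CPU usage exceeded 80 percent.
busy = [r["Time"] for r in rows if float(r["CPU Usage (%)"]) > 80]
```

This kind of scripted filtering is handy when a year of historical data is too large to eyeball in a spreadsheet.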

Many performance graphs offer not only a wide choice of data types to display but also a choice in the type of graph to display. This flexibility allows large amounts of data to be more easily viewed and interpreted, resulting in better decisions.

The ability to display real-time data allows an enterprise to react to situations as they occur. Capturing up to a year of performance data provides information for trend analysis to better plan for the future.

Example CPU Performance Issue Indicator (Slide 6-6)

Understanding both ESX host operation and the data types displayed by performance graphs is critical to properly interpreting information and taking correct action. VMware provides many tools to gain this understanding.

Administrators and operators have a large choice of resources to turn to. VMware and its partners publish an array of online manuals, technical papers, and knowledge base articles that feature performance monitoring information and recommendations. Administrators and operators can also attend VMware instructor-led or eLearning training courses. (The graphic above is taken from a VMware training course.)

Virtual machine ready time is the amount of time, measured in milliseconds, that a virtual machine is ready to run but cannot, because there is no available physical CPU to be scheduled on. If all physical CPUs are busy and ready time has increased, it is an indication of CPU contention.

• Ready Time
  • The amount of time the virtual machine is ready to run but cannot, because there is no available physical CPU
  • High ready time indicates possible contention.
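As a worked example, a raw ready-time counter can be converted to a percentage of the sampling interval. The 20-second real-time interval used here is an assumption to verify against your VirtualCenter version, and the contention rule of thumb in the comment is a common heuristic, not a VMware-stated threshold.

```python
def ready_percent(ready_ms, interval_s=20):
    """Convert a CPU Ready counter (milliseconds accrued during one
    sampling interval) into a percentage of that interval. The 20 s
    default mirrors typical real-time chart sampling, but treat it as
    an assumption for this sketch."""
    return 100.0 * ready_ms / (interval_s * 1000)

# Example: 2,000 ms of ready time within a 20 s sample is 10 percent —
# sustained values in this range are often read as a sign of contention.
```

Converting to a percentage makes samples comparable regardless of the chart's interval, which is useful when comparing real-time and historical graphs.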

Are VMs Being CPU Constrained? (Slide 6-7)

Above is an example of using VirtualCenter to monitor virtual machine ready time. Monitoring virtual machine ready time is useful as an early indicator of CPU contention. You select the virtual machine in the inventory and click on its Performance tab. You adjust the graph settings to display real-time CPU information that includes the value “CPU Ready.”

Performance monitoring can still be done using guest operating system or application-based tools. For example, the graphic above shows a screen capture of Windows Task Manager running inside a virtual machine. A CPU usage value of 100 percent means that the virtual machine is using all the CPU time currently allotted to it.

If the virtual machine is constrained by CPU:
• Add shares or increase CPU reservation.
• VMotion this virtual machine.
• Shut down, VMotion, or remove shares from other VMs.

[Figure: the virtual machine’s CPU ready graph in the VI Client, and Task Manager inside the VM]

Supporting Higher Consolidation Ratios (Slide 6-8)

The memory “balloon” driver is another ESX feature that supports efficient use of RAM and higher consolidation ratios. It is informally called the balloon driver because of the way it operates. The balloon driver is part of the VMware Tools software and operates as a native guest operating system driver. A balloon driver exists for all supported guest operating systems.

The VMkernel uses the balloon driver to take memory from one virtual machine and give it to another virtual machine when there is contention for RAM. Which virtual machine must yield memory depends on each virtual machine’s relative number of memory shares. The virtual machines with the lower number of memory shares will be ballooned first. A virtual machine’s reserved memory can never be ballooned.

When an ESX host is not under memory pressure, no virtual machine’s balloon is inflated. But when memory becomes scarce, the VMkernel chooses a virtual machine and inflates its balloon. The VMkernel tells the balloon driver in the virtual machine to demand memory from the guest operating system. The guest operating system complies by yielding memory according to its own algorithms. The content of the yielded memory is written to the guest’s paging device, which is normally its disk. The relinquished memory can be assigned to other virtual machines.

When memory pressure diminishes, the relinquished memory is returned to the virtual machine.

[Figure: balloon driver states — with ample memory the balloon remains uninflated; under memory pressure the VMkernel inflates the balloon (the driver demands memory from the guest OS, which is forced to page out to its own paging area while the VMkernel reclaims the memory); when pressure eases the balloon deflates (the driver relinquishes memory, the VMkernel grants memory back, and the guest may page in)]

• The VMware Tools vmmemctl balloon driver supports higher consolidation ratios.
  • VMware Tools is installed in the guest operating systems.
• Deallocate memory from selected virtual machines when machine memory (RAM) is scarce.
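The share-based selection described above might be sketched as follows. This is illustrative only: the field names and the greedy lowest-shares-first strategy are assumptions for the sketch, not the VMkernel's actual algorithm.

```python
def balloon_targets(vms, needed_mb):
    """Illustrative choice of which VMs to balloon: lowest memory shares
    first, never reclaiming below each VM's reservation."""
    targets = {}
    for vm in sorted(vms, key=lambda v: v["shares"]):    # fewest shares first
        reclaimable = vm["allocated_mb"] - vm["reserved_mb"]
        take = min(reclaimable, needed_mb)
        if take > 0:
            targets[vm["name"]] = take                   # MB to balloon out
            needed_mb -= take
        if needed_mb == 0:
            break
    return targets
```

Note how the reservation acts as a floor: a VM with all of its memory reserved contributes nothing, matching the statement that reserved memory can never be ballooned.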

Are VMs Being Memory Constrained? (Slide 6-9)

Above is an example of using VirtualCenter to monitor virtual machine memory ballooning. Monitoring virtual machine memory ballooning is useful as an early indicator of memory contention.

To monitor ballooning activity, you select the virtual machine in the inventory and click on its Performance tab. You adjust the graph settings to display real-time memory information that includes the values “Memory Balloon Target” and “Memory Balloon.” Memory Balloon Target is how much memory the VMkernel wants to balloon from the virtual machine. Memory Balloon is how much memory has actually been ballooned from the virtual machine.

Performance monitoring can still be done using guest operating system or application-based tools. For example, the graphic above shows a screen capture of Windows Task Manager running inside a virtual machine. Guest operating system tools can be used to determine how the memory is being used.

If the virtual machine is constrained by memory:
• Add shares or raise memory reservation.
• VMotion this virtual machine.
• Shut down, VMotion, or remove shares from other virtual machines.
• Add machine memory (RAM).

[Figure: check for high ballooning activity; Task Manager inside the VM]

Are VMs Being Disk Constrained? (Slide 6-10)

Above is an example of using VirtualCenter to monitor virtual machine disk I/O. Monitoring virtual machine disk I/O is useful as an early indicator of storage performance issues. You select the virtual machine in the inventory and click on its Performance tab. You adjust the graph settings to display real-time disk information that includes the values “Disk Read Rate” and “Disk Write Rate,” measured in kilobytes per second.

For more information about Iometer, see http://sourceforge.net/projects/iometer.

Performance monitoring can still be done using guest operating system-based, application-based, or storage-based tools. For example, the graphic above shows a screen capture of Iometer running inside a virtual machine.

• Disk-intensive applications can saturate the storage or the path.

• If you suspect that a VM is constrained by disk access:
  • Measure the resource consumption using performance graphs.
  • Measure the effective bandwidth between the VM and the storage.
• To improve disk performance:
  • Ensure VMware Tools is installed.
  • Reduce competition.
    • Move other VMs to other storage.
    • Use other paths to storage.
  • Reconfigure the storage.
    • Ensure that the storage’s configuration (RAID level, cache configuration, etc.) is appropriate.

Are VMs Being Network Constrained? (Slide 6-11)

Above is an example of using VirtualCenter to monitor virtual machine network I/O. Monitoring virtual machine network I/O is useful as an early indicator of network performance issues. You select the virtual machine in the inventory and click on its Performance tab. You adjust the graph settings to display real-time network I/O information that includes the value “Network Usage,” measured in kilobytes per second.

Performance monitoring can still be done using guest operating system–based, application-based, or network-based tools. For example, the graphic above shows a screen capture of Iometer running inside a virtual machine.

One of the suggestions above for improving virtual machine network performance involves the use of traffic shaping. Unlike CPU, memory, and disk bandwidth, network bandwidth cannot be allocated using shares. Network bandwidth is allocated using traffic shaping, if it is enabled by the administrator.

Using traffic shaping to divide network bandwidth between virtual machine NICs is analogous to cutting a pie into sections and handing each person a piece of the pie. Each person can eat only their section of the pie. However, each person can choose not to eat his or her whole piece. Part of the pie might be left over. In the same way, virtual NICs can use all the network bandwidth they are allocated and no more. However, a virtual NIC might not use all its bandwidth, leaving some available bandwidth unused.
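The pie analogy maps naturally onto a token bucket. The sketch below is illustrative and only loosely modeled on the vSwitch's average-bandwidth and burst-size settings (peak bandwidth is omitted for brevity); it is not the VMkernel's shaper.

```python
def shape(packets, avg_kbps, burst_kb):
    """Illustrative token-bucket shaper for one virtual NIC.

    packets  : list of (arrival_seconds, size_kb) pairs, in time order
    avg_kbps : sustained allowance, kilobits per second
    burst_kb : bucket depth, kilobytes that may be sent in a burst
    Returns the packets that fit within the shaped allowance."""
    tokens, last = float(burst_kb), 0.0
    sent = []
    for t, size in packets:
        # Refill tokens for the elapsed time, capped at the burst size.
        tokens = min(burst_kb, tokens + (t - last) * avg_kbps / 8)
        last = t
        if size <= tokens:
            tokens -= size
            sent.append((t, size))   # within allowance; queued/dropped otherwise
    return sent
```

As with the pie, unused allowance simply goes unspent: a quiet virtual NIC never consumes tokens, but it also cannot lend its slice to a busier one in this model.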

• Network-intensive applications will often bottleneck on path segments outside ESX.
  • Example: WAN links between server and client
• If you suspect that a VM is constrained by the network:
  • Examine performance graphs.
  • Measure the effective bandwidth between the VM and its peer system.
• To improve network performance:
  • Confirm that VMware Tools is installed.
  • Move VMs to another physical NIC.
  • Traffic-shape other VMs.
  • Reduce overall CPU utilization.

Lab for Module 6 (Slide 6-12)

• Monitor Virtual Machine Performance
• In this lab, you perform the following task:
  • Monitor CPU ready time using VirtualCenter

This lab will be performed by each ESX host team separately.

[Lab diagram: a VirtualCenter Server managing ESX Host 1 (ESX Host Team 1) and ESX Host 2 (ESX Host Team 2), with students 01a/01b and 02a/02b]

M O D U L E 7

VirtualCenter Alarms

Importance

VirtualCenter alarms proactively monitor VMware® ESX and virtual machine performance. Alarms allow your system administrators to be more responsive to changes in the datacenter. Alarms send notifications when either the ESX host or the virtual machine state changes or user-defined thresholds are exceeded.

Objectives for the Learner

• Understand ESX host and virtual machine alarms
• Configure ESX host and virtual machine alarms
• Configure VirtualCenter SMTP and SNMP notification settings

Lesson Topics

• ESX host–based alarms
• Virtual machine–based alarms
• VirtualCenter SMTP and SNMP configuration

Proactive Datacenter Management (Slide 7-5)

Alarms are asynchronous notifications of changes in host or virtual machine state. When a host or virtual machine’s load passes certain configurable thresholds, the VI Client displays messages to this effect. You can also configure VirtualCenter to transmit these messages to external monitoring systems.
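A minimal model of the threshold-driven yellow/red states reads as follows; the default percentages are invented here and would be set per alarm definition in practice.

```python
def alarm_state(metric_pct, yellow=75, red=90):
    """Illustrative threshold check matching the yellow/red model above.
    metric_pct is a utilization percentage; the defaults are assumptions."""
    if metric_pct >= red:
        return "red"
    if metric_pct >= yellow:
        return "yellow"
    return "green"
```

The VI Client's inventory icons correspond directly to these states, so a single glance at the tree shows which hosts or VMs have crossed a threshold.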

• VirtualCenter sends notifications when ESX host or VM state changes or when user-defined thresholds are exceeded.

[Figure callouts: alarms are indicated in the inventory; status is determined by threshold levels in the alarm definition; view of VMs’ CPU and memory utilization on the selected ESX host]

Preconfigured VirtualCenter Alarms (Slide 7-6)

The highest point in the VirtualCenter inventory—Hosts & Clusters—is the location of the default alarms. You can modify these alarms in place. You can also define finer-grained alarms. For example, you might organize several ESX hosts or clusters into a folder and apply an alarm to that folder.

• Default CPU and memory alarms are defined at the top of the inventory.

• Add custom alarms anywhere in the inventory.

Creating a Virtual Machine-Based Alarm (Slide 7-7)

When you right-click on a virtual machine and choose Add Alarm, the resulting window has four panels. You use the General panel to name this alarm. You use the Triggers panel to control which load factors are monitored and what the thresholds for the yellow and red states are. The Reporting and Actions panels are discussed in upcoming slides.

• Right-click on a virtual machine and choose Add Alarm.

• Name and describe the new alarm.
• Click any field to modify.
• Trigger thresholds are percentages; VM states are powered on, powered off, and suspended.


Creating a Host-Based Alarm (Slide 7-8)

The dialog box displayed when you right-click on an ESX host and choose Add Alarm is very similar to that for a virtual machine. The key difference is the list of available triggers.

• Right-click on an ESX host and choose Add Alarm.

• Name and describe the new alarm.
• Click any field to modify.
• Trigger thresholds are percentages; host states are connected, disconnected, and not responding.


Actions to Take When an Alarm Is Triggered (Slide 7-9)

You can specify actions to occur when an alarm is triggered (other than simply displaying it in the VI Client). These actions include the following:

• Sending a notification email
• Sending a notification trap
• Running a script
• Powering on a virtual machine
• Powering off a virtual machine
• Suspending a virtual machine
• Resetting a virtual machine

• Use the Actions tab to send external messages or to automate the response to problems.

• The power operations are available only for VM-based alarms.


Alarm Reporting Options (Slide 7-10)

If you plan to transmit alarms to some external monitoring system, such as an SNMP monitoring tool, someone’s email, or someone's pager, you probably want to avoid generating a flood of duplicate alarms. Use the controls on the Reporting pane to avoid such a flood.

• Use the Reporting tab to avoid needless re-alarms.

• Avoids threshold repeat alarms
• Avoids state-change repeat alarms


Configure VirtualCenter Notifications (Slide 7-11)

If you want to transmit SNMP or email alarms, you must supply the IP address of the destination server.

If your SNMP community string is not public, specify it here.

Specify the email address to be used for the From address of email alerts.

• Choose Administration > VirtualCenter Management Server Configuration.

• Click SNMP to specify trap destinations.

• Click Mail to set SMTP parameters.


Lab for Module 7 (Slide 7-12)

ESX host-based and VM-based performance alarms

• In this lab, you perform the following tasks:
  • Create ESX host-based and VM-based alarms in VirtualCenter
  • Monitor CPU Usage alarms in VirtualCenter

(Lab diagram: VirtualCenter Server managing ESX Host 1 and ESX Host 2; students 01a, 01b, 02a, and 02b work in two ESX host teams.)

This lab will be performed by each ESX host team separately.


M O D U L E 8

VMware HA

Importance
Services that are highly available are important to any business. Configuring VMware® HA can increase service levels.

Objectives for the Learner
• Implement a VMware HA cluster

Lesson Topics
• Architecture of VMware HA
• VMware HA prerequisites
• Clustering virtual machines using VMware HA
• Admission control
• Restart priorities
• Isolation response


What Is VMware HA? (Slide 8-5)

VMware High Availability (HA) provides easy-to-use, cost-effective high availability for applications running in virtual machines. In the event of server failure, affected virtual machines are automatically restarted on other production servers with spare capacity. VMware HA allows IT organizations to minimize downtime and IT service disruption while eliminating the need for dedicated standby hardware and installation of additional software.

VMware HA continuously monitors all VMware ESX servers in a cluster and detects server failures. An agent placed on each server maintains a “heartbeat” with the other servers in the cluster. ESX server heartbeats are sent every five seconds; if no heartbeat is received within the 15-second timeout, the agent initiates the restart of all affected virtual machines on other servers. VMware HA ensures that sufficient resources are available in the cluster at all times to restart virtual machines on different physical servers in the event of server failure. Restart of virtual machines is made possible by the distributed locking mechanism in VMFS, which gracefully coordinates read-write access to the same virtual machine files by multiple ESX hosts. VMware HA is easily configured for a cluster through VirtualCenter.
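The timing just described (heartbeats every five seconds, a 15-second failure-detection timeout) can be sketched as a simple monitor loop. This is an illustrative model only, not VMware's implementation; the host names and timestamps are invented for the example.

```python
# Illustrative model of HA failure detection: each host records the time of
# its last heartbeat; a host is declared failed once no heartbeat has been
# seen for the detection timeout (15 s by default, heartbeats every 5 s).

HEARTBEAT_INTERVAL = 5.0   # seconds between heartbeats
FAILURE_TIMEOUT = 15.0     # seconds without a heartbeat before failover

def failed_hosts(last_heartbeat, now):
    """Return hosts whose last heartbeat is older than the timeout."""
    return [host for host, t in last_heartbeat.items()
            if now - t > FAILURE_TIMEOUT]

# Simulated cluster state: esx2 last sent a heartbeat 20 s ago, the others
# are current, so only esx2 is declared failed.
last_seen = {"esx1": 100.0, "esx2": 85.0, "esx3": 98.0}
print(failed_hosts(last_seen, now=105.0))  # ['esx2']
```

In the real product the surviving agents then restart the failed host's virtual machines, relying on VMFS distributed locking as described above.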

In every cluster, the downtime experienced depends on how long the workloads in a failed-over virtual machine take to restart, and that varies from application to application.

• A VirtualCenter feature
• Configuration, management, and monitoring done through the VI Client
• Automatic restart of virtual machines in case of physical ESX server failures (not VMotion)
• Provides higher availability while reducing the need for passive standby hardware and dedicated administrators
• Provides restart capability to a range of applications not configurable under MSCS
• Provides experimental support for per-VM failover


Virtual Machine Failure Monitoring
An additional VMware HA function called virtual machine failure monitoring allows VMware HA to monitor whether a virtual machine is available or not. VMware HA uses the heartbeat information that VMware Tools captures to determine virtual machine availability.

On each virtual machine, VMware Tools sends a heartbeat every second. Virtual machine failure monitoring checks for a heartbeat every 20 seconds. If heartbeats have not been received within a specified (user-configurable) interval, virtual machine failure monitoring declares that virtual machine as failed and resets the virtual machine.

Virtual machine failure monitoring can distinguish between a virtual machine that was powered on but has stopped sending heartbeats and a virtual machine that is powered off, suspended, or migrated.

Virtual machine failure monitoring is experimental and not supported for production use. By default, virtual machine failure monitoring is disabled but can be enabled by editing the VMware HA Virtual Machine Options in the VI Client.


Architecture of a VMware HA Cluster (Slide 8-6)

A key component to the VMware HA architecture is the cluster of ESX servers. VirtualCenter is used to configure the cluster but the ability to perform failovers is independent of VirtualCenter availability. VirtualCenter is not a single point of failure for an HA cluster. In this example, the cluster consists of three ESX servers. When each server was added to the cluster, the VMware HA agent was uploaded to the server. A VMware HA agent on each server provides a heartbeat mechanism on the service console network.

• VMware HA agents are configured by VirtualCenter, but failover is independent of VirtualCenter.


VMware HA Prerequisites (Slide 8-7)

For the HA cluster to work properly, there are two prerequisites:

• Each ESX server in the cluster must be configured to use DNS, and DNS resolution of the host’s fully qualified domain name must be successful because VMware HA relies on that name.

• Each ESX server in the cluster should have access to the virtual machines’ files and should be able to power on the virtual machine without a problem. Distributed locking prevents simultaneous access to virtual machines, thus protecting data integrity.

• You should be able to power on a virtual machine from all ESX servers in the cluster.
• All ESX servers need access to common resources (shared storage, VM networks).
• ESX servers should be configured for DNS.
• DNS resolution between all ESX servers in the cluster is needed during cluster configuration and startup.


Create a VMware HA Cluster (Slide 8-8)

Creating a VMware HA cluster is very similar to creating a DRS cluster. The first step is to select the cluster type. It is best to create a cluster that has both VMware HA and DRS implemented: VMware HA for the reactive solution and DRS for the proactive solution. The job of DRS is to VMotion virtual machines to balance servers’ CPU and memory loads. The job of VMware HA is to reboot virtual machines on a different ESX host when an ESX host crashes. No VMotion is involved in VMware HA.

Why enable both VMware HA and DRS? Initial placement of virtual machines is decided only for DRS clusters, so users get DRS not just for overall cluster balance but also for initial placement. VMware HA plus DRS is thus a reactive-plus-proactive system, an ideal combination.

• Configure the cluster for VMware HA and/or DRS.
• Enable VMware HA by selecting the check box.


Add an ESX Host to the Cluster (Slide 8-9)

To add an ESX host to the cluster, you drag and drop the existing standalone server into the HA cluster, then use the Add Host wizard to complete the process.

• Drag and drop the ESX host onto the cluster, and use the Add Host wizard to complete the process.
• Consider configuring enough redundant ESX host capacity to restart virtual machines.


Configure Cluster-Wide Admission Control (Slide 8-10)

VMware HA cluster configuration requires cluster-wide policies and individual virtual machine customizations.

There are two cluster-wide policy settings: Number of host failures allowed and Admission Control. The number of host failures to tolerate ranges from 1 to 4. For example, if one ESX host fails in the cluster, there should be enough resources on the remaining servers in the cluster on which to run the virtual machines that were on the failed server.

Admission control policies for VMware HA define when or when not to power on a virtual machine. By default, if a virtual machine violates availability constraints, the virtual machine will not be powered on. Availability constraints refer to the cluster’s resource reservations as well as the constraint specifying the number of host failures to tolerate. VMware HA tries to maintain enough spare capacity across the cluster based on these values. The actual spare capacity available can be monitored in the current failover capacity field in a VMware HA cluster’s Summary tab in the VI Client.

You can configure how VMware HA should respond in the event that an ESX host failure occurs and there is insufficient capacity to restart all the virtual machines. At the cluster level, you can specify what the default priority is for virtual machine restarts. You can also specify—on a per-virtual machine basis—how high the priority is to bring each particular virtual machine back online.

• Configure the number of tolerated ESX host failures and the cluster admission control settings.
• Which is more important: uptime or resource fairness?
• How much redundant capacity should be maintained?
• These are cluster-wide settings; per-VM settings for each are also available.
• Admission control can prevent human error from starting more VMs than can be restarted.


Restart priority is based on the criticality of virtual machines.

For example, in a Windows environment, DNS and domain controllers would normally be specified as the highest restoration priority, due to other servers depending on those infrastructure services.

This priority decision may be influenced if you have redundant DNS and domain controller elements that are forced to be resident on different servers at all times, such as if an anti-affinity rule is applied at a DRS level. Note that this will not prevent someone from manually invoking migrations that cause these virtual machines to be on the same ESX host.

Some virtual machines are not essential in the event of a failure and may be excluded from restart. If a failure leaves the HA cluster with drastically reduced resources, shedding these less-essential resource consumers reduces contention for the limited resources that remain.

You set low, medium, and high restart priorities to customize failover ordering. The default is medium. High-priority virtual machines are restarted first. Nonessential virtual machines should be set to Disabled (automated restart will skip them).

You can set the default response in the event that an ESX host becomes isolated. You can choose to do the following:

• Leave the virtual machines powered on
• Power off the virtual machines

This setting can be specified at the cluster level and on a per-virtual machine basis—as the following page illustrates.

On node isolation, the user can also determine whether or not to power down the virtual machines. This is controlled by the Isolation Response setting. The Power off response does just that: VMware HA does not do a clean shutdown of the virtual machines.

Isolation response is initiated when an ESX host experiences network isolation from the rest of the cluster. Power off is the default response. The Leave power on setting is intended for these cases:

• Where lack of redundancy and environmental factors make outages likely

• Where virtual machine networks are separate from service console networks (and more reliable)

Isolation events can be prevented if proper network redundancy is employed from the start.


Failover Capacity Examples (Slide 8-11)

In the first example, the VMware HA cluster has been set up to accommodate one host failure. Therefore, if any single ESX host fails in the cluster, the remaining ESX hosts should have enough capacity to run the virtual machines that are on the failed server.

In the second example, the VMware HA cluster has been set up to accommodate up to two host failures. Therefore, if two ESX hosts fail, the remaining ESX host in the cluster should have enough capacity to run all virtual machines.

NOTE: Both of these examples assume that all virtual machines require the same amount of resources.

• VMware HA cluster with a failover capacity of 1 host failure: only 8 VMs could run and still be restarted.
• VMware HA cluster with a failover capacity of 2 host failures: only 4 VMs could run and still be restarted.
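Under the simplifying assumption above (every virtual machine needs the same resources), the slide's numbers can be computed directly: with H hosts that can each run S virtual machines, tolerating F host failures leaves (H − F) × S restartable VM slots. A minimal sketch; the three-host, four-VMs-per-host cluster is an assumed example, not data from the slide:

```python
def failover_capacity(hosts, slots_per_host, host_failures_tolerated):
    """VMs that can run and still be restarted after the tolerated failures.

    Assumes every VM requires the same resources, as in the slide examples.
    """
    surviving_hosts = hosts - host_failures_tolerated
    return max(surviving_hosts, 0) * slots_per_host

# A three-host cluster where each host can run four equally sized VMs:
print(failover_capacity(3, 4, 1))  # 8 VMs can run and still be restarted
print(failover_capacity(3, 4, 2))  # 4 VMs can run and still be restarted
```

This is the same arithmetic VMware HA admission control performs when it refuses to power on virtual machines that would violate the availability constraints.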


Maintain Business Continuity if ESX Hosts Become Isolated (Slide 8-12)

Datacenters configured for high availability should include redundant management network connections between the ESX hosts. VMware HA includes a recovery mechanism in the event redundant network connections are not configured.

Network failures can cause “split-brain” conditions. In such cases, ESX hosts are unable to determine if the rest of the cluster has failed or has become unreachable.

Isolation response is used to prevent split-brain conditions and is started under the following conditions:

• An ESX host has stopped receiving heartbeats from other cluster nodes and the isolation address cannot be pinged.

• The default isolation address is the service console gateway, and the default isolation response time is 15 seconds.

Powering virtual machines off releases VMFS locks and enables other ESX hosts to recover them. When the Leave power on option is set, virtual machines may require manual power-off/migration in case of an actual network isolation.

A different isolation address can be specified by using the advanced HA option das.isolationaddress. A different isolation response time can also be specified by using the advanced HA option das.failuredetectiontime. These are cluster-wide settings, which can be set in the Advanced Options menu of the VMware HA properties.
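For illustration, the two advanced options above are entered as name/value pairs in the Advanced Options menu of the VMware HA properties. The values shown are examples only, not recommendations; das.failuredetectiontime is given in milliseconds, so the default 15-second response time corresponds to 15000:

```text
das.isolationaddress      192.168.1.254
das.failuredetectiontime  30000
```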

• A network failure might cause a “split-brain” condition.
• VMware HA waits 15 seconds by default before deciding that an ESX host is isolated.


Lab for Module 8 (Slide 8-13)

• Using VMware HA
• In this lab, you perform the following tasks:
  • Add VMware HA functionality to an existing cluster
  • Cause VMware HA to restart virtual machines following the “crash” of an ESX host

Two ESX host teams belong to one cluster team.

(Lab diagram: VirtualCenter Server managing ESX Host 1 and ESX Host 2 in one DRS/HA cluster team; students 01a, 01b, 02a, and 02b share the cluster.)


M O D U L E 9

VI3 Product and Feature Overview

Importance
VMware’s latest products and features take full advantage of the groundbreaking mobility and manageability characteristics of virtual machines explored in the previous modules to deliver scalable, repeatable, and efficient IT processes.

Objectives for the Learner
• Understand how other businesses have typically adopted VMware® Infrastructure
• Learn how you can make more efficient use of your existing resources, reduce costs, and respond to business needs faster with VMware Infrastructure

Lesson Topics
• Standardizing on virtualization
• VMware Infrastructure 3 products and features

Course Flow

The intent of this final module is to provide only a very brief introduction to the VMware VI3 product line. Many of these products leverage the core VI3 features introduced in the earlier modules. This module does not attempt to provide technical details of the products. It is meant only to inform customers that these products exist and to provide a few facts about each product. If the customer is interested in learning more, consider it an opportunity for sales to get involved.


Customers Move Rapidly Along the Adoption Curve (Slide 9-5)

Over the last few years, virtualization has gone from a technology being tried out in test/dev to a production server consolidation technology. It is now gaining momentum as the industry-standard way of computing.

Early adopters of our technology used the hypervisor for basic partitioning. As VMware technology matured and provided means to aggregate multiple virtualized nodes under centralized management, customers rolled it out into mainstream production environments.

As our customers and our technology matured, virtualization began to go far beyond its original use for server consolidation and live migration of virtual machines. Ensuring availability and uptime helped customers achieve better service levels. Using virtualization for business continuity and disaster recovery helped customers achieve better recovery time objectives (RTOs) and recovery point objectives (RPOs) at a fraction of the cost. With the end-to-end management and automation capabilities available from VMware, it became very easy for customers worldwide to make VMware virtualization the default in the datacenter.

Forty-three percent of customers surveyed last year said that their default policy for all or most new machines was a virtual machine.

(Chart: Customer Example, Large Wireless Technology Company. Active virtual machines and ESX host instances plotted from 2003 to 2006, growing through four phases: proof of concept, departmental rollout, expanded rollout, and standardization.)


Standardizing on VMware Infrastructure (Slide 9-6)

Who is this large wireless technology company?

Answer: Qualcomm

This graph illustrates the typical customer adoption phases:

• Proof of concept• Departmental rollout• Expanded rollout• Standardization

This company started a proof of concept in the first half of 2003. Today, it has implemented a VMware-first policy (that is, it has standardized on VMware for x86 workloads). Today, 60 percent of its x86 environment is virtualized (of 1,900 total servers, 1,150 are virtualized).

The number of physical servers has grown from 950 to 1,900 over the past 2.5 years. Because of the much simplified provisioning with virtualization, the company has been able to maintain the same number of server administrators. It provisions 68 new virtual machines per month. This would be impossible in the physical world without dramatic staffing increases. This means that the number of physical servers a single system administrator can manage has more than doubled. This translates into substantial operational savings for the company.

The information on this page is available at http://www.vmware.com/files/pdf/VI3_New_presentation.pdf.

(Diagram: three generations of virtual infrastructure. 1st generation, 1998–2002, early adoption for test and development: hypervisor only. 2nd generation, 2003–2005, mainstreaming for server consolidation and high availability: hypervisor plus virtual infrastructure. 3rd generation, 2006–2008, standardization for infrastructure management: hypervisor, virtual infrastructure, and management and automation.)

• It takes more than just a hypervisor layer to create and successfully manage a virtual infrastructure.


Functional Layers in a Virtual Infrastructure (Slide 9-7)

VMware Infrastructure 3 products and features provide a wealth of functionality to optimize datacenter operations. The products introduced later in this module include the following:

These products are introduced in the rest of this module. They are ordered according to their placement in the three layers of the Virtual Infrastructure. For example, the first product introduced is in the bottom layer, while the final products introduced are in the top layer. Only those products not covered in earlier modules are covered in this module.

Management & Automation:
• Update Manager
• Virtual Desktop Manager
• Guided Consolidation
• Site Recovery Manager
• Enterprise Converter
• Lab Manager
• Lifecycle Manager

Virtual Infrastructure:
• VMware Consolidated Backup
• Distributed Power Management
• Storage VMotion
• VMware DRS
• VMware HA
• VMotion

Virtualization Platform:
• ESX hypervisor
• ESXi hypervisor
• VMFS
• VSMP

• VMware provides advanced implementation and management features at all three layers of the virtual infrastructure.

Update Manager: Automates patch management and reduces manual tracking and patching of VMware ESX hosts and virtual machines.

Virtual Desktop Manager: Provides an integrated desktop virtualization solution that delivers enterprise-class control and manageability with a familiar user experience.

Guided Consolidation: Guides first-time virtualization users through the process of discovering and converting physical servers into virtual machines.

Site Recovery Manager: Automates disaster recovery setup, testing, failover, and failback.

Enterprise Converter: Simplifies the discovery and analysis of physical servers and the conversion of these servers into virtual machines.


These products and technologies are discussed in the following pages.

Lab Manager: Automates the setup, capture, storage, and sharing of multimachine system configurations.

Lifecycle Manager: Implements a consistent, automated workflow for provisioning, operating, and decommissioning virtual machines.

VMware DRS: Monitors utilization continuously across resource pools and intelligently allocates available resources among the virtual machines based on predefined rules that reflect business needs and changing priorities.

VMware HA: Delivers cost-effective high availability for any application running in a virtual machine, regardless of its operating system or underlying hardware configuration.

VMotion: Migrates running virtual machines from one ESX host to another with no disruption.

VMware Consolidated Backup: Enables LAN-free backup of virtual machines from a centralized proxy server.

Distributed Power Management: Minimizes power consumption by consolidating workloads onto fewer ESX hosts while guaranteeing service levels.

Storage VMotion: Performs live migration of virtual machine disk files across storage arrays with no disruption in service for critical applications.

ESX: Forms the robust foundation of the VMware Infrastructure 3 suite.

ESXi: Provides a hardware-integrated hypervisor built on a next-generation thin architecture.

VMFS: Provides a high-performance cluster file system optimized for virtual machines.

Virtual SMP: Allows a single virtual machine to use up to four physical processors simultaneously for increased application scalability.


New Virtualization Platform Layer Product (Slide 9-8)

ESX 3.5 introduces a new hardware-based hypervisor called ESXi. Because ESXi is delivered preinstalled by major OEMs, installation is not required.

• ESX 3.5 introduces the ESXi hypervisor, a new product in the virtualization platform layer.
• ESXi is a next-generation, thin hypervisor integrated into server hardware, enabling rapid deployment.


ESXi Hypervisor (Slide 9-9)

At 32MB, VMware ESXi weighs in at a fraction of the size of a general-purpose operating system. This compact footprint sets a new bar for security due to a smaller “attack surface.” This small footprint and hardware-like reliability also enable ESXi to be built directly into industry-standard x86 servers while continuing to provide the same great performance and scalability of ESX.

The operating system–independent design of ESXi is optimized for virtualization performance.

ESXi utilizes an intuitive wizard that dramatically reduces deployment time. This makes it possible to go from server boot to running virtual machines in minutes.

ESXi hosts are managed by VirtualCenter. This centralizes and simplifies management of the entire virtual infrastructure.

Both VMware ESX and VMware ESXi support the entire suite of VMware Infrastructure 3 products, features, and solutions. You can use VMware ESX and VMware ESXi side by side in your virtual infrastructure.

• Compact, 32MB footprint
• Only architecture with no reliance on a general-purpose operating system
• Integration in hardware eliminates installation.
• Intuitive wizard-driven startup experience dramatically reduces deployment time.
• Simplified management
• Increased security and reliability


From Server Boot to Virtual Machines in Minutes (Slide 9-10)

Companies can quickly bring new ESXi servers online. All that is required to do so is to power on the server, boot into the ESXi hypervisor, configure an administrator password, optionally modify the network configuration, and connect to the server through either the VI Client or VirtualCenter.

ESXi enables companies to quickly add additional computing resources to their virtual infrastructure.

1. Power on server and boot into hypervisor.

2. Configure Admin password.

3. (Optional) Modify network configuration.

4. Connect VI Client to IP address (or manage with VirtualCenter).


Additional VI Layer Products and Features (Slide 9-11)

• The Virtual Infrastructure layer includes several products and features based on core features introduced earlier in the course: VMware Consolidated Backup, Distributed Power Management, and Storage VMotion.
• These additional Virtual Infrastructure products and features enable higher availability and greater cost savings.


VMware Consolidated Backup (VCB) (Slide 9-12)

VMware Consolidated Backup enables LAN-free backup of virtual machines from a centralized proxy server. VCB supports Fibre Channel, iSCSI, NFS, and local storage.

VCB performs file system–consistent backups of guest operating system data, with VMware Tools quiescing the file systems before the backup occurs.

VCB can perform full virtual machine backups of all supported guest operating system types. And VCB can perform file-level backups of supported Windows guest operating systems.

VCB integrates with the existing backup tools and technologies already in place. This improves manageability of existing IT resources and eliminates the need to run backup agents on every virtual machine.

• An online backup solution for ESX host VMs
• Fibre Channel, iSCSI, NFS, and local storage support
• File system–consistent guest OS backup (VMware Tools quiesces the file system before backup)
• Supports different backup modes:
  • File-level backup (Windows guests)
  • Full virtual machine backup (all guests)
• Works with major third-party backup software
• Backup is offloaded to a physical Windows 2003 server.
• VCB 1.1 is supported in a virtual machine.


VCB offloads backup processing from ESX hosts, leaving their computing power available for running virtual machines. And VCB can bypass the local area network when performing backups so that network performance is not affected.

Recent changes to VCB make it more attractive to the SMB market space:

• In addition to supporting SAN, VCB now supports iSCSI, NAS, and locally attached storage (released in 3.0.2).

• VCB can run in a virtual machine, thereby eliminating the need for a dedicated backup proxy server.

VMware Converter can be used to restore VCB images (released in 3.0.1). This provides a simple graphical technique to restore virtual machines from tape and return them to operation in VI3.


VMware Consolidated Backup Operation
Slide 9-13

When a backup is performed using VCB, VMware Tools can be used to quiesce the file systems of guest operating systems. Once the file systems have been quiesced, VCB takes a snapshot of each VMDK being backed up. This enables VCB to back up the data without any disruption to the virtual machines. Next, the VCB server mounts the VMFS file systems so that the virtual disks can be seen. Finally, the third-party backup solution integrated with the VCB server can perform the backup.

All of this occurs without disruption to running virtual machines and can be performed in a LAN-free manner so that the network performance is not affected.
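The backup sequence described above can be sketched in code. This is an illustrative sketch only: the function name and step descriptions are invented for this example, and a real VCB backup is driven by command-line utilities on the proxy server, not by this API.

```python
def vcb_backup(vm, backup_tool):
    """Return the ordered steps of one LAN-free VCB backup pass for a VM."""
    steps = []
    steps.append(f"quiesce file systems in {vm} via VMware Tools")
    steps.append(f"take snapshot of each VMDK belonging to {vm}")
    steps.append(f"mount {vm}'s virtual disks on the VCB proxy server")
    steps.append(f"run {backup_tool} against the mounted disks")
    steps.append(f"unmount disks and delete the snapshot for {vm}")
    return steps

# Walk through the sequence for one hypothetical virtual machine.
for step in vcb_backup("web01", "third-party backup agent"):
    print(step)
```

Note that the virtual machine keeps running throughout: only the snapshot is exposed to the backup tool, which is why no disruption occurs.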


Distributed Power Management (Experimental)
Slide 9-14

VMware Distributed Power Management (DPM), a feature supported experimentally, continuously monitors resource requirements and power consumption across a DRS cluster. When the cluster needs fewer resources, DPM consolidates workloads and puts ESX hosts in standby mode to reduce power consumption. When the resource requirements of workloads increase, DPM brings powered-down ESX hosts back online to ensure that service levels are met.

Distributed Power Management allows IT organizations to do the following:

• Cut power and cooling costs in the datacenter during low-utilization periods

• Automate management of energy efficiency in the datacenter

With Distributed Power Management, administrators can define the following:

• Reserve capacity to always be available
• Time for which load history is monitored before power on/off decisions are made

Power-on will also be triggered when there are not enough resources available to power on a virtual machine or when more spare capacity is needed for HA.

• Consolidates workloads onto fewer servers when the cluster needs fewer resources

• Places unneeded servers in standby mode

• Brings servers back online as workload needs increase

• Minimizes power consumption while guaranteeing service levels

• No disruption or downtime to virtual machines
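The consolidation decision DPM makes can be illustrated with a toy sketch. Everything here is an assumption made for the example: the greedy keep-largest-hosts strategy, the capacity units, and the fixed reserve fraction are not VMware's actual algorithm.

```python
def dpm_plan(hosts, demand, reserve=0.2):
    """Pick which hosts stay powered on for a given total workload demand.

    hosts: dict mapping host name -> capacity (arbitrary units)
    demand: total resource demand of the running workloads
    reserve: fraction of extra capacity always kept available
    Returns (powered_on, standby) lists of host names.
    """
    target = demand * (1 + reserve)
    powered_on, capacity = [], 0.0
    # Keep the largest hosts on until demand plus reserve is covered.
    for name, cap in sorted(hosts.items(), key=lambda kv: -kv[1]):
        if capacity >= target and powered_on:
            break
        powered_on.append(name)
        capacity += cap
    standby = [h for h in hosts if h not in powered_on]
    return powered_on, standby
```

With three identical hosts and light demand, the sketch keeps one host on and marks the rest for standby; as demand grows, more hosts are kept online, mirroring the behavior described above.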


Storage VMotion
Slide 9-15

VMware Storage VMotion is a state-of-the-art solution that enables you to perform live migration of virtual machine disk files across heterogeneous storage arrays with complete transaction integrity and no interruption in service for critical applications.

By implementing VMware Storage VMotion in your virtual infrastructure, you gain the ability to perform proactive storage migrations, simplify array refreshes/retirements, improve virtual machine storage performance, and free up valuable storage capacity in your datacenter.

Complete operating system and hardware independence allows Storage VMotion to migrate any virtual machine running any operating system across any type of hardware and storage supported by VMware ESX.

Storage VMotion does all this with zero downtime to the virtual machines.

• Storage-independent migration of virtual machine disks
  • Zero downtime to virtual machines
  • LUN-independent
  • Supported for Fibre Channel SANs
• Storage array migration
• Storage I/O optimization


Management and Automation Layer Products
Slide 9-16

Management & Automation

• Update Manager
• Virtual Desktop Manager
• Guided Consolidation
• Site Recovery Manager
• Enterprise Converter
• Lab Manager
• Lifecycle Manager

• The Automation layer also includes several products based on core features covered earlier in the course.

• These next-generation Automation layer products reduce the cost and complexity of managing a VMware Infrastructure.


VMware Update Manager (VUM)
Slide 9-17

VMware Update Manager is an automated patch management solution for ESX hosts as well as for Microsoft and Linux virtual machines. It reduces risk by doing the following:

• Securing your datacenter from vulnerabilities
• Patching both online and offline virtual machines
• Snapshotting virtual machines before patching to allow rollback
• Patching noncompliant offline/suspended machines in a quarantined state so that the rest of the network is not exposed to them

VMware Update Manager provides for automatic enforcement of patch standards and eliminates cumbersome and error-prone manual tracking of patch levels of ESX hosts and virtual machines.

RHEL guests can only be scanned, not remediated.

• Automates patch management for ESX hosts and select Microsoft and RHEL virtual machines
  • Scans and remediates online as well as offline virtual machines and online ESX hosts
  • Optional virtual machine snapshot before patching allows rollback

• Reduces manual tracking of patch levels of ESX hosts and virtual machines

• Automates enforcement of patch standards



Update Manager and DRS
Slide 9-18

When used in conjunction with VMware DRS, VMware Update Manager enables entire datacenters of ESX hosts to be patched automatically with zero downtime to the virtual machines running on those servers.

• Update Manager patches entire DRS clusters.
  • Each host in the cluster enters DRS maintenance mode, one at a time.
  • VMs are migrated off. Host is patched and rebooted if required.
  • VMs are migrated back on.
  • Next host is selected.
• Automates patching of a large number of hosts with zero downtime to virtual machines
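The rolling remediation sequence can be sketched as a simple loop. The function, data shapes, and log strings below are hypothetical; in the real product, DRS maintenance mode and VMotion coordinate these steps automatically.

```python
def rolling_patch(cluster, patch):
    """Patch each host in a DRS cluster one at a time, VMs stay running.

    cluster: dict mapping host name -> list of VM names on that host
    Returns an ordered log of the actions taken.
    """
    log = []
    for host in list(cluster):
        vms = cluster[host]
        log.append(f"{host}: enter maintenance mode")
        for vm in vms:                      # DRS VMotions the VMs away
            log.append(f"{vm}: migrate off {host}")
        log.append(f"{host}: apply {patch} and reboot if required")
        for vm in vms:                      # VMs return to the patched host
            log.append(f"{vm}: migrate back to {host}")
        log.append(f"{host}: exit maintenance mode")
    return log
```

Because only one host is in maintenance mode at a time and every VM is VMotioned away before the host is touched, the virtual machines see no downtime.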



VDI - Virtual Desktop Manager (VDM)
Slide 9-19

VMware VDI is an integrated desktop virtualization solution that delivers enterprise-class control and manageability with a familiar user experience. VMware VDI provides new levels of efficiency and reliability for your virtual desktop environment.

With VMware VDI, you get the proven VMware Infrastructure 3 software along with VMware Virtual Desktop Manager (VDM), an enterprise-class desktop management server that securely connects users to virtual desktops in the datacenter and provides an easy-to-use, Web-based interface to manage the centralized environment.

VMware VDI provides users with desktop business continuity, high availability, and disaster recovery capabilities that until now were available only for mission-critical server applications.

With VMware VDI, end users get a complete, unmodified virtual desktop that behaves just like a normal PC. There is no change to the applications or desktop environment, no application sharing, and no retraining required. Administrators can allow users to install applications, customize their desktop environment, and use local printers and USB devices.

• Enterprise-class, scalable connection broker

• Central administration and policy enforcement

• Automatic desktop provisioning with optional “smart pooling”

• Desktop persistence and secure tunneling options

• Microsoft AD integration and optional two-factor authentication via RSA SecurID


• End-to-end enterprise-class desktop control and manageability
• Familiar end-user experience
• Tightly integrated with VMware's proven virtualization platform (VI3)
• Scalability, security, and availability suitable for organizations of all sizes


Guided Consolidation
Slide 9-20

Guided Consolidation is a new feature in VirtualCenter 2.5. It is intended to guide first-time virtualization users through the process of discovering physical servers suitable for virtualization, collecting performance data from these servers, and converting these servers to virtual machines placed intelligently on the most appropriate hosts.

Guided Consolidation enables new users to quickly realize benefits from server consolidation and reduces training requirements for first-time “virtualizers.”

Guided Consolidation makes server consolidation easier by guiding new virtualization users through the consolidation process in a wizard-based, tutorial-like fashion.

Discover → Analyze → Convert

• Automatically discovers physical servers

• Analyzes utilization and usage patterns

• Converts physical servers to VMs placed intelligently based on user response

• Lowers training requirements for new virtualization users

• Guides users through the entire consolidation process
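The intelligent-placement step at the end of the process can be illustrated with a small sketch. The largest-demand-first heuristic, resource units, and function name are all assumptions made for this example, not the actual placement logic in VirtualCenter.

```python
def place_vms(candidates, hosts):
    """Place each converted server on the host with the most free capacity.

    candidates: dict mapping server name -> observed resource demand
    hosts: dict mapping host name -> free capacity
    Returns dict mapping server name -> chosen host.
    """
    placement = {}
    free = dict(hosts)
    # Place the largest workloads first so they get the roomiest hosts.
    for server, demand in sorted(candidates.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)
        if free[host] < demand:
            raise ValueError(f"no host can hold {server}")
        placement[server] = host
        free[host] -= demand
    return placement
```

The analysis phase supplies the demand figures; placement then spreads the converted servers across hosts so that no single host is overloaded.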


VMware Site Recovery Manager
Slide 9-21

Until now, keeping recovery plans and the runbooks that documented them accurate and up-to-date has been practically impossible because of the complexity of plans and the dynamic environment in today’s datacenters. Adding to that challenge, traditional solutions do not offer a central point of management for recovery plans and make it difficult to integrate the different tools and components of disaster recovery solutions.

VMware Site Recovery Manager simplifies and centralizes the creation and ongoing management of disaster recovery plans. Site Recovery Manager turns traditional oversized disaster recovery runbooks into automated plans that are easy to manage, store, and document. And Site Recovery Manager is tightly integrated with VMware Infrastructure 3, so you can create, manage, and update recovery plans from the same place that you manage your virtual infrastructure.

Testing disaster recovery plans and ensuring that they are executed correctly are critical to making recovery reliable. However, testing is difficult with traditional solutions because of the high cost, complexity, and disruption associated with tests. Another challenge is ensuring that staff are trained and prepared to successfully execute the complex process of recovery.

• Simplifies and automates disaster recovery workflows

• Setup, testing, failover, failback

• Provides central management of recovery plans from VirtualCenter

• Turns manual recovery processes into automated recovery plans

• Simplifies integration with third-party storage replication

• Makes disaster recovery rapid, reliable, manageable, affordable

Site Recovery Manager leverages VMware Infrastructure to transform disaster recovery.


Site Recovery Manager helps you overcome these obstacles by enabling realistic, frequent tests of recovery plans and eliminating common causes of failures during recovery.

Site Recovery Manager provides built-in capabilities for executing realistic, nondisruptive tests without the cost and complexity of traditional disaster recovery testing. Because the recovery process is automated, you can also ensure that the recovery plan will be carried out correctly in both testing and failover scenarios.

Site Recovery Manager leverages VMware Infrastructure to provide hardware-independent recovery, ensuring success even when the recovery hardware is not identical to the production hardware.


Site Recovery Manager Key Components
Slide 9-22

This slide illustrates the key components of a Site Recovery Manager deployment.

Site Recovery Manager requires that the storage utilized by protected virtual machines be replicated to a secondary site. This can be performed with a variety of third-party replication solutions. VirtualCenter is required to manage both sites.

Site Recovery Manager manages the mapping of components (virtual machines, resource pools, networks, and the like) between the two sites and provides workflow automation for setup, failover, failback, and testing of the disaster recovery environment.
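The component mapping between sites can be pictured as a translation table. The data shapes and names below are invented for illustration; Site Recovery Manager maintains these mappings internally and applies them during failover and testing.

```python
def build_recovery_plan(protected_vms, mappings):
    """Translate a protected site's inventory into recovery-site terms.

    protected_vms: list of dicts with 'name', 'network', 'resource_pool'
    mappings: dict with 'network' and 'resource_pool' translation tables
    Returns the per-VM recovery settings for the secondary site.
    """
    plan = []
    for vm in protected_vms:
        plan.append({
            "name": vm["name"],
            # Look up the recovery-site equivalent of each component.
            "network": mappings["network"][vm["network"]],
            "resource_pool": mappings["resource_pool"][vm["resource_pool"]],
        })
    return plan
```

Because the plan is computed from explicit mappings rather than written by hand, it stays consistent as the protected inventory changes, which is the point of automating the runbook.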


VMware Converter Enterprise Capabilities
Slide 9-23

VMware Converter Enterprise enables administrators to quickly and reliably convert local and remote physical machines into virtual machines without any disruption or downtime. Administrators can do the following:

• Import physical machines to virtual machines
• Import non-ESX VMware virtual machines
• Import Microsoft Virtual Server 2005 virtual machines
• Convert third-party backups or disk images to virtual machines

The centralized management console in Converter Enterprise allows users to queue up and monitor multiple simultaneous remote conversions as well as local conversions. This decreases the time and effort required in large-scale virtualization implementations.

Remote conversions are accomplished by the Converter Server downloading a Converter agent to the source system.

Local conversions are accomplished by booting the source system from the Converter CD.

• VMware Converter is a migration tool bundled in VirtualCenter 2.5 that aids in server consolidation.
  • Imports physical machines to virtual machines
  • Imports non-ESX VMware virtual machines
  • Imports Microsoft Virtual Server 2005 virtual machines
  • Converts third-party backup or disk images to virtual machines
  • Reconfigures virtual machines so that they are bootable inside ESX
• Decreases the time and effort to migrate from a physical infrastructure to a virtual infrastructure
  • Preserves existing configurations while saving time and reducing costs and complexity


Using Lab Manager with VMware Infrastructure
Slide 9-24

VMware Lab Manager provides the ability to automate the setup, capture, storage, and sharing of multimachine software configurations. Development and test teams can access them on demand through a self-service, Web-based portal. With its shared library and shared pool of virtualized servers and templates, VMware Lab Manager lets you efficiently move and share multimachine configurations across software development and test teams and facilities.

• Provision new environments quickly.
  • Test
  • Development
  • Support


VMware Lab Manager provides the ability to do the following:

• Allocate resources as needed instead of maintaining multiple static systems that are only used sporadically. VMware Lab Manager lets you pool and share resources between development and test teams for maximum utilization—and increased cost savings.

• Provision new machines nearly instantly with VMware Lab Manager. This eliminates the painstaking, multihour process of gathering machines, installing operating systems, installing and configuring applications, and establishing intermachine connections. Now software developers and QA engineers can fulfill their own provisioning needs, leaving IT in control of user management, storage quotas, and server deployment policies—achieving the best of both worlds.

• Quickly reproduce software defects and resolve them earlier in the software lifecycle—and ensure higher quality software and systems. VMware Lab Manager enables “closed loop” defect reporting and resolution through its unique ability to snapshot complex multimachine configurations in an error state, capture them to the library, and make them available for sharing—and troubleshooting—across development and test teams.

You can give your outsourced partners secure, remote access to your software lab—and maintain your flexibility to rapidly add, remove, or replace outsourced resources as your needs change. Your intellectual property remains securely in Lab Manager’s environment, and you eliminate time-consuming and costly replication of equipment in your partners’ labs.


VMware Lifecycle Manager
Slide 9-25

VMware Lifecycle Manager enables administrators to track and control virtual machines through a consistent approval process throughout the entire lifecycle. Lifecycle Manager automates the steps within the workflow to improve efficiency and productivity, and to ensure strict corporate compliance with company policies.

Lifecycle Manager brings many benefits to the datacenter:

• It employs standardization and best practices for tracking and managing virtual machine deployment and use.

• It eliminates manual and repetitive administrative tasks through automation.

• It prevents virtual machine sprawl and ensures corporate IT compliance.

• It leverages existing tools like VMware VirtualCenter, change management software, and IT process/runbook automation tools.

• Automate, manage, and control the life of virtual machines.

• Track and report on deployed virtual machines.
• Provide processes and policies for the following:
  • How virtual machines are created
  • How virtual machines are deployed
  • How virtual machines are changed
  • How virtual machines are retired

Create → Deploy → Change → Retire


Lifecycle Workflow Management
Slide 9-26

VMware Lifecycle Manager allows administrators to implement a consistent, automated workflow for provisioning, operating, and decommissioning virtual machines. During setup, the IT administrator creates a catalog of virtual machine templates that users can view and select. The IT administrator also defines where virtual machines can be deployed and what types of approvals are required before virtual machine deployment.

Using a self-service portal, users request virtual machines and can track the status of any pending requests. During the request process, the user enters information to help Lifecycle Manager select the specific resources that best support the request. The user can log back in to Lifecycle Manager at any time to check on the request status.

An “approver” approves or denies requests for virtual machines. The approver can be from any department. If the request is approved, the virtual machine is deployed automatically, based on the user-defined criteria and the way in which IT staff has mapped those criteria to existing computing resources.

The final step within Lifecycle Manager is to decommission the virtual machine. The decommissioning process, which consists of archiving and ultimately deleting a virtual machine, provides better resource utilization by ensuring that resources come back into the resource pool for future use. The virtual machine will be decommissioned based on the end date the user enters when first submitting a request.
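The lifecycle just described, from request through approval, deployment, and decommissioning, amounts to a small state machine. The state names and transition table below are invented for illustration and do not reflect Lifecycle Manager's internal terminology.

```python
# Allowed lifecycle transitions, as described in the text above.
TRANSITIONS = {
    "requested": {"approved", "denied"},
    "approved": {"deployed"},
    "deployed": {"archived"},      # decommissioning starts with archive
    "archived": {"deleted"},       # deletion returns resources to the pool
}

def advance(state, new_state):
    """Move a virtual machine to a new lifecycle state if the move is allowed."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state
```

Encoding the workflow this way is what lets Lifecycle Manager enforce policy: a VM cannot be deployed without approval, and it cannot be deleted without first being archived.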

Request for VM → Route for Approval → Intelligent Placement → Automated Deployment (Provisioning) → VM Tracking, Policy & Control → Decommission (Archive → Delete)


Summary of VI3 Products and Features
Slide 9-27

Management & Automation
• Update Manager
• Virtual Desktop Manager
• Guided Consolidation
• Site Recovery Manager
• Enterprise Converter
• Lab Manager
• Lifecycle Manager

Virtual Infrastructure
• VMware Consolidated Backup
• Distributed Power Management
• Storage VMotion
• VMware DRS
• VMware HA
• VMotion

Virtualization Platform
• ESX
• ESXi
• VMFS
• VSMP

• VMware Infrastructure is much more than just a hypervisor. It is the most complete virtual infrastructure solution in the market today.
