
UC VDI Proposal

By

Ryan Bland, Kyle Dillon, Mark Griffin

A Proposal Submitted to

The Faculty of the Department of Information Technology

In Partial Fulfillment of the Requirements for

The Degree of Bachelor of Science

In Information Technology

University of Cincinnati

Department of Information Technology

College of Education, Criminal Justice, and Human Services

December 2012

Learner’s signature: Mark Griffin Date: 4/16/2013

Learner’s printed name: Mark Griffin

Learner’s signature: Kyle Dillon Date: 4/16/2013

Learner’s printed name: Kyle Dillon

Learner’s signature: Ryan Bland Date: 4/16/2013

Learner’s printed name: Ryan Bland

Advisor’s signature: Russell E. McMahon Date: 4/16/2013

Advisor’s printed name: Russell E. McMahon


Acknowledgements

We would like to acknowledge all of the partners and contacts who made this project possible. We faced many challenges in bringing this project to fruition; however, the financial burden was tremendously lightened by the donated hardware we received. We would like to specifically thank:

Mike Coffey and Matt Oswalt of iPVersant for the servers and additional computer hardware necessary for the infrastructure

Murali Rathinasamy for professional guidance and hardware donation

Jeff Karlberg and Dustin Clark of IGEL America for use of their thin client units and management console software

The VMware user group for their ongoing support, vendor contacts, and knowledge base

Matt Jones of the Winton Woods School District for his implementation recommendations

Don Rainwater and Paul Schwab of the UCIT department for their guidance and assistance

A special thanks to our Faculty Advisor Russell McMahon and Patrick Kumpf for

their continued guidance throughout the project.


Table of Contents

Acknowledgements

Table of Contents

List of Figures

Abstract

1. Introduction

1.1 Background

1.2 Description of Problem

1.3 Solution to the Problem

2. Discussion

2.1 Design Objectives

2.2 Deliverables

2.3 User Profiles

2.4 Technical Elements

2.4.0 View Infrastructure Overview

2.4.1 Preparing for a View Deployment

2.4.2 Hardware Specifications

2.4.3 Software Requirements

2.4.4 Data Store Requirements

2.4.5 Networking Configuration

2.4.6 VMware View Optimizations

2.5 Budget

2.6 Project Timeline

2.7 Proof of Design

2.8 Future Recommendations

3. Conclusion

3.1 Lessons Learned

3.2 Final Synopsis

References


List of Figures

Figure 1. View Environment Overview

Figure 2. View Connection Flow

Figure 3. Use Case Diagram

Figure 4. Sample View Environment

Figure 5. VDI Cost Chart

Figure 6. Return on Investment Graph

Figure 7. Microsoft Project Gantt Chart

Figure 8. Atlassian Suite Project Tool

Figure 9. vCenter

Figure 10. vCenter - ESXi Host Networking

Figure 11. View Administrator

Figure 12. View Administrator - Desktop Pools

Figure 13. View Administrator - ThinApp Deployment

Figure 14. View Administrator - ThinApp Repository

Figure 15. Live View Connection Flow

Figure 16. FreeNAS - Storage

Figure 17. FreeNAS - Reporting

Figure 18. Active Directory - Overview

Figure 19. Active Directory - Group Policy

Figure 20. IGEL Management Console

Figure 21. IGEL Management Console - Kiosk Configuration


Abstract

VDI@UC is at its core a business productivity project that looks to solve real

world problems such as application delivery, remote environment access, distributed

environment management, and data loss prevention. It is a feasibility study to determine

whether a linked-clone Virtual Desktop Infrastructure (VDI) is a better way to manage

UC’s lab environments. The VDI@UC project strives to increase the users’ access to

college specific applications as well as reduce administrative overhead. The VDI@UC

project involved implementing a scaled down model of a subset of UC’s lab

environments within a VDI linked-clone model. Two labs were chosen to emulate: a standard lab and a performance IT lab. The project compared the ease of management between the two environments to see if there was an opportunity to drive down operational expenses. The results of the Proof of Concept (POC) concluded that

implementing a VDI environment would not only be a comparable solution to the

university’s current offering but would expand access to applications as well as reduce

administrative overhead. The VDI@UC project has shown that implementing a linked-clone VMware View model would benefit both the users of the environment and the administrators who support it.


1. Introduction

1.1 Background

The University of Cincinnati offers a large array of programs of study that are supported by specialized software and hardware systems. These systems are grouped by major into lab environments, and these labs are distributed throughout each college around campus. The university has taken this approach to distribute limited resources such as expensive application licenses, specialized hardware configurations, and niche network configurations, all of which can be requirements for completing coursework. Due to resource constraints, many of these lab environments house multiple major-specific application sets. These multipurpose labs can become strained during high-demand periods such as exam weeks, as well as before holidays or breaks, when they are used for completing last-minute assignments and studying. The constant use of these limited multipurpose labs makes them prone to hardware failure, and a single lab outage can become a single point of failure for major-specific application access, affecting students from many different majors. This has had several negative impacts on the UCIT staff as well as the student body.

1.2 Description of Problem

To date, managing UC's labs has been difficult and time consuming because each lab has its own software and hardware configurations as well as individual patching requirements. Different versions of applications may be installed at multiple lab sites to meet hardware constraints or for licensing purposes. This is further complicated when specialized hardware systems need repair or replacement at multiple


physical locations, requiring numerous IT staff members to make the repairs. Old computers and hardware in these labs become depreciating assets which will eventually become too expensive to fix. The diverse lab sites ultimately lead to more lab downtime and increased administrative overhead.

Student access to the labs has numerous limitations, especially for students enrolled in classes in multiple colleges. Limitations such as the number of lab computers, physical location, scheduled class/lab time, and damaged lab equipment can all make completing assignments both difficult and time consuming. Distance learning students are also limited in the courses they can take, as the majority of expensive software is available only in certain labs with no outside access. These students must take it upon themselves to find working copies of software the university has already purchased. These limited lab environments strain both the student body and UCIT at large.

1.3 Solution to the Problem

In an effort to remediate these issues the VDI@UC feasibility study takes a deep

dive into a working VMware View solution. Virtual Desktop Infrastructure (VDI) has the

capability to deliver pre-configured and managed desktops to end users from a highly

available cloud infrastructure. The technology utilizes resource pooling and centralized

management to offer on-demand lab environments with all major specific software. The

virtual labs will be available over the internet to distance learning students and be

accessible from workstations, laptops, mobile devices, and tablets. This project aims to

implement VMware View to provision linked clone virtual desktops as seen in Figure 1.


Figure 1. View Environment Overview

By virtualizing the lab environments into a linked-clone model, many new benefits

and opportunities become available. From a management perspective there is no easier

solution to administer, delegate control, budget, or maintain numerous workstations than

linked-clone VDI. Linked-clones allow for massive storage savings by streaming the

operating system (OS) to thousands of virtual desktops from a single golden image. By

virtualizing the OS in this way, virtual desktops save up to ninety percent on storage (VMware View Architecture Planning, 2012). Administrators are able to limit users to certain desktop pools and assign applications based on their college AD credentials. Administrators can stream applications using VMware ThinApp to the end users' virtual desktop pools. This helps control software license use through floating license pools and reduces storage costs by eliminating the software install footprint of the application.
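To make the ninety-percent figure concrete, here is an illustrative back-of-the-envelope comparison of full-clone and linked-clone storage for one pool. The image size, per-desktop delta size, and pool size are hypothetical round numbers, not measurements from the POC:

```python
# Hypothetical full-clone vs. linked-clone storage comparison.
GOLDEN_IMAGE_GB = 25   # size of the golden (replica) disk image
AVG_DELTA_GB = 2       # average per-desktop delta disk against the replica
POOL_SIZE = 500        # number of linked-clone desktops in the pool

full_clone_gb = POOL_SIZE * GOLDEN_IMAGE_GB
linked_clone_gb = GOLDEN_IMAGE_GB + POOL_SIZE * AVG_DELTA_GB

savings = 1 - linked_clone_gb / full_clone_gb
print(f"Full clones:   {full_clone_gb:,} GB")    # 12,500 GB
print(f"Linked clones: {linked_clone_gb:,} GB")  # 1,025 GB
print(f"Savings:       {savings:.0%}")           # ~92%
```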

Linked clone virtual desktops are extremely resilient to outages as any fatal error

in the OS can be corrected by refreshing the linked-clone as opposed to fixing or

reloading the OS of a physical workstation. The user data layer is redirected to file shares


and a roaming profile managed by View Persona Management. This allows the users'

custom application settings, custom computer settings, and saved data to travel with them

as they log in to different desktop clones across pools with better performance than

Windows Roaming Profiles (VMware View Installation, 2012). The nature of linked clones

makes them extremely easy to patch because the clones are all based on a single

golden image. Administrators can patch the single golden image, snapshot that image,

and then recompose the thousands of cloned desktops linked to it. This creates efficient

and fast patching for the entire desktop pool from a single point.

The centralized resources and management of linked clone VDI offer superior

reporting services as well as resource efficiency and management. When budgeting costs, any resources added to the ESXi server cluster or backend Storage Area Network (SAN) benefit the entire environment rather than individual workstations.

Resources are better utilized by floating between users as needed and never becoming

tied down to a physical workstation. The server cluster offering up the desktop pools

creates high availability and instant failover should any outages occur. A vSphere

technology called vMotion allows the transferring of the active state of virtual desktops

between servers in the ESXi cluster without interrupting the end users' session or forcing

them to log off. The vCenter Server offers administrators the ability to manage the

environment from a single point and to run reports against the entire environment,

individual server hosts, desktop pools, or single virtual desktops. From the vCenter

Server administrators can adjust server or desktop settings, manage resource pools,

backend storage, server networking, and monitor the environment in real time.


Virtual desktops can be grouped into "floating pools," which are logical groupings of desktops based on a single golden image. Floating pools can be configured to automatically expand based on user demand, provisioning more desktops as needed to prevent saturation of any pool.
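The following minimal sketch models a headroom-style growth policy of this kind. The function name, thresholds, and logic are hypothetical illustrations, not View's actual provisioning algorithm:

```python
# Hypothetical floating-pool growth policy: keep a target number of spare
# (powered-on but unassigned) desktops so the pool expands before saturating.

def desktops_to_provision(in_use: int, powered_on: int,
                          spare_target: int = 5, pool_max: int = 100) -> int:
    """Return how many new linked clones should be provisioned."""
    spare = powered_on - in_use
    needed = max(0, spare_target - spare)
    # Never grow the pool past its configured maximum size.
    return min(needed, pool_max - powered_on)

# 48 of 50 desktops in use leaves 2 spares; top back up to 5.
print(desktops_to_provision(in_use=48, powered_on=50))  # -> 3
```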

VDI@UC also supports a Bring Your Own Device (BYOD) infrastructure.

Users are able to use the VMware View client from a number of devices including old

laptops, thin clients, workstations, tablets, and mobile devices from anywhere on the web.

Distance learning students will have access to a fully functioning desktop session with

the software and configurations needed to complete coursework from the comfort of

home. This highly available and accessible environment is perfect for any student that

needs access to multiple labs. Virtual desktops also offer the ability to use OS specific

software on any device. Users on Linux or an iPad who need access to VMware Workstation or Microsoft Visio can use their device to connect into a Windows 7 desktop

session. Their personal profile and data will always be with them regardless of which

virtual desktop they log into ensuring a custom computing experience.

Linked-clone VDI has many powerful benefits over traditional lab environments.

VDI has the potential to turn old depreciating assets into powerful lab machines accessing

desktop sessions on a backend server. The high availability of the environment far

surpasses that of physical labs and grants a wide range of access to students from

anywhere over the web with a client or browser. A cloud based solution fully utilizes any

resources given to the environment passing them out between users as they are needed.

Application and patch management become simple administrative tasks and can be

delivered faster and by a smaller administrative staff than the traditional lab model.


2. Discussion

2.1 Design Objectives

In order to better understand the benefits of a VDI implementation a Proof of

Concept (POC) implementation was created. This infrastructure was used as a foundation

for an environment that could be scaled to meet the university's goal of connecting up to

8,000 remote student users. Inside of this POC, deliverables such as the linked-clone

model, application streaming, and remote access were implemented. After the

implementation of the POC a cost benefit analysis and a hardware recommendation were

created to bridge the differences between the POC and a full implementation supporting

8,000 users. The following list of deliverables was created that acted as key milestones

for the project.

2.2 Deliverables

The final POC should deliver valuable insight to the university in pursuing a VDI

implementation. The following list specifies the valuable information and features that

will be implemented during the project life cycle.

Physical Prototype – To meet the requirements of the POC three ESXi servers, a

Domain Controller, a vCenter Server, a View Composer component, a View Connection

Server, a Microsoft SQL Server (MSSQL) instance, a FreeNAS server for the SAN, an

NFS file server, and necessary networking equipment will be employed. Together the

aforementioned infrastructure will serve as the POC upon which all testing and research will be performed.


Linked-Clone Model – Desktops must be in a linked-clone environment to

accommodate ease of management for all desktops. This creates an entire desktop pool

off of a single golden image which can be easily modified or patched and then

recomposed effectively pushing out changes to every virtual desktop within the pool.

Application Streaming – Lab applications will be streamed to desktop pools through

VMware ThinApp when deemed appropriate. This practice will be undertaken as much

as possible in order to attain maximum storage savings by eliminating application install

footprints.

Figure 2

Remote Access Capability – End users must be able to access the POC environment

from any remote location off the university network. Figure 2 depicts the flow of access that a

user would go through while connecting from outside of the network through the View

client. The initial connection would first go through the View Security Server, into the

View Connection Server, authenticating against Active Directory (AD), and finally into


the target virtual desktop pool. Once authenticated into a desired pool the PCoIP stream

will be channeled directly to the user’s View client.

Sizing Recommendation – Two HP ProLiant DL360 G-series servers as well as a

custom built 8x32 AMD server were employed to build the POC and determine sizing

requirements for large-scale deployments. A table will be created based on the POC

data and VMware's recommendations for a deployment of 8,000 virtual desktops.

Cost Benefit Analysis – Using POC data in conjunction with outside research effectively

showed the cost effectiveness of VMware View over other major competitors as well as

a current lab infrastructure. Included in this analysis will be all required equipment as

well as software licensing.

2.3 User Profiles

There are three distinct user profiles considered while implementing the POC. Users are

broken down into the categories Administrators, Helpdesk Technicians, and General

Users. Administrators and Helpdesk Technicians will both configure and maintain the

virtual infrastructure through the vCenter Server and from AD. General Users are broken

down into mobile users, local users, and distance users. General Users are both students

and professors that will be accessing the virtual desktops on a day to day basis. While

General Users are defined separately by the devices that they use to access the

environment they will still have the same tasks and permissions. Figure 3 shows a Use

Case Diagram.

Administrators

The Administrators will be tasked with maintaining the virtual environments from the

vCenter Server. They will make configuration changes, create the virtual pools for users


to log into, and maintain applications. The Administrators will also upgrade the ESXi

hosts with the newest release, upgrade the vCenter Server, update the golden images and

the software baked into them, and update all of the streamed applications. The Administrators will rarely need to access the local environment to manage the user base, because the university's current AD structure will be utilized to migrate in existing group policy.

Help Desk Technicians

Help Desk Technicians will manage the user layer of the virtual environment. The tasks

that they will be charged with include troubleshooting errors that the users are

experiencing and maintaining users' accounts, such as resetting passwords as

necessary. Help Desk Technicians will also need AD access in order to create new Active Directory Objects (ADOs) as needed to suit the needs of the users. Help Desk

Technicians will also be in charge of submitting tickets to the Administrators when a

change to the golden images of the virtual desktop pools is needed such as in the case of

critical updates or patches for the OS or baked in software.

General Users

The General Users will be primarily concerned with connectivity and performance of the

virtual infrastructure. The Users will need to have access to a highly available

environment that is able to be used from outside of the university’s campus and network.

Users will also need certain software for their classes and majors respectively and new

ADO’s as well as virtualized software will need to be created by the Administrators in

order to accommodate their needs.


User Stories

As an off campus CECH IT student I need to be able to gain access to proprietary

software that I need to do my assignments.

Classification: Local User

As a commuting DAAP student I need to be able to access expensive design

software from my home to finish projects on time.

Classification: Distance User

As a UC college student I need to be able to access Microsoft Office products to

edit my homework assignments on the go, especially around finals when all

the lab computers are taken.

Classification: Mobile User

As a UCIT system administrator I need to be able to access administrative

software that will let me make changes to back end servers 24/7.

Classification: Administrator

As a UCIT desk side support member I need 24/7 access to the student computing

environment to assist students with technical problems.

Classification: Help desk Technician

As a commuter student with 2 part time jobs I need to be able to do my

assignments from anywhere at any time and I need IT lab software to do it.

Classification: Mobile/Distance User

As the head of the UCIT Datacenter Incident Response Team I am on call 24/7 in

case an incident arises and I need server administration access all the time.

Classification: Distance Administrator


As a freshman College of Business student I need to be able to gain access to MS

PowerPoint to open my professor's lectures so I can read them on the way into

class.

Classification: Local User

As a Discrete Mathematics professor I need to be ready at any time to review my

digital grade book and edit the grades of my students, should any of them find me

on my lunch break.

Classification: Mobile User

Figure 3


2.4 Technical Elements

2.4.0 View Infrastructure Overview

ESXi Hypervisor

ESXi is vSphere's hypervisor. A hypervisor is a virtual machine monitor

that creates, runs, and manages virtual machines. ESXi is a bare metal hypervisor

in that the hypervisor is not installed upon a host OS but rather onto the host

server directly. Because no resources are spent running and managing a typical OS (patching, security hardening, user access control, and virus protection), the maximum amount of host resources can be dedicated to virtualization. ESXi has a very small install footprint of 144 MB and is free with

the ability to purchase support ("vSphere hypervisor requirements," 2012).

vCenter Server

VirtualCenter is VMware's virtual management platform. The platform

allows for easy management of multiple ESXi hypervisors through complete

visibility of the environment, automation with VMware Orchestrator, and

scalability through linked mode vCenter Servers. It allows for real time

management of multiple vCenter Servers through a single vSphere client and

enables cluster wide features.

The cluster wide features vCenter offers are dynamic and allow for ease of

management and scalability throughout the environment. One such feature is vSphere vMotion, which migrates running virtual machines (VMs) to a different host and storage location while maintaining their active states. Distributed Resource Scheduler is another vCenter feature that allows for fine-grained


resource management through load balancing multiple hosts' resources within the

cluster and powering off unneeded servers during down time. VMware High

Availability (HA) and Fault Tolerance both require vCenter and guarantee

availability of business critical applications. VMware HA protects against host,

OS, or network failures and will automatically attempt to restart VMs on a new host in the event of an outage. Fault Tolerance will fail over VMs to a shadow VM running in tandem in order to prevent any downtime. vCenter is vital to any

VMware View implementation and should have special backup measures

ensuring its longevity.

View Composer

View Composer is the component which allows for a linked-clone model.

The Composer service both creates and deploys linked-clone desktops and

interacts with Active Directory. Linked-clones are virtual desktops which have

their OS streamed to them through a replica disk. This read-only

disk image is loaded into server memory and the linked-clones keep a delta disk

storing changes against the original replica disk. The replica disk has allowed

VMware to save up to 90% of storage space on the backend (VMware View Architecture Planning, 2012) through streaming the OS to all linked-clones in a

pool.

Connection Server

The View Connection Server acts as a connection broker between the

View environment and the View clients using the PCoIP protocol. The clients

themselves can be run from a multitude of platforms such as iOS, Android,


Windows, and Mac OS. The Connection Server can fulfill different roles while

providing this service such as acting as a Standard Connection Server, a Security

Server, a Replica Server, or a Transfer Server.

A Standard Connection Server negotiates the initial connection between

the View client and the View Agent service residing on the linked-clone desktop.

Once this connection is made the PCoIP stream is channeled directly to the client

while the Connection Server keeps track of the client connections and desktop

pool resources. The Security Server handles View client connections from outside

of the network. It sits in a DMZ off of the domain and channels connection flow

through the border firewall into the main Standard Connection Server. Replica

Servers are utilized for load balancing Standard Connection Servers by copying

the Standard Connection Server's configuration and synchronizing with it. This

allows for easy scaling as more Standard Connection Servers are needed. Transfer

Servers are utilized to check in and check out VMs so that they can be used in

Local Mode. This facilitates the ability to take VM’s off of the network, use them

while saving changes, and then put them back into the environment. The

Connection Server is vital to VMware View and allows for seamless connection

into the environment from a View client regardless of the platform the client is

installed on.

VMware ThinApp

VMware ThinApp is an application virtualization packager. It allows

applications to be virtualized and streamed to end point devices through

packaging them into a single executable stored on centralized storage. The


executables run completely separately from the client OS accessing them. This

allows for reduced application conflicts, application cross-platform portability,

effective and fast application deployment, and increased application availability.

By streaming applications, install footprints are eliminated from golden disk images, saving storage space, and applications can be centrally managed.

Virtualized applications can be put into templates within View Administrator

allowing for ease of patching and deployment for entire desktop pools. This

creates a single place where the application can be updated and redeployed within

minutes during maintenance. The storage savings and management benefits that

virtualizing applications offers make it attractive to implement where suitable.

View Persona Management

View Persona Management (VPM) is a VMware service similar to

Windows Roaming Profiles (WRP). VPM replicates AD profile data such as user

Windows settings, custom application settings, folder redirection, and registry

settings. VPM surpasses the efficiency and use of WRP in scope of what data is

replicated and carried with the user as well as the performance of the service

(VMware View Administration, 2012). VPM can be fine-tuned to suit individual

users and can perform tasks in the background in order to reduce log in time.

Folder redirection and other settings not needed at startup are downloaded silently after logon, but if data such as the user's My Documents folder becomes needed, it is downloaded immediately. All of VPM's features can be adjusted with

a provided GPO making the service extremely easy to manage. This allows for a

custom user experience regardless of which virtual desktop the user logs into as


their personal data and settings will travel with them wherever they go regardless

of the end connection device.

2.4.1 Preparing for a View Deployment

There are several steps that must be taken before a full VMware View

deployment can take place. This holds especially true for adding a View

infrastructure to a pre-existing environment where many different configurations

vital to business productivity have been implemented. VMware View requires

several Microsoft services and supports only certain Microsoft server OSes, all of which must be installed and configured before a VMware View installation

can begin. One of these major requirements is the use of Active Directory (AD)

for authentication as well as user and computer management. AD must be

properly prepared with user accounts, groups, and Organizational Units (OU) in

order to seamlessly integrate VMware View into an existing AD structure. DNS

and DHCP must be running and correctly configured within the environment in

order for View services to work properly. A Microsoft SQL Server instance is

specifically required for View Composer and other databases are required for

different View components. Once these requirements are in place the installation

process of the main View environment can begin.

Active Directory Configuration

User Accounts and Groups

Before beginning the initial installations, dedicated users and groups should be created within AD in order to properly manage the environment. The use of pre-

existing AD accounts is supported but is not recommended for several reasons. If


a user account which is also used for other purposes becomes locked due to

incorrect passwords or any other reason, the services using those accounts within

VMware View will stop working and hundreds of users can potentially lose

access. These accounts are also used to administrate the View environment and

will have to be passed out to employees. If these accounts aren't properly secured

through passwords and permissions they could be used for nefarious purposes.

It is recommended that a unique group be made for vCenter

Administration, Linked Clone Pool Administration, and View Composer Server

Administration. A group created for View Administrators is a good way to keep

track of the accounts and set up general permissions. Each account should be

configured with only the required privileges and the View Composer account is

the only one needing AD permissions. The other accounts can be further restricted

by creating limited roles within vCenter and then delegating those roles to their

respective AD accounts.

Organizational Units

In order to limit VMware View's impact upon AD, creating dedicated OUs is recommended best practice. This helps not only to keep the AD structure organized but also helps when configuring permissions for the View Composer account. New Group Policy Objects (GPOs) are easily added to new OUs without the risk of impacting other AD objects, and two GPOs will be added at minimum in order to optimize the PCoIP protocol and to set up View Persona Management. OUs should be created for different linked-clone pools as seen fit, as well as for


View client kiosks set up to only run the client for connecting into the desktop

pools.

DNS/DHCP Configuration

Within the View environment the View Connection Server is in charge of

establishing connections to each virtual desktop’s VM Agent service. This is done

to monitor virtual desktops, to redirect the virtual desktop stream to end user

devices for connectivity, and to manage these virtual desktops. The Connection

server uses Fully Qualified Domain Names (FQDN) to communicate to each

linked clone desktop which requires DNS/DHCP with correctly configured

forward and reverse lookup zones. View Composer also needs the DHCP service

to keep track of linked clones, assign unique IP addressing, assign DNS, assign

gateway addresses, and to effectively limit the scope of linked clone desktop

pools. View Composer is only able to provision clones to the size of the residing

subnet, and when the address space runs out the server will stop provisioning. A local server running these two roles is a requirement of VMware View.
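As an illustration of that subnet ceiling, the snippet below estimates the maximum pool size for a hypothetical scope; the subnet and the number of reserved addresses are made-up examples:

```python
# Estimate how many linked clones a pool's subnet can address before
# View Composer stops provisioning for lack of DHCP leases.
import ipaddress

subnet = ipaddress.ip_network("10.10.8.0/22")  # hypothetical pool subnet
reserved = 10  # gateway, DNS/DHCP server, connection servers, etc.

# Subtract the network and broadcast addresses plus static reservations.
usable = subnet.num_addresses - 2 - reserved
print(f"{subnet} can address at most {usable} desktops")  # -> 1012
```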

Microsoft SQL Server Configuration

View Composer requires a MSSQL instance in order to install and does

not support any other database. Other databases are also required in the

infrastructure such as for the vCenter Server, the Update Manager service, and the

Connection Server event service. Database replication and backup procedures

ensure fault tolerance and are highly recommended. This will guarantee fast

turnover should a View component become corrupt or have a fatal OS crash. A

dedicated database server is a VMware best practice to avoid slow connection


times and improve VMware View performance (VMware View Installation, 2012).

2.4.2 Hardware Specifications

ESXi Hosts

Minimum Hardware Requirements

The POC environment is designed to support 50 concurrent users. To meet this target, three different host servers were employed to run ESXi. The minimum hardware requirements to run ESXi are the following (vSphere Hypervisor Requirements for Free Virtualization, 2012):

Processor: 64-bit x86 with at least two cores, support for the LAHF and SAHF CPU instructions, and NX/XD enabled in the BIOS

RAM: 8GB

Hardware virtualization support: Intel VT-x or AMD RVI

Network: one or more Gigabit or 10Gb Ethernet controllers

SCSI adapter: one or more of the following: Adaptec Ultra-160, Adaptec Ultra-320, LSI Logic Fusion-MPT, or NCR/Symbios SCSI

Proof of Concept ESXi Host Server Hardware

VMware best practices state that between eight and ten virtual desktops can be run per core. Windows 7 virtualization best practices also state that a minimum of 1GB of RAM and a dual-core 1GHz CPU are needed to offer optimal performance and ease of use (Server and Storage Sizing Guide for Windows 7, 2012). Based upon these minimum hardware requirements and the need to support 50 users in the POC, the following servers were employed; a capacity sketch follows the list.

HP Proliant DL360 G7

2 x Intel(R) Xeon (R) E5506 @ 2.13GHz

20GB DDR3 RAM

4 x 1Gb Ethernet Adapters

HP Proliant DL360 G6

Intel(R) Xeon(R) E5530 @ 2.4 GHz

20GB DDR3 RAM

4 x 1Gb Ethernet Adapters

Custom Built AMD Server

2 x AMD Opteron(tm) 4234 @ 3.1 GHz

32GB DDR3 RAM

2 x 1Gb Ethernet Adapters
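As a rough cross-check of the 50-user target, the sketch below applies the two rules of thumb above (8-10 desktops per core, 1GB of RAM per desktop) to the three hosts. The core counts are taken from the processors' public specifications and should be treated as illustrative; hypervisor and View-component overhead are ignored:

```python
# Back-of-the-envelope capacity check for the three POC hosts.
DESKTOPS_PER_CORE = (8, 10)   # VMware rule of thumb
RAM_PER_DESKTOP_GB = 1        # Windows 7 minimum

hosts = [
    {"name": "DL360 G7 (2x E5506)",   "cores": 8,  "ram_gb": 20},
    {"name": "DL360 G6 (1x E5530)",   "cores": 4,  "ram_gb": 20},
    {"name": "AMD (2x Opteron 4234)", "cores": 12, "ram_gb": 32},
]

cores = sum(h["cores"] for h in hosts)
ram = sum(h["ram_gb"] for h in hosts)

cpu_low, cpu_high = (cores * n for n in DESKTOPS_PER_CORE)
ram_cap = ram // RAM_PER_DESKTOP_GB  # RAM is the binding constraint here

print(f"CPU allows {cpu_low}-{cpu_high} desktops; RAM caps out at {ram_cap}")
# -> CPU allows 192-240 desktops; RAM caps out at 72 (comfortably above 50)
```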

Storage Hardware Requirements

Storage Area Network


The POC called for centralized storage to take advantage of advanced

failover features such as vSphere High Availability (HA) and vSphere's vMotion

technology. To this end the project utilized a high-powered HP engineering machine running FreeNAS. FreeNAS itself uses 2GB of RAM and automatically disables disk caching when less than 6GB of effective RAM is available, so a minimum of 8GB was required to keep disk caching enabled (FreeNAS Hardware Recommendations, 2012). The hardware specifications of this

machine are listed below.

HP xw8600

1 x Intel(R) Xeon(R) 5200 @ 2.6GHz

1 x Intel(R) Xeon(R) 5400 @ 2.2GHz

8GB DDR2 ECC RAM

Integrated 8 channel SAS 3.0GB/s RAID Controller

2 x 250GB 7k SAS HD

1 x 500GB 7k SAS HD

File Server Requirements

Application streaming using VMware ThinApp and View Persona

Management both call for a Network File System (NFS) server as storage

endpoints. Windows Server 2008 R2 was chosen as the OS and was installed on a

Dell Xeno Media Server with the File Services Role. A file share for the

application repository, the users' roaming profiles, and the users' redirected

folders were all created on this system and properly locked down with file

permissions.


Dell Xeno Media Server

1 x CPU @ 2.3 GHz

3GB DDR3 RAM

1 x 1TB 7k SAS HD

1 x 1Gb Ethernet Adapter

2.4.3 Software Requirements

VMware View Components

VMware View components are heavily reliant upon Windows software.

Each of the View components such as the vCenter Server and the Connection

Server can only be installed onto the Windows Server 2008 or 2003 64-bit OS.

The View infrastructure also requires MSSQL Server and Windows AD. The

nature of vSphere encourages the use of server virtualization which is why 64-bit

virtualization is a requirement of the ESXi hosts’ processors. Other required

settings for the View components such as hard disk space, memory, networking,

and hardware can all be carved out of the hosts' resources and adjusted within the virtual servers' settings as needed. This is only possible when the View components are virtualized rather than run on physical servers, and it allows for flexibility when meeting the hardware requirements of software. A physical

implementation of any View component is a tremendous waste of resources and

reduces the effectiveness of HA features within vSphere.

2.4.4 Data Store Requirements

VMware View has required storage parameters for the View Composer

database, vSphere HA features, application streaming, as well as a user profile


repository for VPM. There are three data store endpoints within vSphere to meet

these requirements. The data store end points are a MSSQL instance for

databases, NFS storage for both application streaming and user profiles, and

Logical Unit Number (LUN) storage for the main Virtual Machine File System

(VMFS) data store.

Microsoft SQL Server

Because View Composer requires this database, it proved efficient to utilize the same instance for the other View component databases. For each View component database a Database Administrator (DBA) user as well as a database user

account were created in order to ensure database security. These accounts can be

given to the server administrators of the View components and will be used in the

Data Source Name (DSN) configuration necessary for the View components to

access the databases. Setting up database backups is highly recommended to

ensure fault tolerance for the environment. Backing up the View component

databases allows for fast and hassle free data recovery without having to prepare a

new Windows Server 2008 R2 installation and configure the OS to reinstall the

View component from scratch. Backups should be done before any new changes

or upgrades are attempted for easy rollback should any errors occur.
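For illustration, a quick connectivity check against such an instance might look like the sketch below, here using the pyodbc Python library (not part of the project's tooling). The server name, database, and credentials are placeholders; the actual View components read their connection settings from a Windows ODBC DSN:

```python
# Hypothetical smoke test: confirm the View Composer database is reachable
# with the dedicated database user before configuring the component's DSN.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};"
    "SERVER=sql01.vdi.example.edu;"   # placeholder MSSQL host
    "DATABASE=ViewComposer;"          # placeholder component database
    "UID=composer_db_user;PWD=********"
)
row = conn.cursor().execute("SELECT @@VERSION").fetchone()
print(row[0])  # instance is reachable and the credentials work
conn.close()
```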

Network File Share Storage

In order to implement application streaming, user profiles, and redirected folders, a Network File System (NFS) server was needed. Using the Xeno Media Server, a Windows Server 2008 R2 installation with the File Services role was implemented. Three

different shares were created on this server to hold different kinds of information


within the environment. A user profile repository share was recreated and was

locked down with security settings. This was done to ensure that only the users

themselves could access the files stored in their VMware Persona located within

the user repository. The same settings for redirected folders, where only the users

themselves had access, were setup as another share. For the application repository

another share was created on the NFS server. This share was created for ThinApp

streaming applications, which require an NFS share. The ThinApp Repository

was secured through file permissions to only be accessible by users that would

need to update, change, and administrate the streamed applications. Read and

execute permissions were granted to the OU's created for the desktop pools to

permit application streaming.

Network Attached Storage

In order for the HA features of vSphere to be utilized centralized storage

must be configured and accessible by all ESXi hosts in the cluster. iSCSI was

chosen for the POC because of the protocol's performance gains over NFS and

to better emulate a corporate deployment. Using the HP xw8600 engineering

workstation FreeNAS was installed onto a thumb drive for maximum resource

efficiency. Utilizing all three hard disks, a ZFS self-healing RAID volume was created and secured through mutual CHAP authentication. The RAID volume was delivered over two 1Gb Ethernet adapters configured with link aggregation in order to

improve performance and to enable multi-pathing from the ESXi cluster. A new

VMFS data store was setup using the ZFS RAID volume and Round Robin


Pathing was configured on the iSCSI software initiator of each host to increase

performance.

2.4.5 Networking Configuration

Networking Backbone

Several elements should be configured for a successful VMware View network implementation. The networking equipment used was entirely

gigabit speed. VMware recommends this practice (Performance best practices, 2012) at a

minimum for the ESXi host NIC cards as well as enabling jumbo frames and creating

VLANs for different types of traffic. Needing two gigabit eight-port switches, the project employed two donated, unmanaged switches to significantly reduce cost. These unmanaged switches made it impossible to segment traffic into VLANs or to enable jumbo frames, but due to the small user base of 50 users

in the POC a low load was put on the network infrastructure. The environment's network

backbone never experienced bandwidth saturation in spite of the inability to implement

the recommended networking optimizations. A gigabit Linksys DD-WRT based router was employed for connectivity. Shown below is a sample

network diagram of the environment with two hosts as seen in Figure 4.


[Figure 4 components: View Connection Server, vCenter Server, View Security Server, View Composer Server, Database Server, NAS Server, AD/DHCP/DNS Server]

Figure 4

2.4.6 VMware View Optimizations

There are several optimizations that can be done within the View

environment that greatly increase performance. These optimizations are well

documented and recommended in both the VMware Installation Guide (Vmware

view installation, 2012) as well as the VMware Administration Guide for

VMware View (Vmware view administration, 2012). These optimizations should

be implemented in all View environments in order to save on hardware costs as

well as to increase resource efficiency.

Networking Optimizations

There are several performance enhancements that can be adjusted within

the View environment to increase network-related performance. Most notable is configuring the PCoIP protocol to not build to lossless. This setting adjusts the


protocol's ability to progressively build to a fully lossless image when sending PCoIP to client end devices; diminishing the lossless ratio gives immediate performance gains. It is easily adjusted from a GPO within AD, and several other settings can be fine-tuned within the protocol to make it more efficient. Enabling

Jumbo Frames on the networking infrastructure and creating VLANs to segment

different network traffic are best practices that will make better use of networking

resources and allow for greater network throughput (Performance best practices,

2012). From a centralized storage perspective enabling multi-pathing to the SAN

as well as enabling Round Robin Pathing between the different network paths

provides both redundancy and increased storage performance for the environment.

These optimizations are all recommended and should be implemented where

possible.

Environment Optimizations

The View environment can be optimized for better performance through

implementing three best practices. Faster response times are achieved by bundling

the View Composer component with vCenter Server. This is best practice for each

View Composer component and creates resource efficiency. Dedicating an entire

server to running the View Composer service is a waste of resources and makes

View Composer more susceptible to complications when provisioning linked-

clones. Linked-clone performance can be drastically improved by optimizing the

golden image. Several Windows services and settings necessary in a traditional

workstation with local resources can be turned off or uninstalled from the golden

image and will allow the linked-clones to perform faster and waste fewer resources


on unneeded Windows features. VMware provides a script to do this all at once, but it is recommended that the script be reviewed and lines commented out as needed depending on the implementation of the View environment. Database

replication is an understated necessity for VMware View. There are several

databases which if corrupt would cripple or bring down VMware View such as

the vCenter Server database. Taking steps to back up these vital pieces of infrastructure ensures maximum uptime of the environment and easy disaster

recovery.

2.5 Budget

Project Budget

There were zero costs incurred from software and servers, thanks to the

collaboration and cooperation of iPVersant and Murali Rathinasamy. Their donation of

two HP Proliant servers as well as a custom built 8x32 AMD server made this project

possible. All other costs typically associated with VMware’s and Microsoft’s proprietary

software were avoided through the use of extended free trials gained by contacting VMware and Microsoft. The only costs incurred during this project were a 1TB 7K hard drive and a Cisco gigabit switch.

Cost Benefit Analysis

The cost analysis shown in Figure 5 revealed that there is a significant cost

difference between the two biggest competitors in the virtualization space: VMware and

Citrix. These numbers are based on advertised rates with no discounts or negotiated

pricing. Using the catalog rates VMware is the clear winner. This is primarily due to the

cost of the Citrix Xen Desktop licenses being significantly more expensive than VMware


View. From a scaling perspective VMware fits into the business world better with their

licensing models. By default VMware does concurrent licensing, which means you pay to

have a user connected from any device. Citrix, on the other hand, has a per-user, per-device licensing model, which means you must have a license for every user connection

device.

Figure 5

However, Citrix does have a concurrent user licensing offering, but it is not their

standard offering and carries a higher price tag. The pricing model shown above in Figure

5 uses Dell and Cisco hardware to scale our environment to 8,000 users. The Dell server

gear shown above is a PowerEdge R720 with two Intel E5-2650 processors. This server was chosen because each of its eight-core processors can support approximately 80 desktops, depending on each desktop's configuration. The storage

recommended for scaling up to 8,000 users is a Dell Compellent with 51TB of raw

storage. Dell Compellent uses a tiered storage model for speed and resource efficiency.

Data that is accessed frequently will stay in the first tier which is made up of Solid State

Drives (SSD) and is then moved to the second tier which utilizes 15K SAS, and finally to

the third tier which is comprised of 7K SAS when the data becomes accessed less

frequently. The networking equipment is standard Cisco gear. All VMware and Citrix

licenses include 24x7 support.

Figure 6

Figure 6 above plots cumulative cost over ten years for the Traditional Model versus VDI + Thin Client, showing the return on investment for a build out of this scale. There is an upfront investment of $1.48M; however, as shown in Figure 6, Return on

Investment (ROI) is reached between the fifth and sixth year mark. After achieving ROI a

cost savings of $250,000 a year is attained through the reduced cost of the technology



refresh cycle. The average technology refresh cycle is 3 years for standard workstations.

The average technology refresh cycle using the IGEL thin clients recommended in this

model is 5 years. Longer technology refresh cycles lengthen the amount of time between

purchasing new equipment which in turn saves money.
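Using only the figures quoted above, the break-even point can be sanity-checked with a couple of lines of arithmetic; this simple model ignores inflation and assumes the $250,000 annual savings is constant:

```python
# Break-even check on the numbers above: $1.48M upfront, ~$250K/year saved.
UPFRONT = 1_480_000
ANNUAL_SAVINGS = 250_000

breakeven_years = UPFRONT / ANNUAL_SAVINGS
print(f"Break-even after {breakeven_years:.1f} years")  # -> 5.9
# Between the fifth and sixth year, consistent with Figure 6.
```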

2.6 Project Timeline

Timeline and Project planning

Working on a project that spans over six months and encompasses multiple

phases and deliverables can seem like an impossibly large task. If the project is not

broken down into smaller deliverable tasks, the scope and scale of the project can quickly get out of hand. The best way to mitigate these risks is to lay out a set of deliverables and create a project timeline in order to identify key milestones and set appropriate deadlines. The creation of and adherence to a Gantt chart was an invaluable progress tool in the completion of this project. Without the help of time and task management tools such as Microsoft Project and collaboration tools such as Confluence from the Atlassian suite, project progress and agile changes made by members of the group could have been miscommunicated. Miscommunication can often result in a loss of time or duplication of effort by multiple members of the VDI@UC team.

Microsoft Project


Figure 7

Atlassian Suite Project tool

Figure 8

These tools, along with weekly project meetings, were the primary way the VDI@UC team kept track of project progress and deadlines, despite all of the members having completely

different work and class schedules. This high level of communication ensured all project

deliverables were completed on time.

2.7 Proof of Design


vCenter Server

Figure 9

Shown above in Figure 9 is a vSphere client logged into the vCenter Server. This

view shows the data stores, the ESXi host cluster entitled VDI @ UC, as well as all of the

virtual machines in the environment. From this view an administrator can log into any

VM for fast access, monitor the environment, run reports on the environment, and make

configuration changes.
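For reference, the same inventory visible in this vSphere client view can also be pulled programmatically from the vSphere API. The sketch below uses the pyVmomi Python bindings, which were not part of this project; the vCenter hostname and credentials are placeholders:

```python
# List every VM registered in vCenter, similar to the inventory in Figure 9.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.vdi.example.edu",
                  user="administrator", pwd="********", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

view.Destroy()
Disconnect(si)
```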


Figure 10

Figure 10 shows the network configuration for one of the ESXi hosts in the

cluster. It has two virtual standard switches with various port groups configured to take

care of tasks such as iSCSI, vMotion, and VM networking. From this configuration pane

an administrator can configure all of the networking components of the host including

physical NIC to virtual switch assignment, creating or changing ports, creating new

virtual networking components, and moving VM's between switches. This is only the

Networking configuration pane, and through the different panes an administrator can

configure every aspect of the ESXi host from a single vSphere client logged into

VirtualCenter. Administrators can set up alarms with automatic actions attached to them, schedule or run performance/utilization reports, view environment maps from different VM perspectives, schedule tasks, set user permissions, and perform all other administrative tasks.


View Administrator

Figure 11

Figure 11 shows a screen shot of View Administrator. This console controls all of

the View components within the environment. VirtualCenter performs the same role with

the ESXi hosts and VMs, while View Administrator works with only the View

components. The current view in Figure 11 shows the two pools that are running and

provisioned in our environment. From this administrative console the environment's

connection servers, ThinApps, linked-clones, vCenter Servers, View Composers with

specific AD accounts, and users can all be configured and managed. The console works

in real time and allows for the same features of vCenter within the scope of the View

components.


Figure 12

Figure 12 shows the details of the IT_Lab pool. There are six available desktops

in this pool, as well as one that is provisioned but not yet powered on and consuming resources. The linked-clone pool settings are all configurable

and allow for many quality of life controls such as when to recompose a clone back to the

base image, when to log users off, which protocols are allowed, and controlling Adobe Flash. This configuration pane also handles assigning users to pools, live reporting, task scheduling, policy settings, and inventory control.


Figure 13

Figure 13 is a view of the ThinApps that are being applied to the linked-clone

pools. The available apps in the list on the center of the screen are scanned in from an

application repository on an NFS server, which was previously set up within the View Configuration tab to the left. Below the center of the screen is the templates section. In

this section you can see the IT Lab Template. This template is applied to our IT_Lab Pool

shown in the previous figure, and the ThinApps listed in the template are available to that

desktop pool.


Figure 14

Figure 14 shows the ThinApp repository, which is located externally on the NFS file server. Only NFS shares are able to be added to the repository lists, and certain share

permissions must be enacted. An OU called View Connection Servers should be created

within AD and given Read and Execute permissions in order to successfully deploy

ThinApps to the desktop pools.


View Client Connection Flow

Figure 15

Figure 15 shows the connection flow of a View client into a desktop. The upper

left of Figure 15 shows entering the connection server address. If the connection flow

needed to go through the Security Server, the only change the user would have to make when connecting is to enter the Security Server's address instead of the Connection Server's. The lower right of Figure 15 shows the user entering his credentials. The top right of Figure

15 shows the two available pools for vdiuser02. The bottom right of Figure 15 shows the


VMware View client connecting to a desktop. Finally the last screen of Figure 15 shows

vdiuser02 connecting into a linked-clone desktop.

FreeNAS

Figure 16

Figure 16 shows the storage view of the FreeNAS administration console.

FreeNAS is serving up a LUN to the environment that holds the main VMFS data store.

The LUN is comprised of a three disk ZFS self healing RAID that is secured through

iSCSI with mutual CHAP authentication. This console allows for reporting,

configuration, and monitoring of different storage services such as NFS and iSCSI.


Figure 17

Figure 17 is another FreeNAS screen capture showing some of the reporting capabilities within the console. Alarms and reports can work together for ease of administration. An alert could be set up for capacity or load on certain storage

endpoints being served by FreeNAS that in turn will trigger more detailed reporting on

some of the possible problems. FreeNAS was chosen for its open source nature, the

management web GUI, and for its storage capabilities with ZFS RAID volumes.

Active Directory

A requirement of both VMware View and Windows, AD is essential and had to be

configured properly. Figure 18 shown below is a screen capture of the AD structure. The

two OU's created for the environment named View Desktops, Kiosk Desktops, and

Page 47: UC VDI Proposal

Bland, Dillon, Griffin 46

Linked Clone Desktops. This was done in order to better organize the environment as

well as limit the impacts of adding VMware View onto an existing AD.

Figure 18
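The OUs can be created by hand in Active Directory Users and Computers, but the same structure is easy to script for repeatable rebuilds. A minimal sketch using the ldap3 Python library; the domain controller, credentials, and base DN are placeholders, not values from the POC:

# Sketch: create the View OUs programmatically with the ldap3 library.
# The domain controller address, credentials, and base DN below are all
# placeholders for illustration.
from ldap3 import Server, Connection, NTLM

BASE_DN = "DC=example,DC=edu"   # placeholder domain
OUS = ["View Desktops", "Kiosk Desktops", "Linked Clone Desktops"]

server = Server("dc01.example.edu")
conn = Connection(server, user="EXAMPLE\\admin", password="password",
                  authentication=NTLM, auto_bind=True)

for ou in OUS:
    dn = f"OU={ou},{BASE_DN}"
    if conn.add(dn, "organizationalUnit"):
        print("created", dn)
    else:
        print("skipped", dn, conn.result.get("description"))

conn.unbind()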

The View environment was optimized in part through the use of GPOs within AD. These GPOs optimized PCoIP for streaming and tuned VPM for the POC's specific needs, as shown in Figure 19 below.

Figure 19
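For a single test desktop, the registry equivalent of those GPO settings can be applied directly. The sketch below reflects our understanding of the PCoIP ADM template's policy path and value names; both should be verified against the template itself before use.

# Sketch: registry equivalent of the PCoIP GPO settings for one test
# desktop. The policy path and value names follow the PCoIP ADM template
# as we understand it -- verify them against the template before relying
# on this (Windows-only; run elevated).
import winreg

PCOIP_KEY = r"SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults"

settings = {
    "pcoip.maximum_initial_image_quality": 70,  # favor bandwidth over quality
    "pcoip.maximum_frame_rate": 24,             # cap FPS for streaming video
}

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PCOIP_KEY) as key:
    for name, value in settings.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        print(f"set {name} = {value}")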


IGEL Thin Client Management Console

The IGEL Thin Client Management console was installed on a virtual machine running in the environment. It allows for centralized management of all thin clients on the network and automatically crawls the network to register them.

Figure 20

Thin clients can be configured in "kiosk mode," meaning the terminal presents the user with only a login screen. The Connection Server address and even the desktop pool can be configured within the management console and automatically pushed to the client on launch, effectively locking the kiosk down to specific resources. This is a great way to lock down lab stations and reduce administrative overhead. The kiosk configuration pane is shown below in Figure 21.


Figure 21

2.8 Future Recommendations

Moving this project forward should start with a small pilot implementation that handles off-campus access for distance students and then scales up as the user base increases. Buying into the infrastructure has a high upfront cost that this scaled approach mitigates: it is inexpensive to expand the environment's user base as the system becomes more popular, and it gives UCIT the time necessary to fine-tune VMware View. Because of the high level of integration within VMware View, meaningful performance data can only be attained by piloting the environment with live users; many different pieces of equipment are involved in delivering virtual desktops to end-client devices, from the ESXi host servers to the network infrastructure the traffic traverses.

A pilot satisfies many immediate goals of the university, such as providing off-campus access, ease of management, BYOD, and increased availability. It also provides UCIT with time to train employees and users while planning a full implementation for the entire student body.
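A rough capacity model makes the scaled approach concrete: pick per-desktop resource assumptions, then replace them with real numbers as pilot data arrives. In the sketch below every figure is an illustrative assumption, not a measurement from the POC.

# Sketch: back-of-envelope host sizing for a scaled-up rollout. Every
# figure below is an illustrative assumption to be replaced with data
# measured during the pilot.
HOST_RAM_GB = 192          # RAM per ESXi host
HOST_CORES = 16            # physical cores per host
RAM_PER_DESKTOP_GB = 2     # Windows 7 linked clone, before page sharing
DESKTOPS_PER_CORE = 8      # typical knowledge-worker consolidation guess
CONCURRENCY = 0.6          # fraction of users logged in at peak

def hosts_needed(users: int) -> int:
    peak = int(users * CONCURRENCY)
    per_host = min(HOST_RAM_GB // RAM_PER_DESKTOP_GB,
                   HOST_CORES * DESKTOPS_PER_CORE)
    # round up, then add one host of headroom for maintenance/failover
    return -(-peak // per_host) + 1

for users in (100, 500, 2000):
    print(f"{users:>5} users -> {hosts_needed(users)} hosts")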

3. Conclusion

3.1 Lessons Learned

Several lessons were learned from the VDI@UC project, starting with the value of building out a solution correctly the first time. When the test lab environment was first built, it was thrown together just to get it working. This was great at first because it let us experiment with the different features of VMware much earlier than we otherwise would have; however, it became a hindrance later in the project. Because the environment was assembled so quickly, best practices were not followed, which led to a variety of storage and networking errors. The team decided to rebuild the environment, applying the lessons we had learned and following best practices. Once we did, combined with the hands-on experience we had gained, the IT Test Lab pools functioned better than we thought they could.

Multiple technical and project management skills were developed throughout the project. The technical skills included virtualization, networking, storage, and a more thorough understanding of Microsoft AD. Virtualization was a huge part of this project, from working with ESXi 5.1 to the different VMware View components. Networking and storage were also integral: we had to configure not only the physical components but also the virtual components within ESXi. In class we are taught extensively about Microsoft AD, but we had never before had the opportunity to apply that knowledge to a project of this scale.

The project management (PM) skills learned throughout this project included managing a multi-phased project, managing multiple priorities, and preventing scope creep. This was a very large project that spanned the majority of the school year; when you first receive a project like this it is overwhelming, and it is difficult to know where to begin. Our team quickly came together and established the goals we wanted out of the project, and those goals became the project plan we measured ourselves against. The second PM skill we improved was managing competing priorities: all of us held jobs while working on this project, so we had to make sure we performed well at work, at school, and on our senior design project. The last lesson was the need to prevent scope creep. At the beginning of the project we had a specific set of goals that grew as the project went on, which hurt our performance by shifting focus away from the main deliverables toward items that should have fallen into a nice-to-have category. Should we take on a project of this size again, we would recognize scope creep as it happens and be able to stop it quickly.

3.2 Final Synopsis

This project has shown us that VMware View is the most full-featured VDI offering on the market today. In doing this project we learned that VMware has better pricing and a more complete suite of management tools. Although VMware leads in the VDI space, it is not the de facto solution in every aspect of virtualization; we found that Citrix XenApp is actually the better solution for streaming applications. However, this being a VDI recommendation POC, we fully recommend the VMware View solution as a replacement for the traditional desktop model in place at UC today.

