SYN 219 Getting Up Close and Personal With MCS and PVS

Page 1: SYN 219  Getting Up Close and Personal With MCS and PVS

1

Page 2: SYN 219  Getting Up Close and Personal With MCS and PVS

Who am I, who's on my team, what are my responsibilities, talk about reference architectures and so on.

2

Page 3: SYN 219  Getting Up Close and Personal With MCS and PVS

Sports photo courtesy of Alfred Cop.

3

Page 4: SYN 219  Getting Up Close and Personal With MCS and PVS

4

Page 5: SYN 219  Getting Up Close and Personal With MCS and PVS

Our starting point for today is the standard model Citrix uses to describe a virtual desktop.

In this model, several layers are involved in handling the provisioning of the virtual desktops.

The Desktop Layer describes the actual golden image, containing or running the apps the user is going to use.

A best practice is to make this image as lean and mean as you can, before rolling it out, to increase scalability and user density on the system.

The Control and Access layer contains the management tools we use to actually perform the provisioning operations. Studio is the main interface from which we control machine creation and the assignment of users to these desktops, in combination with other consoles such as the PVS management console, if we choose to use that as our main provisioning tool.

The hypervisor in this model can be pretty much anything we would like. The new kid on the block is the Nutanix Acropolis Hypervisor, or AHV.

Finally at the bottom we find the compute and storage layer, which can be designed and built in many ways, but not every combination of hardware and storage is suitable for high scale virtual desktop environments.

5

Page 6: SYN 219  Getting Up Close and Personal With MCS and PVS

A little history tour.

Automated provisioning has not always been a part of the portfolio. A long, long time ago, physical deployments of XenApp ruled the world, leveraging the IMA architecture.

The acquisition of Ardence in 2006 brought a tool to the Citrix world that solved a big problem for XenApp customers. No longer did they need to manually or semi-automatically deploy physical installations of Windows servers with scripted installs of XenApp; servers could be run from a single image. Later, when VDI was starting to become a big new thing, Citrix redesigned the IMA architecture to allow for larger-scale environments, resulting in what is now called the FlexCast Management Architecture. This architecture allowed different types of desktop and app delivery to be mixed and managed from a single console. A big part of FMA was the introduction of a new provisioning technique called Machine Creation Services.

The first iteration of this was focused on VDI; later, Citrix brought XenApp over to the FMA architecture as well.

6

Page 7: SYN 219  Getting Up Close and Personal With MCS and PVS

Finally, we now also bring the Nutanix Acropolis Hypervisor into the set of FMA-supported hypervisors. This means that AHV is now fully supported not only to run VMs, but also to leverage MCS and PVS as provisioning techniques.

7

Page 8: SYN 219  Getting Up Close and Personal With MCS and PVS

If we compare PVS and MCS at a high level, there are five comparisons we can make.

First of all, the platform: PVS works for both virtual and physical workloads, while MCS only supports hypervisor-based deployments.

With PVS, delivery of the master image goes over the network (streamed), while MCS leverages storage as its main delivery mechanism.

The source format used with PVS is a VHDX file of the master VM, which means a conversion has to take place first. MCS leverages hypervisor-based snapshots.

The storage layer used to store the writes is local-disk focused in the case of PVS, while MCS leverages a datastore approach. These datastores can, however, be locally attached.

Finally, the infrastructure needed: PVS is made up of a separate architecture of one or more PVS servers, and might need a separate network for streaming. MCS is fully integrated into the XenDesktop Controllers.

8

Page 9: SYN 219  Getting Up Close and Personal With MCS and PVS

Depending on your needs, you can choose between MCS and PVS.

MCS offers three modes, but they are all for virtual desktops only.

Pooled: a set of non-persistent VMs, all sharing the same master image, shared across multiple users.

Pooled with PvD: a set of non-persistent VMs shared across multiple users, but with a Personal vDisk attached.

Dedicated: a set of persistent VMs spun off from a master image.

9

Page 10: SYN 219  Getting Up Close and Personal With MCS and PVS

Pooled VMs can be random (each time you get a new, pristine VM, but it can be a different one than you had previously) or static, which means you log into the same named VM each time. It will still be cleaned on restart.

PvD-based desktops only have one mode, which is assigned on first use; from that point on you will log into the same desktop each time, because it also personalises the PvD, which is fixed to the VM.

Dedicated VMs can be pre-assigned or assigned on first use, but will in essence require “normal” PC management in terms of updates, patches and software distribution, since each will be a persistent VM after its deployment. Once you update the master image, only new desktops spun off the master will run the newer version.

PVS allows you to stream desktops to either physical or virtual machines. The most used mode of PVS is standard image mode, which means non-persistent. Private image mode is mostly used to update the master.
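These catalog types are also visible directly through the XenDesktop PowerShell SDK that Studio sits on top of. A minimal, read-only sketch (the catalog name is hypothetical; run on a Delivery Controller):

  # Load the Citrix snap-ins.
  Add-PSSnapin Citrix.*

  # AllocationType (Random vs Static) and PersistUserChanges (Discard vs persistent)
  # together express the pooled / pooled-with-PvD / dedicated variations described above.
  Get-BrokerCatalog |
      Select-Object Name, ProvisioningType, AllocationType, PersistUserChanges, SessionSupport

  # Inspect a single catalog in detail (hypothetical name).
  Get-BrokerCatalog -Name "Win10-Pooled" | Format-List *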

10

Page 11: SYN 219  Getting Up Close and Personal With MCS and PVS

Image from Pixabay.com

https://pixabay.com/en/vehicle-chrome-technology-193213/

11

Page 12: SYN 219  Getting Up Close and Personal With MCS and PVS

Machine Creation Services mechanics:

MCS is fully integrated into Citrix Studio and does not require a separate installer or configuration. It’s there on each XenDesktop Controller.

MCS itself is a VM creation framework that can be taught to understand different hypervisors. No code is actually placed on the hypervisor.

The method used to provision VMs and link disks to them differs per hypervisor, but MCS fully controls the creation of the VMs and fires off the commands to the hypervisor's management platform or APIs.

12

Page 13: SYN 219  Getting Up Close and Personal With MCS and PVS

(Image obtained and edited from http://knowyourmeme.com/memes/philosoraptor)

So you could say the XenDesktop controllers actually speak multiple languages.

First of all, it understands how to speak to VMware ESX, and it will do so by talking to vCenter.

Hyper-V is contacted through SCVMM, and XenServer is addressed through XAPI.

Finally, Nutanix AHV is accessed through the Nutanix AHV plugin, which is itself accessed through the Provisioning SDK.
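If you want to see which of these "languages" your site is speaking, the hosting connections are exposed through the same PowerShell SDK; a small read-only sketch (run on a Controller, output depends entirely on your environment):

  Add-PSSnapin Citrix.*

  # Hosting connections known to the broker.
  Get-BrokerHypervisorConnection | Select-Object Name, State

  # The same connections and hosting units, exposed as a PowerShell drive.
  Get-ChildItem XDHyp:\Connections
  Get-ChildItem XDHyp:\HostingUnits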

13

Page 14: SYN 219  Getting Up Close and Personal With MCS and PVS

MCS itself runs as 2 services on every XenDesktop Controller.

First we have the Citrix Machine Creation Service, which interfaces with the chosen hypervisor.

The Citrix AD identity service talks with Active Directory and manages the domain accounts and domain memberships of the VMs.

On the virtual desktops the Virtual Desktop Agent also contains a service, which is the Machine Identity Service.

This service manages the uniqueness of each VM.
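A quick way to confirm these services are present on a Controller (display names are indicative and can vary slightly per version):

  # List the Citrix services on a XenDesktop Controller, including the
  # Machine Creation and AD Identity services mentioned above.
  Get-Service -DisplayName "Citrix*" | Sort-Object DisplayName | Select-Object Status, DisplayName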

14

Page 15: SYN 219  Getting Up Close and Personal With MCS and PVS

MCS will enable central image management for the admin.

In a pooled static scenario it works as follows:

First of all you create your Golden Master VM, install your apps and the Citrix XenDesktop VDA.

With this Golden Master, you create a Machine Catalog.

When you run through the wizard in Studio, you will be asked which VM and which snapshot to use (it will create a snap if none is present).

This snapshot is flattened and copied to each configured datastore (in the host connection details).

When the image has been copied, a preparation procedure is started to create the identity disk. This preparation VM is then discarded and the main catalog VMs are cloned.

When cloned, each VM is attached to the master image (read-only), a diff disk (writable) and an ID disk (read-only). The diff disk can grow as writes happen, but the ID disk has a fixed 16 MB size.
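For reference, the same catalog-creation flow can be driven from the PowerShell SDK instead of the Studio wizard. The sketch below is indicative only: the catalog name, identity pool, hosting unit and snapshot path are hypothetical, and Studio normally wires these steps together for you.

  Add-PSSnapin Citrix.*

  $name = "Win10-PooledStatic"   # hypothetical catalog name

  # 1. Identity pool: holds the AD machine accounts for the catalog.
  New-AcctIdentityPool -IdentityPoolName $name -NamingScheme "W10PS-###" -NamingSchemeType Numeric -Domain "corp.example.com" -OU "OU=VDI,DC=corp,DC=example,DC=com"

  # 2. Provisioning scheme: points at the master VM snapshot and the hosting unit.
  #    This is the step that flattens and copies the image to the configured storage.
  New-ProvScheme -ProvisioningSchemeName $name -HostingUnitName "ClusterStorage" -IdentityPoolName $name -CleanOnBoot -MasterImageVM "XDHyp:\HostingUnits\ClusterStorage\GoldenMaster.vm\PreCatalog.snapshot"

  # 3. Broker catalog: ties the provisioning scheme to a catalog Studio can use.
  New-BrokerCatalog -Name $name -ProvisioningType MCS -AllocationType Static -PersistUserChanges Discard -SessionSupport SingleSession -ProvisioningSchemeId (Get-ProvScheme -ProvisioningSchemeName $name).ProvisioningSchemeUid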

15

Page 16: SYN 219  Getting Up Close and Personal With MCS and PVS

16

Page 17: SYN 219  Getting Up Close and Personal With MCS and PVS

17

Page 18: SYN 219  Getting Up Close and Personal With MCS and PVS

18

Page 19: SYN 219  Getting Up Close and Personal With MCS and PVS

19

Page 20: SYN 219  Getting Up Close and Personal With MCS and PVS

If you want to do updates, all it takes to perform the update is to boot up the master, make the change and choose the “Update Catalog” function in Studio.

It then creates a snapshot and copies a new flattened image to the datastore(s). Depending on the options you choose, you can have all VMs pointed to the new image right away, or do it in a rolling fashion. You can also roll back later if you want.

When you roll out a new image, the VMs are pointed to the new version on restart, and the diffs are cleared.
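The same update can be scripted; a hedged sketch using the Provisioning SDK (scheme name and snapshot path are hypothetical):

  Add-PSSnapin Citrix.*

  # Point the provisioning scheme at a new snapshot of the updated master.
  # Rolling back is simply publishing the previous snapshot again.
  Publish-ProvMasterVMImage -ProvisioningSchemeName "Win10-PooledStatic" -MasterImageVM "XDHyp:\HostingUnits\ClusterStorage\GoldenMaster.vm\PostUpdate.snapshot"

  # Existing VMs pick up the new image on their next power cycle, for example via a
  # controlled reboot cycle (Start-BrokerRebootCycle) or the normal idle-pool restarts.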

20

Page 21: SYN 219  Getting Up Close and Personal With MCS and PVS

21

Page 22: SYN 219  Getting Up Close and Personal With MCS and PVS

22

Page 23: SYN 219  Getting Up Close and Personal With MCS and PVS

23

Page 24: SYN 219  Getting Up Close and Personal With MCS and PVS

Now let's take a look at how Citrix Studio connects with all the different hypervisors.

Citrix Studio normally resides on a XenDesktop Controller and interfaces with the different services running on that host.

These services take care of the brokering, manage the hosting connection and do the MCS related tasks we mentioned earlier.

Studio is not the only way to interface with the core of XenDesktop; PowerShell cmdlets are also available directly.

When using VMware, Citrix Studio talks to vCenter. vCenter therefore needs to be made highly available if you want to make sure you can always manage your environment or have Studio manage the VMs, maintain the idle pools, etc.

Should vCenter go down, you will not lose the current sessions, and any VMs that have already registered themselves with the broker are still available for login.

vCenter in turn does the actual power on/power off of the VMs and the tasks related to provisioning.

To get the most out of your storage and use the benefits of thin provisioning, Citrix recommends using NFS datastores connected to the hypervisor hosts.

VMware will use its own VMDK disk format.
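Because those PowerShell cmdlets sit directly on top of the broker and hosting services, you can also inspect and drive the same power operations that are normally forwarded to vCenter; a small sketch (the machine name is hypothetical):

  Add-PSSnapin Citrix.*

  # Which hypervisor connection manages a machine, and what its power state is.
  Get-BrokerMachine -MachineName "CORP\W10PS-001" | Select-Object MachineName, PowerState, HypervisorConnectionName

  # Queue a power action; the broker hands it to vCenter (or SCVMM / XAPI / AHV).
  New-BrokerHostingPowerAction -MachineName "CORP\W10PS-001" -Action TurnOn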

24

Page 25: SYN 219  Getting Up Close and Personal With MCS and PVS

25

Page 26: SYN 219  Getting Up Close and Personal With MCS and PVS

26

Page 27: SYN 219  Getting Up Close and Personal With MCS and PVS

27

Page 28: SYN 219  Getting Up Close and Personal With MCS and PVS

28

Page 29: SYN 219  Getting Up Close and Personal With MCS and PVS

29

Page 30: SYN 219  Getting Up Close and Personal With MCS and PVS

30

Page 31: SYN 219  Getting Up Close and Personal With MCS and PVS

If you take a look at the NFS datastore, you will find that each provisioned VM is in its own folder. Actually, VMware has the tidiest folder structure when you compare it to other hypervisors, and it's easy to see which files do what.

The master vDisk is placed in a separate folder in the root of the datastore(s).

Directly linked to a pooled static VM are two vmdk files.

The first one is the identity disk, which will not exceed 16 MB of space. In practice it's about half of that.

The second disk is a delta.vmdk, but this is actually a redirector disk. More on that in a few slides.

31

Page 32: SYN 219  Getting Up Close and Personal With MCS and PVS

If you open the identity disk vmdk you can see it’s readable text.

It contains many variables that have been set by Studio to help make the VM unique, even though it was spun off a master disk.

These variables are picked up by the VDA.

Amongst others, you will find a desktop ID there, but also the Catalog ID the VM is a part of and the licensing server this VDA ties to.

32

Page 33: SYN 219  Getting Up Close and Personal With MCS and PVS

If you open the delta.vmdk that's directly linked to the VM (i.e. configured in the VMX file), you will see it's plain text as well.

In there you can see it is linked to a parent vDisk, which is actually the master image you created the catalog with.

You can see the name of the disk is the same as the catalog.

Secondly, the redirector points to another delta.vmdk, which is actually the write cache.

33

Page 34: SYN 219  Getting Up Close and Personal With MCS and PVS

When you boot the VM, you will see more files being created, two of which are REDO logs.

The delta REDO file is the actual write cache, and REDO files are cleared on restart of the VM.

Now, these files are not the same as snapshots, since they only save the disk state and not anything else that might have changed in the config of the VM, or its memory and CPU state.

34

Page 35: SYN 219  Getting Up Close and Personal With MCS and PVS

Should you copy a 1 GB file to the virtual desktop, you can actually see the write cache grow.

The way Studio sets up the write cache (using a redirector and REDO files) is very smart, because it does not have to do anything to clean out the write cache; it just has to issue a restart of the VM (not a reboot).

35

Page 36: SYN 219  Getting Up Close and Personal With MCS and PVS

Hyper-V's architecture is similar for the Studio part of course, but Studio needs to interface with System Center Virtual Machine Manager to be able to communicate with the Hyper-V hosts.

Just pointing Studio to SCVMM is not enough; you also need to install the SCVMM admin console on each XenDesktop Controller.

The same thing applies as with VMware: you need to make sure SCVMM is highly available so Studio can manage the VMs.

Hyper-V hosts prefer SMB datastores and they will use the VHDX format to provision disks.

36

Page 37: SYN 219  Getting Up Close and Personal With MCS and PVS

If you use XenServer, Studio can talk directly to the XenServer pool master and needs no management layer in between, as the management layer for XenServer is more or less embedded.

Still, you have to make sure the pool master is always reachable.

When using XenServer, Studio again prefers NFS, on which it will utilise the VHD file format.

37

Page 38: SYN 219  Getting Up Close and Personal With MCS and PVS

If you look at the file structure for XenServer, you will see that vDisks are created with GUIDs as their names. It's not clear from just the name or folder to what VM a disk belongs.

To get that insight you'd have to go the command-line route and use “xe vm-list” and friends to work out which is which. From the XenCenter GUI you can see which disks are attached.

You might also notice the “Preparation” VM that is booted up during catalog creation, during which it does a mini sysprep and generates the identity disk info to be used for the rest of the VMs. The base disk itself is copied from the master VM into each configured datastore.

38

Page 39: SYN 219  Getting Up Close and Personal With MCS and PVS

Once the VMs have been created, you will see these pairs of ID and write cache disks (a VHD chain is being built).

39

Page 40: SYN 219  Getting Up Close and Personal With MCS and PVS

The last hypervisor we look at (and most certainly not the least!) is the Nutanix Acropolis Hypervisor.

AHV has been around for a while, but we've only really started calling it AHV since June last year, when we released a new version of it at Nutanix .Next in Miami (which will be held in Vegas in two weeks' time, by the way).

We’re proud to announce full GUI based support for AHV in XenDesktop, which includes the use of Machine Creation Services.

How does it work?

First of all, you need XenDesktop 7.9, which has the latest version of the Provisioning SDK installed on it.

Together with the Nutanix AHV plugin you install on every controller, you can now have Studio talk natively to Acropolis.

Nutanix clusters are automatically highly available because of the distributed architecture, so you only need to point Studio to the Nutanix cluster IP address and you're done.

It works the same way from that point on: you create catalogs based on snapshots, which will then lead to provisioned VMs with ID disks and write caches.

40

Page 41: SYN 219  Getting Up Close and Personal With MCS and PVS

Under the hood we do a couple of things differently than with the previous three hypervisors.

We use a full-clone approach, with copy-on-write functionality.

Each VM is linked to the master image but will show up as a full vDisk; the 16 MB ID disk is also attached.

These vDisks are thin provisioned, and while they show a usage of 10 GB in the above example, the actual data usage will be much lower, since we deduplicate and compress data on the storage level.

After every restart of the VM, the writecache disk is reset.

41

Page 42: SYN 219  Getting Up Close and Personal With MCS and PVS

Here you see the actual logical files when looking at the datastore through a WinSCP client, with the write cache disks at the top and the ID disks at the bottom.

42

Page 43: SYN 219  Getting Up Close and Personal With MCS and PVS

Image from Pixabay.com

https://pixabay.com/en/motor-engine-compartment-mustang-1182219/

So, let's dive into Provisioning Services.

43

Page 44: SYN 219  Getting Up Close and Personal With MCS and PVS

VIDEO

Everyone remember this nice little Ardence video? It sure made an impact, since we are still using this technology today, to literally deliver hundreds of thousands of desktops to end users.

PVS is a streaming service and operates mostly on the network level.

It uses the exact same streaming method regardless of hypervisor, and as such does not need to be adapted to or taught new hypervisors.

As long as the hypervisor supports PXE or boot-ISO methods of booting the VMs, you're good to go.

PVS actually intercepts the normal HDD boot process and redirects it over the network to a shared vDisk.

The PVS servers you need actually only control the streaming of the vDisks; Studio is still required to do the VM start/restart operations.

Pre-existing VMs can be used, or VMs can be added using a wizard within PVS that sets the boot order of the VMs.

Before you can use PVS, there is an image conversion process you need to perform.

44

Page 45: SYN 219  Getting Up Close and Personal With MCS and PVS

45

Page 46: SYN 219  Getting Up Close and Personal With MCS and PVS

Let’s take a look at PVS architecture.

PVS works with a separate infrastructure in addition to Studio, and also needs to be sized correctly for the number of desktops it's going to provision, and of course made highly available.

In practice you will always have more than one PVS server, and the number of desktops you can stream per PVS server ranges from hundreds to a couple of thousand, depending on the PVS server's specs.

A best practice is to use separate network segments or VLANs to separate the (mostly read-only) streaming traffic of the vDisks.

To allow a VM to read from a vDisk, it needs to be part of a device collection on the PVS server, and the actual tie to the vDisk is done on MAC addresses.

This allows for quick swapping of vDisks, by just dragging and dropping.

A vDisk can be in three modes, of which Standard and Private mode are the most used. Standard mode enables a one-to-many scenario, enables the write cache and makes the vDisk read-only.

Private mode is mostly used to do updates to the vDisk, as it makes the vDisk writable but only allows one VM to boot it.

There is a hybrid mode in PVS, but it is hardly ever used.
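The PVS side has its own PowerShell snap-in, so the sites, device collections, devices and vDisk assignments described above can be inspected or scripted as well. A read-only sketch, assuming the PVS 7.x Citrix.PVS.SnapIn on a PVS server (site, collection and store names are hypothetical, and cmdlet details can differ per PVS version):

  # Load the PVS snap-in (path depends on the PVS console installation).
  Import-Module "C:\Program Files\Citrix\Provisioning Services Console\Citrix.PVS.SnapIn.dll"

  # Connect to the farm through one of the PVS servers (SOAP service, default port 54321).
  Set-PvsConnection -Server "pvs01.corp.example.com" -Port 54321

  # Device collections in a site, and the target devices (tied by MAC address) they contain.
  Get-PvsCollection -SiteName "Site1"
  Get-PvsDevice -SiteName "Site1" -CollectionName "Win10-Targets" | Select-Object Name, DeviceMac

  # vDisks (disk locators) known to a store; assigning one to devices can be scripted too
  # (Add-PvsDiskLocatorToDevice), mirroring the drag-and-drop in the console.
  Get-PvsDiskLocator -SiteName "Site1" -StoreName "Store1"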

46

Page 47: SYN 219  Getting Up Close and Personal With MCS and PVS

These vDisks are streamed to the VMs, which can initiate the boot process either via PXE or by mounting a boot ISO (BDM). PXE requires some extra DHCP settings (options 66 and 67) to be set, and might pose a challenge in an environment where more services depend on PXE. BDM solves that problem, but needs more configuration on the VM side, plus administration of the boot ISO.

The writes in PVS go to the write cache, and this can be placed in different locations. Most often used is a local disk (an actual disk mounted to the VM, which can be configured to be placed on a local datastore).

The second option is to put the write cache in RAM, but this is a little tricky because RAM is not as abundantly available as disk, and when the RAM fills up, the VM will halt or even blue screen. It's the same thing that could happen when you run out of disk space.

The third option is to place the write cache back on the PVS server, but this is hardly ever done as it creates serious bottlenecks and other management issues.

A fourth, hybrid form (RAM cache with overflow to disk) has become more popular recently: it first uses RAM, and when that is full, it overflows to disk. While it might sound great as it lowers IOPS requirements at first, there are downsides, because when the RAM actually overflows, performance may go down. So sizing this correctly and keeping a tight watch on it is very important.
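For the PXE route, the two DHCP options mentioned above can be set with the standard Windows DHCP Server cmdlets; a minimal sketch (the scope and server name are placeholders, ARDBP32.BIN is the default PVS bootstrap file):

  # Option 66: boot server host name (the PVS server running the TFTP service).
  # Option 67: boot file name handed to the target devices.
  Set-DhcpServerv4OptionValue -ScopeId 10.0.10.0 -OptionId 66 -Value "pvs01.corp.example.com"
  Set-DhcpServerv4OptionValue -ScopeId 10.0.10.0 -OptionId 67 -Value "ARDBP32.BIN"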

47

Page 48: SYN 219  Getting Up Close and Personal With MCS and PVS

The vDisk creation process is a bit more time-consuming than with MCS, since you need to actually copy the entire disk contents to a VHD file using a wizard, after creating the master VM.

To be able to do this, you need to install the Target Device software on the VM as well as the VDA; it is not part of the VDA.

When you have done this, you can run the Imaging Wizard to create the vDisk. The Imaging Wizard also has some options to tune the VM by optimizing some settings, but it is not very extensive.

Keep using the best-practice guides and tools that are available in the community for that.

Once you have created the vDisk, you can create a device collection (either manually or using a wizard) and literally drag and drop a vDisk on top of it.

This makes version management of the solution very easy: switching back and forth between vDisks only requires a reboot of the devices.

48

Page 49: SYN 219  Getting Up Close and Personal With MCS and PVS

So how can we get the most out of these provisioning techniques.

49

Page 50: SYN 219  Getting Up Close and Personal With MCS and PVS

50

Page 51: SYN 219  Getting Up Close and Personal With MCS and PVS

The most important thing is to choose the right solution first.

We’ve created this little flow chart to help you.

51

Page 52: SYN 219  Getting Up Close and Personal With MCS and PVS

Image acquired from Imageflip.com

https://imgflip.com/memetemplate/The-Most-Interesting-Man-In-The-World

While PVS didn't really have any storage issues other than managing the local write caches, MCS did have issues when it was first released, since it was conceived in a time when SANs were still roaming the earth freely and undisputed.

The problem for MCS is that, because of this, it was not really a viable technology, since SANs run out of juice pretty quickly when a high number of VDI VMs are hitting them for IOPS.

This has become better over the years, but still, as the number of desktops increases, the overall performance for each desktop goes down, since the SAN has to divide the performance and capacity it has over more and more VMs.

So most of the time, when people actually used MCS, they could only really do so by utilizing local storage.

Now this brings a whole lot of extra management complexity with it.

52

Page 53: SYN 219  Getting Up Close and Personal With MCS and PVS

If you just use local storage, you first have to size it right: how big will your write caches become, and how many VMs do you intend to stick on a host? These disks also need to be in some form of RAID setup to minimize failure impact.

Now, when you configure XenDesktop to utilize local storage, this means you have to configure each individual host within XenDesktop.

And then, as soon as you create or update a catalog, the vDisk needs to be copied to all these hosts, extending image rollout times considerably, especially if you have a large server farm.

53

Page 54: SYN 219  Getting Up Close and Personal With MCS and PVS

A great way to solve this is to go the software-defined route and opt for a distributed filesystem.

While this still leverages local storage (preferably SSDs, so it can't bottleneck like a SAN would), it is not managed as local storage.

The hypervisor just sees a single datastore, which also means you only need to configure it once in XenDesktop.

The net benefit of this is that it also requires just a single copy to be made on image rollout.

There is, however, a problem with this if you use just a typical, run-of-the-mill distributed filesystem (one that has no techniques to truly localize data), since it will not be optimized for a multi-reader scenario.

What will happen is that when the golden master is created, this process writes the vDisk to the storage of the host that performs the copy. While the VMs local to that host will read the master vDisk locally, the other hosts in the cluster will access the vDisk over the network. This can become a bottleneck when enough VMs start to read from the master. Even though the vDisk is probably replicated to other hosts, this is only done to assure data availability in case of a host failure; it will not distribute the disk for load-balancing purposes. At most, each host might have a small cache to try to avoid some of the read traffic, but these caches are not optimized for multi-reader scenarios (they only work for one VM reading its own disk), or the caches are simply too small to house the vDisk in the first place.

54

Page 55: SYN 219  Getting Up Close and Personal With MCS and PVS

Nutanix solves this problem ahead of time by way of Shadow Cloning.

As soon as we detect that multiple VMs (two or more remote readers) pull data from the vDisk, we mark the main vDisk as immutable, which allows for distributed caching of the entire disk. This is done on reads, block by block. This way, each VM will automatically work with localized data.

This not only relieves the network for reads (writes are local anyway), but it also seriously improves performance.

This technology is enabled by default and requires no configuration whatsoever.

55

Page 56: SYN 219  Getting Up Close and Personal With MCS and PVS

To summarize the benefits of running MCS on distributed storage:

First of all: no more multiple image copies when rolling out. This seriously speeds up deployment.

Secondly: no need to maintain multiple datastores, making things simpler.

Third: no more IO hotspots, increasing performance.

56

Page 57: SYN 219  Getting Up Close and Personal With MCS and PVS

57

Page 58: SYN 219  Getting Up Close and Personal With MCS and PVS

Image from imgflip.com

Here’s a before and after.

It's obvious that it's much simpler to manage one datastore than each individual host with local datastores.

With the technology of today this is now finally viable.

No need to change the hosting connection properties when you want to shut down a host, for example.

58

Page 59: SYN 219  Getting Up Close and Personal With MCS and PVS

Another couple of benefits of putting MCS on distributed storage:

1. Your VMs stay movable (i.e. vMotion or Live Migration can be done), even with local storage. With Nutanix' data locality, the write cache data will be made local to the new host automatically.

2. Reduced boot times. Not that this is always important, since most idle pools will be made ready ahead of the login storms, but it will also lower login times and improve overall end-user performance.

3. Since we are no longer tied to a SAN infrastructure and don't require RAM caching techniques to increase local IOPS, we can reach much better scalability, since it will be linear.

59

Page 60: SYN 219  Getting Up Close and Personal With MCS and PVS

Shadow Clones offer distributed caching of vDisks or VM data that is in a multi-reader scenario.

Up to 50% performance improvement during VDI boot storms and other multi-reader scenarios.

60

Page 61: SYN 219  Getting Up Close and Personal With MCS and PVS

Does this mean there is no benefit for PVS when using distributed filesystems?

On the contrary, there are many!

But with PVS there was no issue with reading the master image, so that benefit (Shadow Clones) will not apply.

What’s left?

1. No need to manage the local storage required for the write cache and make sure it's highly available (RAID, etc.).

2. No need to worry about local disks filling up and crashing VMs. The write cache will just spill over and leverage other hosts' storage if needed.

3. No worries about local IOPS.

4. PVS-ed VMs with a write cache stay movable; the write cache will follow the VM to its new host thanks to data locality.

5. Simple configuration: just create the write cache disk of the template VM on the distributed datastore.

6. No need to use RAM caching technology to save on IOPS. This means more RAM is available to VMs, which means better scalability and higher VM density.

61

Page 62: SYN 219  Getting Up Close and Personal With MCS and PVS

62

Page 63: SYN 219  Getting Up Close and Personal With MCS and PVS

Before we end the session, we have one more thing.

We have told you in the last 35 minutes what is currently available for MCS and PVS.

But this world isn’t standing still. What’s coming?

So we have a special guest today.

Please give a warm welcome to the man who actually builds this awesome MCS and PVS technology: Jeff PinterParssons!

63

Page 64: SYN 219  Getting Up Close and Personal With MCS and PVS

64

Page 65: SYN 219  Getting Up Close and Personal With MCS and PVS

Not a product manager, so no commitments.

65

Page 66: SYN 219  Getting Up Close and Personal With MCS and PVS

66

Page 67: SYN 219  Getting Up Close and Personal With MCS and PVS

67

Page 68: SYN 219  Getting Up Close and Personal With MCS and PVS

68

Page 69: SYN 219  Getting Up Close and Personal With MCS and PVS

69

Page 70: SYN 219  Getting Up Close and Personal With MCS and PVS

70

Page 71: SYN 219  Getting Up Close and Personal With MCS and PVS

71

Page 72: SYN 219  Getting Up Close and Personal With MCS and PVS

72

Page 73: SYN 219  Getting Up Close and Personal With MCS and PVS

73

Page 74: SYN 219  Getting Up Close and Personal With MCS and PVS

74

Page 75: SYN 219  Getting Up Close and Personal With MCS and PVS

75