Vblock Infra

  • The Cisco UCS fabric interconnects support two different modes, which affect switching and data flow through each fabric interconnect:

    End Host Mode (also referred to as End Host Virtualization, or EHV): EHV is the default configuration for the UCS fabric interconnects and is considered a best practice for Vblock. EHV mode makes each fabric interconnect appear to upstream Cisco switches as if it were a host with many network interfaces. From a data flow perspective, the end result of EHV is that MAC learning and Layer 2 forwarding are performed only for devices (blades) connected to ports designated as server ports. A dynamic pinning algorithm pins each blade VIC to an uplink port at the interconnect, determining the data path for that VIC on that fabric (a sketch of this pinning behavior appears below). To upstream switches, each uplink port on the fabric interconnect presents itself as a NIC on a host, and no MAC learning or forwarding is performed for uplinks.

    Switch Mode: In switch mode (the non-default mode for a Vblock), each fabric interconnect appears to upstream switches as a Layer 2 switch, and Layer 2 forwarding is performed on behalf of both uplink and server ports at the fabric interconnect. The downside of switch mode is that MAC learning is turned on for both uplink and server ports, causing the fabric interconnect to build a MAC table for upstream data center devices as well as Vblock devices (the MAC table supports 32,000 entries, 13,800 of them usable, so although this raises the possibility that the table could be overrun, it is an unlikely scenario). Since switch mode advertises the fabric interconnect as a Layer 2 switch, the interconnect becomes available to participate in spanning tree, which could make the Vblock subject to STP loop management and port blocking.
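
    The dynamic pinning step in EHV is easiest to see in code. The sketch below is a minimal, hypothetical model (the real algorithm is internal to UCS and also handles re-pinning on uplink failure); it simply distributes blade VICs across the available uplinks round-robin, which captures the idea that each VIC gets exactly one uplink path per fabric.

      # Hypothetical model of EHV dynamic pinning; the real UCS algorithm
      # is internal and also re-pins VICs when an uplink fails.
      from itertools import cycle

      def pin_vics(vics, uplinks):
          """Pin each blade VIC to one uplink port on a fabric interconnect."""
          uplink_cycle = cycle(uplinks)
          return {vic: next(uplink_cycle) for vic in vics}

      vics = [f"blade{b}/vic0" for b in range(1, 9)]     # eight blades, one VIC each
      uplinks = ["Eth1/31", "Eth1/32"]                   # uplink ports on this fabric
      for vic, uplink in pin_vics(vics, uplinks).items():
          print(f"{vic} -> {uplink}")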

  • Let us consider the example of deploying eight identically configured blade servers. Each blade server should have two vNICs and two vHBAs, its internal disks should be mirrored, it should boot from SAN, and its local disks should be scrubbed if the blade is not associated with a service profile. The blades should all communicate over the same VLAN and VSAN. It is possible to create eight individual Service Profiles, ensuring that the policies, VLAN IDs, and VSAN IDs are set identically in each profile, but this would be time consuming and would require great care. Service Profile Templates greatly simplify this task and enable rapid deployment of servers: the requirements are captured once in the template, and from it eight identical profiles can be generated and applied to a pre-defined pool of servers.

    Initial service profile template: If a service profile is created from an initial template, then it inherits all the properties of the service profile template. If, however, changes are made to the service profile template, then the service profile must be updated manually because it is not connected to the template.

    Updating service profile template: If a service profile is created from an updating template, then it inherits all the properties of the service profile template. If changes are made to the service profile template, then these changes are automatically made to the service profile.
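
    The difference between the two template types can be modeled in a few lines of Python. This is an illustrative sketch, not the UCS Manager object model: an updating template keeps references to the profiles it spawned and pushes attribute changes to them, while an initial template does not.

      # Illustrative model of initial vs. updating templates (not the UCSM API).
      class ServiceProfileTemplate:
          def __init__(self, updating=False, **attrs):
              self.updating = updating
              self.attrs = dict(attrs)
              self.bound = []                  # profiles tied to an updating template

          def create_profile(self, name):
              profile = {"name": name, **self.attrs}
              if self.updating:
                  self.bound.append(profile)   # stays connected to the template
              return profile

          def set(self, key, value):
              self.attrs[key] = value
              for profile in self.bound:       # only updating templates propagate
                  profile[key] = value

      tmpl = ServiceProfileTemplate(updating=True, vlan=100, vsan=20, boot="san")
      profiles = [tmpl.create_profile(f"esx-{i}") for i in range(1, 9)]
      tmpl.set("vlan", 200)                    # propagates to all eight profiles
      print(profiles[0]["vlan"])               # -> 200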

  • Service Profile Templates are created using UCS Manager. The template allows one to specify the various attributes of the server discussed earlier. Among other things, it is used to specify the boot policy and local disk policy, assign the WWNN, create vHBAs, assign WWPNs, and assign MAC addresses to the vNICs. The corresponding pools that were created earlier are specified in the template. When a Service Profile is created from a Service Profile Template, the UUID and WWNN for the server, and the WWPNs and MAC addresses for the vHBAs and vNICs, are assigned from the respective pools. After creating a Service Profile from the template, the new profile can be associated with a blade in the UCS chassis.
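
    A sketch of that instantiation step, under stated assumptions: each profile draws one UUID and one WWNN, plus one WWPN per vHBA and one MAC per vNIC, from pools defined up front. The pool values below are made-up examples (00:25:b5 is the OUI commonly used for UCS MAC pools), not allocated ranges.

      # Hedged sketch: identities are popped from pools when a profile is
      # created from a template (values are illustrative only).
      from collections import deque

      pools = {
          "uuid": deque(f"1b4e28ba-0000-0000-0000-00000000000{i}" for i in range(8)),
          "wwnn": deque(f"20:00:00:25:b5:00:00:{i:02x}" for i in range(8)),
          "wwpn": deque(f"20:00:00:25:b5:01:00:{i:02x}" for i in range(16)),
          "mac":  deque(f"00:25:b5:00:00:{i:02x}" for i in range(16)),
      }

      def instantiate(name, n_vhbas=2, n_vnics=2):
          return {
              "name":  name,
              "uuid":  pools["uuid"].popleft(),
              "wwnn":  pools["wwnn"].popleft(),
              "vhbas": [pools["wwpn"].popleft() for _ in range(n_vhbas)],
              "vnics": [pools["mac"].popleft() for _ in range(n_vnics)],
          }

      profiles = [instantiate(f"esx-{i}") for i in range(1, 9)]
      print(profiles[0])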

  • The LAN tab contains seven filterable views:

    All

    LAN Cloud

    Appliances

    Internal LAN

    Policies

    Pools

    Traffic Monitoring Sessions

  • The Audit Logs give detailed insight into operations within the UCS domain, with hyperlinks to the affected objects in the infrastructure.

  • Within the Vblock RBAC schema, privileges are cumulative, with the Admin and AAA roles overriding all others. The Admin and AAA roles are global to the UCS, but other roles can be scoped to specific objects and locales within the hierarchy.
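
    As a minimal sketch of the cumulative behavior (the role names other than admin and aaa are hypothetical examples, not the exact UCS role set): a user's effective privileges are the union of the privileges of all assigned roles, with admin and aaa overriding everything.

      # Illustrative model of cumulative RBAC privileges.
      ROLE_PRIVS = {
          "read-only":      {"read"},
          "server-profile": {"read", "ls-server"},
          "network":        {"read", "ext-lan-config"},
          "admin":          {"*"},   # overrides all other roles
          "aaa":            {"*"},
      }

      def effective_privileges(roles):
          privs = set()
          for role in roles:
              privs |= ROLE_PRIVS.get(role, set())
          return {"*"} if "*" in privs else privs

      print(effective_privileges(["server-profile", "network"]))
      # e.g. {'read', 'ls-server', 'ext-lan-config'}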

  • The UCS can coexist with external security mechanisms, validating users against external authentication products and schemas.

  • Pools provide the ability to allocate server attributes in a UCS domain while enabling the centralized management of shared system resources.

    Management IP pools are collections of internal IP addresses that can be used to communicate with a server's integrated Keyboard Video Mouse (KVM), also known as the Baseboard Management Controller (BMC), Serial over IP, or Intelligent Platform Management Interface (IPMI). When you create a management IP address pool, each server is assigned an IP address that allows you to connect to it via the on-board KVM.

    UUID suffix pools are defined so that the UUID of a server in a profile can move with the profile between servers, without tying the profile to specific hardware. This provides great flexibility in deploying profiles, because a profile is not bound to individual hardware; this is known as stateless computing.

    MAC address pools are defined so that MAC addresses can be associated with specific vNICs. By selecting a unique block of MAC addresses, you can designate a range of MAC addresses to be associated with vNICs unique to your LAN. Note: MAC address pools must be unique within a Layer 2 domain. If multiple UCS fabric interconnects (that is, separate Vblock Infrastructure Packages) are connected to the same aggregation layer, then the MAC address pools must be unique within each UCS domain; otherwise, MAC address conflicts will occur. Introducing UIM (discussed later) will help minimize this possibility.
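
    The uniqueness rule can be checked mechanically. The sketch below (with made-up blocks) expands two MAC pool blocks from separate UCS domains that share an aggregation layer and verifies that they do not overlap.

      # Sketch of the Layer 2 uniqueness check for MAC pool blocks
      # (ranges are illustrative, not allocated).
      def block(first_mac, count):
          base = int(first_mac.replace(":", ""), 16)
          return {format(addr, "012x") for addr in range(base, base + count)}

      domain_a = block("00:25:b5:0a:00:00", 256)   # Vblock A's MAC pool block
      domain_b = block("00:25:b5:0b:00:00", 256)   # Vblock B's MAC pool block

      if domain_a & domain_b:
          print("conflict: blocks overlap within the L2 domain")
      else:
          print("pools are unique within the L2 domain")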

  • Worldwide Node Name (WWNN) pools provide a block of virtualized WWNNs from which a WWNN is assigned to a server when a service profile is created.

    Worldwide Port Name Pools (WWPN): When a profile is being built, the number of virtual host bus adapters (vHBAs) can be specified. Each vHBA needs to have a unique virtual WWPN assigned to it. In most cases your WWPN pool should equal the number of blades multiplied by two, because each blade has two virtual HBAs present. Multiple WWPN pools can be created on a per-application basis to minimize SAN zoning requirements.

    Server pools: In the UCS environment, servers can be organized into server pools that can be used to associate servers with a profile. This can be especially useful if your servers have different physical attributes (processor, memory, and internal disk). Note: Servers can belong to multiple server pools, and a pool can include servers from any chassis in the system.
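
    The WWPN sizing guideline above reduces to simple arithmetic; the chassis and blade counts here are example values.

      # One WWPN per vHBA, two vHBAs per blade.
      chassis = 4
      blades_per_chassis = 8
      vhbas_per_blade = 2

      wwpns_needed = chassis * blades_per_chassis * vhbas_per_blade
      print(wwpns_needed)   # 4 x 8 x 2 = 64 WWPNs in the pool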

  • Policies are used to simplify management of configuration aspects such as where to boot from or which server to select (for example, based on the number of CPUs). After you have defined your pools and created VLANs and VSANs, you next need to define your policies. In the UCS environment, many policies have already been defined using default values; however, there are a few policies that need to be defined by the user.
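
    As an example of the kind of policy-driven selection described here, the sketch below models a server pool qualification that picks blades by CPU count and memory. It is a hypothetical illustration patterned after UCS server pool policy qualifications, not their actual schema.

      # Hypothetical sketch of qualifying servers into a pool by attributes.
      servers = [
          {"id": "1/1", "cpus": 2, "memory_gb": 96},
          {"id": "1/2", "cpus": 4, "memory_gb": 256},
          {"id": "2/1", "cpus": 4, "memory_gb": 128},
      ]

      def qualify(policy, inventory):
          return [s["id"] for s in inventory
                  if s["cpus"] >= policy["min_cpus"]
                  and s["memory_gb"] >= policy["min_memory_gb"]]

      print(qualify({"min_cpus": 4, "min_memory_gb": 128}, servers))
      # -> ['1/2', '2/1']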

  • Direct access to SAN devices through Raw Device Mapping (RDM)

    Provides virtual machines direct and exclusive access to a LUN

    May offer better performance in some scenarios, and allows the guest operating system and/or applications direct control of the device

    Adds complexity and is less flexible

    Minimal advantage over VMFS

  • Virtual Machines and Virtual Disks exist as files within a file system that can reside either on an NFS export or on one or more VMFS LUNs. In either case the file system is typically shared among all members of a cluster, which gives the flexibility to move virtual machines between ESX servers. Boot devices are typically for the exclusive use of a single ESX server and contain the ESX hypervisor, or virtual operating system.

  • RAID 1/0

    Offers the best all-around performance of the three supported RAID types.

    Offers very good protection and can sustain double drive failures, provided they are not in the same mirror set.

    Economy is the lowest of the three RAID types, since usable capacity is only 50% of raw capacity.

    RAID 5

    Offers the best mix of performance, protection, and economy.

    Has a higher write performance penalty than RAID 1/0, since two reads and two writes are required to perform a single write; however, for la
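
    The capacity and write-penalty trade-offs above reduce to a quick calculation; the disk count and size are example values.

      # RAID 1/0: 50% of raw capacity usable, write penalty 2 (one write per mirror).
      # RAID 5: one disk's worth of parity per group, write penalty 4
      # (two reads + two writes per host write, as noted above).
      disks, size_gb = 8, 600

      raid10_usable = disks * size_gb * 0.50
      raid5_usable = (disks - 1) * size_gb

      print(f"RAID 1/0 usable: {raid10_usable:.0f} GB, write penalty 2")
      print(f"RAID 5   usable: {raid5_usable:.0f} GB, write penalty 4")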