Copyright © 2013 IT Brand Pulse. All rights reserved. Document # INDUSTRY2013002, v4, October 2013
Where IT perceptions are reality
Most medium and large-sized IT organizations have deployed several generations of virtualized servers, and
they have become more comfortable with the performance and reliability of each deployment. As IT
organizations increased virtual machine (VM) density, they reached the limits of vSphere software, server
memory, CPU, and I/O.
A new VM engine is now available and this document describes how it can help IT organizations maximize
use of their servers running VMware® vSphere® 5.1 (henceforth referred to as vSphere 5.1).
Contents
Best Practices in vSphere 5.1
Increasing VM Density
A New 8 Cylinder VM Engine
QLogic I/O = Higher VM Density
A New Virtualization Chassis
Turbocharged Fuel Injection
Network I/O Planning
Accelerating App Performance
Faster Storage vMotion
Storage Protocol Flexibility
Low-Latency Connectivity
Doubled I/O Throughput
Higher Efficiency
Linear Scalability
I/O Dashboard and Control
Resources
Harnessing the Power
In a survey conducted by IT Brand Pulse, IT professionals said the
average number of VMs per server would almost double in the
next 24 months.
VMs Per Server
Best Practices in vSphere 5.1
No hardware resource is more important to overall performance
than memory. Plan to ensure each VM has the memory it needs,
but without wasting memory in the process.
Memory
Start with Planning
When planning a VMware installation, it is important to take into account the new capabilities of vSphere
5.1. VMware has added significantly to the scalability of vSphere 5.1. For data centers virtualizing Tier-1
applications, the significant scalability enhancement is the ability to have up to 1TB of memory and 64 virtual
CPU cores (vCPUs) per VM. As a result, almost all Tier-1 applications should perform well in a vSphere 5.1 environment.
However, these new capabilities bring new complexities, and with them the need to plan new data center
architectures. This not only includes planning the deployment for today’s needs, but also thoroughly
investigating evolution strategies for applications before bolting down racks and filling them with servers.
Planning which applications are going to run on your virtualized servers is the first step in understanding your
needs. From there, it is critical to define server integration points with existing resources (likely core
switching and storage resources), and how these will be affected by the evolution of existing resources. After
that, planning your approach to vMotion® and capacity growth over the lifetime of your new infrastructure
will help you scope internal I/O requirements
appropriately.
Finally, determining whether to utilize
converged networks or not, and what I/O
performance you need, will enable you to
intelligently discuss your I/O and networking
options with your SAN/LAN equipment
providers. These steps will help you ensure
success when virtualizing your Tier-1
applications.
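To make the scoping step concrete, here is a minimal Python sketch that rolls per-VM bandwidth estimates up to a per-host I/O requirement. The VM profiles, headroom factor, and line-rate figures are illustrative assumptions, not measured values or vendor guidance.

```python
# Hypothetical I/O scoping sketch: roll per-VM bandwidth estimates up to
# a per-host requirement, then compare against adapter line rates.
# All workload numbers are illustrative assumptions.

VM_PROFILES = {
    # name: (vm_count, storage_MBps_per_vm, network_MBps_per_vm)
    "erp_app":      (8, 120, 60),
    "sql_db":       (4, 300, 40),
    "web_frontend": (20, 20, 80),
}

HEADROOM = 1.25  # keep ~25% spare for vMotion/Storage vMotion bursts

def host_io_requirement(profiles):
    """Return (storage_MBps, network_MBps) needed per host."""
    storage = sum(n * sto for n, sto, _ in profiles.values())
    network = sum(n * net for n, _, net in profiles.values())
    return storage * HEADROOM, network * HEADROOM

storage_need, network_need = host_io_requirement(VM_PROFILES)
# A 16Gbps Gen 5 FC port moves roughly 1,600 MBps; a 10GbE port ~1,250 MBps.
print(f"Storage: {storage_need:.0f} MBps = {storage_need / 1600:.1f} x 16Gb FC ports")
print(f"Network: {network_need:.0f} MBps = {network_need / 1250:.1f} x 10GbE ports")
```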
To fully optimize virtualized data centers,
servers need maximum I/O capacity to support high input/output operation rates and high bandwidth
applications. Increased bandwidth is also needed for server virtualization, which aggregates I/O from multiple
virtual machines (VMs) to the host’s data path. The combination of vSphere 5.1, Intel E5-based servers, and QLogic adapters takes full advantage of the new features that are described in detail in this planning guide. Read on to discover how QLogic can increase your infrastructure ROI and overall competitiveness.
The Great VM Proliferation
With many server admins working on their 3rd and
4th generation of virtualized servers, the focus has
changed from interoperability and learning the
behavior of vSphere 5.1, to increasing VM density
(#VMs/Physical Host Server). With the availability of
servers based on Intel’s E5 processors (multi-core,
768GB of RAM, PCI Express® Gen3), a new, game-
changing compute platform was introduced. This
new platform allows for new levels of VM density
and for the first time Tier-1 applications that
previously required dedicated server hardware can
now run on virtual servers, achieving improved
performance, scalability and efficiency.
While vSphere 5.1 and multi-core processor based
servers are seeing significant deployments in many
enterprise datacenters, the I/O and network
infrastructure to support these new technologies
lags far behind. In a survey conducted by IT Brand
Pulse, IT professionals said the average number of
VMs per server would almost double in the next 24
months. Approximately 25 percent of IT
professionals surveyed also said what they need
most to increase the VM density is more I/O
bandwidth.
The purpose of this industry brief is to provide a
planning guide to help enterprises deploy Tier-1
applications with adequate bandwidth in a dense
vSphere 5.1 virtualization environment.
Increasing VM Density
Approximately 25 percent of IT professionals surveyed said
what they need most to increase the density of VMs per server
is more I/O bandwidth—QLogic’s core competence.
VM Density
[Survey charts (source: IT Brand Pulse): “The average number of VMs per server in my environment” and “What I need most to increase the density of VMs per physical server is more:”]
A New 8 Cylinder VM Engine
Two more cores, 8MB more cache, six more DIMMs of faster DDR3-
1600 memory increasing to 768GB, double the I/O bandwidth with
PCIe 3.0, and more Intel QuickPath links between processors.
Xeon E5
Xeon® E5 offers double the I/O bandwidth to 10GbE NICs, 10GbE FCoE Converged Network Adapters, and 16Gb Gen 5 Fibre Channel Host Bus Adapters.
The Intel Xeon E5 Platform
The introduction of the Intel® Xeon® E5 family of processors responds to the call for more virtual server
resources with two more cores, 8MB more cache, and six more DIMMs of faster DDR3-1600 memory,
increasing the total to eight cores, 768GB of RAM, and doubling the I/O bandwidth with PCIe® 3.0.
Intel’s new Xeon E5 promises a significant increase in server I/O by enabling full-bandwidth, four-port 10GbE
server adapters as well as dual-port Gen 5 Fibre Channel server adapter support, addressing the VM density
issue with a substantial increase in I/O bandwidth that host servers require.
2X More VMs Per Server with vSphere 5.1, Xeon E5, and QLogic Server Adapters
Any concern about hosting Tier-1 apps on VMs has been alleviated by vSphere 5.1, Intel E5 processors, and the next-generation I/O throughput of QLogic multiprotocol server adapters. Even flagship enterprise
applications such as SAP, which typically require dozens of development, QA/test, and production servers,
have adopted server virtualization within these environments as a best practice. The CPU, memory, storage,
and networking I/O requirements are well documented by application vendors and VMware. If one of your
goals is to increase VM density, the same combination allows for double the number of VMs per server while
enjoying the same level of individual application performance.
Generational Differences in Performance Using VMware VMmark 2.5
Using the VMware VMmark 2.5 virtualization
benchmark, VMware compared the performance of
four-node clusters consisting of servers with four-core Intel Xeon X3460 (Nehalem) processors versus servers with eight-core Intel Xeon E5-2665 (Sandy Bridge-EP) processors. The new cluster
scored 120 percent higher than the old cluster. The
performance advantages were largely due to the
generational improvements of the eight-core E5-
2665 processor versus the four-core Xeon X3460
processor, improved bus speeds, increased
memory, and the resulting increase in I/O
throughput.
Having doubled the throughput rates with the
QLogic QLE2672 Gen 5 Fibre Channel Host Bus
Adapter, the vSphere 5.1 platform can also support
more storage devices and meet bandwidth
requirements. The same number of Fibre Channel links can now carry double the bandwidth, benefiting virtualized applications.
QLogic I/O = Higher VM Density
Performance Differences Between Intel Generations Using VMware VMmark 2.5
Business-critical applications such as ERP, CRM, eCommerce,
and e-mail need high-performance and high-availability I/O
infrastructure to meet business SLAs.
Tier-1 Apps
(Source: VMware Blog)
VMware vSphere 5.1
The newest release of vSphere represents a high-performance VM chassis capable of harnessing the power
of a new CPU engine and drives requirements for an expanded I/O interface. The new I/O capabilities of
QLogic server adapters combined with vSphere 5.1 complement enhancements related to Storage vMotion.
Storage vMotion—vSphere 5.1 enables users to combine vMotion and Storage vMotion into one operation. The
combined migration copies both the VM memory and its disk over the network to the destination host. In larger
environments, this bandwidth-intensive operation enables
VMs and datastores to be migrated between clusters that do
not have a common set of datastores.
Parallel Disk Copies—vSphere 5.1 allows up to four parallel
disk copies per Storage vMotion operation, where previous
versions of vSphere used to copy disks serially. When you
migrate a VM with five VMDK files, Storage vMotion copies
the first four in parallel, then starts the next disk copy as
soon as one of the first four finishes.
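The copy-scheduling behavior described above can be sketched with a small worker pool. This is a hedged illustration, not VMware’s implementation; the VMDK names and copy times are hypothetical.

```python
# Sketch of "up to four parallel disk copies" scheduling (hypothetical;
# not VMware's implementation). The fifth copy starts as soon as any of
# the first four workers finishes.
import time
from concurrent.futures import ThreadPoolExecutor

VMDKS = [("disk1.vmdk", 2), ("disk2.vmdk", 3), ("disk3.vmdk", 1),
         ("disk4.vmdk", 2), ("disk5.vmdk", 2)]   # (name, copy seconds)

def copy_disk(name, seconds):
    time.sleep(seconds)                # stand-in for the actual data copy
    return f"{name} copied"

with ThreadPoolExecutor(max_workers=4) as pool:  # at most 4 copies in flight
    for result in pool.map(lambda disk: copy_disk(*disk), VMDKS):
        print(result)
```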
Gen 5 Fibre Channel—To help complete Storage vMotion operations as quickly as possible, vSphere 5.1 includes support
for Gen 5 Fibre Channel. In vSphere 5.0, VMware introduced
support for Gen 5 Fibre Channel Host Bus Adapters, but
they had to be throttled down to work at 8Gb. Now,
vSphere 5.1 supports these Host Bus Adapters running
at Gen 5.
10GbE and SR-IOV—For applications that require low-latency performance, SR-IOV-capable 10GbE server adapters offer the benefits of direct I/O, which reduces latency and host CPU utilization.
From a storage planning perspective, when comparing vSphere 5.1 to previous versions, two specifications stand out: the amount of memory per VM (1TB) and the number of active VMs per host (512).
With today’s storage usage, half a petabyte of storage could be needed to support 512 VMs. While this scenario is unlikely for at least a few years, running 100 VMs with 512GB of virtual memory each on a single server (which would require roughly 51TB of storage for the memory content alone) is very foreseeable.
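The sizing arithmetic behind these claims is easy to verify. A quick sketch, using decimal units (1TB = 1,000GB) and the illustrative numbers from the paragraph above:

```python
# Worked storage-sizing check (illustrative; decimal units, 1TB = 1,000GB).
vms = 100
mem_per_vm_gb = 512

memory_backing_tb = vms * mem_per_vm_gb / 1000
print(f"{vms} VMs x {mem_per_vm_gb}GB = {memory_backing_tb:.1f}TB")  # 51.2TB

# At the vSphere 5.1 maximum of 512 active VMs per host, with roughly 1TB
# of storage allocated per VM, capacity approaches half a petabyte:
print(f"512 VMs x 1TB = {512 * 1.0 / 1000:.3f}PB")  # 0.512PB
```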
A New Virtualization Chassis
In vSphere 5.1, support for Gen 5 Fibre Channel Adapters
helps ensure adequate I/O to maximize VM densities and
quickly complete vMotion and Storage vMotion migrations.
Gen 5 Fibre Channel
Executing Storage vMotions is now easier than ever, increasing
the need for high-performance 10GbE or Gen 5 Fibre Channel
storage networks.
vMotion and Storage vMotion
QLogic Server Adapters
vSphere 5.1 and Xeon E5 processors streamline the procedure of moving VMs and their associated storage
with vMotion and Storage vMotion and support low-latency networking traffic with SR-IOV. Moving terabytes
of data across VMs and the migration of virtual servers requires low-latency, high-performance I/O adapters.
QLogic offers a family of server adapters for Gen 5 Fibre Channel, 10GbE, or converged network connectivity,
providing the bandwidth needed to increase VM scalability and power Tier-1 application workloads.
Turbocharged Fuel Injection
The latest generation of Converged Network Adapters from QLogic supports Ethernet LANs, NAS, iSCSI SANs, and FCoE SANs, as well as native Fibre Channel SANs.
CNA
QLogic Server Adapters
2600 Series: Fibre Channel Host Bus Adapter
  Speed: 16Gbps
  Protocols: Fibre Channel
  Use in a virtualized server: Highest-performance Fibre Channel SAN connectivity for storage-intensive applications

3200 Series: Intelligent Ethernet Adapter
  Speed: 10Gbps
  Protocols: TCP/IP LAN and NAS, iSCSI SAN
  Use in a virtualized server: Consolidate multiple 1GbE server connections to LAN and NAS on one high-speed Ethernet wire

8300 Series: Converged Network Adapter
  Speed: 10Gbps
  Protocols: TCP/IP LAN and NAS, iSCSI SAN, and FCoE SAN
  Use in a virtualized server: Consolidate server connections to LAN and SAN on one Ethernet wire
Picking the Right I/O Pieces, and Making Them Work Together
Tier-1 applications are uniquely demanding in many dimensions. Their needs with respect to CPU power,
memory footprint, high availability/failover, resiliency, and responsiveness to outside stimuli are typically
unmatched within the enterprise. Moreover, Tier-1 applications also tend to be tightly integrated with other
applications and resources within the enterprise. Because of this, virtualizing a Tier-1 application requires
rigorous planning of the I/O strategy. There are five steps to follow:
1. Identify the I/O fabrics that the Tier-1 applications will use (it may very well be “all of them”).
2. Quantify the data flows for each fabric when the application was operating on a standalone system.
3. Estimate vMotion I/O needs for failovers and evolution. Note that most vMotion traffic will be storage I/O; if the data stays within one external array during vMotion, vSphere’s vStorage APIs for Array Integration (VAAI) capability can reduce the I/O traffic.
4. Determine your primary and secondary I/O paths for multi-pathing on all of your networks.
5. Determine QoS levels for the Tier-1 apps (a sketch capturing this plan follows the list).
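As a planning aid, the five steps can be captured in a simple data structure. The sketch below is hypothetical; the fabric names, flow rates, path labels, and QoS tiers are illustrative assumptions for a notional SAP deployment.

```python
# Hypothetical capture of the five-step I/O plan for one Tier-1 app.
# Fabric names, flow rates, paths, and QoS tiers are illustrative.
from dataclasses import dataclass

@dataclass
class FabricPlan:
    fabric: str           # step 1: fabric the app will use
    steady_MBps: float    # step 2: measured standalone data flow
    vmotion_MBps: float   # step 3: estimated vMotion/Storage vMotion burst
    primary_path: str     # step 4: primary I/O path
    secondary_path: str   # step 4: secondary (failover) path
    qos: str              # step 5: QoS level

sap_plan = [
    FabricPlan("FC SAN", steady_MBps=400, vmotion_MBps=1200,
               primary_path="hba0->fabricA", secondary_path="hba1->fabricB",
               qos="gold"),
    FabricPlan("10GbE LAN", steady_MBps=150, vmotion_MBps=600,
               primary_path="nic0->torA", secondary_path="nic1->torB",
               qos="gold"),
]

peak = max(p.steady_MBps + p.vmotion_MBps for p in sap_plan)
print(f"Worst-case single-fabric load: {peak:.0f} MBps")
```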
One simplifying option available is to utilize a multi-protocol network adapter that can function as either a
Fibre Channel or Converged Network Adapter. The QLogic QLE2672 is an example of such an adapter; it can
be reconfigured in the field to operate on 4/8Gb or Gen 5 (16Gb) Fibre Channel networks, or on 10Gb converged Ethernet networks.
Network I/O Planning
Networking Considerations When Virtualizing Tier-1 Applications
Pre-planning deployments is the most effective way to ensure that
SLAs will be achieved and expensive surprises will be avoided.
Planning
Accelerate VM Data with QLogic FabricCache™
10000 Series Adapters
The QLogic FabricCache™ 10000 Series Adapter is the industry's first caching SAN adapter.
This new class of server-based PCIe flash/Fibre Channel Host Bus Adapters uses the Fibre
Channel network to cache and share SAN metadata. Adding large caches to servers places
the cache closest to the application and in a position where it is insensitive to congestion.
An advantage of this approach is that PCIe flash-based caching can be shared across multiple physical machines. FabricCache features a shared-cache architecture that satisfies the cache-coherence requirement of clustered application environments. It also eliminates the hot/cold cache issue in multi-server virtualized infrastructures that support live VM migration and automated workload balancing, ensuring a consistent user experience throughout the VM migration process.
10000 Series FabricCache Adapters provide links to the Fibre Channel array and cache hot VM data in the server.
Accelerating App Performance
A QLogic architecture for sharing and replicating cache on a
Fibre Channel Host Bus Adapter in a SAN. FabricCache
Only one caching Host Bus Adapter in the accelerator cluster is ever actively caching each LUN’s traffic. All other members of the
accelerator cluster process all I/O requests for each LUN through that LUN’s cache owner, so all storage accelerator cluster
members work on the same copy of data. Cache coherence is guaranteed without the complexity and overhead of coordinating
multiple copies of the same data.
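The single-owner rule described in the caption can be illustrated with a short sketch. This models only the concept of routing each LUN’s I/O through one cache owner; it is not QLogic’s implementation, and the HBA and LUN names are hypothetical.

```python
# Concept sketch of single-owner LUN caching (not QLogic's implementation).
# All I/O for a LUN is routed through that LUN's one cache owner, so only
# one copy of cached data ever exists and coherence is guaranteed.

def fetch_from_array(lun, block):
    return f"{lun}:{block}".encode()          # stand-in for a SAN read

class CachingHBA:
    def __init__(self, name):
        self.name = name
        self.cache = {}                       # (lun, block) -> data

    def read(self, lun, block):
        key = (lun, block)
        if key not in self.cache:             # miss: fetch from the array
            self.cache[key] = fetch_from_array(lun, block)
        return self.cache[key]

hbas = {h.name: h for h in (CachingHBA("hba-A"), CachingHBA("hba-B"))}
lun_owner = {"LUN0": "hba-A", "LUN1": "hba-B"}  # exactly one owner per LUN

def cluster_read(member, lun, block):
    owner = hbas[lun_owner[lun]]              # every member uses the owner
    return owner.read(lun, block)

print(cluster_read("hba-B", "LUN0", 7))       # served and cached by hba-A
```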
Shared PCIe SSD
Gen 5 Fibre Channel Improves Storage vMotion I/O Throughput
A process known as Storage vMotion allows for a non-disruptive migration of running VM disk files between
two different physical storage devices. This process allows for the VM to remain running with no need to take
its workload offline to move the VM’s files to a different physical storage device. Additional use cases for Storage vMotion include migrating data to new storage arrays or to larger-capacity, better-performing LUNs. NPIV zoning and LUN masking must be properly configured to ensure the VM and the host server continue to have access to the storage after the migration is completed. Storage vMotion across a Gen 5 Fibre Channel link can finish in half the time it takes with an 8Gb Fibre Channel link.
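A back-of-the-envelope check of the half-the-time claim, assuming transfers run at Fibre Channel line rate (real migrations are also bounded by array, fabric, and host limits):

```python
# Line-rate estimate of Storage vMotion duration (illustrative only;
# real migrations are also bounded by array, fabric, and host limits).
def migration_hours(data_tb, link_gbps):
    seconds = data_tb * 1e12 * 8 / (link_gbps * 1e9)  # bits / bits-per-sec
    return seconds / 3600

for gbps in (8, 16):
    print(f"{gbps}Gb FC: ~{migration_hours(10, gbps):.1f} h to move 10TB")
# ~2.8 h at 8Gb versus ~1.4 h at Gen 5 (16Gb): half the time at line rate
```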
From vSphere 5.1 onwards, up to four parallel disk copies run simultaneously, making it imperative that all paths related to Storage vMotion be supported by high-performance Gen 5 Fibre Channel networks
in order to reduce the time it takes to evacuate storage safely to a new destination and resume normal
operations. Additionally, 10GbE links can replace 1GbE links to ensure adequate bandwidth for Storage
vMotion in Ethernet environments.
Faster Storage vMotion
A single-port QLE2670 Gen 5 Fibre Channel Host Bus Adapter can double throughput for Storage vMotion.
Storage vMotion at Gen 5
This bandwidth-intensive operation enables live migration of
virtual disk files from one storage device to another storage
device on the same host.
Storage vMotion
QLE2670 Gen 5
Fibre Channel HBAs
QLE3240 10GbE
Intelligent Ethernet
Adapters
QLogic 8300 Series Converged Network Adapters Enable Convergence at the
VM Edge
QLogic Converged Network Adapter solutions leverage core technologies and expertise, including the most
established and proven driver stack in the industry. These adapters are designed for next-generation, virtualized, and unified data centers built on powerful multiprocessor, multicore servers, and are optimized to handle large numbers of VMs and VM-aware network services, with support for concurrent NIC, FCoE, and iSCSI traffic.
One QLogic 8300 Series Converged Network Adapter can be configured for connectivity to an Ethernet
network and to deliver storage networking via Fibre Channel over Ethernet simultaneously. Key features and
benefits include:
• Powerful iSCSI and FCoE hardware offloads improve system performance
• Advanced virtualization technologies supported through secure SR-IOV or through switch- and OS-agnostic NIC Partitioning (NPAR)
• Combined with QLogic’s Quality of Service (QoS) technology, delivers consistent, guaranteed, application-aware performance in dense VM environments (see the NPAR sketch after this list)
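The NPAR concept of carving one 10GbE port into partitions with guaranteed minimum bandwidth can be sketched as follows. The partition names and weights are illustrative assumptions, not QLogic’s configuration interface.

```python
# Illustrative NPAR-style partitioning of one 10GbE port (concept only;
# not QLogic's configuration interface).
PORT_GBPS = 10.0

# partition name -> guaranteed minimum share, in percent (sum <= 100)
partitions = {"fcoe_san": 40, "vm_lan": 40, "vmotion": 20}

for name, share in partitions.items():
    minimum = PORT_GBPS * share / 100
    print(f"{name}: >= {minimum:.1f} Gbps guaranteed, "
          f"up to {PORT_GBPS:.0f} Gbps when the port is otherwise idle")
```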
Storage Protocol Flexibility
The Fibre Channel over Ethernet (FCoE) protocol allows Fibre Channel
traffic to run over a converged Ethernet network for LAN and SAN traffic
over one wire.
FCoE
For organizations maintaining a parallel LAN and SAN architecture all the way to the server adapter, QLogic offers the QLE8300
Series of adapters supporting 10GbE LAN, NAS and iSCSI SAN traffic, as well as Fibre Channel over Ethernet traffic.
[Diagram: Network convergence at the host server, two options. “Adapter and Fabric Convergence”: both ports used for LAN, NAS, and SAN traffic over Ethernet to FCoE ToR switches. “Adapter Convergence, Separate Fabrics”: one port used as an FCoE CNA and one port used as an Ethernet NIC, connected to separate Ethernet and FCoE ToR switches.]
QLogic 8300 Series Converged Network Adapters Offload the VM Kernel from
Switching Virtual NICs
Single Root I/O Virtualization is a standard that allows one PCI Express (PCIe) adapter to be presented as
multiple separate logical devices to VMs for partitioning adapter bandwidth. The hypervisor manages the
Physical Function (PF) while the Virtual Functions (VFs) are exposed to the VMs. In the hypervisor, SR-IOV-
capable network devices offer the benefits of direct I/O, which includes reduced latency and reduced host
CPU utilization.
With SR-IOV, pass-through functionality can be provided from a single adapter to multiple VMs through VFs.
To deploy SR-IOV today, an organization needs to ensure a minimum level of infrastructure (server hardware
and OS) support for SR-IOV.
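A toy model of the PF/VF split described above may help. The device names, VF counts, and assignment API below are hypothetical; real SR-IOV is configured through the server BIOS, hypervisor, and adapter firmware, not application code.

```python
# Toy model of SR-IOV's PF/VF split (hypothetical names and numbers).
class PhysicalFunction:
    """The PF, managed by the hypervisor, exposes Virtual Functions."""
    def __init__(self, name, num_vfs):
        self.name = name
        self.free_vfs = [f"{name}-vf{i}" for i in range(num_vfs)]

    def assign_vf(self, vm):
        vf = self.free_vfs.pop(0)   # VF handed directly to the VM,
        print(f"{vf} -> {vm}")      # bypassing the hypervisor vSwitch
        return vf

pf = PhysicalFunction("10gbe-port0", num_vfs=8)
pf.assign_vf("latency-sensitive-vm1")
pf.assign_vf("latency-sensitive-vm2")
```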
Low-Latency Connectivity
Latency is the time between the start and completion of one action, measured in microseconds (µs).
Latency
With SR-IOV enabled on a 10GbE NIC, pass-through functionality can be provided from a single adapter to
multiple VMs through VFs.
Implementing Pass-Through Functions with SR-IOV
8300 Series Converged Network Adapters
In Transaction-Intensive and Bandwidth-Intensive Environments
With the availability of Gen 5 Fibre Channel Host Bus Adapters, throughput for large-block I/O has doubled compared to 8Gbps bandwidth. Together with better CPU efficiency per I/O, throughput for small-block random I/O has also increased, because it is no longer bounded by the previous 8Gbps Fibre Channel link speed.
For virtualized environments, the ability of the adapter to scale with workload is the most critical measure of
performance. Testing by QLogic shows the advantage of the QLogic QLE2672 as workloads increase for both
read-only and read-write workloads. This performance advantage (50 percent for read-only and 25 percent
for mixed read-write) has a significant impact on application performance, which is critical to meeting SLAs
for Tier-1 applications.
Having doubled the throughput rates with the newly available QLogic Gen 5 Fibre Channel Host Bus
Adapters, the vSphere 5.1 platform can support more storage devices and meet bandwidth requirements
using the same number of Fibre Channel links. The doubled bandwidth for large block transfers can benefit
applications like VMware vSphere Storage vMotion. The improvement in random read IOPS at a 4KB block
size, for example, can benefit database application clients. Applications are no longer limited by the existing
8Gb Fibre Channel bandwidth to meet their peak performance requirements.
Doubled I/O Throughput
Throughput
The amount of data processed or transferred in a given amount of time, measured in megabytes per second (MBps). “Throughput” and “bandwidth” are often used interchangeably.
IOPS and MBps Performance as Workloads Increase
In testing by QLogic, the QLE2672 out-performed the Emulex LPe16002B in terms of IOPS and MBps performance.
Higher Efficiency
FlexSuite Adapters feature QLogic I/OFlex technology, a field-
configurable upgrade to use the same hardware for Gen 5 Fibre
Channel or 10GbE server connectivity.
FlexSuite
In Transaction-Intensive and Bandwidth-Intensive Environments
For virtualized environments, the most critical measure of performance is the ability to scale as the number
of VMs and application workloads increase. In testing conducted by QLogic, the QLE2672 FlexSuite Gen 5
Fibre Channel Adapter delivered:
3X the transactions and 2X the bandwidth of 8Gb Fibre Channel Adapters.
The QLE2672 also demonstrated a 50 percent advantage over competitive products for read-only
performance and 25 percent better mixed read-write performance.
This superior performance of QLogic Gen 5 Fibre Channel Adapters translates to support for both higher VM
density and more demanding Tier-1 applications.
QLogic achieves superior performance by leveraging the advanced Gen 5 Fibre Channel and PCIe Gen3
specifications—while maintaining backwards compatibility with existing Fibre Channel networks. The unique
port-isolation architecture of the QLogic FlexSuite Adapters ensures data integrity, security, and deterministic
scalable performance to drive storage traffic at line rate across all ports.
Furthermore, QoS enables IT administrators to control and prioritize traffic.
10GbE Intelligent Networking Eliminates I/O Bottlenecks
QLogic’s 10GbE intelligent Ethernet architecture, combined with new virtualization software features such as NetQueue, allows multiple, flexible receive queues and significantly reduces the delays inherent in current virtualization implementations by:
• Eliminating some of the hypervisor overhead. This frees up processor resources to support heavier-weight applications on the VMs or to run more VMs per server.
• Eliminating the queuing bottleneck in today’s software-based approach. The current approach creates a
single first-in, first-out queue for incoming packets from the Ethernet adapter through the hypervisor to the
various VMs. Because neither the hypervisor nor the Ethernet adapter knows which packet goes to which
interface, there is substantial packet processing performed in the hypervisor to determine which packet goes
where. It is a processor-intensive task that consumes a great deal of time and CPU cycles.
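The contrast between the single software FIFO and per-VM hardware queues can be sketched conceptually. This is an illustration of the queuing idea only, not VMware’s or QLogic’s code; the VM names and packets are hypothetical.

```python
# Conceptual contrast: one shared FIFO the hypervisor must classify,
# versus per-VM receive queues sorted by the adapter (illustration only).
from collections import deque

packets = [("vm1", b"a"), ("vm2", b"b"), ("vm1", b"c"), ("vm3", b"d")]

# Software approach: one shared FIFO; the hypervisor must inspect and
# route every packet itself (per-packet CPU cost).
fifo = deque(packets)
routed = []
while fifo:
    dest, payload = fifo.popleft()   # classification work in the hypervisor
    routed.append((dest, payload))
print(f"software path: hypervisor classified {len(routed)} packets")

# NetQueue-style approach: the adapter sorts packets into per-VM receive
# queues, so the hypervisor hands each whole queue straight to its VM.
per_vm = {}
for dest, payload in packets:
    per_vm.setdefault(dest, deque()).append(payload)
for vm, q in per_vm.items():
    print(f"{vm}: {len(q)} packet(s) delivered without per-packet routing")
```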
Linear Scalability
The improvement factor for RAM per VM in vSphere 5.1—addressing the biggest issue in scaling VMs.
4X
More Virtual CPUs for Scaling Tier-1 Apps with vSphere 5
If you’re concerned about hosting Tier-1 apps on VMs, rest assured: the argument about virtualizing Tier-1 apps is over.
Even flagship enterprise applications such as SAP, Microsoft® SQL Server® 2012, and Exchange 2010 have
adopted server virtualization as a best practice. As an example, the CPU, memory, storage, and networking
requirements are well documented by SAP.
More VMs with vSphere, Xeon E5, and QLogic Server Adapters
Fabric-based networks are a fundamental requirement in supporting highly virtualized data centers. Fibre
Channel SANs are the nucleus of the next-generation vSphere 5.1 data center. If your goal is to increase VM
density, vSphere 5.1 combined with the latest generation of servers and QLogic server adapters allows you to
more than double the number of VMs per server while enjoying the same level of performance.
vSphere 5.1 and QLogic adapters will enable your business to leverage the built-in architecture of both products to increase availability, improve agility, and overcome scalability and performance concerns.
vSphere 5.1 delivers improvements on all key virtualization metrics—making I/O performance critical.
                          vSphere 4    vSphere 5.1    Factor
Host
  HW Logical Processors   64           160            2.5x
  Physical Memory         1TB          2TB            2x
  Virtual CPUs per Host   512          2,048          4x
VM
  Virtual CPUs per VM     8            64             8x
  Memory per VM           255GB        1TB            4x
  Active VMs per Host     320          512            1.6x
Cluster
  Max Nodes               32           32             1x
  Max VMs                 1,280       4,000           3x
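The improvement factors in the table are simple ratios, which a few lines of Python can double-check (values transcribed from the table above):

```python
# Double-check of the improvement factors (values from the table above).
maxima = {
    # metric: (vSphere 4, vSphere 5.1)
    "HW logical processors": (64, 160),
    "Virtual CPUs per host": (512, 2048),
    "Virtual CPUs per VM":   (8, 64),
    "Memory per VM (GB)":    (255, 1024),
    "Active VMs per host":   (320, 512),
    "Max VMs per cluster":   (1280, 4000),
}

for metric, (v4, v51) in maxima.items():
    print(f"{metric}: {v4} -> {v51} ({v51 / v4:.1f}x)")
```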
I/O Dashboard and Control
vCenter Server plug-ins extend the capabilities of vCenter Server by
providing more features and functionality. Some plug-ins are
installed as part of the base vCenter Server product.
vCenter Plug-ins
QConvergeConsole Plug-In for Single Pane Adapter Management
Installation and management of high-performance server adapters can be complicated if it’s not integrated
with vSphere. The QLogic QConvergeConsole plug-in for vCenter is available with vCenter Server—providing
an intuitive and easy-to-use dashboard view of all QLogic adapters.
The result is that VM server administrators are armed with the power of QLogic adapter management
software and 100 percent compatibility with vSphere.
Powerful QLogic QConvergeConsole adapter management software is accessible in-box with the vCenter plug-in.
Related Links
What’s New in vSphere 5 Performance
What’s New in vSphere 5 Networking
What’s New in vSphere 5 Storage
What’s New in vSphere 5.1 Networking
What’s New in vSphere 5.1 Storage
VMware vSphere 5.1 Configuration Maximums
QLogic Fibre Channel Adapters
QLogic Converged Network Adapters
Acceleration for Microsoft SQL Servers
About the Authors
Rahul Shah, Director, IT Brand Pulse Labs
Rahul Shah has more than 20 years of experience in senior engineering and product
management positions with semiconductor, storage networking, and IP networking
manufacturers, including QLogic and Lantronix. At IT Brand Pulse, Rahul is responsible for
managing the delivery of technical services ranging from hands-on testing to product launch collateral. You
can contact Rahul at [email protected].
Tim Lustig, Director of Corporate Marketing, QLogic Corporation
With more than 18 years of experience in the storage networking industry, Tim has authored
numerous papers and articles on all aspects of IT storage, and has been a featured speaker at
many industry conferences on a global basis. As the Director of Corporate Marketing at
QLogic, Tim is responsible for corporate communications, third-party testing/validation,
outbound marketing activities, and strategic product marketing directives of QLogic. His
responsibilities include customer research, evaluation of market conditions, press and media relations, social
media, and technical writing.
Resources