THE LEAN, MEAN IT OPS TEAM

HURRY UP AND WAIT ON 40 GBE

JONATHAN EUNICE: EMBRACING IN-HOUSE IT

JAN. 2013

MODERN INFRASTRUCTURE

MODERN INFRASTRUCTURE: CREATING TOMORROW’S DATA CENTERS

THE DATA CENTER OF THE FUTURE

How new building designs and energy-efficient techniques could transform tomorrow’s computing facilities. PAGE 14

Home

Editor’s Letter

Currents

Bob Plankers: In Hot Water

Steve Gunderson: Do Your Homework on Your Cloud Provider

Brian Madden: Windows 8? No Thanks!

The Data Center of the Future

Hurry Up and Wait on 40 GbE

The Lean, Mean IT Operations Team

Jonathan Eunice: In Defense of In-House IT

Editor’s Letter

Welcome to the Future

By Alex Barrett, Editor in Chief

IT’S OFFICIALLY 2013, which, forgive me, still sounds so futuristic. Maybe I’m dating myself here, but I can’t help but think: Shouldn’t we already be zipping around with personal nuclear-powered jetpacks, Jetsons-like, flanked by robots and getting our sustenance from a pill? And shouldn’t our data centers be floating around in space or implanted in our brains?

But when it comes to data centers, science fiction author William Gibson had it right: “The future is already here—it’s just not very evenly distributed.” Today, there are plenty of data centers out there that give us a glimpse of things to come.

In this month’s cover story, TechTarget senior technical editor Steve Bigelow finds that tomorrow’s data centers look a lot like today’s versions, only better. More efficient servers, support for a greater range of ambient temperatures, increased use of free cooling techniques, and a small degree of local cogeneration to improve the facility’s ability to withstand short power outages will all converge to make data centers that won’t end up on the front page of the Sunday New York Times for their inefficiency—but that aren’t unrecognizable either.

And unlike pundits who envision a world with only a handful of mega data centers, Bigelow predicts that going forward, we will actually build more and smaller data centers in order to take advantage of positive environmental conditions and favorable energy prices. Don’t count out the private data center just yet!

Speaking of the future, can you believe that 40 Gigabit Ethernet (GbE) and even 100 GbE networking gear is shipping and available, and deployments are well under way? But, unlike previous generations, upgrading to the next version of Ethernet won’t be so simple, writes contributor David Strom. The shift away from Cat 6 cabling to a new high-density fiber connector called QSFP will force infrastructure architects to really plan ahead and order properly sized cables in advance, rather than cut them on-premises. But for shops running IP storage or pushing the virtualization envelope, those up-front time and capital investments should pay big dividends.

Then, in the back-to-the-future department, there’s a column on liquid cooling. Introduced in the 1970s, the technology is making a comeback, kind of like bell-bottoms and disco. Contributor Bob Plankers thinks liquid cooling is an idea whose time has finally come, because, as he puts it, “Air is a poor heat conductor.” Already, forward-looking data center operators like eBay are using liquid cooling to achieve energy efficiency numbers that others can only dream of, and the Leibniz Supercomputing Centre in Germany is using an IBM System x iDataPlex cluster with direct water cooling to generate peak performance of three petaflops, while consuming 40% less energy than a comparable air-cooled system.

Now that’s a future I can relate to.

Currents { all the news you can use about modern infrastructure }

ONE ON ONE

VMware’s Stephen Herrod Looks Out

OVER THE PAST decade, server virtualization made software a powerful force in the data center, giving IT departments more flexibility and bringing about more efficient operations. It turns out that virtualization was the easy part. Applying its principles to other areas of the data center is still a work in progress.

The natural extensions of the server virtualization trend—cloud computing, software-defined networking, changes in business models—have not had such a clear path to success. Cloud security concerns still persist. Software-defined networking has yet to realize its potential. And VMware’s move to vRAM licensing and pricing, a more cloudlike model, proved so unpopular that the company abandoned it after only a year. Stephen Herrod, VMware’s chief technology officer, discussed these changing data center dynamics and more in this One on One interview.

With all of the talk about software-defined data centers, storage and networking, why should IT professionals trust software vendors like VMware with their hardware?

It actually is a bit of a nuanced message. The software-defined data center is about delivering all the services via software. The main goal for this whole thing is to automate those things that weren’t previously automatable.

A lot of our hardware partners are making their hardware offerings multi-tenant and more easily automatable so that we can also plumb them into the system. It is not strictly that everything in the world runs in software.

As the line between software and hardware blurs, how will this change the roles within an IT department?

We really see this notion of cloud operations. We’ve been calling it CloudOps as sort of the corollary to DevOps. You need to understand how the hardware is plumbed through. We’ve already seen this with virtualization in general: You can’t have an individual storage silo and a network silo and a server silo.

Who do you see handling end-user computing initiatives?

There used to be the Windows team or the desktop team, but it tends to be a different team that takes on mobile. The notion of general IT services is also a different team. It tends to be pretty high in the organization, especially if you’re trying to show an overarching suite.

I think there are roles that don’t yet exist that will be in charge of all end-user access, especially as more things are outside the firewall and are not on inventory owned by the company.

Is the company satisfied with the uptake of its vCloud-branded services?

It’s growing at one of the fastest clips of any part of our business. [VMware Service Provider Partners] are a bit more focused on the enterprise side of things versus the new-age developer side of things.

How many vCloud-branded partners are there worldwide?

There’s the vCloud data center service partners; that is the highest tier, and those partners make a lot of different commitments when they join. That means they’re selling the entire suite, and they go through some certifications of the facilities and how they train and teach. And I think we’re up to 10 on that front.

But then we go down to something we call vSphere- and vCloud-powered partners, which means they’re using it as a primary component of their offering, and that gets well over 3,000 partners. At last check, we were at 29 different countries as well.

I liken it to kind of a cloud franchise model, in the sense that there are going to be local tastes and local requirements on data that we think a local partner can see better than if we were to try to have some global-scale cloud. Data rules—they’re different in every state, not to mention every country.

OVERHEARD | Gartner and AWS Conferences

“Ninety percent of enterprises will bypass broadscale deployment of Windows 8.”
—PETER SONDERGAARD, senior vice president of research, Gartner, at the Gartner Symposium/ITxpo

“I’ve hugged a lot of servers in my life, and believe me, they do not hug you back. They hate you.”
—Amazon CTO WERNER VOGELS, in his keynote speech at AWS re:Invent

“There is a light at the end of the tunnel, but it’s a train coming at me.”
—DAVID CAPPUCCIO, managing vice president, Gartner, in his keynote at the Gartner Data Center Conference

“By 2015, big-data demand will reach 4.4 million jobs globally, but only one-third of those jobs will be filled.”
—DARYL PLUMMER, managing vice president, chief of research and chief Gartner fellow, at the Gartner Symposium/ITxpo

“Let’s be honest: Working in IT operations is the equivalent of working in the boiler room in certain organizations.”
—CAMERON HAIGHT, Gartner research vice president, at the Gartner Data Center Conference

“Eighty-five percent of CEOs will be impacted by the economic downturn this year, yet growth is the No. 1 priority.”
—GENE HALL, CEO, Gartner, at the Gartner Symposium/ITxpo

Last year, regarding the vRAM licensing model, you said, “We needed to really put our money where our mouth is and recognize the world is going toward pools of computing resources and clouds.” Now that VMware has reverted to the previous licensing model, is it a problem that its future vision does not match up with its business model?

We had a good theory on why [vRAM licensing] would work, but the customers just didn’t like it right now. In actuality, the amount of money that customers pay really wasn’t different in either model. I think it was just too complex for what people were ready for.

With all the tools out there for managing, monitoring and backing up virtual environments, what can IT departments do to simplify their virtualization and cloud management?

There is a diversity of tools from so many partners now. They’re all creating a bunch of different and interesting ideas along the way. It means that people need to understand and pick and choose wisely.

Suites are everywhere for VMware now, and our goal is to really not have customers know that there are a lot of different functionalities. We did two big launches: the vCloud Suite 5.1, which really pulled together 12 different products … and likewise on the end-user, consumerization side, an alpha of the Horizon Suite that pulls together six different efforts. —COLIN STEELE

READER SNAPSHOT | Cloud and DevOps

Is DevOps here to stay? Yes: 64%; No: 22%; Unsure: 14%. (From a survey of SearchDataCenter.com readers.)

Is the importance of cloud SLAs overblown? Yes, they’re all the same: 78%; No, they’re the foundation: 22%. (From a survey of SearchCIO.com readers.)

EXPLAINED

Colocation vs. Hosting vs. MSPs vs. Cloud

DEPENDING ON THE size, scope and purpose of a business, outsourcing IT might be the best way to minimize costs. The terms colocation, dedicated hosting, managed services and cloud services are often used interchangeably, but they aren’t the same at all.

To make matters more confusing, many colocation facilities also offer managed services and cloud computing under the same umbrella, so telling them apart becomes important for businesses to ensure they get the level of outsourcing they need.

VARIETIES OF IT OUTSOURCING

Colocation (colo) refers to renting space in a data center. A colo can provide everything from power, cooling, building security and networking to lockable cabinet cages and internal monitoring capabilities. In a colo, customers supply the server and storage hardware they want as long as it works with the power and cooling infrastructure, and perform all the management.

A dedicated hosting service is similar to a colo, but the vendor typically provides the servers as well. All the equipment resides at the hosting provider, which takes care of managing the infrastructure and applications. Customers have a choice of hardware and software, and have control of the systems. The hoster owns the equipment.

If a company wishes to own and maintain its equipment, but doesn’t want to handle certain tasks, a managed service provider (MSP) can help. Services offered by such providers commonly include Internet access, virtual private networks, security services, data protection and call center management. For example, a company might want to set up its own storage systems and servers but contract out security.

The phrase managed service provider has become synonymous with “management service provider.” The latter once referred to management of other data center infrastructure, such as monitoring, backup and database administration, but those services are now frequently offered by cloud service providers.

Cloud services include shared infrastructure, computing power, software or other services offered over a network. Built on top of a virtualization layer, cloud providers usually offer application program interfaces (APIs) that customers can use to interact with the service programmatically. Many vendors offer pay-as-you-go services to allow this shared infrastructure to stretch further.
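
To make the API point concrete, here is a minimal sketch of provisioning a server programmatically. The endpoint, request fields and response field are hypothetical stand-ins, since every provider defines its own API; it is the request/response pattern that matters.

```python
# Minimal sketch of provisioning a server through a cloud provider's
# REST API. The endpoint, request fields and response field are
# hypothetical; real providers each define their own API, but the
# request/response pattern is typical.
import json
import urllib.request

API_URL = "https://api.example-cloud.com/v1/servers"  # hypothetical endpoint
TOKEN = "replace-with-your-api-token"

request = urllib.request.Request(
    API_URL,
    data=json.dumps({"image": "ubuntu-12.04", "size": "2cpu-4gb"}).encode(),
    headers={"Authorization": "Bearer " + TOKEN,
             "Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    server = json.load(response)
    print("Provisioned server:", server["id"])  # hypothetical response field
```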

IT OUTSOURCING DIFFERENTIATORS

Arguably, the major difference between all these IT outsourcing options is the availability of capabilities like automation, scalability and access.

This is less of an issue with fixed managed services, but when a business decides to go with colocation, it generally plans to use its own personnel to set up the servers, maintain databases and be on call if something goes wrong. Companies using space in a colo typically have access to information such as power, uptime, cooling and other monitoring data. What they won’t necessarily have is the scalability and automation offered by a cloud platform.

For those things, there are cloud services.

Cloud providers don’t often share infrastructure information the way colos do, but scaling out to more servers is as simple as a click; sometimes that scaling can be automated. Without scalability and automation, you could say the cloud is basically dedicated hosting.
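
As an illustration of that automation, here is a toy version of the kind of scaling rule a cloud platform can evaluate on your behalf. The thresholds and step sizes are invented for this example; real platforms let you configure their equivalents.

```python
# Toy scaling rule of the kind a cloud platform can apply for you.
# The thresholds and step sizes are invented for this example; the
# point is that capacity changes are a rule plus an API call, not a
# hardware purchase.
def desired_server_count(current: int, cpu_utilization: float) -> int:
    if cpu_utilization > 0.80:               # scale out under load
        return current + 2
    if cpu_utilization < 0.20 and current > 2:
        return current - 1                   # scale in, keep a floor of 2
    return current

print(desired_server_count(current=4, cpu_utilization=0.91))  # -> 6
print(desired_server_count(current=4, cpu_utilization=0.15))  # -> 3
```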

In the end, evaluating business needs and grasping distinctions between the different services are the best ways to get the right type of IT outsourcing. —ERIN WATKINS

SUMMING IT UP | Cloud

[Charts: “How many years will it take for cloud services to mature?” (responses ranging from under 1 year to 4 years) and “Who is driving cloud innovation?” (service and product developers, public and customers, IT organizations, line-of-business users, consultants and analyst community; shares up to roughly 40%). N=252; source: Cloud Security Alliance and ISACA 2012 Cloud Computing Market Maturity Survey Results.]

PITCHING TO THE C SUITE

Six Questions Your CIO Will Ask About the Cloud

CLOUD COMPUTING, MOBILE devices and the rise of consumer-users have put increasing pressure on IT departments to deliver the services that users are clamoring for. But IT faces a quandary. How can it deliver on-demand services without opening the floodgates to uncontrolled IT environments?

“The amount of consumer access to technology is way out of control for enterprise IT,” said Carl Brooks, an analyst at the 451 Group in Boston. “IT needs to come to grips with all these intersecting trends. A CIO might release an email saying, ‘The company will not support iPads,’ and a sales guy might email back 10 minutes later, ‘But I just bought 3,000 of them.’”

For CIOs, this trend convergence presents a particular challenge. IT pros in the C suite rely on their IT staff in the trenches to make informed purchasing decisions. Those IT managers need to give their CIOs on-the-ground advice on the implications of the cloud, which can have a major impact on so many aspects of a virtual infrastructure.

Cloud and virtualization architect Bob Plankers offers some advice for IT managers on how the CIO conversation could go.

1 CIO: Why should we consider a private cloud?

IT manager: The whole idea behind a private cloud is centralization. Being able to control the environment, keeping it patched, automating activities within it, adding users—it’s a big thing. I’m not saying it won’t be a struggle. But it’s not so much about technology as it is about corralling people. It’s all about people—people fighting for the status quo—and process.

Some of our departments are likely to resist with objections like, “Hey, we like our own mail server. We have our own permission models, and our own groups.” We need to work it out so that they can still have those benefits, but have these apps moved into central IT’s server where they can be more secure.

2 Can using a public cloud really save us money?

To be honest, it depends, and using a public cloud might not be cheaper for us. We haven’t reached capacity on data center space, so it’s not as though using the cloud will make up for being over capacity internally. So we need to determine: What would it actually cost? Will workloads perform well? What is a provider’s support like?

The truth is you can get anything you want in terms of the public cloud, but how much are we willing to pay—and is it then worth it to go to the cloud? We should also consider our expertise in-house to accomplish some of these objectives ourselves.

That’s on the public cloud side. But a private cloud is a no-brainer to keep certain workloads in-house and get the benefits of automation and self-service.

3 What about security in public clouds?

There’s no denying that in the public cloud, we’re in a shared system. Part of the fear about the cloud is that you don’t know who your neighbors are or if they are behaving poorly. Encryption for your data in storage with a cloud provider—that’s important.

We also need to make the determination about how important the data is. We’re not storing credit card numbers, but we are storing weblogs and possibly some employee information.

4 What about downtime and outages?

We need some downtime to patch, and we’re not an organization that has to design systems to avoid any downtime. But, like a retail organization, we need to know how much even five minutes of downtime a year will cost us and negotiate service-level agreements to avoid intolerable amounts.

We’re not FedEx. For every five minutes the FedEx website is down, the company loses $1 million. At the same time, we want several nines of uptime. Every nine you add to 99% gets exponentially more expensive. Achieving four nines is no big deal in our data center, including monthly patches. But getting to that fifth nine can be really expensive.
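
The arithmetic behind those nines is easy to check; here is a short Python sketch, where the 365.25-day year is the only assumption.

```python
# Allowed downtime per year for each additional "nine" of uptime,
# assuming a 365.25-day year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(2, 6):                    # 99% through 99.999%
    availability = 1 - 10 ** -nines
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime:,.1f} minutes down/year")
```

Four nines works out to roughly 53 minutes of downtime a year; the fifth nine shrinks that allowance to about five minutes.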

5 Are we ready for this kind of transition?

How ready do we need to be? If we don’t start now, when will we? Let’s pick something to move to the cloud and start small. It’s an iterative process. If you want to automate configuration management, let’s look at configuration management tools.

We may not have exactly what we need or want, but there’s a ton of open source out there as well as commercial, vendor-supported technologies for configuration management, for example.

6 How long would a cloud move take?

For us—like any decent-sized organization—it is quite a process to move all data center resources. There are interdependencies between systems—you’ve got security. And of course, working with vendors is an interesting proposition. They are trying to sell you things to address perceived or real problems in the cloud.

There are a ton of mechanisms and tools out there. Just taking a look at what we might need and evaluating our environment will probably take six months. Having an idea of the problem you want to solve will help you create a timeline. A year is probably more reasonable. We should think about the entire process in terms of years, not months.

—LAUREN HORWITZ

NEWS IN REVIEW

Amazon Reaches Out to Enterprise IT at re:Invent

AMAZON WEB SERVICES’ first-ever trade show was a good first step, experts say, but there’s still work to do to appeal to an enterprise IT audience.

A show like re:Invent 2012 signaled that Amazon Web Services (AWS) wants to win enterprise business. In many ways an amalgamation of other tech conferences, especially VMworld, the show gave enterprise IT pros and analysts the kind of conference experience they expect.

And Amazon made it clear that it’s gunning for enterprise customers.

During breakout sessions, Amazon officials worked hard to dispel some of the impressions of AWS held by enterprise IT pros, particularly when it comes to security and compliance. Several security sessions—the most popular among them the detailed overview of AWS security provided by Amazon CISO Stephen Schmidt—were delivered before a standing-room-only crowd.

“This is what we do,” Schmidt said of large-scale data center security operations. “And we do it all the time.”

From the keynote stage, AWS execs talked trash about “old guard” enterprise IT vendors, contrasting the low-margin, high-volume business model of AWS with the high-margin business model of competitors.

WOOING ENTERPRISES WITH DATA WAREHOUSING

Announced at re:Invent, Amazon’s new Redshift data warehouse is also a direct appeal to enterprises, and the announcement generated plenty of interest at the show.

Redshift is currently available as a preview from AWS. Amazon claims the platform can scale automatically to petabytes in size and will compete with IT’s “old guard” vendors on price. Typically, enterprises pay $19,000 to $25,000 per terabyte of data per year with traditional data warehouses, according to statistics gathered by analyst firm ITG in June 2011. Amazon’s data warehouse can run for as little as $1,000 per terabyte per year, executives said.

If Redshift works as advertised, this may be pricing the enterprise can’t refuse, according to David Linthicum, CEO of Blue Mountain Labs.

“It’s going to be very difficult for data warehouse managers to walk into their boss’s office and make the argument that they shouldn’t go to cloud when it’s one-tenth the price,” he said.
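
A quick sanity check on those numbers, using the figures quoted above; the 100 TB warehouse size is purely illustrative.

```python
# Comparing the per-terabyte figures quoted above. The cost inputs
# come from the article; the 100 TB warehouse size is illustrative.
traditional_cost = (19_000 + 25_000) / 2  # $/TB/year, ITG June 2011 range
redshift_cost = 1_000                     # $/TB/year, Amazon's claim

warehouse_tb = 100
savings = (traditional_cost - redshift_cost) * warehouse_tb
print(f"Estimated savings on {warehouse_tb} TB: ${savings:,.0f}/year")
print(f"Redshift at roughly 1/{traditional_cost / redshift_cost:.0f} the price")
```

At the low end of the ITG range, Amazon's claimed pricing is about one-nineteenth the traditional figure, which is where the "one-tenth" shorthand comes from, conservatively stated.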

Amazon further sweetened the deal by announcing a 24% to 28% price cut for its Simple Storage Service, or S3.

But there are other places where the enterprise and AWS are ships passing in the night. Netflix, for example, was heavily represented at the show, with a keynote by CEO Reed Hastings and other representatives heading up at least a half-dozen sessions. But Netflix’s problems are not enterprise problems, experts say.

“When they roll Netflix out, I always roll my eyes,” Linthicum said. “That’s great if I want to do a video streaming business and compete with Netflix. … But it’s not a large-scale enterprise like an ExxonMobil or GM running core processes on AWS.”

Amazon executives further departed from the beaten enterprise path when they offered a withering dismissal of the private cloud concept, which is often embraced by enterprises.

Finally, a keynote by Werner Vogels, CTO of Amazon.com, seemed to indicate an assumption on Amazon’s part that enterprises will simply re-architect their applications to suit AWS, defining such apps as “21st-century architectures.”

“At Amazon, we are no longer constrained by [physical] resources,” Vogels said. “So the new world is that you can build architectures that are unconstrained by resources, except maybe the speed of light.

“If you come back in five years, we may have a solution for that as well,” Vogels quipped.

SAME PLANET, DIFFERENT UNIVERSE

But a full-blown data warehouse is a far cry from where most enterprises are with public cloud deployments today. Instead, enterprise users are asking for live migration, which VMware has offered for years but which is still distant (if planned at all) for AWS.

Before and during the show, enterprise users said they could use more documentation and reference architectures from Amazon for a variety of applications, especially if they have to overhaul apps in order to migrate to the AWS cloud. If this was specifically addressed at re:Invent, it wasn’t driven home to the same extent as other messages.

Overall, AWS remains a do-it-yourself business, the Home Depot of cloud computing providers rather than the IKEA enterprises would like to see before they invest. The company has a highly recognizable name and offers plenty of raw materials, but today, users still have to find their own way toward what they want to build.

—BETH PARISEAU

In the Mix

In Hot Water

By Bob Plankers

MY FATHER USED to joke that his fashion sense was trendy every 20 years. Not long after he made that comment, the grunge movement brought his flannel shirts and worn jeans back to the mainstream.

Data center cooling might follow a similar pattern: As data center densities increase, liquid cooling is gaining favor as an efficient technique. It was in style during the 1970s with mainframes and during the 1980s in Cray supercomputers, and it’s coming back in the form of specialized racks and direct liquid-cooled servers from IBM and others.

Why the resurgence of liquid cooling? When it comes right down to it, air is a poor heat conductor. Much of the heat carried away from a server is actually transported by the water vapor in the air. That’s partially why the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) specifies minimum relative humidity and server operating temperatures.

And, to cool effectively, you need to start with colder air, which takes energy to produce. If you can’t make the air cooler, you need to move it faster, and fans consume more energy.

This is tremendously inefficient. The power usage effectiveness (PUE) metric, developed by The Green Grid, measures the ratio of a facility’s total power draw to the power delivered to its IT equipment. The Uptime Institute found that the average data center has a PUE of 1.8. So for every 1.8 watts consumed by the data center, only 1 watt reaches the computing workload.
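
In code, the metric is a simple ratio; a minimal sketch using the figures from this column:

```python
# PUE is the ratio of total facility power to the power that actually
# reaches the IT equipment; 1.0 would mean zero overhead.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1.8, it_equipment_kw=1.0))    # average: 1.8
print(pue(total_facility_kw=1.018, it_equipment_kw=1.0))  # eBay module: 1.018
```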

THE TWO-PRONGED APPROACH

So how do we drive up efficiency and drive down costs? First, vendors must increase servers’ operating temperature ranges. For example, Dell’s Fresh Air initiative has optimized several server models to run at 80 degrees Fahrenheit, the top end of ASHRAE’s temperature recommendations.

Second, push liquid cooling. Many data centers are skipping computer room air conditioners. Instead, they’re installing racks with integrated cooling and planning for liquid cooling directly to servers. eBay’s Phoenix data center accommodates in-row, rear-door or direct liquid cooling, even for the modular data centers on the roof. The online auctioneer boasts an unprecedented PUE of 1.018 for individual liquid-cooled rooftop modules in the winter, an overall year-round PUE of 1.35 and the flexibility to add liquid-cooled servers indoors.

Do we even care about these efficiencies in the face of the cloud? Of course we do. And while some organizations have moved to public clouds, many others are keeping their data centers. But liquid cooling isn’t something that people consider often, because moving air to cool servers is so traditional. Like my father’s plaid flannel, liquid cooling’s time has come again.

BOB PLANKERS is a virtualization and cloud architect at a major Midwestern university. He is also the author of The Lone Sysadmin blog.

From the Front Lines

Do Your Homework on Your Cloud Provider

By Steve Gunderson

THE BENEFITS AND capabilities of the cloud are by now well-established. The cloud industry offers a continuum of different choices, ranging from pure infrastructure to infrastructure platforms integrated with services. When comparing cloud providers, customers can fairly easily evaluate the service capabilities, technical features and benefits of each, as well as any niche services.

But, before you build a decision matrix and fill it with criteria such as pricing, features and benefits, carefully qualify each vendor’s base critical infrastructure (backup generators, UPS, cooling infrastructure, security). Here are a few questions to help you look under the cloud provider’s hood:

• Does the base critical infrastructure have the redundancy you need to meet your service-level agreements?

• What visibility will you have into the performance of the critical infrastructure?

• Does the provider have the right skills?

At our consultancy, we’ve observed surprising issues with critical infrastructure. Many providers advertise their highly redundant architectures, but we’ve found many potential risks. Some observations from the front lines have been particularly startling. A facility advertising N+1 backup power generation, for example, had only a single generator. Another described an N+1 uninterruptible power supply (UPS) infrastructure—but used only a single UPS. In another case, a facility was owned and operated by a team with no previous experience managing a commercial data center, and no experience whatsoever in managing critical infrastructure. Others promise “fully redundant network connectivity” but deliver services via a “collapsed ring” (i.e., with no physical diversity, so a single backhoe could completely disconnect the facility). In an online world, where both revenue and reputation rely on 100% IT uptime, moving into the cloud without knowing and managing the risk could be a recipe for disaster. So you need to be sure you know what’s going on inside that cloud, too.

The benefits and capabilities of the cloud are so compelling that cloud services will continue to play an increasing role in companies’ hybrid (private, colocation, cloud) compute platform infrastructure. Customers will choose the cloud for a multitude of reasons, including flexibility, speed of deployment and disaster recovery.

But if your expectations for cloud services require the reliability and availability of concurrently maintainable infrastructure, avoid simply buying a brand and relying on vendor claims. To avoid nasty surprises down the road, understand the risk profile of the base critical infrastructure of your potential provider. It’s reasonable to consider your cloud services as transient, but think of your relationship with the provider as a long-term one.

STEVE GUNDERSON is a principal at Transitional Data Services.

End-User Advocate

Windows 8? No Thanks!

By Brian Madden

NOW THAT WINDOWS 8 has been released, every enterprise IT department is playing the “If, when, how” game to determine how Windows 8 will be rolled out. So, for enterprise IT folks, I’d like to add another data point to your calculations: The perspective of end users is, “No thanks! Seriously, we’re fine with Windows 7. Don’t go to Windows 8 on our behalf.”

While there’s debate in the press about how much Microsoft is spending to advertise the Windows 8 launch, we consumers can see that the budget is huge. Billboards, TV commercials, online banners, bus wraps and full-page magazine cover wraps throw Windows 8 in our faces. And yeah, every Windows-based computer or tablet that we get this holiday season will run the operating system, but that doesn’t change our core position on Windows 8 at work, which is, “No thanks—really. We’re good!”

From a user standpoint, the biggest change in Windows 8 is the new touch-based tile start screen, which replaces the traditional desktop. While it looks awesome on a tablet, it doesn’t make sense on our existing devices, which are all keyboard- and mouse-based laptops and desktops. Please don’t inflict the tile world on us!

STILL LOVIN’ APPLE

When it comes to tablets, we love iPads. End users want to remind you that even though Microsoft is pushing the familiarity of Windows for its Windows 8 tablets, the reality is that none of our familiar Windows desktop applications are meant to be touch-based. So if you think that we want to touch our laptop screens to use our traditional applications, remember that we didn’t like the Windows XP tablets you gave us in 2001, and we’re not going to like the Windows 8 ones now.

And don’t try to give us the lightweight Windows RT tablets because they feel like iPads. We’ll remind you that they don’t run traditional Windows desktop applications and that we’d rather just have the iPad. (But don’t worry: We’ll still gladly let you manage our iPads with enterprise mobile application management software. And we’re happy using Quickoffice on them—there’s no need to wait for the “real” Microsoft Office for iOS.)

The other reality is that the many Windows 8 “enterprise” improvements aren’t tied to Windows 8 at all. If you want to move us to Office 365, Office 15, SharePoint and SkyDrive, I can speak for all your users when I loudly shout, “Yes! Please do it!” But of course all of these applications work wonderfully on Windows 7. So don’t get swept up in the hype and make us use Windows 8 just to get the cool new Office productivity features.

The bottom line is that we recognize that we live in a Windows world. We’re fine with that. But that world can stay on Windows 7 until 2020. We’re fine with that too.

BRIAN MADDEN is an opinionated, supertechnical, fiercely independent desktop virtualization and consumerization expert. Write to him at [email protected].

NEXT-GENERATION INFRASTRUCTURE

The Data Center of the Future

It’s more science and less fiction: Next-generation data centers will adopt and refine many of the technologies that have already proven their value.

By Stephen J. Bigelow

SARA STOPPED FOR a moment to sip her morning coffee and watch the live feed of a stunning sunrise over the Sierra Nevada mountain range displayed across the viewing wall in her office. It was a welcome distraction from her 12-hour shift in corporate’s IT management bunker deep below New Mexico’s desert. The moment’s peace was broken by a tweeting alarm as her virtual assistant flickered to life.

“I’m sorry to interrupt you, Sara,” the hologram said.

“What is it, Ivan?” Sara snapped, dabbing spilled coffee from her blouse. “Is that router backbone in Newark saturated again?”

“No, Sara. There is a critical power alarm at the Reykjavík facility.” The hologram gestured toward the viewing wall, opening a real-time diagram of the power grid and another detailed utility map of Iceland’s capital city. “Atlantic storms have caused a catastrophic fault in local power distribution. The city’s utility, Orkuveita Reykjavíkur, reports that repairs may take up to 24 hours.”

Sara looked at the diagrams and grimaced. “What’s the workload status?”

“All 78,135 workloads were automatically migrated to other regional facilities in Edinburgh and Copenhagen. No data loss at this time. However, network load patterns are high for this time of day.” The hologram paused, processing possible alternatives. “I recommend switching to local cogeneration.”

“How long will it take the Tokamak to fire up?” Sara asked, recalling her short course on cold-fusion physics.

“It will take 30 minutes to bring the fusion unit online and restore the workloads to Reykjavík.”

“Take care of it, Ivan,” Sara said. “Let me know when Reykjavík is back up, and give us hourly status updates until main utility power is restored.”

“Thank you, Sara,” the hologram said, flickering out of sight.

Sara sat back, took another sip of coffee and called up the company’s global IT facility reports on the viewing wall. “What a way to start a Monday.”

ABSOLUTE POWER

When it comes to IT and data centers, it’s difficult to predict the future. But a hologram-operated management interface is hardly the next technology on the horizon. Still, technology has moved much faster than anyone could have imagined, and advances promise to continue over the next decade. Data centers will depend on these improvements to ensure adequate power, cooling and approaches to facility design. Let’s consider the steady—though hardly revolutionary—refinements that industry observers expect in power, cooling and facilities.

Today’s data centers demand more power than ever. Even when companies shift some computing needs to outsourcing or cloud providers, IT still proliferates far more business workloads, as well as mobile and online services—and there’s no end in sight.

The good news is that data center power demands are moderating. In 2007, Stanford professor Jonathan Koomey predicted that data center power consumption would increase 100% from 2005 to 2010. In reality, the increase has been only about 36%, attributable primarily to economic conditions and the broad adoption of server virtualization.

• Servers lighten their load. Server designs are increasingly energy-efficient and able to operate reliably at higher data center temperatures. It’s not just a matter of higher utilization. Servers are now more energy-aware, able to mitigate power use when workloads are idle. Previous generations of servers may have used 400 watts but still consumed 60% to 70% of that power even without performing any useful work, according to John Stanley, senior analyst for data center technologies at the 451 Group. “We’ve improved energy efficiency a lot, and newer generations of 1U commodity servers are improving,” he said. “Now an idle server might only use 25% to 50% of [its] total power.” Next-generation servers will use only a fraction of their total power when idle and actively power off when unneeded.
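
Putting rough numbers on that improvement, using the article’s 400-watt server and the midpoints of the quoted ranges (the midpoints are our simplification):

```python
# Idle draw in watts for the article's 400 W server, using the
# midpoints of the quoted ranges (the midpoints are our simplification).
nameplate_watts = 400

old_idle = nameplate_watts * 0.65   # older servers: 60-70% of peak at idle
new_idle = nameplate_watts * 0.375  # newer 1U servers: 25-50% of peak

print(f"Older generation, idle: ~{old_idle:.0f} W")
print(f"Newer generation, idle: ~{new_idle:.0f} W")
print(f"Idle draw cut by roughly {1 - new_idle / old_idle:.0%}")
```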

Servers are also evolving to be purpose-built rather than general-purpose, which enables the most energy-efficient utilization, said Pete Sclafani, CIO at 6connect, a provider of network automation services in San Francisco, Calif. “SeaMicro, AMD and Intel are introducing ARM processors and tailoring servers for specific purposes rather than throwing a Xeon at every computing problem,” he said. Chassis changes will further differentiate servers for specific purposes, such as using servers with lots of DRAM slots for high levels of virtualization.

Virtualization and server designs will also have a profound influence on data center cooling needs. With fewer, purpose-built servers able to sustain higher environmental temperatures, there is simply less heat to remove, and this reduces the amount of power needed to drive the cooling systems (whether mechanical HVAC, chiller pumps, heat exchanger fans or other technologies).

• Power distribution gets efficient. Reducing losses within a facility’s power distribution network is another potential improvement. Utility power enters a data center at high voltages that are stepped down to much lower voltages before being distributed to server racks and backup power systems. Each time utility voltage is converted, there are inevitable energy losses. “Some folks will run 480 volts AC all the way to the racks,” Stanley said, versus 240 or even 120 volts AC. “There is less conversion and less wiring.” Large data center operators such as Google have even evaluated the distribution of DC power directly to servers and other equipment, which would eliminate conversions, and over the next decade this may also become more commonplace to optimize energy use.
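
A small sketch of why removing a conversion stage matters: losses compound multiplicatively. The per-stage efficiencies below are invented for illustration, not measured figures.

```python
# Why fewer conversion stages help: losses compound multiplicatively.
# The per-stage efficiencies below are invented for illustration.
def delivered_fraction(stage_efficiencies):
    """Fraction of utility power that survives a chain of conversions."""
    fraction = 1.0
    for efficiency in stage_efficiencies:
        fraction *= efficiency
    return fraction

# e.g., step-down transformer -> UPS -> PDU transformer -> server PSU
four_stages = delivered_fraction([0.96, 0.94, 0.97, 0.90])
three_stages = delivered_fraction([0.96, 0.97, 0.90])  # one stage removed

print(f"Four stages:  {four_stages:.1%} delivered")
print(f"Three stages: {three_stages:.1%} delivered")
```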

• Power costs, availability and alternatives grow. Although Koomey’s report says that data centers currently demand only about 2% of total global energy, the cost and availability of power is a growing concern as the power grid continues to age and governments implement more aggressive carbon emission standards. “Power can be an issue in regional markets like Manhattan or San Francisco,” Stanley said. “Bringing an extra 10 megawatts into New York may be a problem. Businesses will ... deploy a larger number of smaller data centers in second-tier locations with inexpensive power and good connectivity.”

Experts say that cogeneration—generating electricity on-site to replace or supplement utility power—with existing technologies won’t address all data center power problems. For example, solar panels are inefficient, wind is unpredictable and inconsistent, fuel cell hydrogen takes a great deal of energy to produce in the first place, and we’re nowhere near practical fusion reactors. Rather than invest in cogeneration sources on-site, data centers of the future will likely supplement power needs (or offset high utility prices) by contracting with an alternative energy provider such as a local wind or solar farm.

IDEA IN BRIEF

OVER THE NEXT decade, data center power and cooling technologies won’t undergo major transformation, but rather incremental advancement. Next-generation data centers have an opportunity to improve energy efficiency through various approaches, including the following:

• Lighter-load servers that are purpose-built rather than general-purpose;

• More efficient power distribution and new mixtures of cooling technologies, such as free cooling, evaporative cooling and a small chiller;

• More appropriate data center site selection and moves toward building a greater number of smaller facilities;

• A gradual move toward more elevated data center temperatures;

• New cooling improvements, such as variable-speed motors; and

• More realistic data center power usage targets.

“Look for more complex mixtures of power and how it’s delivered,” Sclafani said. He suggested that some data centers may adopt the Kyoto wheel (see “Kyoto Wheel Cooling,” below) for short-term power ride-through (the ability for equipment to continue operating throughout a utility outage), perhaps supported by a natural gas power plant for long-term power production and further supplemented by solar panels on the roof.

• Power efficiency metrics should matter. The greatest problem for future data centers is adopting meaningful metrics to gauge energy efficiency. Current metrics like power usage effectiveness (PUE), carbon usage effectiveness (CUE) and water usage effectiveness (WUE) are well-established, but none measures the efficiency of IT.

Experts agree that the ultimate metric would be an objective measurement of “useful IT work per watt.”

KYOTO WHEEL COOLING

[Figure: This type of heat wheel separates outside and inside air effectively, making it appropriate for data center use. Source: Chatsworth Products.]

The problem is defining what constitutes useful IT work. The “useful IT work” performed by a scientific computing community differs from that performed by Web providers, financial services companies and so on. This concept requires a move from comparative metrics to a focus on internal metrics: defining what works for a specific organization (or application) and basing the metric on that need.
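
A sketch of what such an internal metric might look like. The unit of useful work and the sample numbers are hypothetical, which is precisely the article’s point about organization-specific definitions.

```python
# Sketch of an internal "useful IT work per watt" metric. The unit of
# useful work (a served request, a completed batch job) is whatever
# the organization defines it to be; the sample numbers are made up.
def work_per_watt(useful_work_units: float, avg_power_watts: float) -> float:
    return useful_work_units / avg_power_watts

this_quarter = work_per_watt(useful_work_units=9.2e9, avg_power_watts=120_000)
last_quarter = work_per_watt(useful_work_units=8.1e9, avg_power_watts=125_000)
print(f"Quarter-over-quarter change: {this_quarter / last_quarter - 1:+.1%}")
```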

KEEPING OUR COOL

Part of the energy used to power servers and other equipment is shed as heat rather than computing work, so data centers face the problem of eliminating this heat before it damages equipment.

While this reality of data center operation won’t change, the cooling demands of computing equipment, and the technologies used to achieve that cooling, have changed radically. Experts don’t predict revolutionary new cooling technologies in the years ahead, but rather continued refinement and broad adoption of alternative cooling systems.

• A mix of cooling technologies emerges. Future data centers cannot rely exclusively on mechanical refrigeration, but it’s unlikely that computer room air conditioning (CRAC) units will disappear in 10 years’ time. Stanley expects future data center designs to feature a mix of traditional and alternative cooling technologies. “Free cooling is not an all-or-nothing strategy,” he said. “I’ve seen some data centers use free cooling, evaporative cooling and a small chiller for the last few degrees.”

• More appropriate data center locations materialize. Environmental cooling alternatives can be adopted in almost any location, but to reduce the use of mechanical refrigeration, next-generation data center sites must be selected to accommodate environmental cooling. For example, it’s possible to build a data center in the desert, but chances are that air-side economizers will run only during evening hours—leaving energy-guzzling CRAC units to run during the day. In the coming decade, businesses will opt to build next-generation data centers in cooler climates, near sources of cold water, beneath suitable geologies or where other environmental cooling features are accessible.

• Operating temperatures rise. It’s simple logic: You can reduce cooling needs with equipment that doesn’t need as much cooling in the first place. In 2011 the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) updated its 2004 document “Thermal Guidelines for Data Processing Environments” and outlined two new classes of data center equipment capable of sustained operation at elevated temperature and humidity levels. For example, ASHRAE Class A4—the current highest class—computing equipment will operate from 5 degrees to 45 degrees Celsius at 8% to 90% relative humidity.
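
Those Class A4 bounds are easy to encode as a simple sanity check; this sketch simplifies the full ASHRAE class table to just the two limits quoted above.

```python
# Checking sensor readings against the ASHRAE Class A4 bounds quoted
# above (5-45 degrees C, 8-90% relative humidity). The full class
# table is simplified to just those two limits here.
A4_TEMP_C = (5.0, 45.0)
A4_RH_PCT = (8.0, 90.0)

def within_class_a4(temp_c: float, rh_pct: float) -> bool:
    return (A4_TEMP_C[0] <= temp_c <= A4_TEMP_C[1]
            and A4_RH_PCT[0] <= rh_pct <= A4_RH_PCT[1])

print(within_class_a4(temp_c=32.0, rh_pct=55.0))  # True: warm but allowed
print(within_class_a4(temp_c=47.0, rh_pct=55.0))  # False: above the A4 limit
```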

Still, the move to higher operating temperatures (and the reduction in cooling needs) is not a single event; it is a process that will take data centers through the next decade. Robert McFarlane, principal at Shen Milsom & Wilke, a consulting and design firm, noted that server vendors have started marketing servers that accommodate higher ASHRAE classes, but operators need all the enhanced equipment in place before higher data center temperatures can be allowed. Replacing hundreds (or even thousands) of servers for high-temperature environments may take several technology refresh cycles so professionals can get comfortable with elevated data center temperatures.

McFarlane said that if the enhanced ASHRAE classes are universally adopted by server makers, higher operating temperature capabilities may become a standard feature of next-generation servers—eventually allowing all future data centers to adopt elevated temperatures.

• Incremental cooling improvements continue. Despite the absence of new cooling alternatives on the horizon, next-generation data centers can deploy improvements to established cooling systems and make the most of alternative cooling methodologies.

For example, McFarlane noted the importance of using variable-frequency drives in cooling systems. A great deal of mechanical equipment operates motors at a fixed speed—the motors are on or off—but introducing variable-speed motors can save significant energy. Water-side economizers, for instance, rely on pump motors to move water and air, and it may save energy to run those motors at lower speeds. In many cases, retrofitting existing pumps and motors with variable-speed units and controls can be cost-effective for renovation projects.
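
The savings follow from the standard pump and fan affinity laws, under which shaft power scales with roughly the cube of speed; that cube law is general engineering practice, not a figure from the article.

```python
# Pump/fan affinity law: shaft power scales with the cube of speed.
# This is general engineering practice, not a figure from the article.
def relative_power(speed_fraction: float) -> float:
    return speed_fraction ** 3

for pct in (100, 90, 80, 60):
    print(f"{pct}% speed -> {relative_power(pct / 100):.0%} of full power")
```

Running a pump at 80% speed, for instance, draws only about half of full power, which is why variable-speed retrofits pay off.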

Proper airflow will be a critical attribute of future data centers. Stanley said that careful airflow evaluation, containment and leakage management will remain effective for all but the highest-density systems. “You can cool up to 20 to 25 kW per rack with proper airflow,” he said, noting that most racks run at 5 to 10 kW. “That’s a much higher density than we see today.”

Another incremental improvement is a closer coupling between the cooling medium and servers. For example, rather than using chilled water to cool the air introduced to a data center, the water can be circulated to server racks or even to the processor chips within servers. While these concepts are hardly innovative, experts agree that reducing the distance between the cooling source and cooling targets is an important strategy for future cooling efficiency.

Perhaps the most futuristic cooling scheme is full-immersion cooling—submerging the entire server in a bath of chilled, nonconductive medium such as oil. In September 2012, Intel Corp. announced the successful conclusion of a yearlong test that submerged an entire server within a chilled coolant bath. With “direct contact” cooling, the coolant basically becomes the heat sink, allowing even higher clock speeds for greater computing performance. (See “Blub, Blub, Blub ... Immersion-Cooled Servers” below.) “One major advantage here is ride-through,” said McFarlane. “With immersion, there is enough thermal mass to keep equipment cool until generators and backup systems come up.”

■ Operational sustainability is key. Next-generation data centers can significantly reduce power and cooling demands, and it is possible that traditional mechanical cooling may be all but removed. Still, there will always be a need to move heated air or pump water; cooling won’t become passive.

But as new facilities emerge to maximize the use of free cooling techniques such as air-side and water-side economizers, operators are pushing the limits of these technologies. Plan for growth, contingencies and problems in data center cooling. “Facility operators must be vigilant and rigorous,” said Vincent Renaud, managing principal at the Uptime Institute. “Operational sustainability is as important as efficiency.”

ON THE HORIZON: STILL BUILDING DATA CENTERS

Although proponents of the cloud may

predict an eventual move to universal outsourcing, data center experts note that data center construction is alive and well. Outsourcing will become a common practice to some extent, but it presents issues for corporate management that wants to ensure privacy, data security and exclusive control of its business’s future.

Consider cloud resellers that are simply selling services through a cloud that they don’t control any more than you do. “How many cloud providers are built on Amazon Web Services?” Sclafani said. “It may not even be the provider’s own cloud.” Every business that embraces outsourcing must ensure custody of its data regardless of how it’s outsourced.

BLUB, BLUB, BLUB ... IMMERSION-COOLED SERVERS

SCIENCE FICTION HAS long envisioned the massive computing systems of future spacecraft as totally submerged in a bath of cold liquid. It’s easy to understand why: Liquids can conduct heat much more efficiently than air—a principle of physics that has long been understood. However, computer designers have deliberately stayed away from liquid cooling techniques because common cooling mediums are either conductive or caustic—two characteristics that are absolutely detrimental to electronics.

But the fictional vision of submerged computer hardware may become a reality sooner than we imagined. In September 2012, Intel completed a yearlong study on server cooling using simple mineral oil, which conducts heat as well as water yet is not electrically conductive and does not corrode or damage the electronic components submerged within it.

Since mineral oil is also a fairly heavy and viscous material, a cooling bath provides protection against short-term loss of cooling power. Imagine a swimming pool: It takes a long time for the mass of pool water to warm up or cool down. If air conditioning or cooling fans fail, hardware can suffer the adverse effects of skyrocketing air temperatures in a matter of minutes.

But if the server is submerged in a bath of cold liquid, the liquid will continue to absorb heat for a considerably longer time in the event that the liquid’s refrigeration unit fails. This kind of ride-through behavior alleviates much of the worry experienced with traditional mechanical cooling disruptions.
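As a rough illustration of that ride-through effect, the sketch below estimates how long an oil bath’s thermal mass can absorb a fixed heat load before warming by a given amount. The tank size, load and allowable temperature rise are invented for illustration; they are not figures from Intel or Green Revolution Cooling:

```python
# Ride-through estimate: how long a mineral-oil bath can absorb a
# fixed heat load before warming by delta_t_k. Energy = m * c * dT;
# power in kW is kJ/s. All figures are illustrative assumptions.

OIL_DENSITY_KG_PER_L = 0.85   # typical for mineral oil
OIL_SPECIFIC_HEAT = 1.67      # kJ/(kg*K), approximate

def ride_through_minutes(volume_l: float, load_kw: float, delta_t_k: float) -> float:
    mass_kg = volume_l * OIL_DENSITY_KG_PER_L
    energy_kj = mass_kg * OIL_SPECIFIC_HEAT * delta_t_k
    return energy_kj / load_kw / 60.0

# Hypothetical 900-liter rack tank, 10 kW of IT load, 15 K allowable rise:
print(f"{ride_through_minutes(900, 10, 15):.0f} minutes")  # ~32 minutes
```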

And immersion cooling can be extremely effective. According to Green Revolution Cooling, which developed the immersion baths Intel used, immersion cooling can handle heat loads up to 100 kW for a 42U (7-foot) rack—far more than current air-cooled racks, which top out at roughly 12 to 30 kW.

But there is also a downside: Liquid coolant is just plain messy. Imagine having to remove a server from a liquid bath to troubleshoot or upgrade it. Technicians will need some type of protective garb and an entirely new protocol for working on immersion-cooled equipment. Spills from large baths can also create serious safety and cleanup problems.

This underlying issue is unlikely to change as governments continue to ratchet up privacy and regulatory constraints on business.

“A lot of people are building their own places for their own enterprise gear, applications and so forth,” Renaud said, noting that the traditional arguments for a build are holding steady—especially in highly regulated industries—and will probably drive future builds. “There seems to be a sense of pushback, but people are building.”

Renaud notes international resistance to outsourcing and colocation. “There’s a lot of data center builds going on in Russia now,” he said. “They are dead set against using colocation.”

■ Build a larger number of smaller facilities. The industry may find examples in massive data center builds like Google’s or Yahoo’s, but Sclafani notes that monolithic data centers may not be appropriate for most businesses. Instead, future data center designs are likely to be smaller and more distributed to maintain performance and move computing and storage closer to users. “For higher-performing applications with user interaction, you can’t have a data center five or six hops away,” he said.

■ The environment becomes more important. Site selection for a future data center will always include close examination of power costs, cooling requirements and local taxation, but environmental impact will become far more important to next-generation data center designs. Consider a data center that relies on free air cooling in concert with a traditional CRAC unit. Locating the new data center next to a busy freeway or a few miles from an industrial center may seem like a convenient choice, but the smog will quickly clog air filters and reduce cooling airflow (along with free air cooling’s effectiveness). This will cause the CRAC unit to work much harder and escalate energy costs while affecting energy-efficiency metrics like PUE (power usage effectiveness, the ratio of total facility power to IT power).
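PUE makes the penalty easy to see: it is total facility power divided by IT power, so every extra kilowatt the CRAC plant draws inflates the ratio. A tiny sketch with invented load figures:

```python
# PUE = total facility power / IT equipment power.
# Invented figures: clogged filters force the CRAC plant to draw
# more power for the same 500 kW of IT load.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    return (it_kw + cooling_kw + other_kw) / it_kw

print(f"clean filters:   PUE = {pue(500, 200, 50):.2f}")  # 1.50
print(f"clogged filters: PUE = {pue(500, 320, 50):.2f}")  # 1.74
```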

The growing use of carbon taxes will also encourage future data centers to more closely evaluate energy sources. The social and marketing “costs” of using electricity from fossil fuel sources may become much higher than the cost per kilowatt-hour.

A closer emphasis on environmental concerns will require more careful instrumentation and monitoring within the future data center. In the previous example, a reduction in inlet airflow rates without a change in fan speeds means that the air filters are clogged and require maintenance, as sketched below. “We have to get away from long, static replacement or maintenance schedules and tie facilities and IT closer together,” Sclafani said. “We have to make the buildings smarter.”
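The sketch below shows what such a rule might look like in a monitoring system. The sensor names and thresholds are hypothetical:

```python
# Hypothetical "smarter building" rule: airflow well below baseline
# while fan speed is steady suggests clogged filters, so open a
# maintenance ticket instead of waiting on a fixed schedule.

def check_filters(flow_cfm: float, baseline_cfm: float,
                  fan_rpm: float, baseline_rpm: float) -> str | None:
    rpm_steady = abs(fan_rpm - baseline_rpm) / baseline_rpm < 0.02
    flow_down = flow_cfm < 0.85 * baseline_cfm
    if rpm_steady and flow_down:
        return "Airflow down >15% at constant fan speed: inspect filters"
    return None

print(check_filters(flow_cfm=10_200, baseline_cfm=12_500,
                    fan_rpm=1_480, baseline_rpm=1_490))
# fires: flow is ~18% below baseline while fan speed is unchanged
```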

■ Design future data centers for actual usage. For decades, IT has vastly overprovisioned new data center designs to power and cool rack densities that, in most cases, operate at only a small fraction of the anticipated load. “The average rack in the data center [is] at 2 kW; that’s much lower than the 20 kW per rack planned for,” Renaud said, noting that tremendous amounts of capital can be saved through more careful analysis and capacity planning. “Press IT to know what is actually needed.”

Containerized data centers have grown a great deal, but experts say that this kind of computing does not offer the flexibility provided in a data center build. A container, however, may be an ideal choice to host a Hadoop cluster or supplement a facility facing a temporary computing crunch. In addition, companies that cannot afford to construct a complete facility up front (or don’t want to take the risk) can deploy a modular data center design in phases as needs evolve.

And finally, don’t allow IT or facilities to design your next data center. A business must find a developer with demonstrated expertise in data center design technologies—it’s a specialty and should be approached that way.

McFarlane notes that too many new builds use traditional developers who don’t fully understand how to deploy the latest cooling technologies or streamline airflow patterns for next-generation cooling systems. Don’t make this mistake. “We need to educate the industry that there are solutions much better than what is still done in data centers,” said McFarlane.

Future data centers will not be a fanciful science fiction romp; there is simply too much time, money and business activity to risk on unproven or experimental technologies. But many well-understood and established technologies today will undoubtedly influence your next build, reducing power demands and cooling requirements and offering a facility that best fits the mission of your business.

STEPHEN BIGELOW is senior technology writer in the Data Center and Virtualization group. He has written more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Find him on Twitter @Stephen_Bigelow.

DATA CENTER NETWORKING

Hurry Up and Wait on 40 GbE
The technology is here, but enterprises aren’t adopting it. What needs to happen to push the networking standard into prime time?
By David Strom

BELIEVE IT OR not, the 40 Gigabit Ethernet era is already upon us. The standard has long since been ratified, and products are shipping. But for the time being, 40 Gigabit Ethernet (GbE) is stuck in first gear.

As data centers virtualize more of their servers and storage, the need for speedy network connections increases. But something happened on the way from 10 Gigabit Ethernet to 40 GbE: IT departments are sticking with the status quo and are taking their time upgrading their connections.

A few reasons for the delay include existing wiring infrastructure, where these faster Ethernet switches are placed on networks, slower adoption of 10 GbE networks (which has been mostly on servers) and the preponderance of copper gigabit connections.

The standards for 40 GbE have been around for more than a year, and a number of routers, switches and network cards already operate at this speed. Vendors such as Cisco Systems, Dell’s Force10 division, Mellanox, Hewlett-Packard, Extreme Networks, Gnodal and Brocade offer such hardware.

The price per 40 GbE port is typically $2,500—about 500 times the price per Gigabit Ethernet port. For example, the Gnodal GS0072, a 2U-high, 72-port 40 GbE switch, sells for $180,000, which works out to $2,500 per port.

Analyst firm Gartner estimates that only one-fifth of network interfaces are running at 10 gigabits per second, and only about 1% are running 40 GbE. With such a small installed base, 40 GbE usage could easily double in a few years.

GOODBYE, CAT 6 CABLING

Before that happens, though, there is the question of wiring up all those 40 GbE ports. This is the biggest issue for 40 GbE networks. Previous versions of Ethernet could use standard Category 6 (Cat 6) copper wiring and RJ45 connectors, which have been around for decades and are readily deployed.

But not so for 40 GbE. “I don’t know any enterprise that is running 40 GbE now,” says Mike Fratto, an analyst at Current Analysis. “What is holding most people back is that new cabling is required, and there is no easy way to upgrade it in the field, either.”

The technology runs on Quad Small Form-factor Pluggable cabling, or QSFP—a high-density fiber connector with 12 strands of fiber. Unlike standard two-strand fiber connections, it isn’t “field terminated,” meaning that an electrician can’t hook up a QSFP connector on site. Data center managers need to determine their cabling lengths in advance and preorder custom cables that are manufactured with the connectors already attached. (See “What Does a 40 GbE Cable Look Like?” below.)

WHAT DOES A 40 GBE CABLE LOOK LIKE?

IF YOU THINK that 12 pieces of fiber in a 40 GbE connector are a lot, consider how much fiber 100 GbE network cables will need! High-speed networks represent a big jump in cabling complexity—a barrier to adoption, and a reason so many IT departments have stuck with copper connections for so long. It also means that any current investments in fiber cabling probably aren’t going to cut it for the next generation of high-speed networks.


“There is no way you can meet the [electrical and mechanical] tolerances with manually splicing 12 individual pieces of fiber,” said Kevin Cooke, manager of solutions architecture at Teracai Corp., a value-added reseller (VAR) in Syracuse, N.Y., that has installed several 40 GbE networks. “This fundamentally changes the way these 40 GbE projects are managed. You can’t buy a bulk spool and cut and terminate as you need it. This means you are going to pay more for premade cables, and you will have to measure these lengths more carefully, too.”

Cabling hurdles notwithstanding, implementing the latest and greatest high-speed networks can wreak havoc on networking teams and processes.

“The long-term future is in higher-speed Ethernet,” said Mike Pusateri, an independent digital media executive and a former technology executive at The Walt Disney Co. “But it isn’t a panacea, and you can’t just throw more or faster hardware at your network to make it run faster.”

At Disney, Pusateri wrestled with implementing faster networks to move gigantic 250 GB files around. When Disney first put in 10 GbE, the network actually ran slower: IT staffers found numerous network bottlenecks that needed resolution.

“You bump into things like immature network interface card drivers that can hang up your entire system,” Pusateri said. “It is like building an eight-lane superhighway with dirt off-ramps and interchanges. Unless you upgrade everything, it is totally useless. You have to look at the entire system and understand all the relationships.”

SERVER VIRTUALIZATION, STORAGE LEAD THE WAY

Despite these challenges, one place where 40 GbE will find a home is in data centers with very high-density virtualized servers. Think telecom service providers or cloud-based hosting vendors that are installing blade servers. These organizations use gear like Cisco’s Unified Computing System, or UCS, that runs dozens of virtual machines (VMs) on one box.

“Virtualization was the prime driver to 10 GbE, and it will be the prime driver for 40 GbE in the future,” said Eric Hanselman, an analyst at The 451 Group. “Once you start getting higher densities of virtual machines per server, you need to have the network capacity to match it.”


40 GBE PORT FORECAST

According to the Dell’Oro Ethernet Switching Report, “We expect that customers will gravitate to 40 GbE, not because they need all the bandwidth, but because two 10 GbE ports won’t be sufficient.” However, these numbers are somewhat suspect: The report counts every QSFP port split by cable into four 10 GbE ports as a single 40 GbE port, which really doesn’t show usage of the faster technology.

YEAR    NUMBER OF 40 GBE PORTS
2011    7,700
2012    225,000
2013    569,100
2016    5,273,200

SOURCE: DELL’ORO ETHERNET SWITCHING REPORT, AUGUST 2012

Indeed, the sudden uptick in 10 GbE-connected servers may take some data center networks by surprise. “All the next-generation servers have options for built-in 10 GbE networks now,” Hanselman said. “This has changed the pricing dynamic in servers, and as more of these come into corporations, the networking environments may not be ready for them. They may end up upgrading their server technology faster than their network core infrastructure.”

Another trend that is helping push 10 and 40 GbE forward is that storage area networks (SANs) are already running at these speeds with either InfiniBand or Fibre Channel over Ethernet. Numerous high-performance computing installations have adopted InfiniBand, and there are efforts to improve its performance and push it faster.

In addition, Internet hosting providers such as ProfitBricks.com have all-InfiniBand back ends for ultra-low-latency networks. As storage vendors come up with lower-latency connections, expect to see some competition with 40 GbE in the future.

It’s also possible that some data center managers may skip 40 GbE altogether and jump straight to 100 GbE.

“40 GbE is something I’ve been considering for our in-house SAN, although if I were investing now, I’d probably go straight for 100 GbE,” said Tony Maro, CIO of EvriChart, a medical records management company in White Sulphur Springs, W. Va.

THE TOP-OF-RACK UPLINK

One way to move into higher-speed networks is to use so-called top-of-rack switches, which aggregate just the servers in the cabinet below and connect over short distances to other top-of-rack switches.

Today, a typical configuration consists of gigabit connections to individual servers, then 10 GbE from each rack to the core. These are now being upgraded to 10 GbE server connections and 40 GbE to the core.

“We will see more 40 GbE uplinks with true 40 GbE optics begin to ramp more aggressively in the data center,” wrote Dell’Oro Group in a report on Ethernet switches.

The top-of-rack approach gets around having to rewire the entire data center and makes it easy to use precut QSFP optical cables. Teracai chose this method for one of its government clients.

“It was more cost-effective to do 40 GbE uplinks between switches than to use multiple 10 GbE uplinks,” Cooke said. They replaced 16 10 GbE uplinks with four 40 GbE links and saved $750,000 in connectors with no significant change in networking performance or throughput.
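The arithmetic behind that consolidation is straightforward: four 40 GbE links match sixteen 10 GbE links in aggregate bandwidth while occupying a quarter of the switch ports (at both ends of every link). A quick check:

```python
# Sanity check on the uplink consolidation: aggregate bandwidth is
# unchanged while link (and port) counts drop by a factor of four.

old_links, old_gbe = 16, 10
new_links, new_gbe = 4, 40

assert old_links * old_gbe == new_links * new_gbe == 160  # Gbit/s both ways

ports_freed = 2 * (old_links - new_links)  # each link occupies a port per end
print(f"aggregate: {new_links * new_gbe} Gbit/s; switch ports freed: {ports_freed}")
```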

As more 40 GbE equipment enters the market, expect this price difference to increase, making it even more cost-effective to aggregate uplinks. Many analysts have forecast that the cost of 40 GbE equipment will drop to much less than four times the price of 10 GbE in the near future—to the tune of hundreds of dollars per port.

When that happens, look for 40 and 100 GbE to take off. Until then, high-speed Ethernet will make only incremental inroads into the most demanding environments.

“We are seeing the same basic pattern as with Gigabit Ethernet,” said Current Analysis’ Fratto. “The fastest networks appear in the biggest peering Internet providers and with heavier financial trading applications, and don’t go mainstream until they are more commonly found on commodity servers’ motherboards.”

DAVID STROM has 25 years of experience as an expert in network and Internet technologies and has written and spoken extensively on topics including VoIP, email, cloud computing and wireless.

IT OPS

The Lean, Mean IT Operations Team
As the cloud gains prominence, business continuity may be taking on a new flavor.
By Alex Barrett

FOR BETTER OR worse, today’s IT operations team is different from the one you joined five, 10 or 20 years ago, with fewer people and resources managing an ever-increasing number of systems.

The most obvious culprit is the economy. In 2009 and 2010, IT budgets fell sharply. According to Gartner, they shrank 8.1% in 2009, and another 1.1% the year after. And while IT budgets started growing again in 2011, they are only at the level they were in 2005.

At the same time, fewer IT staffers are managing more systems. Gone are the days of the 25:1 system-to-system-administrator ratio; today that number is closer to 250:1.

“This is the new normal,” said Mike Sargent, general manager for enterprise management at CA, the management software vendor. “The effective demands on IT are going up exponentially, and it is under massive pressure to keep costs under control.”

But most IT teams have responded with ingenuity in keeping the lights on and the hard drives spinning. How are successful IT operations teams keeping afloat?

LESS IS MORE

On the systems and facilities side of things, IT teams have been consolidating with abandon. Virtualization has famously been the primary way for IT departments to keep up with demand while keeping a lid on costs. Likewise, large companies have consolidated multiple data centers into fewer centralized ones, slashing costs while also establishing consistent standards and better availability across far-flung geographies.

All that consolidation has had serious implications for IT operations staffs. One manufacturer went from having IT staffers in all its regional facilities to a shared service center model: Festo, a global industrial automation and pneumatics manufacturer with 15,500 employees in 15 countries, eliminated most local IT ops people in the move and regrouped them in one of three company data centers.

“We operated locally for a long time, but then we worked to regionalize and centralize IT operations,” said Steve Damadeo, Festo IT operations manager for the Americas.

For example, Festo used to run mail services in 60 countries; now mail is centralized in its European, North American and Asian data centers. Festo plans to further consolidate services like file and print into these regional data centers as well. “There’s no point in having 10 systems when I can do it with two,” he said.

Enterprise IT has also learned to streamline common tasks so they can be handled by IT operations teams in less-expensive parts of the world.

Mark Szynaka is a cloud architect at Network Strategies Inc., an IT consultancy in New York City, and has worked at several large Wall Street firms. There, his work consisted largely of automating network management processes so they could be handled remotely.

Using ITIL (IT Infrastructure Library) best practices, Szynaka said, the goal was to take a rules-based approach to discovering errors and correlating events, and to interface with in-house ticketing systems manned by company help desk staff sitting far from headquarters.

“Most of the day-to-day care and feeding is going to Texas and India,” Szynaka said—especially tedious tasks. Under this system, only those problems that cannot be easily reconciled “bubble up” to senior, in-house IT staffers, along the lines of the sketch below.
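A toy version of that rules-based triage might look like the following. The event shape, patterns and queue names are all hypothetical, not those of any particular ITIL toolchain or Wall Street firm:

```python
# Toy rules-based triage: known event patterns route to remote queues;
# anything unmatched "bubbles up" to senior in-house staff.
# Patterns, queue names and the Event shape are hypothetical.

import re
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    message: str

RULES = [
    (re.compile(r"link flap|CRC errors"), "ticket:network-l1"),
    (re.compile(r"fan (failure|degraded)"), "ticket:facilities"),
    (re.compile(r"disk .* predictive failure"), "ticket:storage-l1"),
]

def triage(event: Event) -> str:
    for pattern, queue in RULES:
        if pattern.search(event.message):
            return queue
    return "escalate:senior-oncall"  # unknowns bubble up by default

print(triage(Event("sw-edge-07", "CRC errors incrementing on Gi1/0/12")))
# -> ticket:network-l1, handled remotely without touching HQ staff
```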

Indeed, tiering IT operations tasks and teams has become increasingly common. The California health care giant Kaiser Permanente split its IT operations team in two a couple of years ago: The first is an offshore team of Kaiser IT employees focused on “red” and “green” systems availability using the IBM Tivoli Netcool/OMNIbus operations management console; the second is a critical application support team in the U.S. that proactively addresses performance problems using vCenter Operations, VMware’s performance analytics suite.

Creating the two teams was “a natural consequence” of implementing performance analytics, said Ian Dodd, Kaiser Permanente’s director for service delivery management.

“Availability monitoring is by definition reactive. A ticket comes in, they react,” Dodd said. Not so with possible performance problems.

“You can’t take a ticketing approach to a problem that may happen in a couple of hours; you have to give these guys time.”

But there are costs to distributing IT operations staffers around the globe. While this global village ethos makes it easier to hire for specific technical skills or language skills (to say nothing of staffing the third shift), many IT pros miss the old days.

“There are days when I think that it would be much easier if everyone were in the same room,” said Festo’s Damadeo. He laments the constant video conferences and struggles to tone down his fast-talking New Yorker style. And it’s tough to ensure that everyone is on the same page. “I spend a lot of time on communication,” he said.

AUTOMATE ALL THINGS

Ultimately, consolidation and centralization are a game of diminishing returns. Many organizations have hit a ceiling on how many systems they can virtualize or how many virtual machines (VMs) they can stuff on a server. Likewise, latency and bandwidth limitations curtail global organizations’ ability to centralize processing into a single data center facility, and staffs simply can’t get any smaller and still run effectively.

With these tactics maxed out, where does IT turn for greater efficiency?

In a word: automation. Like countless industries before it, IT operations is hard at work automating time-consuming and error-prone workflows in search of efficiency.

Cypress Semiconductor Corp. in San Jose, Calif., has IT operations distributed around the world and, over the past couple of years, has worked to automate onerous IT workflows such as creating and deleting employee email accounts. Before automation, the process used to take a week or more, said Venki Sundaresan, senior IT director, but it can now be accomplished in about one day, including obtaining all the necessary approvals.

Cypress uses the OpsOne IT process automation platform, a Software as a Service (SaaS) offering from Appnomic. The fact that OpsOne is delivered as SaaS makes it low maintenance for the IT team, and automating complex processes has freed the operations team for more strategic, value-added work, said Sundaresan.
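For flavor, here is a toy model of such a provisioning workflow, in which each step that once idled in someone’s inbox becomes a tracked task behind an explicit approval gate. The step names are invented; this is not Appnomic’s OpsOne API:

```python
# Toy joiner/leaver workflow: each step is tracked, and human
# approval gates block progress explicitly instead of sitting in
# email. Step names are invented; this is not the OpsOne API.

STEPS = [
    ("manager_approval", True),        # (step name, needs sign-off?)
    ("create_directory_entry", False),
    ("provision_mailbox", False),
    ("grant_group_memberships", False),
    ("notify_requester", False),
]

def run_workflow(user: str, approvals: set[str]) -> list[str]:
    log = []
    for step, needs_approval in STEPS:
        if needs_approval and step not in approvals:
            log.append(f"{user}: blocked at {step} (awaiting sign-off)")
            break
        log.append(f"{user}: {step} done")
    return log

for line in run_workflow("new.hire", approvals={"manager_approval"}):
    print(line)  # every step completes once the approval is recorded
```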

But many IT operations pros are resistant to automation, fearing that it will put them out of a job. That’s a fallacy, said John Allspaw, senior vice president of technical operations at Etsy.com, an online craft marketplace. “By that measure, Google would have 50 people working for them,” Allspaw said.

In fact, Allspaw has found the opposite to be true at Etsy: “The more automation we have, the more people I hire,” he said, as more efficient workflows open the door for IT staffers to pursue new projects.

Further, you can’t just automate a process and send staff on their merry way. “When things go wrong with automation, which they absolutely and always will, we need to have the people that wrote the automation to introspect it and to help repair it,” Allspaw said.

In particular, configuration management and provisioning have recently benefited from automation advances, Allspaw said. Etsy, for example, makes heavy use of Opscode Chef and Cobbler, a Linux installation server.
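What Chef-style configuration management buys, at bottom, is idempotence: resources declare desired state, and applying them converges the system rather than blindly re-executing. A toy model (not Chef’s actual DSL or API):

```python
# Toy model of Chef-style idempotence: declare desired state and
# converge toward it, so repeated runs change nothing once the
# system matches. Not Chef's actual DSL or API.

import os

def ensure_directory(path: str, mode: int = 0o755) -> bool:
    """Create a directory if absent; return True only if state changed."""
    if os.path.isdir(path):
        return False          # already converged
    os.makedirs(path, mode=mode)
    return True

changed = ensure_directory("/tmp/app-releases")
print("changed" if changed else "already up to date")
```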


But automation is no silver bullet, Allspaw conceded. “When automation is your hammer, then every problem looks like a nail,” he said. But not every problem needs automation as its solution. “The question is: Should we automate it, and how and where and why should we automate it?”

OUTSOURCED NATION

And when all else fails, you can always outsource the IT operations grunt work.

Take network monitoring. “It’s a big commitment. A lot of firms don’t have the manpower or the expertise and aren’t interested in hiring [them],” said Craig Schotke, manager for the monitoring services team at YJT Solutions, a tech consulting firm in Chicago that offers network monitoring services using CA Nimsoft Monitor.

In recent years, YJT has seen a surge of interest in outsourced network monitoring, initially from small emerging companies and more recently from established midsized companies. Oftentimes, they’ll have the network monitoring in-house but will choose to outsource the management of it.

“The trend we’re seeing is of companies reducing the size of their IT staff,” Schotke said. He believes that in the next few years, the majority of companies will have outsourced their email, and “that trend is going to continue with other services.”

In organizations born in the cloud, IT operations staffs are already practically nonexistent, said Network Strategies’ Szynaka.

Szynaka counts several companies in the media and entertainment industry among his customers, whose internal IT staffs consist largely of data managers and programmers—“the heart of the strategic differentiation for their business.” For infrastructure, they use Amazon Web Services extensively, combining turnkey Amazon Machine Images—content delivery networks, databases and Hadoop—into virtual private clouds created by Szynaka and his cohorts.

“We build [our customers] a virtual private cloud, then give them the keys and lock ourselves out,” Szynaka said. “They don’t need an operations team—they have me.”

ALEX BARRETT is the editor in chief of Modern Infra-structure. Write to her at [email protected] or tweet her at @aebarrett.

Are We There Yet?

In Defense of In-House IT
By Jonathan Eunice

THE RELATIONSHIP BETWEEN cloud computing and in-house IT can seem like the one between a rebellious teenager and an exasperated parent: frequently at odds, and not understanding each other.

As the upstart challengers, cloud proponents have aggressively promoted the model as the way forward. There’s nothing wrong with that—there’s no progress against the status quo without strong belief and a willingness to break prevailing assumptions. And it’s working. In the past five years, cloud and virtualization have scored win after win, completely resetting the bar for how flexible, dynamic and efficient IT should be.

But there’s also more than a little “If you’re not with us, you’re against us” cliquishness and “It’s cloud or nothing” bravado. The hyperbole tries to consign owned IT infrastructure—i.e., the vast majority of all IT out there, including most virtualized infrastructure—to a legacy computing dustbin. If cloudies had a slogan, it would be “Accept no substitutes!” Any efforts to build the so-called private cloud? Insufficient!

If you want to provoke real derision from the cloudoscenti, suggest that cloud isn’t magic, but rather the natural outcome of virtualization. Or note that the latest generation of owned infrastructure—“everything in one box” products like HP’s CloudSystem or IBM’s PureFlex—has some of the same virtues as cloud.

Then stand safely back: The knives are about to come out. “That utterly misses the point! The whole point is to get away from hosting your own infrastructure!”

CLOUD ZEALOTRY

And here we come to the crux. What exactly is the point of the past 10 years of evolution toward virtualized, flexible, networked computing? Is it to improve IT flexibility, economics, service levels and outcomes? Is it to get instant, on-demand access to IT resources? Is it to buy IT on an Opex, not a Capex, pricing model? Or, finally, is it to get IT out of the infrastructure business entirely?

For many, it’s this last, most radical notion. Nothing but abdication will do. But it’s not abdication to some magical, idealized cloud—it’s abdication to real, specific service providers like Amazon Web Services (AWS), GoGrid, Google, Heroku, Rackspace or Verizon. In other words, it’s Internet-era outsourcing to real-world vendors, each of which has its own strengths and real weaknesses.

There’s definitely a virtue in periodically rethinking what you’re doing and whether you want to keep doing things the same way. Some who historically have had internal IT resources will want to outsource much of that. And new business ventures—especially ones with a strong Web or on-demand aspect—will lean toward cloud back ends. It’s also true that very few enterprises will ever operate at the same immense scale as the largest cloud shops.

But the “everything cloud, all the time” narrative tends to ignore that, for many IT shops, job one is improving IT and business outcomes—not getting out of IT infrastructure.

Today’s owned infrastructure uses technologies and approaches similar to what you find in cloud data centers. And the narrative completely ignores that ownership has clear advantages of its own. For instance, it offers local control, customizability, visibility, auditability, and service and performance levels you can’t often get—much less guarantee—in outsourced IT. If something goes wrong, or you want your IT team to move fast in a particular direction, that’s achievable when the staff reports to you. Cloud providers are often operationally strong, but try calling Amazon or Google and getting them to do something. In the cloud, you’re just another utility customer.

The cloud used to provide a different class of flexibility than in-house IT. But after years of evolution, on-premises IT strategies and products are converging with it. They use similar techniques, provide similar benefits and enjoy some of the same economics. It’s not apples versus apples, but for many recipes you can substitute pears, peaches or blueberries—and sometimes, the dish is better if you do.

Cloudies reject the possibility that their approach can be approximated. But they have pretty clear emotional and business stakes in the outcome, and no desire to be co-opted or diluted. From this vantage point, cloud-in-a-box platforms and the public cloud are asymptotically converging, and they are rarely an either/or dichotomy. For most enterprises and IT shops, the right approach is to use the best tool for the job. The best outcomes will be achieved on infrastructures that blend what’s good about outsourced networked computing with what’s good about local, under-our-control computing. Say hello to the hybrid cloud.

JONATHAN EUNICE is principal IT adviser at analyst firm Illuminata Inc.

Modern Infrastructure is a SearchDataCenter.com e-publication

Margie Semilof, EDITORIAL DIRECTOR

Alex Barrett, EDITOR IN CHIEF

Lauren Horwitz, EXECUTIVE EDITOR

Christine Cignoli, SENIOR FEATURES EDITOR

Phil Sweeney, MANAGING EDITOR

Eugene Demaitre, ASSOCIATE MANAGING EDITOR

Laura Aberle, ASSOCIATE FEATURES EDITOR

Linda Koury, DIRECTOR OF ONLINE DESIGN

Rebecca Kitchens, PUBLISHER

[email protected]

TechTarget, 275 Grove Street, Newton, MA 02466

www.techtarget.com

© 2013 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.