
Knowledge Sharing 2009 Book of Abstracts

An EMC Proven Professional Publication


Knowledge Sharing Winners 2008 Awards: (left to right) Diedrich Ehlerding, Lalit Mohan, Brian Russell, and Paul Brant with Alok Shrivastava, Senior Director, EMC Education Services; Frank Hauck, Executive Vice President, Global Marketing and Customer Quality; and Tom Clancy, Vice President, EMC Education Services.

For the third consecutive year, we are pleased to congratulate our EMC® Proven™ Professional Knowledge Sharing authors. This year’s Book of Abstracts demonstrates how the Knowledge Sharing program has grown into a powerful forum for sharing ideas among information storage professionals. In 2009, Knowledge Sharing articles were downloaded more than 104,000 times, underscoring the power of the knowledge sharing concept. You may view the 2009 articles and the monthly release of our 2009 Knowledge Sharing articles at http://education.emc.com/knowledgesharing.

Our Knowledge Sharing authors also play a leading role in our new EMC Proven Professional community. It’s a great place to collaborate with other Proven Professionals, ask questions about the program, or share your experiences. Visit the community at http://education.emc.com/provencommunity.

The EMC Proven Professional program had another great year—we recently awarded our 39,000th certification. Also, we recently announced publication of “Information Storage and Management,” the first technology book from EMC. It will be a valuable addition to any IT professional’s reference library.

Our continuing success is built on the foundation of committed professionals who participate, contribute, and share. We thank each of you who participated in the 2009 Knowledge Sharing competition.

Tom Clancy
Vice President, EMC Education Services

Alok Shrivastava
Senior Director, EMC Education Services

Thank You!

Table of Contents

First-Place Knowledge Sharing Article
Is Cloud Computing the Game Changer Your Company Needs in These Tough Times?
Bruce Yellin, EMC Corporation

Second-Place Knowledge Sharing Article
Customized Tool for Automated Storage Provisioning
Ken Guest, A Large Telecommunications Company
Sejal Joshi, A Large Telecommunications Company

Third-Place Knowledge Sharing Article
Reclaiming SAN Storage—The Good, the Bad, and the Ugly
Brian Dehn, EMC Corporation

Best of Content Management
Architecting an Enterprise-Wide Document Management Platform
Jacob Willig, Documentum Consultant

Best of Tiered Storage
Real-Life Application of Disaster Recovery
Faisal Choudry, Magirus UK Ltd.

Best of Backup and Recovery
Best Practices for Implementing and Administering EMC NetWorker
Anuj Sharma, Ace Data Devices

Backup and Recovery
A Load-Balancing Algorithm for Deploying Backup Media Servers
Krasimir Miloshev, EMC Corporation

Backing Up Applications with NetWorker Modules
Aaron Kleinsmith, EMC Corporation

DLm40xx Implementation and Upgrade Guide
Mike Smialek, EMC Corporation

Implementing Deduplicated Oracle Backups with NetWorker Module for Oracle
Chris Mavromatis, EMC Corporation

NDMP Localization/Internationalization Support for NetWorker
Jyothi Deranna, EMC Corporation

Using Disks in Backup Environments and Virtual Tape Library (VTL) Implementation Models
Emin Calikli, Gantek Technologies

Business Process
Enterprise Standards and Automation for Storage Integration and Installation at Microsoft
Aaron Baldie, EMC Corporation

Significant Savings are Within Your Reach When You Understand the True Cost of Storage
Bruce Yellin, EMC Corporation

The Efficient, Green Data Center
Raza Syed, EMC Corporation

Connectivity
Best Practices for Deploying Celerra NAS
Ron Nicholl, A Large IT Division

CLARiiON and FCiP: A Practical Intercontinental DR and HA Solution
Jaison K. Jose, EMC Corporation

Oracle Performance Hit | a SAN Analysis
Kofi Ampofo Boadi, JM Family, Inc.

Preventative Monitoring in the NAS Environment
Robert Wittig, EDS, an HP Company

Create a Comparative Analysis of an Oracle Database Using Storage Architectures NAS and SAN
Sergio Hirata, Columbia Storage
Volnys Borges Bernal, Universidade de São Paulo/LSITec

Content Management
Custom Documentum Application Code Review
Christopher Harper, EMC Corporation

Tiered Storage
Business Continuity Planning for Any Organization
Smartha Guha Thakurta, EMC Corporation

Data Migration Strategy (EMC SRDF via “Swing Frame”)
Sejal Joshi, A Large Telecommunications Company
Ken Guest, A Large Telecommunications Company

Data Storage Performance—Equating Supply and Demand
Lalit Mohan, EMC Corporation

Integrating Linux and Linux-based Storage Management Software with RAID System-based Replication
Diedrich Ehlerding, Fujitsu Technology Solutions

Simplifying/Demystifying EMC TimeFinder Integration with Oracle Flashback
Robert Mosco Jr., EMC Corporation

Service-Oriented Architecture (SOA) and Enterprise Architecture (EA)
Charanya Hariharan, Pennsylvania State University
Dr. Brian Cameron, Pennsylvania State University

SRDF/Star Software Uses and Best Practices
Bill Chilton, EMC Corporation

Using EMC ControlCenter File Level Reporting for CIFS Shares
Chad DeMatteis, EMC Corporation
Michael Horvath, Fifth Third Bancorp

Virtualization
Cloud Computing Services—A New Approach to Naming Conventions
Laurence A. Huetteman, Technology Business Consultant

Leveraging Cloud Computing for Optimized Storage Management
Mohammed Hashim, Wipro Technologies
Rejaneesh Sasidharan, Wipro Technologies

“The emergence of cloud computing and its impact in these tough economic times are the crux of the Knowledge Sharing charter. As a former computer science teacher, expanding my awareness of new topics and guiding others is my mission—that is how new concepts are shaped.”

Bruce Yellin, EMC Corporation


Is Cloud Computing the Game Changer Your Company Needs in These Tough Times?
Bruce Yellin, EMC Corporation

There have been countless articles written about cloud computing; a Google search will return millions of hits. Your colleagues discuss it at lunch and vendors bring it up in their presentations. From the enterprise perspective, it could be the perfect storm of value propositions—cloud computing drives the cost of IT down, is available on demand, and is scalable. Cloud computing could even become a new competitive edge for your company. Maturing at a rapid pace, it will be the next chapter in data processing. But is it ready for your company? Can cloud computing live up to the hype?

I call cloud computing a “game changer” because it is a service, platform, and operating environment poised to transform the status-quo computing model. Advocates argue it will revolutionize how information is delivered. It is difficult to find a leading technology company that isn’t already delivering some type of cloud product or discussing its plans for the cloud. Many of us already rely on Yahoo or Google for personal e-mail, upload photos on Flickr for others to browse, and use Facebook to stay in touch. But when you use these services at work, are you met with open arms, or does company policy frown upon their use?

Cloud computing allows the user to access an application without having to own it, install it on a computer, or maintain it. The cloud offers the freedom to access an application from anywhere a browser can run—desktop, laptop, intelligent phone, etc. It can increase a company’s processing and storage capacity, and provide services without taking up data center space. On a personal level, some cloud computing services are free. For the enterprise, pricing is subscription-based or pay-as-you-go, allowing a company to enhance existing service or provide new IT functionality without a major investment.

This article examines what cloud computing is, total cost of ownership, privacy/security impacts, universal access concerns, and hybrid strategies.


Customized Tool for Automated Storage Provisioning
Ken Guest, A Large Telecommunications Company
Sejal Joshi, A Large Telecommunications Company

Homegrown tools for storage provisioning automation can improve efficiency and implement a common standard in a multi-vendor storage environment. This enables teams to work more efficiently and eliminates the need for expensive SRM tools that may not meet all business requirements. In the current economic environment, this is even more important as groups are being downsized and budgets cut while storage capacity and fiber switch ports increase at a breakneck speed.

In 2009, we provisioned 6.0 PB of enterprise-class SAN-based storage, compared to 4.3 PB provisioned in 2007 (roughly 140 percent of the 2007 figure). Our current environment is growing at a rate of ~100 TB per week. SRM tools do not scale to meet our requirements and service level agreements (SLAs) at this growth rate. Using our custom implementation, we can provision ~50 TB of storage across multiple frames in less than one hour and reduce overall rework and human error. This includes design validation, LUN creation/masking, and zoning.

This article discusses how to use vendor-provided CLI software to create/implement customized provisioning tools. SRM-based provisioning tools are great for organizations with small storage footprints. However, they do not scale in larger and more diverse, enterprise-class, multi-data-center environments due to implementation costs and increased total cost of ownership (TCO).

Instead, we can implement a customized provisioning solution with a small data center footprint and very little additional cost to the company. This article discusses how to implement end-to-end storage provisioning automation using multi-vendor storage/switch platforms. The process includes a centralized ticketing system to track the storage request throughout its lifecycle. This enables the overall storage automation process. The workflow tool tracks approvals and design information, providing the feed to the automation scripts and information to the system administrator to build the file systems based on the design.

Due to central automation, we can guarantee standards across multiple data centers/environments and easily create standard reports.
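For illustration only, here is a minimal Python sketch of the kind of wrapper the abstract describes: a design record exported from the ticketing/workflow system is validated and then translated into an ordered sequence of CLI steps. The record fields and the CLI command strings are placeholders, not real vendor syntax; a real tool would substitute the vendor-provided CLI for each array and fabric.

import subprocess

DRY_RUN = True  # print the CLI steps instead of executing them

# Hypothetical design record exported from the workflow/ticketing system.
design = {
    "ticket": "REQ-12345",
    "array": "frame01",
    "host": "dbhost07",
    "luns": [{"size_gb": 500, "count": 8}],
}

def validate(design):
    """Basic design validation before any array command is issued."""
    if not design["luns"]:
        raise ValueError("design contains no LUN requests")
    for req in design["luns"]:
        if req["size_gb"] <= 0 or req["count"] <= 0:
            raise ValueError("invalid LUN size or count")

def run(cmd):
    """Run one CLI step; abort the whole ticket on the first failure."""
    print("step:", " ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)

def provision(design):
    validate(design)
    for req in design["luns"]:
        for _ in range(req["count"]):
            # Placeholder invocations; substitute the real vendor CLI here.
            run(["array-cli", "create-lun", design["array"], f'{req["size_gb"]}GB'])
    run(["array-cli", "mask-luns", design["array"], design["host"]])
    run(["fabric-cli", "add-zone", design["host"], design["array"]])
    print("ticket", design["ticket"], "complete")

if __name__ == "__main__":
    provision(design)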


“I wrote this article because I realized that no material existed that would provide storage users with a best-practices methodology for reclaiming storage.”

“The best thing about being a Knowledge Sharing author is the amount of knowledge I gained personally as I was researching and writing the article. I think the person who gained the most from writing this article was me.”

Brian Dehn, EMC Corporation

Reclaiming SAN Storage—The Good, the Bad, and the Ugly
Brian Dehn, EMC Corporation

“My bonus is based on saving $5M worth of storage.”

“We are cutting the storage budget so you need to reuse capacity.”

“We cannot purchase additional storage until we increase utilization.”

Reclaiming storage capacity for reuse reduces IT capital expenditures, increases storage utilization, contributes to “green computing” initiatives, and can address each of the issues above. Proper planning and execution of a storage reclamation effort are key to avoiding problems and realizing maximum benefits.

IT professionals, especially storage administrators, usually know how much storage is allocated and available for allocation. “Orphaned” storage, or capacity that appears to be used but is not, is more difficult to find. We must understand storage configuration states and the storage configuration hierarchy to find this treasure trove of reclaimable storage. Most of the layers in the storage configuration hierarchy include potentially reclaimable capacity. The level of effort required to reclaim that capacity, however, may not be worth the return on investment, depending on where it exists. Identifying candidates for reclamation is even more challenging.
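As a purely illustrative sketch (the device names and the data sources are hypothetical, not from the article), one simple way to surface orphan candidates is to compare the array's view of allocated devices against the devices that hosts actually claim:

# Array view: device ID -> allocated capacity in GB (e.g. from an SRM export).
array_allocated = {"dev_0A1": 200, "dev_0A2": 200, "dev_0B7": 500, "dev_0C3": 100}

# Host view: device IDs actually claimed/mounted by servers.
host_in_use = {"dev_0A1", "dev_0B7"}

orphan_candidates = {
    dev: gb for dev, gb in array_allocated.items() if dev not in host_in_use
}

reclaimable_gb = sum(orphan_candidates.values())
for dev, gb in sorted(orphan_candidates.items()):
    print(f"{dev}: {gb} GB allocated but not visible on any host")
print(f"candidate reclaimable capacity: {reclaimable_gb} GB")

Whether such candidates are worth reclaiming still depends on the cost-versus-benefit analysis the article describes.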

This article provides a best-practices “blueprint” for a successful reclamation strategy. The following topics are discussed:

• Understanding the storage configuration hierarchy
• Identifying capacity for potential reclamation
• Determining whether the benefit is worth the cost
• Using storage resource management tools to facilitate the effort

This article will help you achieve storage reclamation objectives (the good), while reducing time and cost (the bad), and avoiding significant problems (the ugly).



“I keep learning as this is very much unexplored territory. These are exciting times. I never had an article posted in the past, so the prospect of this article being posted makes me feel very proud. It would be great if many more people would become aware of the necessity of implementing document management in a platform-like architecture.”

Jacob Willig, Documentum Consultant

Architecting an Enterprise-Wide Document Management Platform
Jacob Willig, Documentum Consultant

This article presents best practices discovered while implementing EMC Documentum® within large organizations using a central-platform approach. Documentum is typically implemented as a dedicated host for document management and storage. The implementation is tailored to the specific set of functional requirements and does not necessarily consider future expansion into different functional areas across the organization. If the need arises, a new point solution is often implemented. It, too, is tailored to the new and existing requirements.

This approach fulfills the short-term business need for functionality, but it becomes increasingly expensive to maintain and operate the growing number of point solutions. This minimizes the effectiveness of a content management solution as the overhead costs recur over time.

Large organizations adopting an enterprise view of content management, and implementing document management as a strategic platform, are able to host document flows uniformly. The support team can add a new document flow easily and efficiently as everything in the platform is set up generically. This approach, though, will incur additional startup costs to enable future, controlled growth.

This article discusses several of the challenges I faced while implementing document management in a platform architecture. I will explain both the challenges and solutions.

Topics include information management, object model, security, workflows, and functional, application and technical support.

Real-Life Application of Disaster Recovery
Faisal Choudry, Magirus UK Ltd.

EMC business continuity (BC) and disaster recovery (DR) offer a myriad of options to protect your data. Organizations sometimes implement them without a thorough analysis of how and when they would use the technologies to recover data if needed. Even if staff are familiar with the technology, procedures can be so arduous that the organization has no chance of meeting its committed recovery point objective (RPO) and recovery time objective (RTO). “State-of-the-art” recovery technologies cannot help if no one knows how to use them.

Organizations need procedures to document how to implement their recovery technologies. This is even more important if an external organization did the implementation. What questions should you consider?

• When the implementation ends, can the customer grasp the complexities of the new technologies and use them if needed?

• Can we make these complex solutions easier, especially when organizations don’t have the “luxury” of appointing a full-time disaster recovery team?

SMEs and technology professionals face these issues when proposing BC and DR solutions. This article uses a case scenario to examine these questions in relation to a recent real-life solution, one that I proposed and implemented. The solution included multiple sites, using EMC CLARiiON® CX3-10 systems at each site, EMC MirrorView™/Asynchronous, and EMC SnapView™. ESX™ Servers are the attached hosts, so we implemented Site Recovery Manager (SRM) using the recently released SRA adapter for MirrorView/A.

The project was successful, but raised interesting afterthoughts, especially regarding end-users’ perceptions and technology expectations. In conclusion, this article will offer advice on how to address these issues.



Best Practices for Implementing and Administering EMC NetWorker
Anuj Sharma, Ace Data Devices

EMC NetWorker® is the fastest performing backup application in the market. Integration with replication and snapshot technologies helps you meet the most aggressive RTO and RPO requirements and transform backup to disk or backup to tape in an off-host, off-hours process. It supports a broad set of operating systems, databases, applications, and topologies.

EMC NetWorker’s compatibility with various operating systems, applications, and databases makes it successful in today’s competitive industry. However, it must be implemented properly to get the most out of this wonderful product. There are some practices to keep in mind to make the backups and recovery more effective and beneficial for the organization.

This article covers the various practices that I performed when implementing and administering EMC NetWorker. They include:

• Implementing NetWorker on various operating systems
• Making it foolproof in case of NetWorker server disaster
• Implementing NetWorker in a bidirectional as well as unidirectional hardware firewall, including various scenarios (i.e., when some of the clients are in DMZ)
• Working with the NetWorker ports
• Implementing NetWorker in a cluster
• Integrating e-mail alerts with NetWorker
• Implementing persistent binding through EMC NetWorker
• Integrating EMC Avamar® for deduplication
• Probe-based backups


A Load-Balancing Algorithm for Deploying Backup Media Servers
Krasimir Miloshev, EMC Corporation

Finding the best distribution of backup clients over designated media servers can be considered part of the general load-balancing problem. Our goal is to distribute the backup clients’ data in the best possible way among newly designated media servers responsible for the backup read/write operations.

This article suggests a load-balancing approach based on one criterion—the amount of data that must be backed up on each client. It begins by introducing the basic components of backup infrastructures: the master server, clients, media servers, and storage backup devices. It presents two approaches for deploying media servers and provides a structure for decision making.

Next, we investigate a load-balancing schema where there are only 10 backup clients and two media servers. We calculate capacity and learn a load-distribution algorithm. Finally, the article maps the basic load-balancing algorithm to a program implementation. In two easy steps, you’ll be able to implement the algorithm using C or Korn shell.

We can reduce the backup window by balancing the backup load among all the backup media servers. Even when data size is the only criterion we use, we can expect to achieve visible performance improvement and shorter backup windows.
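The abstract does not spell out its algorithm in code, so the following is only a minimal sketch of one standard way to balance by data size: sort the clients largest first and always hand the next client to the least-loaded media server. The client names, sizes, and server names are invented for the example, and Python is used here purely for brevity (the article targets C or Korn shell).

import heapq

# Ten hypothetical backup clients and the amount of data (GB) each must back up.
clients = {
    "c01": 1200, "c02": 950, "c03": 800, "c04": 760, "c05": 640,
    "c06": 500, "c07": 430, "c08": 300, "c09": 220, "c10": 150,
}
media_servers = ["ms1", "ms2"]

# Min-heap of (assigned_gb, server_name); the smallest load is always on top.
heap = [(0, ms) for ms in media_servers]
heapq.heapify(heap)
assignment = {ms: [] for ms in media_servers}

for client, size in sorted(clients.items(), key=lambda kv: kv[1], reverse=True):
    load, ms = heapq.heappop(heap)          # least-loaded media server so far
    assignment[ms].append(client)
    heapq.heappush(heap, (load + size, ms))

for ms, assigned in assignment.items():
    total = sum(clients[c] for c in assigned)
    print(f"{ms}: {total} GB -> {assigned}")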


Backing Up Applications with NetWorker Modules
Aaron Kleinsmith, EMC Corporation

This article describes how to configure EMC NetWorker® to back up a database application. It covers the NetWorker technical setup used to protect the popular databases for which NetWorker modules have been released. Backup administrators who configure and monitor online database backups through NetWorker will benefit most from this article.

The article discusses traditional database backup methodologies such as online (hot) backups, offline (cold) backups, and transaction log backups. The article describes the backup procedure to capture a consistent copy of an online database and transaction logs from the source disks using the application server resources that manage the primary copy of the database and data.

It provides enough technical information to help a storage administrator who manages NetWorker verify and monitor a properly set-up client.

This article discusses:

• Overview and explanation of NetWorker modules
• General concepts and planning applicable to all NetWorker modules
• NetWorker configuration settings applicable to all modules and specific settings and considerations for each NetWorker module within the NetWorker software


“My idea for this article came from participating in project implementations. Besides the opportunity to have the article published, writing it forces me to organize many implementation notes and experiences into an orderly layout. Having others benefit from my experience helps IT services organizations deliver projects efficiently and provides customer TCE.”

Mike Smialek, EMC Corporation

DLm40xx Implementation and Upgrade Guide
Mike Smialek, EMC Corporation

Implementation of a DLm40xx series data library for mainframe requires coordination among multiple system and people resources. These include the mainframe tape library system, z/OS operating system software, NAS configuration, and configuration of the DLm ACP and VTE components. EMC Celerra® Replicator V2.0 is also required if replicating data to another DLm. This Implementation and Upgrade Guide walks through the implementation process to define the information necessary to configure each component.

This guide is designed for solution architects, implementation specialists, maintenance and support services personnel, and storage administrators who want to understand how to implement or upgrade a DLm system. It presents necessary Linux commands and scripts to allow a mainframe-centric person to do a DLm configuration.

The DLm Implementation and Upgrade Guide addresses several key areas:

• DLm mainframe checklist to gather current customer tape information
• Input form to provide required IP addresses and phone numbers
• PC software and hardware needed to configure DLm
• DLm40xx hardware installation requirements
• z/OS updates to tape catalog, HCD, SMS/ACS, MTL, OAM, and Esoterics
• NAS configuration
• Running DLm Linux scripts to define Tapelibs and NFS mount points
• Running SCRIPT80 to change permissions
• Installing DLm Healthcheck script and mainframe reporting
• DLm z/OS utilities
• ESCON or FICON CHPID updates
• Test and acceptance procedures
• IP replication configuration
• Troubleshooting



Implementing Deduplicated Oracle Backups with NetWorker Module for Oracle
Chris Mavromatis, EMC Corporation

More and more customers are now evaluating the cost benefits of data deduplication. This is partly due to the explosive growth of software deduplication technology (such as EMC Avamar®).

EMC NetWorker® Module for Oracle is a mature product, with thousands of customers, that offers a strong backup solution for an Oracle database. The first quarter of 2009 will be the first time that NetWorker Module for Oracle will have deduplication integration for Avamar. This feature empowers users to conduct Oracle deduplication backups and restores via the integrated use of a deduplication storage node (Avamar server). A deduplication backup can be a manual backup initiated within RMAN or a scheduled backup via the NetWorker Management Console scheduler framework.

There are a number of differences and additional configuration items that we must consider when deploying NetWorker Module for Oracle 5.0 to perform deduplication backups. This article outlines installation, configuration, pitfalls, and best practices. It provides answers to questions that can help customers better embrace this paradigm shift in backup solutions.

This article highlights these considerations and provides guidance for deployment. It is not intended to be a step-by-step guide, nor does it replace the Installation Guide or Release Notes. It does assume a level of knowledge using NetWorker, Avamar, Oracle, and NetWorker Module for Oracle.

Sales, systems engineers, support personnel, and customers will benefit from learning how to deploy this solution.


NDMP Localization/Internationalization Support for NetWorker
Jyothi Deranna, EMC Corporation

Computer internationalization and localization are important because of the numerous differences that exist among countries, regions, and cultures with respect to language (not only distinct languages but also dialects and other differences within a single language) and to conventions such as weights and measures, currency, date and time formats, and more.

The EMC NetWorker® NDMP client-connection feature provides NAS environments with fast, flexible backup and restore of mission-critical data residing on filers. The Network Data Management Protocol (NDMP) is a TCP/IP-based protocol that specifies how network components communicate with one another to move data across the network for backup and recovery. NetWorker with NDMP client connections provides backup and recovery support for more than eight NAS hardware providers. With NetWorker 7.4, the NDMP client is responsible for backing up and recovering non-ASCII data from the NAS filers.

EMC NetWorker 7.4 is a full-fledged internationalization release; the product is I18N and L10N compatible. Many changes were made in the NDMP client-connection feature to handle non-ASCII characters. NAS vendors each have their own mechanisms to store the non-ASCII characters in their specific filers. NDMP is a single interface that helps NetWorker understand how each vendor is handling non-ASCII data. Using NDMP, NetWorker is able to back up filers’ data and store it without data corruption.

This article explains how NetWorker backs up and recovers non-ASCII data residing in the NAS filers using the NDMP client-connection feature. It describes how to configure NAS filers for non-ASCII data backup/recovery and explains how the changes are done while configuring a NAS filer as an NDMP client. It offers guidelines to configure filers from different vendors and best practices and troubleshooting tips to avoid data corruption and increase performance.



“I used current, real-world data protection requirements and personal experiences to write my article. Valuable data size is increasing day to day and protection becomes more important and critical.”

Emin Calikli, Gantek Technologies

Using Disks in Backup Environments and Virtual Tape Library (VTL) Implementation Models
Emin Calikli, Gantek Technologies

Protecting data is vital for availability and business continuity. There are many data protection solutions in the IT market, and all of them must address recovery time objectives (RTO) and recovery point objectives (RPO). Recovery time represents the time it takes to restore the data; recovery point represents the “data currency” at the backup state. These two concepts are not independent and must be integrated based on company requirements.

Many organizations are experiencing shrinking backup windows and increasing data loads. During a restore from tape, slower recovery operations result in longer production system downtime. Companies are seeking faster backup and recovery solutions.

Disks could be a viable solution for meeting fast data recovery requirements. The service level agreement (SLA) depends on the application and the customer. It is often difficult to meet all SLAs with a single data protection solution. Backup vendors are trying to tailor their applications to use disks efficiently; this approach requires intelligent development processes.

Virtual Tape Library (VTL) could help us reduce or eliminate the following problems:

• Physical tape damage
• Cost of tape drives
• Highly utilized tape drive resources (VTL staging or post processing)
• Backup windows
• Security problems (backup encryption)

We have to know:

• VTL implementation methods
• Differences of disk usage between OLTP applications and backups
• Impacts of block size on throughput and bandwidth



Enterprise Standards and Automation for Storage Integration and Installation at Microsoft
Aaron Baldie, EMC Corporation

Microsoft’s most challenging problem is how to keep tens of thousands of servers up to date with drivers and firmware in an environment that is spread across many global data centers. More and more, these data centers are subject to budget and staff cuts. Personnel are becoming less specialized, and in some cases, have no technical training at all other than the ability to power cycle equipment.

The EMC account team has worked directly with Microsoft’s IT staff to overcome these challenges and provide standards for drivers, firmware, and complete automation packages that integrate with existing processes to allow for storage connectivity across the enterprise. This allows Microsoft IT staff to rapidly deploy EMC storage to any server globally and manage this storage with a limited, centralized IT staff.

To accomplish this, the EMC Support Matrix is refreshed on a six-month cycle to provide a baseline of supported drivers, firmware, and applications, and their compatibility with the OS and hardware. A matrix is published to lock in the revisions so a deployment kit can be created once this standard is established.

All drivers are then downloaded to a central location and rolled into an automation package that can be integrated into Microsoft’s own deployment process. Testing is performed for multiple scenarios across all currently supported OS versions for both upgrades from the existing standards and new deployments. Any issues found during this test process are triaged with Microsoft and fixed before the latest versions are released to gold. The local account team accomplishes all of these tasks and owns the project from start to finish.

EMC and Microsoft’s strong alignment and rapid deployment of EMC hardware achieves one of the largest ratios of storage to administrator at about 1.5 PB per head count across 150 EMC CLARiiON® systems and 10 EMC Symmetrix® DMX™ systems.


“Knowledge sharing is the basis by which we all share our collective expertise to benefit others in IT. With cost often being part of the storage equation, communicating my technical training in terms of dollars and cents turned out to be both fun and educational.”

Bruce Yellin, EMC Corporation

Significant Savings are Within Your Reach When You Understand the True Cost of Storage
Bruce Yellin, EMC Corporation

Do you find yourself struggling with your company’s insatiable craving for more storage? Will any of your storage suppliers’ claims of “faster, cheaper, and better” really save your company money? How about the stark reality that your dwindling IT budget is causing you sleepless nights? Has the time come to expand the outdated, state-of-the-art storage infrastructure you leased just three years ago?

You may be asked to trim capital and operating storage expenses by hundreds of thousands of dollars, while simultaneously introducing innovation to your organization. This seemingly contradictory set of storage demands also impacts your internal service level agreements. Where do you begin? Which concepts will deliver significant short- and long-term savings?

As a veteran of the storage industry, I have heard questions such as, “How much does a gigabyte cost?” or “I don’t have a lot of money to spend” countless times. Neither is the right place to start when trying to determine the actual cost of storage, nor how to make your storage budget go farther.

I have also witnessed an explosion of data growth; some pundits claim the rate is as high as 60 percent per year. Whatever the rate, we will have to store more data tomorrow than yesterday. In addition, corporate policies and regulations require us to save that data for longer periods of time. That means storage, which translates into more floor space, more power, more staff, and more complexity.

This article explores the challenges facing the IT storage manager and offers insight into navigating a course of action to provide budget relief, while offering better services to internal and external customers. It explores issues such as frame expansion and future-proof architectures, environmental impact, virtualization, deduplication, risk avoidance, stretching the lifespan of existing equipment, cost-effective education, required negotiation and financial skills, cutting fat from a budget, and much more.


The Efficient, Green Data Center
Raza Syed, EMC Corporation

This article will help you build an efficient, green data center that will yield financial and environmental benefits. Data center power and cooling and virtualization are two key strategies to help you reap these benefits. We discuss power and cooling optimization, as well as virtualization and other related data center technologies that are required to develop and operate an efficient, green data center. This discussion occurs in the context of a virtualization-leveraged data center.

Virtualization has been positioned as the core enabler for driving broader efficiencies. It includes server and storage procurement and utilization, information protection (backup and recovery), business continuity and disaster recovery (local and remote replication), infrastructure management, and infrastructure consolidation and automation. This article provides you with a breadth of knowledge about the major data center functions that have a direct or indirect impact on operational and environmental efficiency.

Our discussion is not limited to technology, but includes other relevant areas of a data center that are either impacted by or have an impact on the technology infrastructure and IT operations. It identifies major areas of consideration as well as step-by-step guidance about how to implement power and cooling, virtualization, and other related technologies. Designs and architectural drawings for optimization are included.

This article consists of three sections:

• Introduction focuses on financial and environmental impacts of inefficient data centers and the case for building efficient and green data centers.

• Considerations for Building Efficient, Green Data Centers focuses on high-level data center operating environments, IT processes, and technology considerations for data center efficiency.

• Implementing an Efficient, Green Data Center focuses on implementation processes, steps, and technologies; and describes designs and architectural drawings in detail.



“The process of creating my article helped cement and expand on my knowledge.”

Ron Nicholl

Best Practices for Deploying Celerra NAS
Ron Nicholl, A Large IT Division

Deploying EMC Celerra® NAS involves many pieces of the IT infrastructure, ranging from backend storage to network topology and beyond. Choosing a solid design can make the difference between mediocre performance and exceeding your customers’ expectations.

NAS solutions are quickly becoming a viable alternative to mitigate the cost of a SAN-based storage solution. A Celerra NAS solution offers much of the same functionality traditionally seen on the storage array over IP, including replication, checkpoints, mirroring, and more. Applying best practices to your design will improve performance and reliability.

This article includes:

1. How to lay out the backend storage devices
2. How to implement a network configuration that allows for greater flexibility and reliability
3. Planning Microsoft Windows domain interaction
4. Best practices for backing up the Celerra environment
5. Monitoring the performance of the Celerra solution

There are many components to a successful NAS design. The network topology can be leveraged to provide a greater scope of service; the CIFS and NFS clients can be configured for greater performance and reliability. Implementing best-practices standards can reduce the customers’ cost of ownership and improve reliability. This article provides a quick reference to configuring Celerra.



CLARiiON and FCiP: A Practical Intercontinental DR and HA Solution
Jaison K. Jose, EMC Corporation

EMC CLARiiON® offers possibilities that meet almost all the needs of today’s high-demand business. We can easily achieve complex industry requirements when we join this small magic box to other technology. I would like to share an intercontinental disaster recovery (DR) solution that was achieved with the help of several products, including EMC CLARiiON, MirrorView™/A, SnapView® Clones, SnapView SnapShots, FCiP, and VSAN.

We had to find a solution to implement a primary site in Europe and a disaster recovery site in the U.S. for a customer-facing application of EMC, so it was very important to have a robust solution with a proper DR plan. FCiP was our first choice to manage data movement flawlessly over the Atlantic; we could use Internet connectivity and VPN to create a tunnel between the two sites. We segregated the data replication SAN by separating the ports to a special VSAN extending to the DR site using FCiP.

CLARiiON was the obvious choice for this midrange application. The amount of data was huge, but it was not very dynamic. Data availability was the primary concern. How would we connect two sites separated by thousands of miles with a CLARiiON? Our best answer was with MirrorView/AS since the data is sent through the FCiP tunnel to the DR CLARiiON.

Then, we needed a backup solution. This environment was hosted in a third-party data hosting facility. A tape-based or external backup option often costs more money. SnapView Clones were our answer. A gold copy of production and DR LUNs were set up to protect against data corruption or data loss caused by human error.

MirrorView/A, SnapView SnapShots, SnapView Clones, FCiP, and VSAN—all of these products’ features are utilized in this unique disaster recovery solution. This is not just a concept; it has been implemented and is working in a production environment. I am happy to share it with you.



Oracle Performance Hit | a SAN Analysis
Kofi Ampofo Boadi, JM Family, Inc.

Performance problems can be avoided or minimized if we design the right disk layout. RAID type definitions for specific components are essential to Oracle’s performance. Not keeping defined components on the same spindles is equally crucial. This article uses a real-life case study to explain Oracle’s components and illustrate the effects of a poorly designed SAN on Oracle’s performance. A hit!

Please be aware that the order of the analysis is irrelevant; it is the content that matters. Applications’ performance relies heavily on the SAN. Performance issues can be centric to the host, connectivity device, or the storage array, or a combination. This article elaborates on the effect that the SAN can have on Oracle’s performance with an emphasis on the EMC CLARiiON® storage array. I will touch briefly on the Symmetrix since most of the concepts and analysis in the article apply to the EMC Symmetrix® as well. The items below will be addressed in detail via a case study.

1. Define and analyze sequential and random writes and how they impact Oracle’s performance design.
2. Define the components of Oracle architecture and their importance.
3. What are Redo Groups and why are they important to Oracle’s performance?
4. Detail each component’s behavior on the SAN and which disk layout best fits for optimal performance.
5. CLARiiON has limited performance-tuning ability and a non-scalable cache; which architectural designs do you need to avoid?
6. How host-side analysis and hits can contribute to the performance of Oracle.
7. Switch-level specifications and alerts that can significantly contribute to the performance of applications.
8. The case study! This details most of the issues administrators run into and suggests the best resolutions.

Performance analysis can be approached in several ways. The key is to use the appropriate performance tools to understand what you are analyzing. Please note that different approach-es may lead to the same result.


Preventative Monitoring in the NAS Environment
Robert Wittig, EDS, an HP Company

The EMC Celerra® actively monitors the environment for warnings and failures. As the NAS environment expands, it becomes increasingly important to assess the current health of each frame and verify that the current configuration fully utilizes redundant Celerra capabilities. Verification, testing, and preventative monitoring are important aspects of maintaining the Celerra’s reliability and availability.

Once configured, we must test and monitor the Celerra environment to verify that it will properly handle any faults and continue to provide the services for which it was designed. Testing should not stop once the system is in production; it should continue at regular intervals to ensure continuous functioning if a failure occurs, and to notify the appropriate support personnel in the event of a failure. Finally, we should make non-intrusive checks at regular intervals to verify that regular support activities have not adversely impacted any part of the environment.

This article identifies methods and preventative measures to identify configuration issues, verify redundant hardware, and ensure that configured notifications function properly. The objective is to provide the Celerra storage administrator with a set of actions to check the status of a running environment, verify redundant operations, validate the configuration, and confirm that failure notifications are functioning properly.

We will examine four parts of the Celerra environment:

• Provisioned storage
• Redundant data mover configuration
• Warning and failure notifications
• Celerra Connect Home functionality

Finally, this article suggests methods that can be applied to gather these checks into a single automatic process. This process can be regularly executed to provide evidence of the validated configuration and identify potential problems before they impact the availability of the services or of the entire Celerra.
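To make the idea of a single automatic process concrete, here is a minimal, hedged Python sketch: the three checks below are placeholders standing in for the real Celerra queries the article has in mind, and are not actual Celerra commands. The point is only the pattern of gathering individual checks into one regularly executed report.

def check_provisioned_storage():
    # Placeholder: verify file systems are below a utilization threshold.
    return True, "all file systems below 85% utilization"

def check_data_mover_failover():
    # Placeholder: confirm a standby data mover is configured and ready.
    return True, "standby data mover configured"

def check_notifications():
    # Placeholder: confirm warning/failure notifications reach support staff.
    return False, "notification destination unreachable"

CHECKS = [
    ("Provisioned storage", check_provisioned_storage),
    ("Redundant data mover configuration", check_data_mover_failover),
    ("Warning and failure notifications", check_notifications),
]

def run_all():
    failures = 0
    for name, check in CHECKS:
        passed, detail = check()
        status = "OK  " if passed else "FAIL"
        print(f"[{status}] {name}: {detail}")
        failures += 0 if passed else 1
    return failures

if __name__ == "__main__":
    raise SystemExit(run_all())  # non-zero exit code if any check failed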


“I’m finishing my Computer Science Master’s degree and I saw in Knowledge Sharing an opportunity to publish my work. I used to read a lot of works comparing Fibre Channel, iSCSI, and NAS using file system benchmark tools, but a real application has different behavior than a file system benchmark tool. My idea is to open a discussion about a price/performance relation for DBMS-based applications in some customer scenarios.”

Sergio Hirata, Columbia Storage

Create a Comparative Analysis of an Oracle Database Using Storage Architectures NAS and SAN
Sergio Hirata, Columbia Storage
Volnys Borges Bernal, Universidade de São Paulo/LSITec

Today’s storage architecture market is divided among three large groups: direct-attached storage (DAS), network-attached storage (NAS), and storage area network (SAN). The storage system, among other factors, affects any application’s performance. The application’s overall performance is also affected by the storage network technology, the data storage communication protocol, and the storage network components. Performance is measured using response time.

Storage managers have difficulty aligning the application’s needs to the appropriate storage architecture. Many factors are involved in this decision, including the compatibility between the host bus adapter, switches, and storage systems, and the latency, cost, and management tasks. Also, storage managers must consider the desired availability level for the application and achieve the service level agreements (SLAs) negotiated with different departments.

Application simulators are an alternative way to choose the best data storage technology or architecture. The present work uses a simulator of an order entry application to generate the I/O operations against an Oracle database installed on Fibre Channel SAN, iSCSI SAN, and NAS (NFS) architectures. It’s expected that the results will indicate whether an Oracle database needs a Fibre Channel infrastructure or whether an iSCSI pipe has enough throughput to support it.
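This is not the authors' order-entry simulator; it is only a minimal Python sketch of measuring the response-time metric the comparison rests on, by timing random reads against a file placed on whichever storage architecture is under test. The file path is a placeholder, and a fair comparison would also need to defeat the OS page cache (for example with direct I/O), which is omitted here.

import os
import random
import time

def measure_random_reads(path, io_size=8192, samples=1000):
    """Time random reads of io_size bytes and report average and p95 latency."""
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb", buffering=0) as f:   # unbuffered Python-side reads
        for _ in range(samples):
            offset = random.randrange(0, max(1, size - io_size))
            start = time.perf_counter()
            f.seek(offset)
            f.read(io_size)
            latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "avg_ms": 1000 * sum(latencies) / len(latencies),
        "p95_ms": 1000 * latencies[int(0.95 * len(latencies))],
    }

if __name__ == "__main__":
    # Placeholder path: a copy of a data file on the storage under test.
    print(measure_random_reads("/mnt/oracle_datafile_copy.dbf"))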

This article is a guide to choosing the most appropriate storage architecture for an Oracle application.


Custom Documentum Application Code Review
Christopher Harper, EMC Corporation

An EMC Documentum® consultant typically is the last point of contact when things have gone wrong. Firefighting is the term given to this type of corrective work.

These assignments are caused by solutions that are developed by someone with limited knowledge of our systems. This lack of knowledge causes issues both in the design of the application and the way it is implemented. This daunting task typically presents itself as heaps of documentation, one or more DFC/WDK projects containing source code, and a time-boxed schedule that prevents a full review of documentation and code.

How should we approach reviewing code written by a third party who doesn’t necessarily conform to the standards you are accustomed to?

This article provides basic principles on what to look for and explains some of the common ways that our systems are misused, leading to poor performance. I will provide the technical rationale for each instance where we discuss why or why not to use a particular approach. Also, we will discuss corrective measures for each encountered problem. I will present the technical solutions for all of the cases we discuss. The cases are “real” and have been encountered “in the wild.”



Business Continuity Planning for Any Organization
Smartha Guha Thakurta, EMC Corporation

This article introduces the methodologies to develop a company’s information survival strategy. The goal is to analyze the organization’s critical information assets and perform risk-mitigation analysis and data recovery planning, encompassing change management.

The overall objective of a business continuity plan is that “in this demanding market, a proactive approach aimed at assuring continuity of business processes and applications amid major and minor disruptions is absolutely essential.”

The broad scope of the article includes:

• An introduction to business continuity planning
• Business continuity planning objectives
• Defining disaster and its types with different points of view
• Global best practices
• Benchmark case study of implementing BCP for an organization
• Methodology, barriers, and challenges
• Change management and emergency decision making
• Recommendations and conclusions

After reading this article, you will understand:

• The importance of business continuity planning
• The benefits and cost savings to stakeholders
• The roadmap/project plan developed for the organization
• Business process mapping and re-engineering for continuity of operations

Methodology and plan of work:

• Experiences from professional life
• Benchmarking with industry best practices
• Research data from global experts
• Review with the mentor on a regular basis
• Findings and knowledge gathering from the field and the organization


Data Migration Strategy (EMC SRDF via “Swing Frame”)
Sejal Joshi, A Large Telecommunications Company
Ken Guest, A Large Telecommunications Company

Increasing data center power and cooling requirements impact IT infrastructures’ scalability. Storage consolidation provides relief for power and cooling and also reduces total cost of ownership (TCO). Simplifying storage infrastructures and ease of management are the two reasons that businesses use storage consolidation. Businesses have had to scale their storage infrastructures to accommodate capacity, performance, and high-availability requirements due to massive data growth.

This article provides guidelines for using storage-based replication (EMC SRDF®) to migrate/consolidate data from multiple storage arrays to just a few. It explains how to migrate data using SRDF from 5670 code to 5772 code using a swing frame (an EMC Symmetrix® DMX™-2 running 5671 code). This migration also involved multiple Oracle databases, so maintaining data consistency was critical.

We were able to accomplish this task in a very short amount of time. There are advantages and disadvantages of using storage-based or host-based migration methods. The article discusses these and provides guidelines for migrating data from DMX-2 and DMX-3 arrays to DMX-4 using SRDF.



“I decided on a topic based on the direct financial and operational benefit to customers and to IT services providers. I wanted to find a way to engage customers more closely in the current climate of tight IT budgets by trying to get more return on investment.”

“I am delighted to see my article published! It’s akin to a mother looking at her newborn!”

Lalit Mohan, EMC Corporation

Data Storage Performance—Equating Supply and Demand
Lalit Mohan, EMC Corporation

Individual components, including storage, contribute to the cumulative outcome of performance. When storage processing accounts for a proportionately long share of that outcome, “demand” is the workload generated by host computer systems, and “supply” is the processing service provided by the data storage system. The “quality of performance” experienced by the business relies on how well supply meets demand.

When selecting and designing data storage components, we must match projected demand with the capability to supply. The resulting solution would operate at an optimum level, where demand equals supply. In this article, we apply the demand-supply analogy to build a universal framework using data storage domain performance characteristics as proxies representing demand and supply.

This will be done in light of several popular information technology infrastructures, for example, messaging, enterprise resource planning, and relational database applications in an open systems environment, and mainframe host applications in a proprietary environment.
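As a rough illustration only (the numbers and the single-queue response-time formula are my own assumptions, not the article's framework), the demand-supply idea can be reduced to a utilization figure and an estimate of how response time degrades as demand approaches supply:

def storage_quality(demand_iops, supply_iops, service_time_ms):
    """Return (utilization, estimated response time) for a demand/supply pair."""
    utilization = demand_iops / supply_iops
    if utilization >= 1.0:
        return utilization, float("inf")      # demand exceeds supply
    # Simple single-queue approximation: response time grows sharply near saturation.
    response_ms = service_time_ms / (1.0 - utilization)
    return utilization, response_ms

for demand in (4000, 7000, 9500):
    u, r = storage_quality(demand, supply_iops=10000, service_time_ms=5.0)
    print(f"demand={demand} IOPS  utilization={u:.0%}  est. response={r:.1f} ms")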

Among the topics discussed are:

• Relevant terms and definitions
• Characteristics of “demand” placed on data storage components
• “Supply” capability of the data storage component
• Combining demand and supply into the working framework
• Options for improving performance capability
• Case scenarios to illustrate key points
• Recommendations in conclusion
• Assumptions, impact, and remedy
• Limitations and improvements

This article helps you better plan and design optimum data storage infrastructures. It helps you support centralization of business information assets into efficient shared service centers, a necessity in the current financial climate. This aggregation may enhance the value of information to management, improving return on investment.


“I wrote this article because I got involved in a project that needed solutions. I wanted to contribute some kind of an article which will hopefully be useful for the EMC Proven™ Professional community and anyone else who might read it.”

Diedrich Ehlerding, Fujitsu Technology Solutions

Integrating Linux and Linux-based Storage Management Software with RAID System-Based Replication
Diedrich Ehlerding, Fujitsu Technology Solutions

All major database and ERP software vendors release their products on Linux. As with other operating systems, we must replicate using RAID array functionality to meet the demands for short backup windows, fast restore processes, and fast system copy processes. The legacy device names that are traditionally used in Linux without any storage management software are inappropriate for enterprise-class configurations. The main problem is that these name spaces are not persistent over server reboots. They cannot guarantee that the system will find its data at the same device node which it saw before the reboot.

This article discusses various naming spaces within Linux—legacy names, device mapper names, IO multipathing software names, volume management layers, and file system layers. All these layers create their own naming spaces. With RAID system-based replication, we must take care to have the proper name for the replica; in a shared storage configuration, we have to provide an identical name on all cluster nodes.

The article reviews naming schemes with respect to persistence and replication issues.
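To make the persistence problem concrete, the short Python sketch below (an illustration, not code from the article) maps non-persistent legacy names such as /dev/sdb to the persistent symlinks udev maintains under /dev/disk/by-id, /dev/disk/by-uuid, and /dev/disk/by-path on most Linux distributions; replicas and cluster nodes can then be addressed by these stable names. Verify the exact directories on the target system.

import os

def persistent_names(base="/dev/disk"):
    """Return {real device path: [persistent symlink names]}."""
    mapping = {}
    for sub in ("by-id", "by-uuid", "by-path"):
        directory = os.path.join(base, sub)
        if not os.path.isdir(directory):
            continue
        for name in os.listdir(directory):
            link = os.path.join(directory, name)
            target = os.path.realpath(link)   # e.g. /dev/sdb or /dev/dm-3
            mapping.setdefault(target, []).append(link)
    return mapping

if __name__ == "__main__":
    for device, names in sorted(persistent_names().items()):
        print(device)
        for n in names:
            print("   ", n)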

The article discusses software layers:

• Linux native layers (legacy sd names, device mapper names)
• Multipathing drivers: EMC PowerPath® names and Linux native multipath names
• lvm2 as an example of a volume manager
• File system issues (labelled file systems, file system uuids)

And replication or shared storage usage scenarios for:

• Local cluster
• Stretched cluster/disaster recovery configurations
• Off-host backup
• System copy


Simplifying/Demystifying EMC TimeFinder Integration with Oracle Flashback
Robert Mosco Jr., EMC Corporation

Integrating the EMC TimeFinder® business continuity application with Oracle can be a somewhat tricky endeavor. This article describes Oracle’s Flashback and EMC TimeFinder technologies and how the two applications can help end users recover data.

Oracle Flashback and EMC TimeFinder are two separate technology applications. Understanding them, and then describing their integration for the purpose of recovering data, is the main goal of this article. I use diagrams and commands to show how both technologies can recover data. Once you have an understanding of each technology, the article progresses to an integration phase showing how the two applications can be used to develop a “repair” and/or a “recovery” plan.

Discussions include point-in-time recovery, Flashback rewind, and recovery time objectives. The article presents the following:

• Enabling and disabling Oracle Flashback
• Setting up a TimeFinder Oracle business continuity (BC) environment
• Dropping and/or deleting data (to simulate a data corruption situation)
• Recovering or flashing back to a previous point in time
• Developing a recovery or a get-well plan

All of these topics include diagrams and commands with simple explanations to help you understand the power of both technologies.


Service-Oriented Architecture (SOA) and Enterprise Architecture (EA)
Charanya Hariharan, Pennsylvania State University
Dr. Brian Cameron, Pennsylvania State University

Most companies are re-evaluating the way they purchase, deploy, manage, and use business applications due to challenging market conditions, competitive pressures, and new technologies. Software buyers want applications that leverage existing investments; customers demand solutions that provide quantifiable performance improvement.

In response, companies must evolve into agile enterprises that can rapidly change direction. Yet their structures, processes, and systems are often inflexible, rendering them incapable of rapid change. Adding hardware, software, packages, staff, or outsourcing are not solutions. This is not a computer problem, it is a business problem.

To address this growing gap between IT and business, companies are adopting an end-to-end enterprise architecture approach to re-align IT development with business objectives. EA is a framework that covers all the dimensions of IT architecture for the enterprise; SOA provides an architectural strategy that uses the concept of “services” as the underlining business-IT alignment entity.

These forces drive the IT industry to deliver breakthrough technologies, many at the founda-tion layer. SOAs, specifically, are at the cusp of change. This article focuses on the relation-ship between EA and SOA and the resulting impact on business. These are the key research questions in this research:

• Are there any business impacts to marrying EA and SOA?• How do organizations fit SOA with EA?• Is it better to adopt either SOA or EA, and not both?

Service-Oriented Architecture (SOA) and Enterprise Architecture (EA)Charanya Hariharan, Pennsylvania State University Dr. Brian Cameron, Pennsylvania State University


SRDF/Star Software Uses and Best Practices
Bill Chilton, EMC Corporation

Disaster recovery is becoming more critical as new laws are passed to protect data and legislation mandates extended data-retention policies. Many companies are building redundant data centers to avoid potential losses. The largest financial institutions are building three data centers: two located in close proximity and the third in a different region of the country, or in a different country altogether. These companies manage three data centers and keep all of the information consistent by deploying EMC SRDF®/Star software.

SRDF/Star is data-replication software that uses synchronous and asynchronous data transfer to maintain consistency across multiple sites. The intent is to provide redundancy so that if one of the data centers experiences a disaster, the other sites can continue to replicate data and take over processing immediately. Star is exceptional as a disaster recovery software program, but what else can you realize with this software and what are the best ways to deploy it?

The documentation on SRDF/Star explains how the software works and how to install it, but does not discuss best practices. This article seeks to bridge the gap between installation and deployment by reviewing:

• Building a test Star
• Load balancing applications across data centers
• Eliminating downtime while working on servers
• Migrating data while staying consistent
• Switching between concurrent and cascading, and back again
• Best practices and troubleshooting hints
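
To make the concurrent-versus-cascading distinction concrete (a descriptive sketch with hypothetical site names, not SYMCLI or any SRDF/Star command syntax): in a concurrent layout the workload site feeds both remote sites directly, while in a cascading layout the synchronous target relays updates on to the distant asynchronous site.

    # Descriptive sketch of the two three-site SRDF/Star layouts; site names
    # are hypothetical and nothing here touches an array.
    def star_legs(mode):
        """Return replication legs as (source, target, transfer mode) tuples."""
        workload, sync_site, async_site = "Site A", "Site B", "Site C"
        if mode == "concurrent":
            # The workload site drives both remote sites directly.
            return [(workload, sync_site, "synchronous"),
                    (workload, async_site, "asynchronous")]
        if mode == "cascading":
            # The synchronous target relays updates to the distant site.
            return [(workload, sync_site, "synchronous"),
                    (sync_site, async_site, "asynchronous")]
        raise ValueError("mode must be 'concurrent' or 'cascading'")

    for source, target, transfer in star_legs("cascading"):
        print("%s -> %s (%s)" % (source, target, transfer))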


Using EMC ControlCenter File Level Reporting for CIFS Shares
Chad DeMatteis, EMC Corporation
Michael Horvath, Fifth Third Bancorp

It can be a daunting task to gather file and folder statistics and properties from a NAS CIFS share with deep directory trees. It is especially difficult when you’re using Microsoft Windows native tools that may have poor enumeration performance. These activities consume a great deal of time in an environment with tens of thousands of folders and millions of files.

This article discusses how the EMC ControlCenter® Network File System Assisted Discovery feature, introduced in v6.0, can be used to provide EMC Celerra® CIFS administrators with file- and folder-level reporting (FLR), covering UNC FLR configuration considerations and lessons learned during an EMC ControlCenter deployment. It provides practical examples of how you can use CIFS FLR reports to quickly determine file age and type distribution, top storage users, and utilization trending. These reports give administrators the information they need to maximize storage utilization and address CIFS storage consumption issues before they impact end users.

The article addresses the following topics:

• Considerations and lessons learned during assisted discovery of network file systems in EMC ControlCenter, such as domain ID and host agent selection

• Steps to configure and schedule data collection policies for network file systems, taking into account collection criteria and CIFS folder and file counts

• Performance considerations for CIFS data collection, providing examples of performance statistics from the Celerra and the host agent server during CIFS scans

• Examples of how EMC StorageScope™ file-level CIFS share reports can be used to show aged and dormant CIFS files for archiving, file type distribution for reclamation, and top CIFS storage users
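
Purely as an illustration of the statistics those reports surface, and of why a naive client-side walk is the slow path the article warns about, the following Python sketch ages and classifies the files under a share path. The path, age buckets, and function name are assumptions of this example, not ControlCenter or StorageScope functionality.

    # Illustrative only: walk a share (for example a UNC path such as
    # \\nas01\projects, hypothetical) and build file-age and file-type counts.
    # On trees with millions of files this is exactly the slow enumeration
    # that dedicated collection policies avoid.
    import os
    import time
    from collections import Counter

    def share_statistics(root, age_buckets_days=(30, 90, 365)):
        """Return (files per extension, files per age bucket, total bytes)."""
        now = time.time()
        by_ext, by_age = Counter(), Counter()
        total_bytes = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue  # skip files we cannot stat
                total_bytes += st.st_size
                by_ext[os.path.splitext(name)[1].lower() or "<none>"] += 1
                age_days = (now - st.st_mtime) / 86400
                bucket = next((d for d in age_buckets_days if age_days <= d),
                              "older")
                by_age[bucket] += 1
        return by_ext, by_age, total_bytes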


Cloud Computing Services—A New Approach to Naming Conventions
Laurence A. Huetteman, Technology Business Consultant

It has become increasingly difficult to have meaningful discussions about cloud computing without a common language. You will see every inconsistency in naming conventions for cloud technology when searching the topic on the Web; this inconsistency is also present during consulting or business conversations.

Wikipedia defines cloud computing by using a six-layer stack of components with terms like client, application, services, platform, etc., with the word “cloud” preceding each component. Others refer to everything as a service, while others try to map SOA, utility computing, and grid to cloud computing services. Even EMC’s suite of cloud computing offerings, while technically impressive, seems to be a loosely coupled group of point solutions with no real structure or clear naming conventions (Hulk/Maui evolved into Atmos™, Mozy™, Pi, etc.) that are referred to generically as cloud computing.

Vendors, partners, and customers spend valuable cycles deciding which model to follow in their discussions, or, worse, inventing their own. My article simplifies this process and applies logic to inconsistent naming conventions by proposing a new naming convention for cloud computing service offerings. It is easily understood and applied across the broad spectrum of services. It is intuitive, so it can be easily adopted, and it is based on a logical model. If successful, this model will be flexible enough to accommodate the anticipated growth of this evolving field of technology.

One consistent theme in all cloud computing discussions is the concept of tiering or layers of services. The thought here is to use an existing scientific classification and to map the layers to current and potentially future cloud computing components or services. One logical choice is to leverage the familiar and widely accepted term, cloud, but map the model to the common atmospheric cloud terminology. This article presents such a model to help you effectively engage with other IT professionals.


“EMC’s Knowledge Sharing program is becoming a new torch bearer to spread technical awareness among global professionals. This is an ideal platform where technologists from diverse backgrounds contribute tremendously toward widening their technical spheres. Besides, this initiative exposes the various technical/non-technical aspects of newer technologies and product advancements in a concise and lucid manner.”

Mohammed Hashim, Wipro Technologies

“The knowledge that we acquire today has a value exactly balanced to our talent to deal with it. Tomorrow, when we know more, we recall that part of knowledge and use it better; the EMC Knowledge Sharing program is a magnificent invention which has given me an abundance of global technical astuteness.”

Rejaneesh Sasidharan, Wipro Technologies

Leveraging Cloud Computing for Optimized Storage Management
Mohammed Hashim, Wipro Technologies
Rejaneesh Sasidharan, Wipro Technologies

Cloud computing refers to spreading IT computing resources across Internet cloud boundaries and offering selective access through consolidated service providers located at strategically placed data centers. Generally, users pay for computing capacity on demand and are not concerned with the underlying technologies, or with the challenges of delivering scalable and extensible storage, server, and other resource capacity.

This article focuses on cloud computing, cloud models, storage, and solutions, and compares the different setups. It also describes storage optimization features, security, ways to leverage the current IT infrastructure, and the advantages and disadvantages of the model.

The article presents the following:

1. Overview of SOA, SaaS, distributed, grid, and cloud computing
2. Cloud architecture and applying cloud computing to storage
3. Cloud models and outlining the cloud storage solution
4. Managing solutions over storage infrastructure with optimal performance
5. Security in the clouds and comparing cloud-based services
6. Advantages and risks of cloud computing
7. Potential future of the cloud

Many would embrace the ability to immediately increase capacity or add capabilities without investing in new infrastructure, training new personnel, or licensing new software. This article is helpful for any engineer involved in storage design and management.


Archiving Cries for a Holistic Architecture
Paul Kingston, Solutions Architect, EMC Corporation, United States


Information is your most valuable asset. EMC knows how to make that asset pay dividends—through integrated hardware, software, and services. We’ve been doing it for some of the greatest companies around the globe for almost 30 years—and we can do it for you.

Our reputation for excellence doesn’t end there. Our world-class education is offered in every corner of the world; and EMC Proven Professional certification is the leading storage and information management certification program in the industry. It aligns our award-winning training with exams, letting you choose a certification in a variety of areas by role or interest.

Want to learn more? Visit us on the web at http://education.EMC.com.

EMC2, EMC, EMC ControlCenter, EMC Proven, Atmos, Avamar, Celerra, CLARiiON, Documentum, MirrorView, Mozy, NetWorker, PowerPath, SnapSure, SnapView, SRDF, StorageScope, Symmetrix, Symmetrix DMX, TimeFinder, and where information lives are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. © Copyright 2009 EMC Corporation. All rights reserved. Published in the USA. 05/09 Brochure H2771.5

Contact Us
Online: http://education.EMC.com

E-mail: [email protected]

Phone: 1-888-EMC-TRNG (888-362-8764)

International: [email protected] +44 208 758 6080 (UK)

+49 6196 4728 666 (Germany) [email protected]

+61 3 9212 6859 (ANZ)

+65 6333 6200 (South Asia)

[email protected] +81 3 3345 5900 (Japan)

[email protected] +82 22125 7503