
Network Design Patterns: N-Tier Data Centers


Sun Microsystems, Inc.
4150 Network Circle
Santa Clara, CA 95045 U.S.A.
650 960-1300

http://www.sun.com/blueprints

Deepak Kakadia, Staff Engineer, Network Architect, Sun Microsystems
Richard Croucher, Chief Architect, Professional Services, Sun Microsystems

Sun BluePrints™ OnLine—October 2003

Part No. 817-3997-10
Revision 1.0, 10/9/03
Edition: October 2003


Copyright 2003 Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, California 95045 U.S.A. All rights reserved.

Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S. and in other countries.

This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.

Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the United States and other countries, exclusively licensed through X/Open Company, Ltd.

Sun, Sun Microsystems, the Sun logo, Sun BluePrints, Sun ONE, Sun StorEdge, Sun Fire, SunDocs, JumpStart, iPlanet, Enterprise JavaBeans, Java Database Connectivity, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the US and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.

The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements.

U.S. Government Rights—Commercial use. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements.

DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.



Network Design Patterns: N-Tier Data Centers

This article describes recommendations and best practices for designing and implementing N-Tier architectures for web-based service delivery infrastructures. It documents a set of fully tested configurations, comparing solutions using two different network equipment providers.

The design and implementation of optimal N-Tier architectures requires a thorough analysis of functional and non-functional requirements, including performance, availability, security, scalability, and flexibility. The degree to which non-functional requirements are met and exceeded often contributes to the quality of an architecture. In this paper, we present proven design principles and concepts that we hope are of value toward the successful delivery of N-Tier architecture solutions.

For the purposes of this paper, we distinguish network "architecture" from "design" as follows:

Architecture is a high-level description of how the major components of the system interconnect with each other, from both a logical and a physical perspective.

Design is the process that specifies, in sufficient detail for implementation, how to construct a network of interconnected nodes that meets or exceeds functional and non-functional requirements (performance, availability, scalability, and so on).

Ample public information already describes the construction of N-Tier data center architectures from a software standpoint. However, from a systems and network architecture perspective, the best way to construct these architectures might not be as clear. This article attempts to fill that void by describing:

■ The recommended network design patterns and the reasons behind these designs
■ Actual implementations that have been tested, tuned, and proven in actual deployments


This article also describes two popular approaches to network architectures, each with its own merits and limitations.

This article addresses the following topics:

■ “Defining Architectures”
■ “Implementation Overview”
■ “Designing the Network Configuration”
■ “Configuring the Physical Network”
■ “Network Security”

The focus of this article is limited to data flow. Systems and network management is a topic in itself and is not discussed in this paper. Readers should have sufficiently detailed information to extend and modify this discussion to their particular needs.

Defining Architectures

This section breaks architecture considerations into two areas:

■ “Logical Architectures”
■ “Physical Architectures”

Logical Architectures

A typical high-level functional overview of an N-Tier data center architecture is shown in FIGURE 1. The tiers are logically segregated by functionality as follows:

1. Web service tier—The set of load-balanced, front-end web servers that handles the initial ingress client HTTP web requests. Each web server typically includes a servlet engine or JSP engine, and the capability to execute locally hosted cgi-bin scripts that run native code.

2. Application service tier—The set of application server instances that might span one large multi-CPU server or several smaller multi-CPU servers. This tier provides greater functionality and sophisticated processing capabilities and services, such as EJB containers, an ORB, or connector modules that integrate with legacy servers. The application server usually interfaces with the web server. In recent years, the web server has been included in the application server's executable image for increased performance.

3. Naming service tier—The set of servers, possibly load balanced, that provides secondary infrastructure support services for managing and maintaining naming information for hosts, addressing data, configuration data, and so on. This can include services such as DNS and LDAP.


4. Data service tier—The set of highly available servers that provides persistent data storage services for critical corporate data. Due to the sensitive nature of the data, this tier is usually highly protected in terms of security and availability. We briefly describe new virtual tiered firewalling techniques that provide increased security without sacrificing performance, using hardware-based firewalls.

FIGURE 1 Logical High-Level Overview of N-Tier Architecture

Note – Client access represents all the client systems that can access the reference architecture (client systems, web browsers, and so forth). This access is not considered part of the N-Tier architecture.

The actual reference implementation described here leveraged Sun™ ONE technology. The Sun ONE software stack provides the software needed to deploy applications using this reference implementation. The tiers of the N-Tier data center architecture, shown in FIGURE 1, are mapped to the Sun ONE software as follows:

■ Web service tier—Supported by Sun ONE Web Server, Enterprise Edition software
■ Naming service tier—Supported by Sun ONE Directory Server software
■ Application service tier—Supported by Sun ONE Application Server software


■ Data service tier—Provided by Oracle 9i database software, which is not part of the Sun ONE software stack

FIGURE 2 illustrates in greater detail how the logical N-Tier architecture is mapped to the software systems architecture, and how Sun ONE and other software components fit together to create a complete system.


FIGURE 2 Logical N-Tier Software Systems Architecture


Physical Architectures

The logical architecture is mapped to a physical realization as shown in FIGURE 3, which describes the physical architecture. This mapping is heavily influenced by non-functional requirements, such as performance and availability. For example, redundant servers and redundant network connectivity are a result of availability requirements and possibly performance requirements. Wirespeed multilayer switches might satisfy performance requirements, provided that the switch performs packet processing within timing constraints. Most Layer 2 and Layer 3 switches are implemented using ASICs, which are commonly available. Higher-layer packet service implementations differentiate vendors from each other. Some vendors, such as Foundry with its ServerIron product line, implement Layer 7 processing using general-purpose CPUs, which is much slower than an ASIC or FPGA implementation.


FIGURE 3 Physical N-Tier Architecture Showing Hardware Components


TABLE 1 through TABLE 3 describe the actual tested hardware components and configurations. This architecture is tested and proven. However, it is easily extensible for adapting to specific customer requirements while keeping the fundamental pattern intact. For example, increasing or decreasing the number of redundant servers in a particular tier directly impacts the degree of availability and performance (if configured as Active-Active), but does not violate the fundamental architecture.

TABLE 1  Tested Systems Configuration: Sun Servers and Storage

Service Tier  Function             Equipment                        Features          RAM    Storage  Operating Environment         Software
Web           Web servers          (4) Sun Fire™ 280R servers       Two 900-MHz CPUs  2 GB   36 GB    Solaris 9 10/02 with patches  Sun ONE Web Server 6.0, Service Pack 5, Enterprise Edition
Naming        DNS servers          (2) Sun Fire 280R servers        Two 900-MHz CPUs  2 GB   36 GB    Solaris 9 10/02 with patches  n/a
Naming        Directory servers    (2) Sun Fire 280R servers        Two 900-MHz CPUs  4 GB   36 GB    Solaris 9 10/02 with patches  Sun ONE Directory Server 5.1
Application   Application servers  (2) Sun Fire 6800 servers        24 1-GHz CPUs     64 GB  200 GB   Solaris 9 10/02 with patches  Sun ONE Application Server 7.0
Data          Data servers         (2) Sun Fire 6800 servers        24 1-GHz CPUs     32 GB  36 GB    Solaris 9 10/02 with patches  VERITAS Volume Manager 3.5, Oracle 9i Release 2
Data          Database storage     (2) Sun StorEdge™ T3 arrays      n/a               n/a    200 GB   n/a                           n/a


TABLE 2  Tested Network Configuration 1: Extreme Networks Equipment

Function     Equipment                                        Features                                                                    Operating Environment
Edge switch  (1) Extreme Networks Summit7i switch             24 1-Gb ports                                                               Extreme OS 6.2.2 build 18
Core switch  (2) Extreme Networks BlackDiamond 6800 switches  Chassis switch with 7 blades; each blade has 8 1-Gb ports (56 ports total)  Extreme OS 6.2.2 build 18

TABLE 3  Tested Network Configuration 2: Foundry Networks Equipment

Function                      Equipment                                          Features       Operating Environment
Edge switch                   (1) Extreme Networks Summit7i switch               24 1-Gb ports  Extreme OS 6.2.2 build 18
Core switch                   (2) Foundry BigIron 4000 Layer 2/3 switches
Server load-balancing switch  (2) Foundry ServerIron XL load-balancing switches


Implementation Overview

Once the design and component selection is complete, follow these implementation steps to set up your service tiers and their software:

▼ To Implement Systems as Service Tiers

1. Install computer systems and storage hardware.

2. Establish network connectivity.

3. Configure partitions and domains on the Sun Fire 6800 servers in the application and data service tiers.

4. Install and configure the Solaris™ Operating System on all systems.

5. Configure the Sun StorEdge T3 arrays on the Sun Fire 6800 servers in the data service tier.

6. Install Oracle 9i Release 2 database software on the Sun Fire 6800 servers in the data service tier.

7. Install the Sun ONE software on the following systems:

a. Web service tier—Install the Sun ONE Web Server software on the four Sun Fire 280R servers in this service tier.

b. Naming service tier—Configure DNS on two of the Sun Fire 280R servers.

c. Naming service tier—Install and configure the Sun ONE Directory Server 5.1 software on the other two Sun Fire 280R servers. Refer to the Sun ONE Directory Server 5.1 documentation collection (see “Related Resources” for more information).

d. Application service tier—Install and configure the Sun ONE Application Server software on two Sun Fire 6800 servers.

Note – See http://docs.sun.com for installation and hardware guides.


Designing the Network Configuration

One can argue that the heart of any architecture composed of multiple nodes is the network architecture. The network architecture is vital to assembling components or building blocks in such a way that the resulting system is high performance, flexible, highly available, scalable, and secure. We describe a recommended methodology for creating these architectures. We also describe actual implementations that show the contrasts between two different network vendors, and how to optimally leverage each of them. This section discusses the following topics:

■ “Logical Network Architecture”
■ “IP Services”
■ “Designing Networks for Availability”
■ “Networks and VLANs”
■ “Inter-Tier Traffic Patterns”

Logical Network Architecture

The logical network design is composed of segregated networks, implemented physically using virtual local area networks (VLANs) defined by the network switches. The internal network uses private IP address spaces (for example, 10.0.0.0) as specified in RFC 1918, for security and portability advantages. Look ahead to FIGURE 10 for a high-level overview of the logical network architecture.

The management network provides centralized data collection and management of all devices. Each device has a separate interface to the management network, to avoid contaminating the production network in terms of security and performance. The management network is also used for automating the installation of software using Solaris JumpStart™ technology.

Although several networks physically reside on a single active core switch, network traffic is segregated and secured using static routes, access control lists, and VLANs. From a practical perspective, this is as secure as separate individual switches.

IP Services

The following subsections describe some of the more important emerging IP services that are often key components of a complete network design for a Sun ONE deployment. These IP services are divided into two categories:


■ Stateful session based—This class of IP services requires that the switch maintain session state information, so that a particular client's session state is maintained across all packets. We show that this requirement has severe implications for highly available solutions, and limits scalability and performance.

■ Stateless session based—This class of IP services does not require that the switch maintain any state information associated with a particular flow.

Many functions can be implemented either by network switches and appliances or by the Sun ONE software stack. In this section, we describe:

■ How these new IP services work
■ What benefit they provide
■ Availability strategies

Later sections describe similar functions that are included in the Sun ONE integrated stack.

Modern multilayer network switches perform many advanced Layer 3 IP services in addition to basic routing. These services are implemented as functions that operate on a packet by modifying the packet headers and controlling the rate at which packets are forwarded. IP services include functions such as Server Load Balancing, Quality of Service, Application Redirection, and Network Address Translation, among others. In this section, we start with Server Load Balancing, an important service for data centers, and then describe adjacent services that can be cascaded.

Server Load Balancing - Class 2 Stateless

The Server Load Balancing (SLB) function maps incoming client requests destined for a virtual IP (VIP) address and port to a real server IP address and port. The target server is selected from a set of identically configured servers, based on a predefined algorithm that considers the loads on the servers as criteria for choosing the best server at any instant.

The purpose of server load balancing is to provide a layer of indirection that decouples servers from the network service that clients interface with. Thus, the server load-balancing function can choose the best server to service a client request. Decoupling increases availability, because if some servers fail, the service is still available from the remaining functioning servers. Decoupling increases flexibility, because servers can be added or removed without impacting the service. Other redirection functions can be cascaded to provide compound functionality, as illustrated in the sketch below.
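To make the mapping concrete, the following minimal sketch shows the kind of selection and header rewrite an SLB function performs, here with a least-connections algorithm (round robin and custom algorithms are also common). It is an illustrative Python model, not vendor switch code; the addresses, load counters, and field names are hypothetical.

```python
# Minimal sketch of an SLB function: pick the "best" real server for a
# packet addressed to the VIP, then rewrite the destination IP and port.
# Real switches do this in ASICs or firmware; data here is hypothetical.

VIP = ("a.b.c.d", 123)            # virtual IP:port advertised to clients

real_servers = {                  # (real IP, port) -> active connections
    ("10.10.0.11", 80): 12,
    ("10.10.0.12", 80): 7,
    ("10.10.0.13", 80): 9,
}

def least_connections():
    """Choose the real server with the fewest active connections."""
    return min(real_servers, key=real_servers.get)

def slb_rewrite(packet):
    """Rewrite dstIP:dstPort from the VIP to the chosen real server."""
    if (packet["dstIP"], packet["dstPort"]) == VIP:
        real = least_connections()
        packet["dstIP"], packet["dstPort"] = real
        real_servers[real] += 1   # account for the new flow
    return packet

pkt = {"srcIP": "172.16.0.5", "srcPort": 40312,
       "dstIP": "a.b.c.d", "dstPort": 123}
print(slb_rewrite(pkt))           # steered to 10.10.0.12, the least loaded
```

Because the source address is untouched in this sketch, the reply path (directly to the client, or back through the switch) depends on how the surrounding NAT functions, described below, are configured.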

SLB mapping functions differ from other mapping functions, such as redirection. Redirection makes mapping decisions based on criteria such as ensuring that a particular client is redirected to the same server, to take advantage of caches, cookie persistence, and so on. FIGURE 4 shows an overview of the various mapping functions, and how they can intercept a request and rewrite the IP header according to the provisioned configuration rules.


FIGURE 4 IP Services Switch Functions on Incoming Packets

FIGURE 4 shows a typical client request destined for an external VIP, with IP address a.b.c.d and port 123. The SLB function eventually intercepts the packet and rewrites the destination IP address to that of the real server chosen by the configured algorithm. The packet is then returned as indicated by the source IP address.

Application Redirection - Class 2 Stateless

The Application Redirection function intercepts a client's HTTP request and redirects the request to another destination, usually a group of cache servers. Application Redirection rewrites only the IP destination field. This is different from proxy switching, where the socket connection is terminated and a new connection to the server is created to fetch the requested web page.

Application Redirection serves the following purposes:

■ Reduces the load on one set of web servers by redirecting requests to another set, usually cache servers for specific content
■ Intercepts client requests and redirects them to another destination, to control certain types of traffic based on filter criteria

FIGURE 5 illustrates the functional model of Application Redirection, which only rewrites the IP header.


FIGURE 5 Application Redirection Functional Model

Content Switching - Class 1 Stateful

Content Switching is also known as Layer 7 processing, proxy switching, or URL switching. This function accepts a client's incoming HTTP request, terminates the socket connection, and creates another socket connection to the target web server, which is chosen based on a user-defined rule. The Content Switching function then fetches the requested web page and returns it to the client.

FIGURE 6 shows an overview of the Content Switching functional model:


FIGURE 6 Content Switching Functional Model

Content Switching with full Network Address Translation (NAT) serves the following purposes:

■ Isolates internal IP addresses from being exposed to the public Internet.
■ Allows reuse of a single IP address. For example, clients can send their web requests to www.a.com or www.b.com, where DNS maps both the a.com and b.com domains to a single IP address. The proxy switch receives this request, with the packet containing an HTTP header in the payload that identifies the target domain (a.com or b.com), and redirects the request to the appropriate group of servers.
■ Allows parallel fetching of different parts of web pages from servers optimized and tuned for that type of data. For example, a complex web page may need GIFs, dynamic content, and cached content. With Content Switching, one set of web servers can hold the GIFs and another can hold the dynamic content. The proxy switch can make parallel fetches and retrieve the entire page at a faster rate than would otherwise be possible.
■ Ensures that requests with cookies or Secure Sockets Layer (SSL) session IDs are redirected to the same server to take advantage of persistence.

FIGURE 6 shows that the client's socket connection is terminated by the proxy function. The proxy retrieves as much of the URL as is needed to make a decision based on the retrieved URL. In this example, various URLs map to various server groups, which are virtual IP addresses.


At this stage, the request may be forwarded to the server, or passed off to the SLB function that is waiting for traffic destined for the server group.

The proxy is configured with a virtual IP, so the switch forwards all client requests destined for this virtual IP to the proxy function. The proxy function rewrites the IP header, particularly the source IP and port, so that the server sends the requested data back to the proxy instead of directly to the client. A sketch of this rule-based mapping follows.
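The following sketch shows the flavor of rule table this implies: a Layer 7 function matching URL paths to server groups, each of which is itself a VIP that can be handed to the SLB function. The rules loosely echo FIGURE 6 but are otherwise hypothetical.

```python
# Minimal sketch of content-switching (Layer 7) rule matching: after
# terminating the client connection, the proxy inspects the URL path
# and picks a server group. Rules are hypothetical, after FIGURE 6.

rules = {
    "/SMA/stata/":  "servergroup1",   # static content, site A
    "/SMA/dnsa/":   "servergroup2",   # dynamic content, site A
    "/SMB/statb/":  "servergroup3",   # static content, site B
    "/SMB/CACHEB/": "servergroup4",   # cached content, site B
}
DEFAULT_GROUP = "servergroup1"

def choose_server_group(url_path):
    """Longest-prefix match of the URL path against the rule table."""
    matches = [p for p in rules if url_path.startswith(p)]
    return rules[max(matches, key=len)] if matches else DEFAULT_GROUP

print(choose_server_group("/SMA/stata/index.html"))   # -> servergroup1
print(choose_server_group("/SMB/CACHEB/index.html"))  # -> servergroup4
```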

Network Address Translation - Class 1 Stateful

Network Address Translation (NAT) is a critical component for security and proper traffic direction. There are two basic types of NAT: half and full. Half NAT rewrites the destination IP address and MAC address to a redirected location, such as a web cache, which returns the packet directly to the client because the source IP address is unchanged. In full NAT, the socket connection is terminated by a proxy, so the source IP and MAC addresses are changed to those of the proxy server.

NAT serves the following purposes:

■ Security—Prevents exposing internal private IP addresses to the public.
■ IP address conservation—Requires only one valid exposed IP address to fetch Internet traffic on behalf of internal networks that use private, unregistered IP addresses.
■ Redirection—Intercepts traffic destined for one set of servers and redirects it to another by rewriting the destination IP and MAC addresses. With half-NAT translated traffic, the redirected servers can send responses directly to the clients because the original source IP address has not been rewritten.

NAT is configured with a set of filters, usually quintuple Layer 3 rules. If incoming traffic matches a filter rule, the packet IP header is rewritten, or another socket connection is initiated to the target server, which itself can be changed depending on the rule. The half and full variants are sketched below.
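The sketch below contrasts the two variants on a packet's address fields; the half-NAT rewrite is also the mechanism behind the Application Redirection function described earlier. Addresses and field names are hypothetical, and a real device also keeps per-flow state so that replies can be translated back.

```python
# Minimal sketch contrasting half NAT and full NAT.

CACHE = {"ip": "10.60.0.9", "mac": "00:00:5e:00:01:09"}    # redirect target
PROXY = {"ip": "192.168.0.2", "mac": "00:00:5e:00:01:02"}  # full-NAT proxy

def half_nat(pkt):
    """Rewrite only the destination IP and MAC; the source is untouched,
    so the redirected server replies directly to the original client."""
    pkt["dstIP"], pkt["dstMAC"] = CACHE["ip"], CACHE["mac"]
    return pkt

def full_nat(pkt):
    """The proxy terminates the connection: the source fields become the
    proxy's, so the server's reply returns through the proxy (stateful)."""
    pkt = half_nat(pkt)
    pkt["srcIP"], pkt["srcMAC"] = PROXY["ip"], PROXY["mac"]
    return pkt

pkt = {"srcIP": "172.16.0.5", "srcMAC": "08:00:20:aa:bb:cc",
       "dstIP": "a.b.c.d", "dstMAC": "00:00:5e:00:01:01"}
print(half_nat(dict(pkt)))   # reply path: server -> client directly
print(full_nat(dict(pkt)))   # reply path: server -> proxy -> client
```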

Secure Sockets Layer (SSL) Session ID Persistence - Class 1 Stateful

SSL can be implemented in a variety of ways: in software, in hardware, or both. SSL can be terminated at the target server, an intermediate server, an SSL network appliance, or an SSL-capable network switch. An SSL appliance, such as those from Netscaler or Array Networks, tends to be built around a PC board with a PCI-based card containing the SSL accelerator ASIC. Hence, the SSL acceleration is implemented in libraries that offload only the mathematical computations; the rest of the SSL processing is implemented in software, which directs selective functions to the hardware accelerator. Clearly, one immediate limitation is the PCI bus. Other, newer SSL devices have an SSL accelerator integrated into the datapath of the network switch. These are very advanced products, just starting to emerge from startups such as Wincom Systems.

FIGURE 7 shows an example of switch activity. Once a client has made initial contact with a particular server, which may have been selected by SLB, the switch ensures that subsequent requests are forwarded to the same SSL server, based on the SSL session ID that the switch stored during the initial SSL handshake. The switch keeps state information based on the client's initial request for HTTPS on port 443, which contains a hello message. This first request is forwarded to the server selected by the SLB algorithm or by another function. The server responds to the client's hello message with an SSL session ID. The switch intercepts this SSL session ID and stores it in a table, then forwards all of the client's subsequent requests to the same server, as long as each request contains that SSL session ID. There may be several different TCP socket connections that span the same SSL session; state is maintained by the SSL session ID in each request sent by the same client.
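The persistence table the switch maintains can be sketched as follows; the binding logic is a hypothetical simplification of what the switch keeps in hardware tables (a real switch also ages entries out).

```python
# Minimal sketch of SSL session ID persistence: remember which real
# server handled a session's initial handshake, then pin every later
# connection carrying that session ID to the same server.

persistence = {}   # ssl_session_id -> real server

def pick_server(session_id, slb_choose):
    """Return the pinned server for a known session ID; otherwise let
    the SLB algorithm choose, and remember the binding."""
    if session_id not in persistence:
        persistence[session_id] = slb_choose()  # e.g. least connections
    return persistence[session_id]

choose = lambda: "10.10.0.12"
print(pick_server("9b1f07...", choose))  # new session: SLB chooses
print(pick_server("9b1f07...", choose))  # same session ID: same server
```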

FIGURE 7 Network Switch with Persistence Based on SSL Session ID

An appliance can be added for increased performance in terms of SSL handshakes and bulk encryption throughput. FIGURE 8 shows how an SSL appliance could be deployed. Client requests first come in for a particular URL using the HTTPS protocol on port 443. The switch recognizes that this traffic must be directed to the appliance, which is configured to provide that SSL service. A typical appliance, such as Netscaler, can also be configured to provide Content Switching and load balancing in addition to SSL acceleration. The appliance then reads or inserts cookies and resubmits the HTTP request to an appropriate server, which maintains state based on the cookie in the HTTP header.


FIGURE 8 Tested SSL Accelerator Configuration

Cookie Persistence - Class 1 Stateful

The HTTP/1.0 protocol was originally designed to deliver static pages in one transaction. As more complex web sites evolved that required multiple HTTP GETs against the same server, it became clear that performance was severely limited by the tearing down and reopening of TCP socket connections. This problem was solved by HTTP/1.1, which allowed persistent connections: immediately after a socket connection, the client could pipeline multiple requests. However, as more complex web applications evolved, such as the shopping cart, persistence across multiple HTTP/1.1 requests was required. The problem was further complicated by proxies and load balancers that interfered with traffic being redirected to the same web server, so another mechanism was needed to maintain state across multiple HTTP/1.1 requests. The solution was the introduction of two new HTTP headers, Set-Cookie and Cookie, as defined in RFC 2109. These headers carry state information between the client and server. Typically, most load-balancing switches have enough intelligence to ensure that a particular client's session with a particular server is maintained, based on the cookie that is inserted by the server and presented by the client on subsequent requests.
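A sketch of that cookie-based routing decision follows; the header parsing is simplified and the SRVID cookie name is hypothetical.

```python
# Minimal sketch of cookie persistence at a load balancer: honor a
# server ID carried in the request's Cookie header; otherwise load
# balance, and let the server's Set-Cookie pin future requests.

def route_by_cookie(headers, servers, slb_choose):
    """Return the server named by a SRVID cookie, else an SLB choice."""
    for part in headers.get("Cookie", "").split(";"):
        name, _, value = part.strip().partition("=")
        if name == "SRVID" and value in servers:
            return value              # sticky: same server as before
    return slb_choose()               # no valid cookie: load balance

servers = {"web1", "web2", "web3"}
print(route_by_cookie({"Cookie": "SRVID=web2; theme=a"},
                      servers, lambda: "web1"))     # -> web2 (persistent)
print(route_by_cookie({}, servers, lambda: "web1"))  # -> web1 (SLB pick)
```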


Designing Networks for Availability

FIGURE 9 shows a cross-sectional view of the tier types and the functions performed at each tier. It also shows the availability strategies for the network and web tiers. External tier availability strategies are outside the scope of this article; our discussion is limited to the services tiers (web, application services, naming, and so on).

FIGURE 9 Network Availability Strategies

Designing network architectures for optimal availability requires maximizing two orthogonal components:

■ Intra availability—Refers to maximizing availability as a function of the estimated failure probability of the components themselves. Only failures of the components themselves are considered in the following equation:

FAvailability = MTBF ÷ (MTBF + MTTR)

where MTBF is Mean Time Between Failures and MTTR is Mean Time To Recovery.


■ Inter availability—Refers to minimizing the impact of failures caused by factors external to the system in the surrounding environment, such as single points of failure (SPOFs), power outages, or a technician accidentally pulling out a cable.

It is not sufficient to simply maximize the FAvailability function; the SPOF and environmental factors also need to be considered. The networks designed in this article describe a highly available architecture that conforms to these design principles.

FIGURE 10 provides an overview of the logical network architecture. It shows how the tiers map to the different networks, which in turn are mapped to segregated VLANs. This segregation allows inter-tier traffic to be controlled by filters on the switch, or possibly by a firewall (which is then the only bridge point between VLANs).

FIGURE 10 Logical Network Architecture Overview

The following list describes each subnetwork shown in FIGURE 10:

■ External network—The external-facing network that directly connects to the Internet. All IP addresses must be registered, and it is advisable to secure them with a firewall.


The following networks are assigned non-routable IP addresses, as specified by RFC 1918, drawn from the following ranges:

■ 10.0.0.0 - 10.255.255.255 (10/8 prefix)
■ 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
■ 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

■ Web services network—A dedicated network containing web servers. Typical configurations include a load-balancing switch. This switch can be configured to allow the web server to answer the client's HTTP request directly, or to require the load-balancing device to return the response on behalf of the web server.
■ Naming services network—A dedicated network consisting of servers that provide LDAP, DNS, NIS, and other naming services. The services are for internal use only and should be highly secure. Internal infrastructure support services must ensure that requests originate from, and are destined for, internal servers. Most requests tend to be read intensive, hence the potential use of caching strategies for increased performance.
■ Management network—A dedicated service network that provides management and configuration for all servers, including the JumpStart installation option for new systems.
■ Backup network—A dedicated service network that provides backup and restore operations. This network is pivotal to minimizing disturbances to other production service networks during backup and other network bandwidth-intensive operations.
■ Device network—A dedicated network that attaches IP devices for storage, as well as other devices.
■ Application services network—A dedicated network, typically consisting of larger multi-CPU servers that host multiple instances of the Sun ONE Application Server software image. These requests tend to require only low network bandwidth, but can span multiple protocols, including HTTP, CORBA, and proprietary TCP- and UDP-based protocols. The network traffic can also be significant when Sun ONE Application Server clustering is enabled: every update to a stateful session bean triggers a multicast update to all servers on this dedicated network, so that participating cluster nodes update the appropriate stateful session bean. Network utilization increases in direct proportion to the intensity of session bean updates.
■ Database network—A dedicated network, typically consisting of one or two multi-CPU database servers. Network traffic typically consists of JDBC traffic between the application server and the database server.


Networks and VLANs

Each service is deployed on a dedicated Class C network, where the first three octets represent the network number. The design takes an innovative approach in which separate Layer 2 devices are not required, because their functionality is collapsed into the core switch. Decreasing the management and configuration of separate devices while maintaining the same functionality is a major step toward cutting costs and increasing reliability.

FIGURE 11 shows how a traditional configuration requires two Layer 2 switches. A particular VLAN spans the six segments, which gives each interface access to the VLAN on failover.

FIGURE 11 Traditional Availability Network Design Using Separate Layer 2 Switches

The design shown in FIGURE 12 provides the same network functionality, but eliminates the need for two separate Layer 2 devices. This is accomplished using a tagged VLAN interconnect between the two core switches. Collapsing the Layer 2 functionality reduces the number of network devices, which lowers the risk of unit failure, lowers cost, and reduces manageability issues.


FIGURE 12 Availability Network Design Using Large Chassis-Based Switches

Solaris IPMP Host Network Interface Redundancy Introduction

Solaris 8 introduced a novel technology, IP Multipathing (IPMP), that builds network load balancing and path failover into the base operating system. IPMP provides failover between multiple physical network interfaces. Its advantages over NAFO and other network interface redundancy approaches are support for outbound load balancing, and a single mechanism that supports both clustered and non-clustered nodes. Failover is typically fast enough to preserve application session connectivity; the default failover time is ten seconds, and normal TCP/IP recovery ensures no loss of data. Interfaces are collected into groups, with a minimum of two interfaces per group. Where larger domains are built, the interface group can scale to the total number of installed network interface cards (NICs) that are attached to the same VLAN.


Note – Currently, configuration of Quad Fast Ethernet cards is limited to eight cards, and configuration of Gigabit Ethernet cards is limited to six cards. Additional I/O cards can be added by blacklisting and adding as needed. Please consult a Sun Systems Engineer for the latest configuration rules.

Solaris IPMP Host Network Interface Redundancy Internals

FIGURE 13 describes the internal components of Solaris IPMP. The key is the group identifier (GID) associated with each physical MAC address. IP forwards packets destined for a particular GID, as opposed to a particular physical interface.


FIGURE 13 IPMP Internals

Furthermore, IP can load balance across all physical interfaces that share a common GID. Each physical interface has a unique MAC address and a unique test IP address (TIP). The virtual IP (VIP) address associated with the physical interfaces belonging to the same GID has two modes of operation:


1. Active-Active—The VIP must be specified using the addif command for each active interface. Only active interfaces are load balanced for outgoing packets.

2. Active-Standby—The VIP is not specified on a particular interface, and the standby command is issued for the standby interface. In this case, the standby interface does not participate in outbound load balancing; it associates with the VIP only in the event that an active interface fails and remains down.

It is important to configure the network switch carefully when using IPMP. As shown in FIGURE 13, there are two types of forwarding tables: Layer 3 for IP-based forwarding, and Layer 2 for MAC-based forwarding. It is important that the MAC addresses are different: the switch must have a unique port associated with each MAC address. Otherwise, the switch may get confused, depending on the network forwarding implementation. Also, if a standby interface has just failed over, the VIP will still be associated with the old, failed MAC address, and the switch may incorrectly forward to the failed NIC. It is important to use tested network switches that know, when a link fails, to remove the appropriate entries from the forwarding table and re-ARP for an updated entry.

IPMP detects failures using two methods:

■ Link level—The NIC knows when the link is physically down. The driver issues a message to IP to perform failure detection operations based on the configuration.
■ IP level—The in.mpathd daemon detects Layer 3 failures involving the default router. Periodically, in.mpathd tests the connectivity between the TIP and the default router, ensuring correct Layer 3 connectivity. In the event of a failure, a message is sent to IP to perform failure detection operations based on the configuration.

A sketch of this probe-and-failover logic appears after the list.
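The following Python sketch is a schematic of the IP-level detection loop just described, not the actual in.mpathd implementation; the interface names and addresses follow FIGURE 13, and ping() is a stand-in for the real ICMP probe.

```python
# Schematic sketch of in.mpathd-style IP-level failure detection:
# probe the default router through each interface's test address (TIP);
# if the active interface's probe fails, float the VIP to the standby.

import time

DEFAULT_ROUTER = "10.6.97.1"

interfaces = {
    "ce0": {"tip": "10.6.97.53", "active": True},   # currently owns the VIP
    "ce1": {"tip": "10.6.97.54", "active": False},  # standby
}

def ping(src_tip, dst):
    """Stand-in for an ICMP probe from a test address to the router."""
    return True   # replace with a real reachability check

def probe_once():
    for name, nic in interfaces.items():
        if nic["active"] and not ping(nic["tip"], DEFAULT_ROUTER):
            # Active path lost Layer 3 connectivity: fail the VIP over.
            nic["active"] = False
            other = next(n for k, n in interfaces.items() if k != name)
            other["active"] = True
            print("failover: VIP moved off", name)

for _ in range(3):       # a daemon would loop forever
    probe_once()
    time.sleep(1)        # probe interval (tunable)
```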

Solaris IPMP Host Network Interface Redundancy Availability

A typical highly available configuration includes a Sun server with dual NICs, which increases the availability of these components by several orders of magnitude. For example, the Gigabit Ethernet card, part number X2069A, by itself has an MTBF of 199,156 hours; assuming approximately 2 hours MTTR, it has an availability of 0.999989958. With two cards, the effective MTBF increases so that availability becomes nine 9's, at .9999999996 availability. This small incremental cost has a big impact on the overall availability computation.
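As a check, these figures follow from the FAvailability equation given earlier; the dual-card number additionally assumes the two cards fail independently, with the pair unavailable only when both are down at once:

FAvailability(one card) = 199,156 ÷ (199,156 + 2) ≈ 0.99998996
FAvailability(two cards) ≈ 1 - (1 - 0.99998996)^2 ≈ 1 - 1.0 × 10^-10

That is, a per-card unavailability on the order of 10^-5 drops to roughly 10^-10 for the redundant pair, in line with the nine-plus 9's quoted above.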

FIGURE 14 shows the Sun server redundant NIC model using IPMP in Active-Standby mode. The server has two NICs, ge0 and ge1, with fixed IP addresses a.b.c.d and e.f.g.h. The virtual IP address w.x.y.z is the IP address of the service; client requests use this IP address as their destination. This IP address floats between the two interfaces, ge0 and ge1, but only one interface can be associated with the virtual IP address at any one instant. If the ge0 interface owns the virtual IP address, data traffic follows the P1 path. If the ge0 interface fails, the ge1 interface takes over, associates itself with the virtual IP address, and data traffic then follows the P2 path. Failures can be detected within two seconds, depending on the configuration.

FIGURE 14 High-Availability Network Interface Cards on Sun Servers in Active Standby

Layer 3—Integrated Virtual Router Redundancy Protocol (VRRP, RFC 2338) and IPMP

By combining the availability technologies of routers and server NICs, we can create a reusable cell for any deployment where servers are connected to routers. This reusable cell is highly available and scalable. In most LAN environments, there are many servers or client nodes with one default router. If that default router fails, or the link that connects the hosts to the default router fails, connectivity to the outside network is down. To solve this problem, the Virtual Router Redundancy Protocol (VRRP), defined in RFC 2338, was introduced. VRRP is basically a multicast protocol used by switches to monitor one another's health. When a backup switch detects a failure of the master switch, it assumes the IP address of the master, which is the same IP address used by all servers of a particular VLAN as the default route. To integrate IPMP and VRRP so that both redundancy protocols interoperate properly, care must be taken to ensure correct operation and corrective action in the event of a failure. FIGURE 15 describes a working configuration. Lines 1 and 2 show the VRRP protocol used by the routers to monitor one another: if one router detects that the other has failed, the surviving router assumes the role of master and inherits the IP address and MAC address of the master.

Lines 3 and 5 in FIGURE 15 show how a switch can verify that a particular connection is up and running. This connection check can be port-based, link-based, or based on Layers 3, 4, and 7. The router can make synthetic requests to the server and verify that a particular service is up and running.


If the router detects that the service has failed, VRRP can be configured on some switches to consider this failure as part of the election algorithm and tie the failure to the priority of the VRRP router.

Simultaneously, the server is also monitoring its links. Currently, IPMP consists of a daemon, in.mpathd, that constantly pings the default router. As long as the default router can be pinged, the master interface (ge0) retains ownership of the IP address. If the in.mpathd daemon detects that the default router is not reachable, automatic failover occurs, which brings down the link and floats the server's IP address over to the surviving interface (ge1).

In the lab, we were able to tune IPMP and VRRP to achieve failure detection and recovery within one second. There is a trade-off, however. Because the control packets travel on the same network as production traffic, and because VRRP is a CPU-intensive task, false failures are possible if the switches, networks, or servers become overloaded: an overloaded device might take longer than the strict timeout to respond to its peer's heartbeat.

FIGURE 15 Design Pattern for IPMP and VRRP Integrated Availability Solution
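To make the election side of this design pattern concrete, the following sketch shows a simplified VRRP-style master election; the router names, priorities, and health inputs are hypothetical, and real VRRP (RFC 2338) adds advertisement timers, preemption, and multicast signaling not modeled here.

```python
# Simplified sketch of VRRP-style master election between two routers.
# The healthy router with the highest priority becomes master and owns
# the virtual IP and MAC that the servers use as their default route.

VIRTUAL_IP = "10.10.0.1"   # default route advertised to the servers

routers = [
    {"name": "core1", "priority": 200, "healthy": True},
    {"name": "core2", "priority": 100, "healthy": True},
]

def elect_master():
    """Highest-priority healthy router wins the virtual IP."""
    alive = [r for r in routers if r["healthy"]]
    return max(alive, key=lambda r: r["priority"]) if alive else None

print(elect_master()["name"])   # -> core1 (master)

# A failed health check (for example, a synthetic request to a server
# farm times out) can be tied into the election, demoting the master:
routers[0]["healthy"] = False
print(elect_master()["name"])   # -> core2 takes over the virtual IP
```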

N-Tier Data Center Logical Network Design

The logical network design for the N-Tier data center (FIGURE 16) incorporates redundant server network interfaces and the integrated VRRP and IPMP availability solution.


FIGURE 16 Logical Network Architecture With Virtual Routers, VLANs, and Networks


TABLE 4 summarizes the separate networks and associated VLANs.

The edge network connects redundantly to the internal reference architecture network. One of the core switches owns the 192.168.0.2 IP address, which means that switch is the master and the other is in slave mode. While a switch is in slave mode, it does not respond to any traffic, including ARPs. The master also assumes ownership of the MAC address that floats along with the virtual IP address 192.168.0.2.

Note – If you have multiple NICs, make sure each NIC uses its own unique MAC address.

Each switch is configured with the identical networks and associated VLANs shown in TABLE 4. An interconnect between the switches extends each VLAN, but is tagged to allow multiple VLANs to share a physical link. (This requires a network interface that supports tagged VLANs, such as cards that use the ce driver.) The Sun servers connect to both switches in the appropriate slots, where only one of the two interfaces is active.

Most switches support Routing Information Protocol (RIP and RIPv2), Open Shortest Path First (OSPF), and Border Gateway Protocol v4 (BGP4); however, static routes provide a more secure environment. A redundancy protocol based on Virtual Router Redundancy Protocol (VRRP, RFC 2338) runs between the virtual routers. The MAC address of the virtual routers floats among the active virtual routers, so that the ARP caches on the servers do not need any updates when a failover occurs.

TABLE 4  Network and VLAN Design

Name    Network      Default Router  VLAN    Purpose
client  172.16.0.0   172.16.0.1      client  Client load generation
edge    192.168.0.0  192.168.0.1     edge    Connects client network to internal reference architecture network
web     10.10.0.0    10.10.0.1       web     Web services
ds      10.20.0.0    10.20.0.1       ds      Directory services
db      10.30.0.0    10.30.0.1       db      Database services
app     10.40.0.0    10.40.0.1       app     Application services
dns     10.50.0.0    10.50.0.1       dns     DNS services
mgt     10.100.0.0   10.100.0.1      mgt     Management and administration
san     10.0.0.0     10.0.0.1        san     Devices network


Inter-Tier Traffic Patterns

When a client makes a request, the request can be handled two ways depending on

the type of request: A web server might return information to the client directly, or it

might forward the request to an application server for further processing.

In the case where the client’s request is for static content, such as images, the request

is handled directly by the web service tier. These requests are handled quickly, and

do not present a heavy load to the client or server.

In the case where the client requests dynamically generated content that requires

Java server pages (JSP) or servlet processing, the request is passed to the application

service tier for processing. This is often the bottleneck for large-scale environments.

The application server runs the core of the application that handles the business

logic to service the client request, either directly or indirectly. While handling the

business logic, the application server can use many supporting resources, including

directory servers, databases, and even other web application services.

FIGURE 17 illustrates how the data flows through the various system interfaces during a typical application services request. TABLE 5 provides a description of each numbered interaction in the illustration. Load balancers provide availability, performance, and scalability by spreading the incoming load across multiple web servers. The web servers retrieve static content from an NFS server. To prevent NFS from becoming the bottleneck, each web server can cache this content on a local disk using cachefs. This preserves the web servers' statelessness while reducing NFS network traffic and its associated latencies. New changes to the web server content are pulled onto the web servers on demand as the data is accessed. Newly added web servers can have their caches preloaded using the cachefspack(1M) command.
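A minimal sketch of fronting the NFS document root with cachefs on a web server (the NFS server name nfs1, the export path, and the cache directory are hypothetical):

# Create a cache directory on the web server's local disk
cfsadmin -c /var/cache/webdocs
# Mount the static content through the cache; files are pulled in on demand
mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/webdocs \
    nfs1:/export/docs /docs
# On a newly added web server, preload the cache
cachefspack -p /docs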

It is important to note that this design is highly extensible and sufficiently modular to allow for flexible variations according to customer needs.


FIGURE 17   Logical Network Describing Application Data Flow
[Figure: Clients connect through switching services and a load balancer to the web services tier, which is backed by a static-content NFS server; web services communicate with application services, which in turn use directory and database services. The numbered interactions are described in TABLE 5.]


TABLE 5   Sequence of Events for FIGURE 17

1. Client -> Switch (HTTP/HTTPS): Client initiates the web request. Client communication can be HTTP or HTTPS (HTTP with secure socket layer). HTTPS can be terminated at the switch or at the web server.

2. Switch -> Web server (HTTP/HTTPS): Switch redirects the client request to the appropriate web server.

3. Web server -> Application server (application server web connector over TCP): The web server redirects the request to the application server for processing. Communication passes through a web server plug-in over a proprietary TCP-based protocol.

4. Application server -> Directory server (LDAP): The Java 2 Enterprise Edition (J2EE) application hosted by the application server identifies the requested process as requiring specific authorization. It sends a request to the directory server to verify that the user has valid authorization.

5. Directory server -> Application server (LDAP): The directory server successfully verifies the authorization through the user's LDAP role. The validated response is returned to the application server, which then processes the business logic represented in the J2EE application.

6. Application server -> Database server (JDBC): The business logic requests data from a database as input for processing. The requests may come from servlets, Java Data Access Objects, or Enterprise JavaBeans™ (EJBs) that in turn use Java DataBase Connectivity (JDBC™) to access the database.

7. Database server -> Application server (JDBC): The JDBC request can contain any valid SQL statement. The database processes the request natively and returns the appropriate result through JDBC to the application server.

8. Application server -> Web server (application server web connector over TCP): The J2EE application completes the business logic processing, packages the data for display (usually through a JSP that renders HTML), and returns the response to the web server.

9. Web server -> Switch (HTTP/HTTPS): Switch receives the reply from the web server.

10. Switch -> Client (HTTP/HTTPS): Switch rewrites the IP header and returns the response to the client.
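Steps 2 and 9 are what the server load balancer implements: a virtual IP address that fans out client connections across the real web servers. A sketch in the Foundry ServerIron syntax that appears later in this article (the addresses and names are illustrative):

server real web1 10.10.0.101
 port http
server real web2 10.10.0.102
 port http
server virtual WebVip 192.168.0.100
 port http
 bind http web1 http web2 http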


Configuring the Physical Network

The next step involves implementing a real network based on the logical network design. This process involves:

■ "Physical Network Connectivity" on page 38
■ "Configuring Switches" on page 40

There are several approaches to realizing a network that functionally satisfies the logical architectural requirements.

The Sun ONE N-Tier Data Center is vendor independent, so you can use the network equipment that best suits your environment. In the lab, we built two different network configurations. One configuration uses Extreme Networks BlackDiamond 6800 core switches (FIGURE 18), and the other uses a Foundry Networks BigIron core switch and ServerIron XL load balancer (FIGURE 19). The Extreme Networks switch that we used has built-in load balancing, so there is no need for an external load-balancing device. The Foundry Networks products required a separate load-balancing switch.

Similar implementations could be realized using products from Cisco or Nortel.


FIGURE 18   Sun ONE N-Tier Network Configuration With Extreme Networks Equipment
[Figure: Clients 1 and 2 reach an L2-L3 edge switch (192.168.10.1) that feeds two Extreme core switches (192.168.10.2 and 192.168.10.3). The cores serve the web service tier (Sun Fire 280R servers, 10.10.0.100-10.10.0.103), the naming service tier (Sun Fire 280R servers, 10.20.0.100-10.20.0.103), the application service tier (Sun Fire 6800 servers, 10.40.0.100-10.40.0.101), and the data service tier (Sun Fire 6800 servers with T3 arrays, 10.30.0.100-10.30.0.101), with tier router addresses 10.10.0.1, 10.20.0.1, 10.40.0.1, 10.30.0.1, and 10.50.0.1.]


FIGURE 19   Sun ONE N-Tier Network Configuration With Foundry Networks Equipment
[Figure: Same tier layout as FIGURE 18, with clients entering through a Level 2-3 edge switch (192.168.10.1); a Foundry master core (192.168.10.2) and standby core (192.168.10.3) replace the Extreme cores, with separate server load-balancer switches in front of the Foundry switches.]


Physical Network Connectivity

FIGURE 20 is a standardized diagram showing the IP addresses for both the Extreme Networks and Foundry Networks configurations.

The Extreme Networks configuration maps directly to mls1 and mls2, because these switches incorporate server load balancing (SLB) functionality. Foundry, however, requires additional load-balancing appliances.

The FIGURE 20 elements are described further in TABLE 6.


FIGURE 20   Physical Network Connections and Addressing
[Figure: Clients 1 and 2 (ge0:172.16.0.101/24 and ge0:172.16.0.102/24) connect through the edge switch (172.16.0.1/24 and 192.168.0.1/24) to core switches mls1 and mls2 (192.168.0.2/24 each, with management addresses 192.168.0.101/24 and 192.168.0.102/24). Each server attaches to both cores with a ge0/ge1 interface pair: web servers on 10.10.0.101-10.10.0.108, application servers app1 and app2 on 10.40.0.101-10.40.0.108, directory servers ds1 and ds2 on 10.20.0.101-10.20.0.104, and database servers db1 and db2 on 10.30.0.101-10.30.0.104; each core hosts the tier router addresses 10.10.0.1/24, 10.40.0.1/24, 10.20.0.1/24, and 10.30.0.1/24. Management hme0 interfaces sit on the 10.100.0.0 networks.]
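Once cabled, each server's view of the network can be sanity-checked from the Solaris shell (the addresses follow FIGURE 20; a sketch):

# Confirm which of the two interfaces is plumbed and active
ifconfig -a
# Confirm reachability of the tier's default router
ping 10.10.0.1
# Trace the path toward a client to confirm routing through the core
traceroute 172.16.0.101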


TABLE 6   Physical Network Connections and Addressing

Switch  Description                                Port     Speed (Mb/s)  Base Address  Netmask
edge    Client network to external network router  1,2,3,4  1000          172.16.0.1    255.255.255.0
edge    External network - mls1                    5,6      1000          192.168.10.1  255.255.255.0
mls1    External network                           1        1000          192.168.10.2  255.255.255.0
mls1    Web/application service router             3,4,5,6  1000          10.10.0.1     255.255.255.0
mls1    Directory service router                   7,8      1000          10.20.0.1     255.255.255.0
mls1    Database service router                    9,10     1000          10.30.0.1     255.255.255.0
mls2    External network                           1        1000          192.168.10.2  255.255.255.0
mls2    Web/application service router             3,4,5,6  1000          10.10.0.1     255.255.255.0
mls2    Directory service router                   7,8      1000          10.20.0.1     255.255.255.0
mls2    Database service router                    9,10     1000          10.30.0.1     255.255.255.0

Configuring Switches

A high-level overview of the switch configuration is shown in FIGURE 21.


FIGURE 21   Collapsed Design Without Layer 2 Switches
[Figure: An Extreme Networks Summit7i edge switch (edge, 192.168.0.2) connects to two Extreme Networks BlackDiamond 6808 core switches. Each BlackDiamond hosts the tier router interfaces across its slots - client 172.16.0.1 (first core only), web 10.10.0.1 (slot 1), ds 10.20.0.1 (slot 2), db 10.30.0.1 (slot 3), app 10.40.0.1 (slot 4), DNS 10.50.0.1 (slot 5), and mgt 10.100.0.1 (slot 8) - and an ESRP interconnect carries the web, ds, db, app, and DNS VLANs between the two cores.]


Configuring the Extreme Networks Switches

For the Sun ONE N-Tier, two Extreme Networks BlackDiamond switches were used for the core switches and one Summit7i switch was used for the edge switch.

Note – Network equipment from Foundry Networks can be used instead. Refer to "Configuring the Foundry Networks Switches" on page 43.

▼ To Configure the Extreme Networks Switches

1. Configure the core switches.

The following example shows an excerpt of the switch configuration file.

## MSM64 Configuration generated Thu Dec 6 20:19:20 2001
# Software Version 6.1.9 (Build 11) By Release_Master on 08/30/01 11:34:27
configure slot 1 module g8x
configure slot 2 module g8x
configure slot 3 module g8x
configure slot 4 module g8x
configure slot 5 module g8x
configure slot 6 module g8x
configure slot 7 module f48t
configure slot 8 module f48t
.....................................................
configure dot1q ethertype 8100
configure dot1p type dot1p_priority 0 qosprofile QP1
configure dot1p type dot1p_priority 1 qosprofile QP2
configure dot1p type dot1p_priority 2 qosprofile QP3
configure dot1p type dot1p_priority 3 qosprofile QP4
.....................................................
enable sys-health-check
configure sys-health-check alarm-level log
enable system-watchdog
config qosprofile QP1 minbw 0% maxbw 100% priority Low minbuf 0% maxbuf 0 K
config qosprofile QP2 minbw 0% maxbw 100% priority LowHi minbuf 0% maxbuf 0 K
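The excerpt above covers the slot modules and QoS profiles. The per-tier VLANs and the ESRP redundancy shown in FIGURE 21 would be created along these lines (a hedged sketch in ExtremeWare syntax; the tag value is an assumption):

create vlan web
config vlan web tag 10
config vlan web ipaddress 10.10.0.1 255.255.255.0
enable ipforwarding vlan web
enable esrp vlan web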


2. Configure the edge switch.

The following example shows an excerpt of the switch configuration file.

## Summit7i Configuration generated Mon Dec 10 14:39:46 2001
# Software Version 6.1.9 (Build 11) By Release_Master on 08/30/01 11:34:27
configure dot1q ethertype 8100
configure dot1p type dot1p_priority 0 qosprofile QP1
....................................................
enable system-watchdog
config qosprofile QP1 minbw 0% maxbw 100% priority Low minbuf 0% maxbuf 0 K
....................................................
delete protocol ip
delete protocol ipx
delete protocol netbios
delete protocol decnet
delete protocol appletalk
....................................................
# Config information for VLAN Default.
config vlan "Default" tag 1    # VLAN-ID=0x1 Global Tag 1
config vlan "Default" protocol "ANY"
config vlan "Default" qosprofile "QP1"
enable bootp vlan "Default"
....................................................

Configuring the Foundry Networks Switches

This section describes the network architecture implementation using Foundry Networks equipment instead of Extreme Networks equipment. The overall setup is shown in FIGURE 22.


FIGURE 22   Foundry Networks Implementation
[Figure: Clients enter through an Extreme Networks Summit7i edge switch (S7i). Netscreen NS5200 firewalls sit in front of the Foundry switches - BigIron layer 2/3 switches MLS0 and MLS1 with server load balancers SLB0 and SLB1 - which serve the web, directory, application, and database service tiers.]


Foundry Networks Master Core Switch Configuration

The following code box shows part of the configuration file for the BigIron master core switch (called MLS0 in the lab).

module 1 bi-jc-8-port-gig-m4-management-module
module 3 bi-jc-48e-port-100-module
!
global-protocol-vlan
!
vlan 1 name DEFAULT-VLAN by port
vlan 10 name refarch by port
 untagged ethe 1/1 ethe 3/1 to 3/16
 router-interface ve 10
!
vlan 99 name mgmt by port
 untagged ethe 3/47 to 3/48
 router-interface ve 99
!
hostname MLS0
ip default-network 129.146.138.0/16
ip route 192.168.0.0 255.255.255.0 172.0.0.1
ip route 129.148.181.0 255.255.255.0 129.146.138.1
ip route 0.0.0.0 0.0.0.0 129.146.138.1
!
router vrrp-extended
interface ve 10
 ip address 20.20.0.102 255.255.255.0
 ip address 172.0.0.70 255.255.255.0
 ip vrrp-extended vrid 1
  backup priority 100 track-priority 20
  advertise backup
  ip-address 172.0.0.10
  dead-interval 1
  track-port e 3/1
  enable
 ip vrrp-extended vrid 2
  backup priority 100 track-priority 20
  advertise backup
  ip-address 20.20.0.100
  dead-interval 1
  track-port e 3/13
  enable
!
interface ve 99
 ip address 129.146.138.10 255.255.255.0
!
end


Foundry Networks Standby Core Switch Configuration

The following code box shows a partial listing of the configuration file for the BigIron standby core switch (called MLS1 in the lab).

ver 07.5.05cT53
!
module 1 bi-jc-8-port-gig-m4-management-module
module 3 bi-jc-48e-port-100-module
!
global-protocol-vlan
!
vlan 1 name DEFAULT-VLAN by port
!
vlan 99 name swan by port
 untagged ethe 1/6 to 1/8
 router-interface ve 99
!
vlan 10 name refarch by port
 untagged ethe 3/1 to 3/16
 router-interface ve 10
!
hostname MLS1
ip default-network 129.146.138.0/16
ip route 192.168.0.0 255.255.255.0 172.0.0.1
ip route 0.0.0.0 0.0.0.0 129.146.138.1
!
router vrrp-extended
interface ve 10
 ip address 20.20.0.102 255.255.255.0
 ip address 172.0.0.71 255.255.255.0
 ip vrrp-extended vrid 1
  backup priority 100 track-priority 20
  advertise backup
  ip-address 172.0.0.10
  dead-interval 1
  track-port e 3/1
  enable
 ip vrrp-extended vrid 2
  backup priority 100 track-priority 20
  advertise backup
  ip-address 20.20.0.100
  dead-interval 1
  track-port e 3/13
  enable
interface ve 99
 ip address 129.146.138.11 255.255.255.0
!
sflow sample 512
sflow source ethernet 3/1
sflow enable
!
end


Foundry Networks Master Server Load Balancer Configuration

The following code box shows a partial listing of the configuration file used for the master ServerIron XL server load balancer (called SLB0 in the lab).

ver 07.3.05T12
global-protocol-vlan
!
server source-ip 20.20.0.50 255.255.255.0 172.0.0.10
!
server real web1 10.20.0.1
 port http
 port http url "HEAD /"
!
server real web2 10.20.0.2
 port http
 port http url "HEAD /"
!
server virtual WebVip1 192.168.0.100
 port http
 port http dsr
 bind http web1 http web2 http
!
vlan 1 name DEFAULT-VLAN by port
 no spanning-tree
!
hostname SLB0
ip address 192.168.0.111 255.255.255.0
ip default-gateway 192.168.0.10
web-management allow-no-password
banner motd ^CReference Architecture -- Enterprise Engineering^CServer Load Balancer -- SLB0 129.146.138.12/24^C
!
end


Foundry Networks Standby Server Load Balancer Configuration

The following code box shows a partial listing of the configuration file used for the standby ServerIron XL server load balancer (called SLB1 in the lab).

ver 07.3.05T12
global-protocol-vlan
!
server source-ip 20.20.0.51 255.255.255.0 172.0.0.10
!
server real s1 20.20.0.1
 port http
 port http url "HEAD /"
!
server real s2 20.20.0.2
 port http
 port http url "HEAD /"
!
server virtual vip1 172.0.0.11
 port http
 port http dsr
 bind http s1 http s2 http
!
vlan 1 name DEFAULT-VLAN by port
!
hostname SLB1
ip address 172.0.0.112 255.255.255.0
ip default-gateway 172.0.0.10
web-management allow-no-password
banner motd ^CReference Architecture - Enterprise Engineering^CServer Load Balancer - SLB1 - 129.146.138.13/24^C
!

Network Security

For the N-Tier network design, firewalls were configured between each service tier to provide network security. FIGURE 23 shows the relationship between the firewalls and the service tiers.


FIGURE 23   Firewalls Between Service Tiers
[Figure: Clients on the intranet/Internet reach the data center through an edge switch; a firewall sits in front of each of the web, application, and database service tiers.]

In the lab, one physical firewall device was used to create multiple virtual firewalls. Network traffic was directed to pass through the firewalls between the service tiers, as shown in FIGURE 24.

The core switch is configured only for Layer 2, with separate port-based VLANs. The connection between the Netscreen device and the core switch uses tagged VLANs. Trust zones are created on the Netscreen device, and they map directly to the tagged VLANs. The Netscreen firewall device performs the Layer 3 routing. This configuration directs all traffic through the firewall, resulting in firewall protection between each service tier.
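As a sketch of the zone-to-VLAN mapping, ScreenOS can bind a tagged subinterface to a zone (the interface numbering and VLAN tag here are assumptions for illustration; the configuration excerpt later in this section binds zones to physical interfaces instead):

set interface ethernet2.1 tag 10 zone "web"
set interface ethernet2.1 ip 10.10.0.253/24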


FIGURE 24   Virtual Firewall Architecture Using Netscreen and Foundry Networks Products
[Figure: Traffic from clients enters through the edge switch. Each core switch forwards web, application, and database traffic multiplexed on one VLAN to a Netscreen device, which routes between the separate web, application, and database VLANs, placing a virtual firewall in each inter-tier path.]


Netscreen Firewall

The following code box shows a partial example of a configuration file used to configure the Netscreen device.

set auth timeout 10
set clock "timezone" 0
set admin format dos
set admin name "netscreen"
set admin password nKVUM2rwMUzPcrkG5sWIHdCtqkAibn
set admin sys-ip 0.0.0.0
set admin auth timeout 0
set admin auth type Local
set zone id 1000 "DMZ1"
set zone id 1001 "web"
set zone id 1002 "appsrvr"
set zone "Untrust" block
set zone "DMZ" vrouter untrust-vr
set zone "MGT" block
set zone "DMZ1" vrouter trust-vr
set zone "web" vrouter trust-vr
set zone "appsrvr" vrouter trust-vr
set ip tftp retry 10
set ip tftp timeout 2
set interface ethernet1 zone DMZ1
set interface ethernet2 zone web
set interface ethernet3 zone appsrvr
set interface ethernet1 ip 192.168.0.253/24
set interface ethernet1 route
set interface ethernet2 ip 10.10.0.253/24
set interface ethernet2 route
set interface ethernet3 ip 20.20.0.253/24
set interface ethernet3 route
unset interface vlan1 bypass-others-ipsec
unset interface vlan1 bypass-non-ip
set interface ethernet1 manage ping
unset interface ethernet1 manage scs
unset interface ethernet1 manage telnet
unset interface ethernet1 manage snmp
unset interface ethernet1 manage global
unset interface ethernet1 manage global-pro
unset interface ethernet1 manage ssl
set interface ethernet1 manage web
unset interface ethernet1 ident-reset
set interface vlan1 manage ping
set interface vlan1 manage scs
set interface vlan1 manage telnet
set interface vlan1 manage snmp
set interface vlan1 manage global
set interface vlan1 manage global-pro
set interface vlan1 manage ssl
set interface vlan1 manage web


set interface v1-trust manage ping
set interface v1-trust manage scs
set interface v1-trust manage telnet
set interface v1-trust manage snmp
set interface v1-trust manage global
set interface v1-trust manage global-pro
set interface v1-trust manage ssl
set interface v1-trust manage web
unset interface v1-trust ident-reset
unset interface v1-untrust manage ping
unset interface v1-untrust manage scs
unset interface v1-untrust manage telnet
unset interface v1-untrust manage snmp
unset interface v1-untrust manage global
unset interface v1-untrust manage global-pro
unset interface v1-untrust manage ssl
unset interface v1-untrust manage web
unset interface v1-untrust ident-reset
set interface v1-dmz manage ping
unset interface v1-dmz manage scs
unset interface v1-dmz manage telnet
unset interface v1-dmz manage snmp
unset interface v1-dmz manage global
unset interface v1-dmz manage global-pro
unset interface v1-dmz manage ssl
unset interface v1-dmz manage web
unset interface v1-dmz ident-reset
set interface ethernet2 manage ping
unset interface ethernet2 manage scs
unset interface ethernet2 manage telnet
unset interface ethernet2 manage snmp
unset interface ethernet2 manage global
unset interface ethernet2 manage global-pro
unset interface ethernet2 manage ssl
unset interface ethernet2 manage web
unset interface ethernet2 ident-reset
set interface ethernet3 manage ping
unset interface ethernet3 manage scs
unset interface ethernet3 manage telnet
unset interface ethernet3 manage snmp
unset interface ethernet3 manage global
unset interface ethernet3 manage global-pro
unset interface ethernet3 manage ssl
unset interface ethernet3 manage web
unset interface ethernet3 ident-reset
set interface v1-untrust screen tear-drop
set interface v1-untrust screen syn-flood
set interface v1-untrust screen ping-death
set interface v1-untrust screen ip-filter-src
set interface v1-untrust screen land
set flow mac-flooding
set flow check-session
set address DMZ1 "dmznet" 192.168.0.0 255.255.255.0
set address web "webnet" 10.10.0.0 255.255.255.0

set address appsrvr "appnet" 20.20.0.0 255.255.255.0
set snmp name "ns208"
set traffic-shaping ip_precedence 7 6 5 4 3 2 1 0
set ike policy-checking
set ike respond-bad-spi 1
set ike id-mode subnet
set l2tp default auth local
set l2tp default ppp-auth any
set l2tp default radius-port 1645
set policy id 0 from DMZ1 to web "dmznet" "webnet" "ANY" Permit
set policy id 1 from web to DMZ1 "webnet" "dmznet" "ANY" Permit
set policy id 2 from DMZ1 to appsrvr "dmznet" "appnet" "ANY" Permit
set policy id 3 from appsrvr to DMZ1 "appnet" "dmznet" "ANY" Permit
set ha interface ethernet8
set ha track threshold 255
set pki authority default scep mode "auto"
set pki x509 default cert-path partial

About the Authors

Deepak Kakadia is a staff engineer and network architect in the Reference Architecture Engineering Group for Sun Microsystems in Menlo Park, California. Deepak has been with Sun since 1994. He previously worked with Corona Networks, Digital Equipment Corp., and Nortel Networks. He has a Bachelor's Degree in Computer Systems Engineering and a Master's Degree in Computer Science, has completed Ph.D. qualifying exams, and has completed his SCPD certificate in Networking through the Department of Electrical Engineering at Stanford University. He has filed four patents in the area of networking and systems management.

Richard Croucher is Chief Architect of Sun's Professional Services in EMEA. He is also an SMI Technical Director. Richard joined Sun in 1995. He has nearly 20 years of experience with UNIX and nearly 30 years of experience in the IT and electronics industries. He holds a Higher National Certificate in Applied Physics and a Higher National Diploma in Electronics, and received a post-graduate Certificate of Advanced Study in Non-Destructive Testing and Materials Science from Brunel University. He was elected to membership of the British Computer Society in 1993.


Acknowledgements

Sincere thanks to Bill Sprouse, Kemer Thomson, Brad Carlile, and many others who provided tremendous support, guidance, and direction to this article.

Related Resources

Sun ONE Directory Server 5.1 documentation collection

Note: This product was formerly known as the iPlanet™ Directory Server software. You can locate the documentation collection at:

http://docs.sun.com

Search for the phrase directory server 5.1. Start with the iPlanet Directory Server 5.1 Deployment Guide.

Ordering Sun Documents

The SunDocsSM program provides more than 250 manuals from Sun Microsystems, Inc. If you live in the United States, Canada, Europe, or Japan, you can purchase documentation sets or individual manuals through this program.

Accessing Sun Documentation Online

The docs.sun.com web site enables you to access Sun technical documentation online. You can browse the docs.sun.com archive or search for a specific book title or subject. The URL is http://docs.sun.com/

To reference Sun BluePrints™ OnLine articles, visit the Sun BluePrints OnLine web site at http://www.sun.com/blueprints/online.html
