DWDM-RAM: DARPA-Sponsored Research for Data Intensive Service-on-Demand Advanced Optical Networks
Data@LIGHTspeed
The Data-Intensive App Challenge: Emerging data-intensive applications in fields such as HEP, astrophysics, astronomy, bioinformatics, and computational chemistry require extremely high-performance, long-term data flows; scalability to huge data volumes; global reach; adjustability to unpredictable traffic behavior; and integration with multiple Grid resources.
Response: DWDM-RAM, an architecture for data-intensive Grids enabled by next-generation dynamic optical networks, incorporating new methods for lightpath provisioning. DWDM-RAM is designed to meet the networking challenges of extremely large-scale Grid applications. Traditional network infrastructure cannot meet these demands, especially the requirements for intensive data flows.
[Figure: Optical Abundant Bandwidth Meets Grid. Data-intensive applications (PBs of storage) meet abundant optical bandwidth (Tb/s on a single fiber strand) through DWDM-RAM.]
The DWDM-RAM architecture identifies two distinct planes over the dynamic underlying optical network: 1) the Data Grid Plane, which represents the diverse requirements of data-intensive applications by providing generic data-intensive interfaces and services, and 2) the Network Grid Plane, which marshals the raw bandwidth of the underlying optical network into network services within the OGSI framework and matches the complex requirements specified by the Data Grid Plane.
At the application middleware layer, the Data Transfer Service (DTS) presents an interface between the system and an application. It receives high-level client requests, policy-and-access filtered, to transfer specific named blocks of data with specific advance scheduling constraints.
The network resource middleware layer consists of three services: the Data Handler Service (DHS), the Network Resource Service (NRS) and the Dynamic Lambda Grid Service (DLGS). Services of this layer initiate and control sharing of resources.
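The service split described above can be sketched as OGSI-style interfaces. This is an illustrative sketch, not the published DWDM-RAM code: the class names come from the text, but every method name and signature is an assumption made for illustration.

```python
# Illustrative sketch of the DWDM-RAM service layering; not the published
# implementation. Method names and signatures are assumptions.
from abc import ABC, abstractmethod
from datetime import datetime

class DataTransferService(ABC):
    """Application middleware layer: receives high-level, policy-filtered
    requests to transfer named data blocks under scheduling constraints."""

    @abstractmethod
    def request_transfer(self, dataset: str, src: str, dst: str,
                         window_start: datetime, window_end: datetime) -> str:
        """Accept a transfer request; return a request ID."""

class NetworkResourceService(ABC):
    """Network resource middleware layer: marshals raw optical bandwidth
    into a schedulable Grid service."""

    @abstractmethod
    def reserve_lightpath(self, src: str, dst: str,
                          window_start: datetime, window_end: datetime) -> str:
        """Reserve an end-to-end lightpath; return a path ID."""

class DataHandlerService(ABC):
    """Moves the data itself once the network path is in place."""

    @abstractmethod
    def transfer(self, path_id: str, dataset: str) -> None:
        """Carry out the transfer over the allocated path."""
```

The point of the split is that the Data Grid Plane never touches wavelengths directly: it asks the Network Grid Plane for a service, and the network layer decides how to satisfy it.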
DWDM-RAM Architecture
[Figure: DWDM-RAM architecture. Data-intensive applications call the Data Transfer Service (Application Middleware Layer) through the DTS API. The Network Resource Middleware Layer contains the Network Resource Service (a basic Network Resource Service plus a Network Resource Scheduler, exposed through the NRS Grid Service API), the Data Handler Service, and an Information Service. The Connectivity and Fabric Layers exercise optical path control over the dynamic optical network (OMNInet) through a λ OGSI-ification API, provisioning lightpaths λ1…λn between data centers. Dynamic lambda, optical burst, etc. Grid services sit between the applications and the network.]
[Figure: DWDM-RAM layers set against the layered Grid architecture.]

Grid layers:
• Application
• Collective: "Coordinating multiple resources": ubiquitous infrastructure services, app-specific distributed services
• Resource: "Sharing single resources": negotiating access, controlling use
• Connectivity: "Talking to things": communication (Internet protocols) and security
• Fabric: "Controlling things locally": access to, and control of, resources

DWDM-RAM layers: Application (DTS API); Application Middleware Layer with the Data Transfer Service (NRS Grid Service API); Network Resource Middleware Layer with the Network Resource Service and Data Path Control Service (λ OGSI-ification API); Connectivity & Fabric Layer with the optical control plane and the λ's.

DWDM-RAM vs. Layered Grid Architecture
[Figure: DWDM-RAM service control. A Grid service request enters the Data Grid Service Plane; the resulting network service request is handled in the Network Service Plane, whose service control drives the OMNInet control plane (ODIN servers with UNI-N signaling for connection control) over an optical control network. The data transmission plane, built from L3 routers, L2 switches, and data storage switches, carries the data path between data centers over wavelengths λ1…λn.]
DWDM-RAM Service Control Architecture
• A four-node, multi-site optical metro testbed network in Chicago: the first 10GE service trial!
• A testbed for all-optical switching and advanced high-speed services
• OMNInet testbed partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL

[Figure: the four core nodes (Northwestern U, StarLight, UIC, and CA*net3 Chicago), each with an optical switching platform, a Passport 8600, and an application cluster, interconnected by 4x10GE and 8x1GE links (including a closed 8x1GE loop) through an OPTera Metro 5200.]

OMNInet Core Nodes
NWUEN link fiber span lengths:

Link   km     mi
1*     35.3   22.0
2      10.3    6.4
3*     12.4    7.7
4       7.2    4.5
5      24.1   15.0
6      24.1   15.0
7*     24.9   15.5
8       6.7    4.2
9       5.3    3.3
[Figure: OMNInet detailed topology. Four photonic nodes (Lake Shore, Sheridan, S. Federal, and W Taylor), each built around an Optera 5200 10 Gb/s TSPR and a PP 8600, switch wavelengths λ1–λ4 over fiber links NWUEN-1 through NWUEN-9 (some in use, some not), with 1310 nm 10 GbE WAN PHY trunk interfaces and Optera Metro 5200 OFAs on the spans. Campus fiber (4 and 16 strands) connects Grid clusters at 10/100/GigE and 10 GE; StarLight interconnects with other research networks and CA*net 4; OM5200s serve EVL/UIC, LAC/UIC, and TECH/NU-E. Initial configuration: 10 lambdas (all GigE), with 10GE LAN PHY added Dec 03.]

• 8x8x8λ scalable photonic switch
• Trunk side: 10 G WDM
• OFA on all trunks

OMNInet
Data Management Services: OGSA/OGSI compliant; capable of receiving and understanding application requests; complete knowledge of network resources; transmits signals to intelligent middleware; understands communications from the Grid infrastructure; adjusts to changing requirements; understands edge resources; on-demand or scheduled processing; supports various models for scheduling, priority setting, and event synchronization.

Intelligent Middleware for Adaptive Optical Networking: OGSA/OGSI compliant; integrated with Globus; receives requests from data services and applications; knowledgeable about Grid resources; complete understanding of dynamic lightpath provisioning; communicates with the optical network services layer; can be integrated with GRAM for co-management; flexible and extensible architecture.

Dynamic Lightpath Provisioning Services: Optical Dynamic Intelligent Networking (ODIN); OGSA/OGSI compliant; receives requests from middleware services; knowledgeable about optical network resources; provides dynamic lightpath provisioning; communicates with the optical network protocol layer; precise wavelength control, intradomain as well as interdomain; mechanisms for extending lightpaths through E-Paths (electronic paths); incorporates specialized signaling; uses IETF GMPLS for provisioning; new photonic protocols.
DWDM-RAM Components
• Network and data transfers are scheduled
  • The Data Management schedule coordinates network, retrieval, and sourcing services (using their schedulers)
  • Scheduled data resource reservation service ("Provide 2 TB of storage between 14:00 and 18:00 tomorrow")
• Network Management has its own schedule
  • Variety of request models:
    • Fixed: at a specific time, for a specific duration
    • Under-constrained: e.g. ASAP, or within a window
• Auto-rescheduling for optimization
  • Facilitated by under-constrained requests
  • Data Management reschedules for its own requests or at the request of Network Management

Design for Scheduling
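The two request models above can be written down as plain data types. This is a hedged sketch; the field names are my assumptions, not the DWDM-RAM schema.

```python
# Sketch of the two scheduling request models (fixed vs. under-constrained).
# Field and class names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FixedRequest:
    """Fixed model: a specific start time and a specific duration."""
    start: datetime
    duration: timedelta

@dataclass
class UnderConstrainedRequest:
    """Under-constrained model: any start inside a window (or ASAP).
    The slack lets the scheduler slide the job for auto-rescheduling."""
    window_start: datetime
    window_end: datetime
    duration: timedelta

    def feasible_starts(self):
        """Earliest and latest start times that still fit in the window."""
        return self.window_start, self.window_end - self.duration
```

The under-constrained form is what makes auto-rescheduling possible: any start returned by `feasible_starts()` satisfies the user, so the scheduler is free to move the reservation.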
• A request for 1/2 hour between 4:00 and 5:30 on segment D is granted to user W at 4:00
• A new request arrives from user X for the same segment, for 1 hour between 3:30 and 5:00
• Reschedule user W to 4:30 and user X to 3:30. Everyone is happy.

A route is allocated for a time slot; when a new request comes in, the first route can be rescheduled to a later slot within its window to accommodate the new request.

[Figure: timelines from 3:30 to 5:30 showing W's initial 4:00–4:30 grant, X's request, and the final placement: X at 3:30–4:30, W at 4:30–5:00.]

Example: Lightpath Scheduling
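The W/X example can be reproduced with a toy scheduler. This is a minimal sketch of one policy that yields the outcome above (place longer requests first, at the earliest feasible slot); the deck does not show the real DWDM-RAM scheduler, so all of the code below is illustrative.

```python
# Toy segment scheduler reproducing the W/X rescheduling example.
# Times are hours as floats (4.0 == 4:00); slots advance in half hours.
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    window_start: float
    window_end: float
    duration: float

def schedule(requests):
    """Place longer requests first, each at the earliest slot in its
    window that does not overlap an already-placed request."""
    placed = []  # (start, end, name)
    for req in sorted(requests, key=lambda r: -r.duration):
        start = req.window_start
        while start + req.duration <= req.window_end:
            end = start + req.duration
            if all(end <= s or start >= e for s, e, _ in placed):
                placed.append((start, end, req.name))
                break
            start += 0.5  # try the next half-hour slot
        else:
            raise ValueError(f"cannot place {req.name}")
    return {name: (s, e) for s, e, name in placed}

slots = schedule([
    Request("W", 4.0, 5.5, 0.5),  # 1/2 hour between 4:00 and 5:30
    Request("X", 3.5, 5.0, 1.0),  # 1 hour between 3:30 and 5:00
])
# X takes 3:30-4:30 and W slides to 4:30-5:00, as in the slide.
```

Because both requests are under-constrained, the scheduler can satisfy X without rejecting W: it only has to slide W inside W's own window.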
[Figure: measured timeline for a 20 GB file transfer.]
• File transfer request arrives
• Path allocation request: 0.5 s
• ODIN server processing: 3.6 s
• Path ID returned: 0.5 s
• Network reconfiguration: 25 s
• FTP setup time: 0.14 s
• Data transfer (20 GB): 174 s
• Path deallocation request: 0.3 s
• ODIN server processing: 11 s
• File transfer done, path released

20 GB file transfer: set up 29.7 s, transfer 174.0 s, tear down 11.3 s.

End-to-end Transfer Time
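A quick sanity check on the figures above (assuming 20 GB means 20×10⁹ bytes): set-up and tear-down are fixed costs that amortize as transfers grow, which is why dynamically provisioned lightpaths pay off for large flows.

```python
# Arithmetic on the measured 20 GB transfer: total time, on-the-wire
# goodput, and the share of time spent on path set-up/tear-down.
setup_s, transfer_s, teardown_s = 29.7, 174.0, 11.3
total_s = setup_s + transfer_s + teardown_s       # about 215 s end to end
goodput_gbps = 20 * 8 / transfer_s                # roughly 0.92 Gb/s
overhead = (setup_s + teardown_s) / total_s       # roughly 19% of total time
print(f"total={total_s:.1f}s goodput={goodput_gbps:.2f}Gb/s overhead={overhead:.0%}")
```

For a 10x larger file at the same rate, the same ~41 s of set-up and tear-down would be only about 2% of the total, so the provisioning overhead shrinks quickly with flow size.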