Deutsches Forschungsnetz

GRIDs and Networks

Complementary Infrastructures for the Scientific Community

K. Schauerhammer, K. Ullmann

DFN-Verein, Berlin

DESY Hamburg, 8.12.2003

GRIDs - networked applications
• GRIDs as network application
• User requirements on GRIDs
• D-GRID

The network - basis for GRIDs
• Status G-WiN
• Challenge X-WiN
• Technology Options X-WiN
• The Market for Dark Fibre
• International Developments
• Roadmap X-WiN
• Engineering Issues X-WiN


GRIDs as network application (1)

• GRIDs: a set of network-based applications using distributed resources (compute services, storage services, common data repositories, ...), "owned" by a specific group of users (a "Virtual Organisation")

• Example: large-scale physics experiments organise their data evaluation in a distributed fashion


GRIDs as network application (2) - LHC GRID

[Figure: LHC GRID tier model - the experiment feeds data to Centre 0 (TIER 0), which distributes it to Centre 1, Centre 2, ..., Centre n (TIER 1); users reach the data and compute resources at TIER 2 - n via network access]


GRIDs as network application (3) - LHC GRID

• 4 experiments, start: 2007

• 11 PBytes/year

• LHC Community: 5000 physicists, 300 research institutes in 50 countries

• lifetime: 15 years


User Requirements on GRIDs (1)

• Middleware services like authentication and authorisation for resource access ("I want to restrict access to my data to users from my experiment only"; see the sketch below)

• reliable and (sometimes) SLA-guaranteed network access ("I want to contribute data from my repository to an evaluation algorithm at site x with guaranteed minimum delay")
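The authorisation requirement above can be made concrete in a few lines. This is a minimal illustration only, not DFN's or any GRID middleware's actual mechanism: the names (VirtualOrganisation, is_authorised) and the sample identities are invented, and production GRIDs use X.509 certificates and dedicated VO membership services instead.

```python
# Minimal sketch of VO-based authorisation: "restrict access to my
# data to users from my experiment only". All names are invented;
# real GRID middleware uses X.509 certificates and VO services.

from dataclasses import dataclass, field

@dataclass
class VirtualOrganisation:
    """A user group that jointly 'owns' a set of GRID resources."""
    name: str
    members: set = field(default_factory=set)
    resources: set = field(default_factory=set)

def is_authorised(vo, user, resource):
    # Grant access only if the user belongs to the VO and the
    # resource is one the VO owns.
    return user in vo.members and resource in vo.resources

lhc = VirtualOrganisation(
    name="lhc-experiment",
    members={"alice@cern.ch", "bob@desy.de"},
    resources={"lfn:/lhc/run1/events"},
)

assert is_authorised(lhc, "alice@cern.ch", "lfn:/lhc/run1/events")
assert not is_authorised(lhc, "mallory@example.org", "lfn:/lhc/run1/events")
```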


User Requirements on GRIDs (2)

• directories describing resources

• unique user interfaces

• scheduling and monitoring facilities for using resources

• facilities for error correction

• user support and hotline services

• dissemination and training of "GRID know-how"


User Requirements on GRIDs (3)

• Such user requirements clearly lead to the notion of a GRID infrastructure because:
– services are for multiple use (for many user groups / GRIDs)
– services have to be reliable (and not experimental)

• Implication: services have to be "engineered"
– looks simple, but it isn't, due to ongoing technology changes


D-GRID (2) - Middleware

• Same questions:
– single sign-on for services and network access
– AAA services
– migration of existing directories

• PKI, LDAP, /etc/passwd files (a migration sketch follows below)

• Coordination is a big challenge!

• Chance to generate benefits for all communities

• Component of eScience
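Migrating existing directories typically means lifting Unix account files into a common directory service. Below is a minimal sketch, assuming the standard /etc/passwd layout and a hypothetical target suffix dc=example,dc=org; a real migration would also have to handle shadow passwords, groups, and naming collisions.

```python
# Sketch: convert /etc/passwd entries into LDIF suitable for import
# into an LDAP directory. The suffix dc=example,dc=org is a placeholder.

def passwd_to_ldif(passwd_lines, base_dn="ou=people,dc=example,dc=org"):
    entries = []
    for line in passwd_lines:
        if not line.strip():
            continue
        # /etc/passwd format: name:password:uid:gid:gecos:home:shell
        name, _pw, uid, gid, gecos, home, shell = line.strip().split(":")
        entries.append(
            f"dn: uid={name},{base_dn}\n"
            f"objectClass: inetOrgPerson\n"
            f"objectClass: posixAccount\n"
            f"uid: {name}\n"
            f"cn: {gecos or name}\n"
            f"sn: {name}\n"
            f"uidNumber: {uid}\n"
            f"gidNumber: {gid}\n"
            f"homeDirectory: {home}\n"
            f"loginShell: {shell}\n"
        )
    return "\n".join(entries)

with open("/etc/passwd") as f:
    print(passwd_to_ldif(f))
```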


D-GRID (3) - NRENs' tasks

• Provide the network and the generic bundle of GRID-related middleware services
– Reasons:
• end users will (potentially) be on multiple GRIDs
• economy
• NRENs are used to offering this sort of infrastructure

• The problem is not trivial (multi-domain problem)


D-GRID (3a) - NRENs' tasks: the multi-domain problem

[Figure: several NRENs (NREN1, NREN2, NREN3) interconnected via Géant, with data resources Data 0, Data 1 and Data 2 located in different domains]

Both the application problem and the network problem are multi-domain and have an international dimension.


D-GRID (4) - The Initiative

International activities:
– USA: Cyberinfrastructure programme, $12 million/year
– UK: eScience, £100 million/4 years
– NL: Virtual Lab
– EU: 6th Framework projects (EGEE, €32 million/2 years)

Current state in Germany (until beginning of 2003):
– several individual projects
– little coordination between communities and funding bodies
– weak representation of the common interests of the German research community


D-GRID (5) - The Initiative

• 3 meetings of representatives of all involved research institutes + DFN + industry + BMBF

• Goal of D-GRID: bundle activities for global, distributed and enhanced research collaboration based on Internet services

==> build an e-science framework


D-GRID (6) - Organisation

• D-GRID Board: Hegering (LRZ), Hiller (AWI), Maschuw (FZK, GRIDKa), Reinefeld (ZIB), Resch (HLRS)

• Tasks:
– to prepare a political/strategic statement of the German research community
– to build up WGs, to plan an MoU
– to develop a work programme


D-GRID (7) - Role of DFN

• Role of DFN:

– to provide network resources for GRIDs (special GRID access to G-WiN)

– to provide and support middleware services (e.g. PKI, AAA)

– to participate in developing the work programme for the next years

– to participate in international projects like EGEE and GN2


D-GRID (8) - Role of BMBF

• BMBF expects common commitment and co-funding from research organisations and industry

• in Q3/04: tender for e-science projects

• BMBF funding announced: €5-10 million/year in 2005 - 2008

GRIDs - networked applications
• GRIDs as network application
• User requirements on GRIDs
• D-GRID

The Network - basis for GRIDs
• Status G-WiN
• Challenge X-WiN
• Technology Options X-WiN
• The Market for Dark Fibre
• International Developments
• Roadmap X-WiN
• Engineering Issues X-WiN


G-WiN (1) - General characteristics

• 27 nodes distributed across Germany, mostly at universities / research labs

• core: flexible SDH platform (2.5 G; 10 G)

• ~500 access lines, 128 kbit/s - 622 Mbit/s

• occasional lambda links and "dark fibre"

• own IP NOC

• special customer-driven solutions (VPNs, accesses etc.) are based on the platform

• diverse access options including dial-up and DSL (dfn@home)


G-WiN (2) - Topology

[Figure: G-WiN topology map as of 12/03 - core nodes at Kiel, Rostock, Hamburg, Oldenburg, Hannover, Braunschweig, Bielefeld, Magdeburg, Berlin, Essen, Göttingen, Leipzig, Dresden, Aachen, Marburg, St. Augustin, Ilmenau, Frankfurt, Würzburg, Erlangen, Heidelberg, Kaiserslautern, Karlsruhe, Stuttgart, Regensburg, Augsburg and Garching, interconnected at 10 Gbit/s, 2.4 Gbit/s and 622 Mbit/s, with links to the global upstream and GEANT]


G-WiN (2a) - Extension plans 04

[Figure: planned G-WiN topology as of Q3/04 - the same core nodes with upgraded 10 Gbit/s, 2.4 Gbit/s and 622 Mbit/s links, global upstream, and a 10 Gbit/s connection to Géant]


G-WiN (2b) - Geant


G-WiN (3) - Usage

• New demands:
– GRID: "(V)PN" + middleware + applications
– "value-added" IP services

• Examples of new usage patterns:
– computer-to-computer link H-B
– videoconference service

• Volume and growth rate: see figure


G-WiN (4) - Usage figures

[Figure: development of imported data volume ("Entwicklung des importierten Datenvolumens"), Oct 02 - Oct 03; y-axis: data volume in terabytes/month (0 - 1,400); sources: Global Upstream, Géant, other ISPs ("sonstige ISP"), T-Interconnect, DE-CIX; access line classes 0.128 / 2 / 34 / 155 / 622 Mbit/s]


G-WiN (5) - QoS (core)


G-WiN (6) - QoS measurements

Performance measurements for the particle physics community: TCP (GRIDKa / G-WiN / Géant / CERN) between end points E1 and E2

[Figure: measurement setup - end points E1 (GRIDKa) and E2 (CERN) connected across G-WiN and Géant; link capacities of 1 G, 2.4 G and 10 G along the router sequence; multiple parallel flows possible; throughput measured E1-E2]


G-WiN (6a) - QoS measurements: Results

Test    Src-Sink     Rate sent     Rate received
UDP-3   Ka - CERN    980 Mbit/s    956 Mbit/s
UDP-4   CERN - Ka    ditto         954 Mbit/s
TCP-1   Ka - CERN    ditto         510 Mbit/s
TCP-8   Ka - CERN    ditto         923 Mbit/s
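TCP-1 and TCP-8 presumably denote one versus eight parallel TCP streams. The gap between them (510 vs. 923 Mbit/s) is the classic signature of window-limited TCP on a long-RTT path: a single stream can carry at most window/RTT. A minimal back-of-the-envelope sketch; the 15 ms Karlsruhe-CERN RTT and the 1 MiB window are assumed illustrative values, not figures from the slides:

```python
# Back-of-the-envelope: maximum TCP throughput per stream is limited
# by window_size / round_trip_time. RTT and window size below are
# assumed illustrative values, not measured figures from the slides.

def max_tcp_throughput_mbits(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1e6

rtt = 0.015                # assumed 15 ms Karlsruhe-CERN round-trip time
window = 1 << 20           # assumed 1 MiB TCP window

per_stream = max_tcp_throughput_mbits(window, rtt)
print(f"one stream:    {per_stream:7.1f} Mbit/s")
print(f"eight streams: {8 * per_stream:7.1f} Mbit/s (capped by the 1G access link)")

# Conversely, the window needed to fill 1 Gbit/s at this RTT
# (the bandwidth-delay product):
bdp_bytes = 1e9 / 8 * rtt
print(f"window for 1 Gbit/s: {bdp_bytes / 1024:.0f} KiB")
```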


Challenges for X-WiN (1)

• DFNInternet:
– low delay (<10 ms) and jitter (<1 ms)
– extremely low packet loss (see measurements)
– throughput per user stream >1 Gbit/s possible
– priority option for special applications
– disaster recovery
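Whether a platform meets such targets can be checked against measured delay samples. A minimal sketch using an RFC 3550-style smoothed jitter estimator; the sample values are invented for illustration:

```python
# Sketch: check measured delay samples against the X-WiN targets
# (delay < 10 ms, jitter < 1 ms). Jitter is estimated with the
# RFC 3550-style smoothed estimator. Sample data is invented.

def rfc3550_jitter(delays_ms):
    jitter = 0.0
    for prev, cur in zip(delays_ms, delays_ms[1:]):
        jitter += (abs(cur - prev) - jitter) / 16
    return jitter

samples = [6.2, 6.4, 6.1, 6.3, 6.5, 6.2, 6.3]   # invented delay samples, ms

mean_delay = sum(samples) / len(samples)
print(f"mean delay {mean_delay:.2f} ms (target < 10 ms)")
print(f"jitter     {rfc3550_jitter(samples):.3f} ms (target < 1 ms)")
```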


Challenges for X-WiN (2)

• Special solutions on demand should be possible

• distributed data processing, e.g. GRIDs (radio astronomy, particle physics, biology, data storage, computing, ...)

• dedicated source-sink characteristics of streams


Challenges for X-WiN (3)

• 10 G links between all nodes

• flexible reconfiguration (< 7 days)

• cheap Ethernet - expensive routers!?

• high MTBF and low MTTR in core and components

• 24/7 operation of platform

• Bandwidth on demand if technically and economically feasible


Technology Options X-WiN (1) - General

• There is nothing "magic"

• diverse approaches possible

• optimal value adding - options:
– SDH/Ethernet as basic platform?
– managed lambdas?
– managed dark fibre and own WDM?

• 24/7 operation of the platform


Technology Options X-WiN (2) - SDH/Ethernet Service

• Package containing:
– flexibility for reconfiguration
– operations staff with 24/7 availability
– legally binding SLAs

• toolbox model:
– n SDH/Ethernet links
– specified timelines for reconfiguration
– functional extension of the toolbox possible


Technology Options X-WiN (3) - Managed Lambdas

• Service contains:
– lambdas as a service
– SDH/Ethernet "do it yourself or by others"
– switched 10G network ("L2-WiN")
– switching in L2-WiN according to user needs
– 24/7 operation of L2-WiN

• Advantage: shaping according to own needs possible


Technology Options X-WiN (4) - Managed Dark Fibre

• like managed lambdas, but...
– buy own managed dark fibre (L1-WiN)
– "self-made" WDM value adding (filters, optical MUX, EDFA, 3R regeneration)
– 24/7 operation as a service?

• Advantage: additional bandwidth is rather cheap and scalable (see the link-budget sketch below)
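Doing the WDM value adding yourself means the optical engineering, such as EDFA placement along the fibre route, becomes your problem. A minimal link-budget sketch with typical textbook values (0.22 dB/km loss at 1550 nm, 22 dB span budget); all numbers are illustrative assumptions, not figures from the slides:

```python
# Sketch: how many intermediate EDFA amplifier sites does a
# dark-fibre route need? All values are illustrative assumptions.

import math

ATTENUATION_DB_PER_KM = 0.22   # typical fibre loss at 1550 nm
SPAN_BUDGET_DB = 22.0          # tolerable loss between amplifiers
MARGIN_DB = 3.0                # splices/connectors/ageing margin per span

def edfa_sites(route_km):
    usable_db = SPAN_BUDGET_DB - MARGIN_DB
    max_span_km = usable_db / ATTENUATION_DB_PER_KM   # ~86 km here
    spans = math.ceil(route_km / max_span_km)
    return max(spans - 1, 0)   # amplifiers sit between spans

for route in (80, 250, 600):
    print(f"{route:4d} km route: {edfa_sites(route)} intermediate EDFA site(s)")
```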


Market for dark fibre (example)

• Example GasLine

• optical fibre (LWL) along gas pipelines

• very high MTBF

• budget offer looks interesting

• many user sites along the links

• business model possible...


International Developments (1)

• Hypothesis: "Ethernet switches with 10 G interfaces are stable and cheap."

• New generations of research networks:
– USA (Abilene)
– Poland (Pionier)
– Czech Republic (Cesnet)
– Netherlands (Surfnet)
– Canada (Canarie)
– ...


International Developments (2)


International Developments (3)


Engineering Issues (1) - The traffic matrix

• From an engineering point of view, a network can be described by a traffic matrix T, where T(i,j) describes the traffic flow requirements between network end points i and j

• T(i,j) can map user requirements directly

• Every network has an underlying T (explicitly in the case of engineered networks, implicitly in "grown" networks)


Engineering Issues (2) - Examples for T

• G-WiN: assumption of a statistical traffic mix (FT, visualisation etc.); T(i,j) describes the load at peak usage time; the bandwidth of a (network) link derived from T(i,j) is always provisioned 4 times higher than the peak load (see the sketch below)

• Video conferencing network: specific requirements with respect to jitter
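The G-WiN dimensioning rule quoted above fits in a few lines of code. A minimal sketch with an invented three-site matrix; the headroom=4 parameter encodes the "four times the peak load" rule from the slide, and everything else is illustrative:

```python
# Sketch: derive link capacities from a traffic matrix T, where
# T[i][j] is the peak-time flow requirement (Gbit/s) between end
# points i and j. The 4x headroom factor is the G-WiN rule quoted
# on the slide; the matrix values themselves are invented.

def link_capacity(T, i, j, headroom=4):
    # Dimension the i-j link for the larger direction of the
    # peak load, times the headroom factor.
    return headroom * max(T[i][j], T[j][i])

# Invented example: peak flows between three sites (Gbit/s).
T = [
    [0.0, 0.6, 0.2],
    [0.4, 0.0, 0.1],
    [0.2, 0.3, 0.0],
]

for i in range(len(T)):
    for j in range(i + 1, len(T)):
        print(f"link {i}-{j}: provision {link_capacity(T, i, j):.1f} Gbit/s")
```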


Engineering Issues (3) - The "LHC T"

• Experiment evaluation facilities must be available in the middle of the decade

• Due to a number of technology dependencies of the evaluation systems, the 2005/2006 perspective of T is not exactly known today

• Compromise: T has to be iterated on a 1-2 year basis

• ongoing e2e measurements

• close cooperation, for example in the EGEE context


Roadmap X-WiN

• Testbed activities (optical testbed "Viola"): network technology tests in (real) user environments, design input for X-WiN

• At present: meetings with suppliers of
– dark fibre, operations, technical components
– feasibility study (ready early 2004)

• Road map:
– market investigation Q1/04
– concept until Q3/04; CFP Q4/04
– 2005: migration G-WiN -> X-WiN in Q4/05
