Joint Techs Workshop, 12 Feb 2007, Minneapolis 1
Retrofitting the CalREN Optical Network for Hybrid Services
ken lindahl
Chair, CENIC High Performance Research Network Technical Advisory Council
Joint Techs Workshop, 12 Feb 2007, Minneapolis
The Corporation for Education Network Initiatives in California • (714) 220-3400 • [email protected] • www.cenic.org
goals
• provide “lightpath” (or “lightpath-like”) connections to researchers on campuses connected to CalREN.
  – between any two or more HPR-connected campuses.
  – between any campus and NewNet.
  – between any campus and NLR {PacketNet, WaveNet}.
• dynamic, user-controlled set up and tear down
  – well sure, some day…
  – initially, manual setup by CENIC NOC on the order of 2-3 days.
  – later, extend control to researchers.
• and of course, continue providing reliable routed IP service to all CalREN-connected campuses.
CalREN Optical Network
• two fiber paths, the Coastal path and the Central Valley path, running the length of the state from Corning in the north to San Diego in the south.
• Cisco 15808 DWDM gear at most nodes, some newer 15454 gear.
  – 6500s w/ CWDM optics on the Riverside/Palm Desert/El Centro/San Diego loop.
• Cisco 15540 DWDM gear on both ends of campus access fiber.
CalREN Optical Network
• Abilene:
  – 10GE connection in Los Angeles;
  – 10GE backup (routed IP) in San Jose, via PacificWave to PNWGP, Seattle.
• National LambdaRail:
  – 10GE PacketNet connection in Los Angeles;
  – 10GE FrameNet connection in San Jose;
  – easy access to additional NLR services in Los Angeles and San Jose.
• PacificWave:
  – international peering exchange facility, running on CalREN and NLR waves; 10 Gbps switched fabric with 10GE connections to CalREN at Los Angeles and San Jose.
CalREN service tiers, “technology refresh” opportunities
• CalREN-DC: currently being refreshed
• CalREN-HPR: 2007-2008
• CalREN Optical Network: 2008-2009
CalREN-HPR refresh
• planned enhancements to the routed IP network are not particularly interesting:
  – capacity for more 10GE campus connections (largely a matter of router real estate);
  – capability for >10 Gbps on backbone connections (multiple 10GEs or 40GE/OC-768 or 100GE).
• planning some layer 1 and 2 services, as well:
  – motivated by requests from researchers over the past 3 years;
  – “XD Services” white paper written by Board subcommittee (comprised mostly of researchers rather than network guys);
  – deploying shared infrastructure on the optical backbone will (hopefully) reduce costs to individual researchers;
  – reasonably good fit with NewNet and NLR wave and frame services.
caveats
• the services and designs in this presentation have not been approved by CENIC engineering staff;
• nor have they been approved (and funded) by the CENIC Board of Directors.
requests from researchers
• researchers have requested dedicated, layer 2 private networks between campuses, e.g.:
  – DETER requested a 1 Gbps layer 2 network connecting labs at Berkeley and USC-ISI that could be disconnected from any production network.
  – CITRIS requested a dedicated GE VLAN between labs at Berkeley and UC Davis, for testing/demonstrating video applications they are developing.
• CENIC gear in place at the time was not well-suited to delivering 1 Gbps connections; could have provided 10 Gbps connections, but at more cost than researchers wanted…
• and, in the CITRIS case, we didn’t have nearly enough lead time.
“XD Services” white paper
• Requirements:
  – standing lambdas available to researchers.
  – rapid set up/tear down – 1-2 hours.
  – convenient set up/tear down – email to NOC.
  – “bypass networks.”
• Services:
  – 1 Gbps L2-switched VLANs.
  – 1 Gbps optically switched lambdas.
  – 10 Gbps optically switched lambdas.
• 32 standing lambdas requested:
  – need to replace all 15808s and 15540s with 15454s.
  – hopefully we can sneak by with slightly fewer lambdas.
HPRng-L2 service
• one 10GE lambda at every campus, broken out into ten GE VLANs.
• VLANs trunked over 10GE between switches on the optical backbone.
[diagram: campus A and campus B each hand off GE VLANs over a 10GE link; the VLANs are trunked over 10GE between switches on the optical backbone]
• satisfies requests for dedicated, layer 2 private networks; satisfies the “1 Gbps L2-switched VLANs” XD service requirement.
HPRng-L1 topology
[topology diagram: backbone PoPs OAK, SVL, SAC, LAX, and RIV; campuses ucb, ucsf, stanford, ucd, ucm, ucsb, ucla, uci, usc, caltech, ucr, and ucsd; connections to NLR and NewNet]
HPRng-L2 VLANs (not all campuses are shown)
[diagram: Berkeley, UC Davis, UC Riverside, and UCLA each deliver 1GE VLANs over a 10G lambda campus access link (15540 DWDM gear) to backbone PoPs OAK, SAC, LAX, and RIV; the PoPs are joined by 10GE backbone inter-PoP links over 10G lambdas (15808 DWDM gear)]
HPRng-L2 management issues
• need to avoid over-subscribing backbone segments where multiple VLANs appear.
  – should be easy: since each VLAN is limited to 1 Gbps at the campus interface, allow no more than 10 VLANs on any backbone segment.
• in the happy event that the service is popular enough that over-subscription is an issue, we can add additional lambdas on over-subscribed segments.
• initially, CENIC NOC will manually configure interfaces and VLANs;
• later, install HOPI-style control-plane system.
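The over-subscription rule above reduces to arithmetic: each VLAN is capped at 1 Gbps, so a 10GE segment stays safe with at most 10 VLANs. A sketch of the check the NOC (or a later control plane) would run, with segment names and VLAN assignments made up for illustration:

```python
# Sketch of the over-subscription check: each GE VLAN is rate-limited to
# 1 Gbps at the campus interface, so a 10GE backbone segment can carry at
# most 10 VLANs. Segment names and VLAN ids below are hypothetical.

SEGMENT_CAPACITY_GBPS = 10  # one 10GE lambda per backbone segment
VLAN_LIMIT_GBPS = 1         # per-VLAN cap at the campus interface

def oversubscribed_segments(vlans_per_segment):
    """Return the segments whose summed VLAN caps exceed 10GE capacity."""
    return [
        seg for seg, vlans in vlans_per_segment.items()
        if len(vlans) * VLAN_LIMIT_GBPS > SEGMENT_CAPACITY_GBPS
    ]

# hypothetical backbone segments and the VLANs routed across each
demand = {
    "OAK-SAC": [101, 102, 103],        # 3 VLANs: fine
    "SAC-RIV": list(range(201, 212)),  # 11 VLANs: one too many
}

print(oversubscribed_segments(demand))  # flags only SAC-RIV
```

When a segment shows up in this list, the remedy stated on the slide applies: add another lambda on that segment rather than police traffic.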
HPRng-L1 service
• 10 Gbps optically switched lambdas; 1 Gbps optically switched lambdas.
  – most will be 10GE, some GE;
  – OC-192 or OC-48 may be required in some cases;
  – very little demand for DWDM wavelength handoff to researchers.
• NewNet and NLR access:
  – lambdas can be switched from any HPRng-connected campus to NewNet or to NLR.
• “optical switches” are actually optical cross-connects (OXCs).
• probably requires upgrading 15808s and 15540s to 15454s.
HPRng-L1 topology
[topology diagram: OXCs at OAK, SVL, LAX, and RIV; campuses ucb, ucsf, stanford, ucd, ucm, ucsb, ucla, uci, usc, caltech, ucr, and ucsd connect to the OXCs, which also connect to NLR and NewNet; each line represents multiple lambdas (1..32)]
HPRng-L1 management issues
• initially, CENIC NOC will set up/tear down lightpaths by manually configuring the optical switches;
• later, install HOPI-style control-plane system.
• need to invent (or get someone else to invent and “borrow” from them) a lambda scheduling/reservation/automated-setup system.
• partitionable optical switches are desirable, to allow researchers to modify wave connections between DWDM gear and to set up/tear down lightpaths between campuses.
  – or, provide partitioning via access and authorization restrictions in the control-plane system.
HPRng campus handoff
[diagram: CENIC DWDM gear connects to the CalREN optical backbone; at the demarc, HPRng-L1 hands off a λ, HPRng-L2 hands off GE VLANs over 10GE, and HPRng-L3 hands off 10GE toward the campus border router; how will the campus connect to these?]
HPRng campus handoff (2)
[diagram: same handoffs as the previous slide, but the campus places an optical cross-connect at the demarc, tying the λ, 10GE, and GE VLAN handoffs to campus fiber running to the labs]
HPRng design committee
• Mark Boolootian, UC Santa Cruz
• Brian Court, CENIC
• John Haskins, UC Santa Barbara
• Rodger Hess, UC Davis
• Tom Hutton, SDSC
• Michael Van Norman, UCLA
• Ken Lindahl, UC Berkeley