
Public Multi-Vendor Interoperability Event 2014 White Paper


EDITOR’S NOTE

For the annual MPLS & SDN World Congress interoperability event, our job at EANTC is simply to identify the latest technologies relevant to service providers, to invite vendors and to test their solutions — et voilà! At least that is the theory. Sometimes, however, we find ourselves a little too far ahead on the technology development cycle. This year, our grand scheme for the showcase was end-to-end multi-vendor cloud data center connectivity with elastic network services. This would have involved combining new OpenFlow and OpenDaylight solutions with IP/MPLS advances such as Segment Routing, Ethernet VPNs, RSVP-TE signalled bidirectional LSPs, BGP flow specifications and BGP-LS. We were pretty excited — vendors market their advances in these areas quite a bit.

It was not meant to happen, though. As we started hearing back from our vendor customers, our excitement subsided. Even where seemingly mature IP/MPLS products were involved, large manufacturers told us the task at hand was simply too big and implementations of the above were just on the road map.

In fact, these days manufacturers' resources are stretched between legacy and new technologies. SDN developments are time-consuming, and many brains are required for Network Functions Virtualization (NFV) and OpenDaylight activities as well. In this situation, MPLS and IPv6 are technologies where development may be slowed down and where interoperability can be de-prioritized. At EANTC's service provider proof-of-concept tests, we notice an increasing ratio of functional and performance software issues in related implementations.

Is this part of the new world? The term "Open" is pounded more than ever, but in fact service providers are getting locked into "ecosystems", often deliberately, praising cost savings and time to market. I find these approaches surprising, to be diplomatic. Experience has shown that new technologies are more successful if customers have many standards-based choices to select from. And in the existing IP/MPLS transport market, vendors who continue development are more likely to win the inevitable network upgrades that are yet to come. Let's not forget that network resource and performance management, provisioning efficiency and high availability at scale could still be improved in many MPLS implementations. Plus, some large enterprise customer groups are only now discovering MPLS.

The deployment of interoperable, well-supported and more manageable packet transport solutions will even increase in the mid-term, as mobile and fixed broadband services push new scale boundaries every day. This is a wake-up call for network operators to decide:

• How long will MPLS remain in use in your networks? Do you even foresee a complete migration any time soon? How much should vendors go into maintenance mode from now on and focus their engineering on new markets?

• What should the new world of SDN & NFV look like? How should the market be balanced between friends & family ecosystem solutions (faster to deploy with less pain, allowing single RfPs) and standards-based, fully interoperable solutions (enabling best-of-breed, economic selection of products and less dependence on a single vendor's roadmap and support)?

These questions are open right now. There is a great opportunity for network operators to decide about future network designs in 2014. I hope that service providers will help vendors to steer their investments in the right directions.

Let's turn to the test areas covered. Packet clock synchronization in mobile backhaul networks is an area where a group of vendors has tirelessly worked together to improve interoperability continuously. This time, vendors were ready for very advanced and performance-oriented tests. We provided a platform for vendors to test the interoperability of Long Term Evolution (LTE) phase clock quality under rather extreme conditions. The tests were very successful — please see the details in this report.

We were also elated to hear that several vendors were ready to take the opportunity to show real-world, interoperable SDN applications, controllers and orchestrators. The participants worked hard on getting the demos working, resolving along the way protocol interoperability issues that service providers could expect to encounter in these early days of technology adoption.

At the end of our two-week hot-staging phase in our lab in Berlin, Germany, we were proud to have a working multi-vendor SDN network and to have completed all the clock synchronization test cases we had planned for.

Test Equipment. With the help of participating test equipment vendors, we generated and measured traffic, emulated control and management protocols and performed clock sync analysis. We thank Ixia, Microsemi and Spirent Communications for test equipment and support. In addition, thanks to QualiSystems for providing their orchestrator solution to facilitate the SDN tests.

Carsten Rossenhövel
Managing Director, EANTC

TABLE OF CONTENTS

Participants and Devices
Software-Defined Networking
Clock Synchronization
Topology
Demonstration Network


PARTICIPANTS AND DEVICES

Terminology. We use the term tested when reporting on multi-vendor interoperability tests. The term demonstrated refers to scenarios where a service or protocol was evaluated with equipment from a single vendor only. In any case, demonstrations were permitted only when the topic had been covered in the previously agreed test plan; when a test area had only one vendor, or when multi-vendor combinations failed, vendors performed demonstrations.

SOFTWARE-DEFINED NETWORKING

In recent years, the network industry has increasingly focused its attention on Software Defined Networking (SDN). SDN is a paradigm shift: from a distributed to a centralized control plane; from proprietary controller interfaces to a standardized protocol defined between controller and network elements. The promise of SDN is to separate the network components responsible for packet forwarding from the components responsible for network control, thus giving service providers more flexibility in choosing their suppliers and faster provisioning. Currently the main focus of SDN, from a protocol perspective, is OpenFlow, the interface between the control and data planes.

In our interoperability testing this year we focused on OpenFlow version 1.3. The following features and use cases were highlighted specifically:

• Rate Limiting

• Interworking between non-OpenFlow and OpenFlow devices

• 1:1 Protection

We were excited to also demonstrate the value of utilizing an SDN orchestrator to enable service delivery.

OpenFlow: Rate Limiting

Providing appropriate QoS to user traffic is essential and part of managing Service Level Agreements (SLAs). To this end the OpenFlow specification introduced the concept of meter tables and meter bands, which can be used to limit the transmission rate of an output port. While meter tables are used to store a collection of meter bands, meter bands specify the transmission rate and the actions to be performed when the specified rate is exceeded. The specification defines three meter band types:

• DROP defines a simple rate limiter that drops packets exceeding the band rate value.

• DSCP REMARK defines a simple DiffServ policer that remarks the drop precedence of the DSCP field in the IP header of packets exceeding the band rate value.

• EXPERIMENTER allows additional functionality in future OpenFlow message types.

In this test setup an OpenFlow (OF) controller was connected to an OF switch (OF Forwarder) across an IP network, over which the OF channel was established. Either Ixia IxNetwork or Spirent TestCenter was connected to the OF Forwarder, sending traffic to validate the data plane. During the test we configured the OF controller with the band type DROP for the Low traffic class and DSCP Remark for the High traffic class, as described in the following table. We generated IP traffic for each traffic class at its corresponding bandwidth and verified that the controller successfully installed the meter table into the OF switch. We did not observe any traffic drop.

We then doubled the traffic rate for the High traffic class and observed that half of the traffic was remarked with the DSCP value of the Low traffic class (DSCP 0).

Table: Participants and Devices

Vendor                      Devices
Adva Optical Networking     FSP 150SP-100
Ericsson                    MINI-LINK PT 2020, SP 110, SP 210, SP 310, SP 415, SP 420, SSR 8010
Huawei                      SN-640, SOX (Smart OpenFlow Controller)
Ixia                        Anue 3500, Anue Network Emulator, ImpairNet, IxNetwork, RackSim
Metaswitch                  SDN Controller
Microsemi                   TimeProvider 2300, TimeProvider 2700, TimeProvider 5000
OE Solutions — AimValley    Chronos Smart SFP, OAM Smart SFP, TWAMP Smart SFP
Pica8                       P-3922
QualiSystems                CloudShell
Spirent Communications      Spirent TestCenter

Traffic Class   DSCP Value   Band Rate [Mbit/s]   Band Type
High            48           100                  DSCP Remark
Low             0            250                  Drop


In order to make sure that the Low traffic class was also being metered, we increased its rate and observed that the Low traffic class was rate-limited to the bandwidth we defined in the test plan. Ixia IxNetwork and Huawei SOX successfully participated as OF controllers. Huawei SN-640 successfully participated as OF switch. During this test we initially encountered an issue between two participating vendors. One implementation expected the meter band type OFPMBT_DROP to have a length of 12 bytes, which according to the standard should be 16 bytes, and hence was not installing the meter band value correctly. The vendor fixed the issue by updating the code version.

Figure 1: OpenFlow: Rate Limiting
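For reference, the fixed 16-byte band encoding at the center of that issue comes straight from the OpenFlow 1.3 wire format (ofp_meter_band_drop and ofp_meter_band_dscp_remark). The following Python sketch is our own illustration rather than any participant's code; it packs the two bands used in this test, with band rates expressed in kb/s as on the wire.

```python
import struct

OFPMBT_DROP = 1         # OpenFlow 1.3 meter band type: drop
OFPMBT_DSCP_REMARK = 2  # OpenFlow 1.3 meter band type: DSCP remark

def drop_band(rate_kbps, burst_kb=0):
    # ofp_meter_band_drop: type(2) + len(2) + rate(4) + burst(4) + pad(4) = 16 bytes
    return struct.pack("!HHII4x", OFPMBT_DROP, 16, rate_kbps, burst_kb)

def dscp_remark_band(rate_kbps, prec_level, burst_kb=0):
    # ofp_meter_band_dscp_remark: type(2) + len(2) + rate(4) + burst(4)
    # + prec_level(1) + pad(3) = 16 bytes
    return struct.pack("!HHIIB3x", OFPMBT_DSCP_REMARK, 16, rate_kbps,
                       burst_kb, prec_level)

# Bands matching the test plan: Low class dropped above 250 Mbit/s,
# High class remarked (drop precedence raised by 1) above 100 Mbit/s.
low_band = drop_band(rate_kbps=250_000)
high_band = dscp_remark_band(rate_kbps=100_000, prec_level=1)
assert len(low_band) == len(high_band) == 16  # the length the interop issue hinged on
```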

Interoperability between OpenFlow and Non-OpenFlow Switches

It is likely that for the foreseeable future, OpenFlow-based networks and traditional networks will operate side by side. As in our test a year ago, we wanted to provide both types of devices a platform to create a blueprint deployment scenario that could be shared. We had three roles defined in the test. As OpenFlow controllers, Huawei SOX and Metaswitch both played their part. We had Huawei SN-640 and Pica8 P-3922 as switches in the OpenFlow environment. Ixia IxNetwork played the part of the non-OpenFlow switch. QualiSystems' CloudShell functioned as the orchestrator, sending service requests to the Metaswitch controller.

In all test combinations, we used a test setup consisting of two switches: an OpenFlow (OF) switch and a non-OF switch. The OF switch was connected to the OF controller over an IP network, over which the OF channel was established. Likewise, OF switch and non-OF switch were connected over an IP network. To perform this test, participating vendors configured Resource Reservation Protocol with Traffic Engineering (RSVP-TE) to set up and tear down the Label Switched Path (LSP) between OF controller and non-OF switch. The vendors used Open Shortest Path First with Traffic Engineering (OSPF-TE) extensions as Interior Gateway Protocol (IGP) to build topology information about the network.

Once control plane sessions were established we sent traffic between both ends of the network. We also made sure that the OF switches successfully installed flow entries to push and pop MPLS headers. We successfully validated OSPF-TE and RSVP-TE interoperability between non-OpenFlow and OpenFlow switches using the Metaswitch and Huawei OpenFlow controllers and the Pica8 and Huawei OpenFlow switches.

We discovered an issue between two vendors participating in this test. Messages sent by the controller to the OF switch contain cookie and cookie mask fields. One vendor was not handling the cookie and cookie mask fields in strict conformance with the OpenFlow 1.3.1 specification, causing all rules programmed by the OF controller and sent to the OF switch to be deleted. Within the course of the two-week hot-staging phase, the vendor successfully updated the code and was able to perform the test.

Figure 2: Interoperability between OpenFlow and Non-OpenFlow Switches
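The cookie handling that tripped up one implementation is defined in the OpenFlow 1.3.1 flow-mod semantics: delete and modify requests apply only to entries whose cookie matches the request cookie under the given mask, and a mask of zero restricts nothing. A small illustrative sketch of that rule (not taken from any participant's code):

```python
def flow_matches_request(flow_cookie: int, req_cookie: int, req_cookie_mask: int) -> bool:
    # OpenFlow 1.3.1 delete/modify semantics: an entry is selected when its
    # cookie equals the request cookie on all bits set in cookie_mask.
    # Mishandling these fields can make a delete select far more entries
    # than intended.
    return (flow_cookie & req_cookie_mask) == (req_cookie & req_cookie_mask)

assert flow_matches_request(0x1234, 0x0000, 0x0000)       # mask 0: every entry selected
assert flow_matches_request(0x1234, 0x1200, 0xFF00)       # upper byte matches
assert not flow_matches_request(0x5634, 0x1200, 0xFF00)   # upper byte differs
```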

OpenFlow: 1:1 Protection

Resiliency mechanisms are crucial to the healthy operation of modern networks. There are several different types of protection mechanisms in common use, such as 1:1, 1+1, 1:N and M:N. The OpenFlow specification addresses resiliency by introducing the Fast Failover group type in OpenFlow 1.2. This group type allows fast failover since it does not require a round trip to the controller. In 1:1 protection the OF controller installs two disjoint paths in the OpenFlow network.

Our test setup consisted of four OF switches: three Huawei SN-640 and one Pica8 P-3922, each of which was connected to the Huawei SOX controller. The controller was configured to install two disjoint paths: a working path and a protection path. While sending traffic at 1,000 frames/second using Spirent TestCenter we triggered a failover condition by pulling the link in the working path. We did not observe any impact on the traffic, and the measured failover time was 0 milliseconds. After setting the network back to its original state, traffic reverted back to the working path without any impact.
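Behind the 0 ms failover result is the fast-failover group semantics: the switch forwards on the first bucket whose watched port is live, with no controller round trip. A conceptual sketch of that selection logic, using simple dictionaries as hypothetical bucket structures (illustration only, not switch code):

```python
def select_live_bucket(buckets, port_is_live):
    """Pick the bucket an OFPGT_FF (fast failover) group would use.

    buckets      -- ordered list of dicts with a 'watch_port' key
    port_is_live -- callable returning True if the given port is up
    """
    for bucket in buckets:
        if port_is_live(bucket["watch_port"]):
            return bucket   # first live bucket wins: working path preferred
    return None             # no live bucket: packets are dropped

# Working path watched on port 1, protection path watched on port 2.
buckets = [{"watch_port": 1, "out_port": 1}, {"watch_port": 2, "out_port": 2}]
print(select_live_bucket(buckets, lambda p: p != 1))  # port 1 down -> protection bucket
```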

Two-Way Active Measurement Using TWAMP

The quality, performance and reliability of a network are essential to the user experience and customer satisfaction. Operators rely on performance measurement tools to monitor performance metrics such as packet delay, packet delay variation, packet loss and the availability of their networks. In a multi-vendor environment, a Layer 3 Operation, Administration and Maintenance (OAM) solution based on the Two-Way Active Measurement Protocol (TWAMP) is one way to measure performance.



The Two-Way Active Measurement Protocol is specified in RFC 5357 and provides standards-based methods for measuring round-trip IP performance, such as packet loss, packet delay and packet delay variation, between any two devices that support the standard. TWAMP uses the methodology and architecture of the One-Way Active Measurement Protocol (OWAMP) defined in RFC 4656 to define a way to measure round-trip metrics.

TWAMP includes two protocols: the TWAMP control protocol and the TWAMP test protocol. The TWAMP control protocol is used to initiate, start and stop TWAMP sessions, while the TWAMP test protocol is used to exchange TWAMP test packets between TWAMP endpoints.

The TWAMP standard also specifies a lighter version called TWAMP Light. In the TWAMP Light implementation, the roles of Server, Control-Client and Session-Sender are performed by the sending host and the role of the Session-Reflector is performed by the responding host, thus eliminating the TWAMP control protocol. TWAMP Light provides a simple architecture for responders, whose role is simply to act as light test points in the network, thereby enabling the measurement of two-way IP performance from anywhere in the network.

In our event we focused on TWAMP Light testing. In the test topology the TWAMP Light implementation consisted of two hosts: the controller and the session-reflector. The control-client, server and session-sender were set up on a laptop. The controller connected to the OE Solutions — AimValley TWAMP Smart SFP, which was inserted into the Ericsson SP 110, acting as a session-reflector. The controller initiated the two-way measurement and the server accepted the incoming TWAMP test packets and reflected them back to the controller, which then performed the measurement.

We used Ixia IxNetwork between controller and session-reflector to introduce impairments such as packet loss, packet delay, packet delay variation, packet reordering and packet duplication. For each type of impairment we compared the two-way measurement results with the emulated impairment. In all cases the measurement was correct.
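For context on how a reflector-based measurement yields round-trip metrics: the session-sender stamps its transmit time, the reflector returns its own receive and transmit timestamps, and the sender subtracts the reflector's processing time from the total elapsed time. A minimal sketch of that calculation, with hypothetical timestamp values in seconds:

```python
def twamp_round_trip_delay(t1, t2, t3, t4):
    # t1: sender transmit, t2: reflector receive,
    # t3: reflector transmit, t4: sender receive.
    # Round-trip delay excludes the reflector's processing time (t3 - t2).
    return (t4 - t1) - (t3 - t2)

# Hypothetical timestamps: 5 ms forward path, 0.2 ms in the reflector,
# 4 ms return path -> 9 ms round trip.
print(twamp_round_trip_delay(0.0, 0.005, 0.0052, 0.0092))  # 0.009
```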

Service Activation

When setting up and handing over an Ethernet service to the customer, service providers often require tools to check whether the provisioned service complies with the Service Level Agreement (SLA). Service activation testing helps service providers verify and validate the correct configuration and performance of the service at the time of deployment. There are major standards in this area: ITU-T Y.1564 "Ethernet Service Activation Test Methodology" and Carrier Ethernet Service Activation Testing (SAT), a work in progress from the Metro Ethernet Forum (MEF). The operation of service activation can be facilitated by using test Protocol Data Units (PDUs). These provide the ability to configure and control the Service Activation Testing (SAT) steps and to fetch test results at the completion of the test without the need for a loopback topology, which may not be appropriate when testing the configuration of an ingress bandwidth profile. The SAT test PDU is being defined by the MEF in the "Service Activation Testing Test and Control Protocol Data Units and Control Protocol" document and enables network operators and service providers to perform SAT with interoperable devices from diverse test equipment vendors. The specification defines the Frame Loss PDU (FL-PDU) and the Frame Delay PDU (FD-PDU) to support service activation as approved in the Y.1564 recommendation.

The SAT test PDU, analogous to the test methodology defined in ITU-T Y.1564, is designed to test Ethernet-based service attributes, including bandwidth profile parameters: Committed Information Rate (CIR), Excess Information Rate (EIR), Committed Burst Size (CBS), Excess Burst Size (EBS), Color Mode (color-blind and color-aware) and Coupling Flag. It also covers performance attributes: Frame Loss, Frame Delay and Frame Delay Variation.

We conducted the test known as "Service Under Test" by configuring two Ethernet services, EVPL 1 and EVPL 2, that were being activated. In our test we used an Ethernet Test Support System (ETSS), which was connected to the electrical port of a media converter. The ETSS commands were then transferred to the SFP port where the Control End (CE) OAM Smart SFP was inserted. Two OE Solutions — AimValley OAM Smart SFPs were employed, one acting as the CE, the other as the Responder End (RE). SAT control protocol messages between the CE and RE were used to configure each test, followed by SAT test frames generated between the two Smart SFPs. Multiple EVC services were tested in parallel, and each test was carried out as two unidirectional tests, allowing for asymmetric EVC parameters in each direction of traffic. The ETSS ran on a personal computer (PC). We used Ixia Anue to impair the service in order to demonstrate violation of the Service Acceptance Criteria (SAC).

We ran the test in two phases: in the first phase we validated that each service under test was correctly configured. As soon as the first phase was successfully completed, the participating vendor initiated the performance test from the ETSS for 10 minutes. The performance test evaluated the service against performance parameters known as the Service Acceptance Criteria (SAC), which are a subset of an SLA. The SAC was agreed with the participating vendor prior to test execution. During the test the CE successfully retrieved the test results from the Responder End (RE) using the control protocol, combined them and returned them to the ETSS. The OE Solutions — AimValley Smart SFPs successfully participated in the test as CE and RE. No major issue was observed during this test.
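The performance phase ultimately comes down to comparing measured frame loss, frame delay and frame delay variation against the agreed SAC thresholds. A sketch of that pass/fail evaluation with hypothetical threshold values (the actual SAC values agreed at the event are not published in this report):

```python
def evaluate_sac(measured, sac):
    """Return per-metric pass/fail for a service under test.

    measured / sac -- dicts keyed by 'frame_loss_ratio', 'frame_delay_ms'
                      and 'frame_delay_variation_ms'.
    """
    return {metric: measured[metric] <= limit for metric, limit in sac.items()}

# Hypothetical SAC, not the values used at the event:
sac = {"frame_loss_ratio": 0.001, "frame_delay_ms": 10.0, "frame_delay_variation_ms": 2.0}
measured = {"frame_loss_ratio": 0.0, "frame_delay_ms": 6.3, "frame_delay_variation_ms": 0.8}
print(evaluate_sac(measured, sac))   # all True -> service accepted
```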

CLOCK SYNCHRONIZATION

Over the past six years we have tested clock synchronization mechanisms ranging from ways to transport TDM signals over packet networks (SAToP) through Synchronous Ethernet to IEEE 1588-2008.


Topology

[Topology diagram: orchestration by QualiSystems CloudShell; OpenFlow controllers (Ixia IxNetwork, Huawei SOX, Metaswitch SDN Controller) with southbound interfaces to physical and virtual network devices; two multi-tenant cloud data centers (Tenants A–D) with Ixia RackSim VMs and VM Manager, VTEPs 1–4, video servers 1–2 and video clients 1–2; a core network of Ericsson SSR 8010, SP 110, SP 210, SP 310, SP 415, SP 420 and MINI-LINK PT 2020, Huawei SN-640 and Pica8 P-3922 switches; Microsemi TimeProvider 2300/2700/5000 and Adva FSP 150SP-100 clocks; OE Solutions — AimValley Chronos, OAM and TWAMP Smart SFPs; ETSS/TWAMP controller; Ixia IxNetwork, Ixia Anue 3500 and Spirent TestCenter as test equipment; time/phase links marked throughout.]



In this period, we worked with a group of vendors that not only return to our events again and again, but also help us in setting challenging goals.

The tests executed in this area focused on phase synchronization. Modern mobile networks require this technology for Time Division Duplex (TDD), enhanced inter-cell interference coordination (eICIC) and LTE Broadcast. These solutions promise higher bandwidth, spectral efficiency and wider service coverage, but require a certain phase accuracy.

We borrowed the accuracy level of ±1.5 μs from ITU-T recommendation G.8271 (accuracy level 4) as an initial starting point for the testing goals. We defined 0.4 μs as the phase budget for the air interface, which meant that the network phase accuracy had to be within ±1.1 μs. All of our tests in this event used this level of accuracy as a condition for passing a test. Measurement of phase was performed using a 1PPS interface or a Time of Day (ToD) and 1PPS composite interface. For frequency measurements, we used either an E1, a 10 MHz or a SyncE signal. Frequency measurements were evaluated against the G.823 SEC requirements.

Since our tests imposed high accuracy requirements, on the order of single nanoseconds, measurements of phase were given special care. We measured the length of the physical cables used to carry the 1PPS signal and accounted for the constant offset caused by the propagation speed of signals through the physical medium. This applies to the connection between the reference source and the analyzer, as well as between the measured clocks and the analyzer.

The primary time reference clock was GPS, using an L1 antenna located on the roof of EANTC's lab.
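As a rough illustration of why the cable measurements matter at this accuracy level: a 1PPS signal propagates through coax at a fraction of the speed of light, so every metre of unaccounted cable adds several nanoseconds of apparent time error. A sketch with an assumed velocity factor (the exact cable characteristics used in the lab are not given in this report):

```python
C_VACUUM_M_PER_NS = 0.299792458   # speed of light in vacuum, metres per nanosecond

def cable_delay_ns(length_m, velocity_factor=0.66):
    # Propagation delay of a measurement cable; 0.66 is an assumed velocity
    # factor typical of coax, not a value taken from the report.
    return length_m / (C_VACUUM_M_PER_NS * velocity_factor)

def compensated_time_error_ns(raw_te_ns, dut_cable_m, ref_cable_m):
    # Remove the constant offset caused by unequal cable lengths between the
    # device under test and the reference source on the way to the analyzer.
    return raw_te_ns - (cable_delay_ns(dut_cable_m) - cable_delay_ns(ref_cable_m))

# A 3 m cable alone contributes roughly 15 ns:
print(round(cable_delay_ns(3.0), 1))
```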

Precision Time Protocol as GPS Backup

The Global Positioning System (GPS) is an optimal choice for phase synchronization as it can deliver, under normal working conditions, a maximum absolute time error in the range of ±0.1 μs. This allows deployment of accurate phase distribution; however, GPS is subject to jamming, which could bring severe operational risks. Since GPS provides phase, frequency and time of day information, currently the only protocol that could serve as an alternative for delivering this information is IEEE 1588-2008, the Precision Time Protocol (PTP).

The test started with both grandmaster and slave clocks locked onto GPS and PTPv2 active. We then impaired PTP by dropping all PTP messages and verified that no transients occurred, indicating that GPS was the primary source, while also verifying that the slave clock detected the PTP failure. We then re-enabled the PTP packet flow and introduced packet delay variation (PDV) based on G.8261 Test Case 12 to simulate a network of 10 nodes without on-path support. Following this, we took baseline phase and frequency measurements from the slave clock. Afterwards, we disconnected the GPS antenna, simulating an outage. We restarted the measurements and evaluated the results against the phase requirement of ±1.1 μs and the G.823 SEC MTIE mask.

The measured phase accuracy with GPS was less noisy than with PTP without on-path support. We measured a maximum time error of 45 ns (0.045 μs) with GPS in our tests, well below our set measurement threshold. We measured a maximum time error of 1 μs with PTP, also within our set goals. The diagram depicts the results that passed the phase accuracy requirement of ±1.1 μs and the frequency accuracy requirements of G.823 SEC.

Figure 3: Precision Time Protocol as GPS Backup Results

Phase/Time Hold-Over Performance

Hold-over time is a crucial metric for mobile service providers. It is a major factor in the decision whether to send a field technician to a cell site to perform urgent maintenance or to delay it for more cost-effective scheduling of operations. In case of a prolonged outage, a slave clock in a radio controller which exceeds its hold-over period will most likely cause major failures in hand-over from (and to) neighboring cell sites. Equipment vendors design their frequency hold-over oscillator performance accordingly. But what about time/phase hold-over performance?

We started the test with the slave clock in free-running mode and allowed it to lock onto the grandmaster clock. We then performed baseline measurements. After passing the masks we set for the test, we used an impairment generator to drop all PTP packets, simulating a PTP outage. We then verified that the slave clock was in hold-over mode and started the measurements, letting them run overnight.

We observed that with SyncE providing a frequency reference while PTP was impaired, phase accuracy hold-over was stable, exceeding 14 hours (the test was stopped at this point due to time considerations). In one test run we performed the test with no SyncE frequency reference and measured a hold-over time of approximately 3.5 hours while still providing a phase accuracy within ±1.1 μs.
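One way to read the difference between the two hold-over results: without a SyncE frequency reference, the slave's phase error grows roughly linearly with its residual frequency offset, so the hold-over time is approximately the phase budget divided by that offset. A first-order sketch with an assumed offset (oscillator aging and temperature effects are ignored, and the actual oscillator characteristics are not given in this report):

```python
def holdover_time_hours(phase_budget_ns, residual_freq_offset_ppb):
    # A residual frequency offset of f ppb accumulates roughly f nanoseconds
    # of phase error per second (first-order model, no aging or temperature).
    return phase_budget_ns / residual_freq_offset_ppb / 3600.0

# Assumed residual offset of 0.1 ppb against the 1100 ns budget used in the tests:
print(round(holdover_time_hours(1100, 0.1), 1))   # ~3.1 hours
```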

[Test setup diagram: Microsemi TimeProvider 5000/2700 and Adva FSP 150SP-100 as grandmaster and slave clocks, GPS reference, Ixia Anue 3500 as analyzer/impairment tool, PTP EVC across a packet switched network (PSN), frequency and time/phase links, with an emulated link failure.]


Figure 4: Phase/Time Hold-Over Performance Results

Adva FSP 150SP-100, Ericsson SP 110 and Ericsson SP 310 passed the phase accuracy requirement of ±1.1 μs and the frequency requirements of G.823 SEC as slave clocks. We executed one additional test run where the optical link between the grandmaster clock and slave clock was replaced with copper SFPs that support SyncE master/slave mode and provide symmetric delay for PTP. This test run also passed the frequency and phase requirements, using OE Solutions — AimValley Chronos Smart SFPs. The diagram depicts the tested combinations we successfully executed.

Precision Time Protocol: Boundary Clock Noise Generation (Time/Phase)

When considering the phase budget needed to reach the required accuracy level, several factors come into play. One of them is the internal noise generated by each boundary clock, currently under study by the ITU-T in the upcoming recommendation G.8273.2. There are two forms of noise: constant time error (cTE), estimated by averaging the measured time error, and dynamic time error (dTE), estimated by calculating the MTIE of the phase measurements. We used the preliminary quality targets of 50 ns constant time error and 40 ns MTIE (over the whole measurement period) as our goal.

We measured the time error of PTP packets at the ingress of the boundary clock, on the packets originating from the grandmaster, to estimate the inbound constant and dynamic noise. At the same time we measured the time error at the egress of the boundary clock. As an additional control, we also measured the physical phase output via the 1PPS interface. In this test, we also measured the cable lengths and accounted for the physical medium latency of the PTP packets, to estimate the time error at the boundary clock itself.
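For readers unfamiliar with the two noise metrics: cTE is simply the mean of the time error samples, while dTE is captured here as MTIE, the largest peak-to-peak time error excursion within any observation window. A sketch of both estimators over a series of time error samples (our illustration of the standard definitions, not the analyzer's algorithm):

```python
def constant_time_error(te_ns):
    # cTE: average of the measured time error samples
    return sum(te_ns) / len(te_ns)

def mtie(te_ns, window):
    # MTIE for one observation window length: the worst peak-to-peak
    # excursion of time error within any window of that many samples.
    worst = 0.0
    for start in range(len(te_ns) - window + 1):
        segment = te_ns[start:start + window]
        worst = max(worst, max(segment) - min(segment))
    return worst

samples = [12.0, 15.0, 9.0, 11.0, 14.0, 10.0]   # hypothetical time error samples in ns
print(constant_time_error(samples))             # ~11.8 ns, compared against 50 ns
print(mtie(samples, window=3))                  # 6.0 ns, compared against 40 ns
```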

Figure 5: PTP — Boundary Clock Noise Generation (Time/Phase) Results

Adva FSP 150SP-100, Ericsson SP 210 and Microsemi TimeProvider 2700 passed as boundary clocks with the requirements of 50 ns constant time error (cTE) and 40 ns MTIE dynamic time error (dTE).

Precision Time Protocol over Adaptive Modulation Microwave Systems

In some deployment scenarios, such as rural area access, microwave transport is the most cost-effective solution for mobile backhaul. Microwave radios use Adaptive Coding and Modulation (ACM) to adapt the radio modulation to changing transmission conditions. Variation in modulation changes the link capacity, making it challenging to control packet delay variation (PDV) for packet clock protocols, as well as to guarantee their transport under severe weather conditions.

[Test setup diagrams: Microsemi TimeProvider 5000, 2700 and 2300, Adva FSP 150SP-100, Ericsson SP 110, SP 210 and SP 310, and OE Solutions — AimValley Chronos Smart SFPs in grandmaster, boundary and slave clock roles; SyncE domain and PTP EVC across a packet switched network (PSN); Ixia Anue 3500 as analyzer/impairment tool; frequency and time/phase links.]


We designed this test case to verify that when the microwave link is 100% utilized, the accuracy of the phase synchronization does not degrade, in normal and in emulated severe weather conditions. To emulate severe weather conditions, we used an attenuator to reduce the RF signal to the lowest modulation scheme available.

We started the test with the slave clock in free-running mode and generated traffic according to G.8261 VI2.2 at the maximum line rate for the maximum modulation scheme, expecting no traffic loss. We took baseline measurements for phase and frequency from the slave clock. After passing the requirements, we attenuated the signal down to the lowest modulation scheme. Since the bandwidth decreased accordingly, we verified that data packets were dropped according to the now available bandwidth. We restarted the measurements on the slave clock, evaluated them against the requirements and compared them to the baseline measurements.

We performed a single test run for this test with Microsemi TimeProvider 5000 as grandmaster clock; Ericsson SP 310 as the boundary clock; Ericsson MINI-LINK PT 2020 as the microwave system and transparent clock; and Ericsson SP 210 as the slave clock. Measurements were taken using Ixia Anue 3500. We measured up to 18.4 ns time error with the highest modulation scheme (512QAM) and up to 19.2 ns time error with the lowest modulation scheme (4QAM).

Precision Time Protocol: Transparent Clock Scalability

As is the case for boundary and grandmaster clocks, an important characteristic of a transparent clock is the number of clients it supports. Since a transparent clock does not require a context for each client, however, the governing factor is the total PTP message rate per second. We designed a test to verify that with maximum client utilization of the grandmaster, PTP accuracy remains within the phase requirements. We performed this test with each client configured for a message rate of 64 packets per second (sync, delay request and delay response).

In this test we measured the dynamic error of the transparent clock by comparing the correction field accuracy at the ingress and egress of the transparent clock. We also measured the phase output from the non-emulated slave clock. We started the test with one PTP client and performed baseline measurements. We then started the emulated clients and repeated the same measurement, comparing the results.

We observed a maximum dynamic error of the transparent clock of up to 46 ns peak to peak with one client and 48 ns peak to peak with 500 clients. In both test runs, we observed an absolute maximum time error of 39 ns on the slave clock. In one run, after establishing 500 clients through the transparent clock, we observed occurrences of outliers up to 50 ns. All observed outliers were non-contiguous. We did not observe any outliers during the baseline measurements. No transients were observed on the slave clock output. The diagram depicts the test combinations we executed.

Figure 6: PTP — Transparent Clock Scalability Results
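The scalability limit described above can also be expressed as an aggregate message rate rather than a client count. A back-of-the-envelope sketch, assuming each of the three message types (sync, delay request, delay response) runs at the configured per-client rate and all of them traverse the transparent clock:

```python
def transparent_clock_msg_rate(clients, per_type_pps=64, message_types=3):
    # Total PTP messages per second crossing the transparent clock under the
    # stated assumption; the governing factor for a clock with no per-client state.
    return clients * per_type_pps * message_types

print(transparent_clock_msg_rate(1))     # baseline run: 192 messages/s
print(transparent_clock_msg_rate(500))   # 500 emulated clients: 96,000 messages/s
```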

Precision Time Protocol: Master Clock Scalability

An important characteristic of a master-capable PTP node (either a grandmaster or a boundary clock) is the number of clients it supports. We designed a test to verify that with maximum client utilization, PTP accuracy meets the phase requirements.

We started with one non-emulated slave clock in free-running mode and allowed it to lock to the master clock, either a boundary clock or a grandmaster clock. We then performed baseline measurements. After passing the requirements, we restarted the measurements and started the emulated clients. The number of emulated clients was set according to the vendor's specification of supported clients, reaching the maximum together with the non-emulated slave clock. We verified that no transients occurred when we started the emulated clients. We then evaluated the results of the slave clock against the phase and frequency requirements and also compared them with the baseline measurements.

We tested all devices with a message rate of 64 packets per second (sync, delay request and delay response) for each client. The following devices were tested for their PTP client scalability: Adva FSP 150SP-100 with 32 clients; Ericsson SP 110 with 8 multicast master ports; Ericsson SP 310 with 7 unicast master ports and a multicast slave port back to the grandmaster; and Microsemi TimeProvider 5000 with 500 clients. The results are depicted in Figure 7.

[Test setup diagram: Microsemi TimeProvider 5000 as grandmaster clock, Ericsson MINI-LINK PT 2020 as transparent clock, Ericsson SP 310 as slave clock, Ixia IxNetwork or Spirent TestCenter as client emulator, Ixia Anue 3500 as analyzer/impairment tool; frequency and time/phase links over a PTP EVC across a packet switched network (PSN).]


Figure 7: PTP — Master Clock Scalability Results

DEMONSTRATION NETWORK

The first advanced use case aimed to demonstrate video content delivery over an SDN network with quality assurance. This is a great use case for a chain that a service provider is likely to see: a service orchestrator, one or two OpenFlow controllers and a network of OpenFlow switches, as well as video servers and clients. QualiSystems CloudShell served as the orchestrator providing the overall service life cycle of the demo: provisioning the controllers, the video hosts and the Ixia IxNetwork tester.

Huawei, Metaswitch and QualiSystems agreed that the orchestrator would use a combination of Secure Shell (SSH) and REST APIs for provisioning the controllers. During the limited hot-staging time, we verified the interaction between the QualiSystems CloudShell orchestrator and the Metaswitch OpenFlow controller. We successfully tested that the OpenFlow controllers were able to control the switches in the test network.

The second use case aimed to emulate data center workload mobility. In this scenario, we used Ixia's RackSim solution to create two data center sites with a large number of VMs in a multi-tenancy environment. Using a built-in VM Manager, Ixia emulated forward and reverse migration of virtual machines (VMs) between both data center sites across a core SDN network, using Ericsson SP 420 and Ericsson SSR 8010 routers as data center gateways. Here we were challenged by the various transport network components, specifically the required support for IP localization. Therefore, we were not able to measure the out-of-service time during the VM migration process.

In the SDN area, we integrated two scenarios. The first was OpenFlow rate limiting with Huawei SOX as controller and Huawei SN-640 switches. We also showcased interoperability between OpenFlow and non-OpenFlow switches with the Metaswitch SDN Controller as the OpenFlow controller, Pica8 P-3922 as OpenFlow switch, Ixia IxNetwork emulating a non-OpenFlow switch and QualiSystems CloudShell functioning as orchestrator.

In the transport area we demonstrated SAT measurement by placing the OE Solutions — AimValley OAM Smart SFP into the Ericsson SP 310 and performing measurements. TWAMP measurements in the network were showcased with an OE Solutions — AimValley TWAMP Smart SFP positioned in the Ericsson SP 110, while the TWAMP sender client connected via the Ericsson SP 310. Furthermore, Ixia IxNetwork and Spirent TestCenter emulated VTEPs in each simulated data center. Both data centers were interconnected over a VPWS circuit and Layer 3 VPNs. We sent bidirectional traffic between emulated IPv4/IPv6 and DHCP hosts across data center sites using VXLAN encapsulation.

In the clock synchronization area, we constructed a transport network with full on-path support, where every device is either an IEEE 1588-2008 boundary or transparent clock. The devices for the transport part of the network were Ericsson MINI-LINK PT 2020, Ericsson SP 110, Ericsson SP 210, Ericsson SP 310, Ericsson SP 415 and Ericsson SP 420. Adva FSP 150SP-100 and Microsemi TimeProvider 5000 were integrated as grandmaster clocks hosted in the data center portion of the network. OE Solutions — AimValley Chronos Smart SFPs were inserted into the Microsemi TimeProvider 5000 and the Ericsson SP 420, providing synchronization links over copper SFPs. Microsemi TimeProvider 2700 acted as a slave clock within the transport network with GPS backup, while Microsemi TimeProvider 2300 acted as a slave clock located in a data center. An additional Adva FSP 150SP-100 acted as a slave clock within the data center with GPS backup while performing time error measurements on the raw PTP stream. Ixia Anue 3500 provided measurements for the transparent clock correction field accuracy and measured the slave clock output from the Ericsson SP 110, while Ixia IxNetwork and Spirent TestCenter emulated slave clocks.
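The multi-tenant traffic between the emulated VTEPs is separated by VXLAN network identifiers as defined in RFC 7348. The report does not show the encapsulation details, so the following is a generic sketch of the 8-byte VXLAN header with a hypothetical VNI:

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN (RFC 7348)

def vxlan_header(vni):
    # 8 bytes: flags (0x08 = "VNI present"), 24 reserved bits,
    # 24-bit VNI, 8 reserved bits.
    return struct.pack("!I", 0x08 << 24) + struct.pack("!I", (vni & 0xFFFFFF) << 8)

# Hypothetical VNI per tenant; the inner Ethernet frame would follow this header
# inside a UDP datagram addressed to VXLAN_UDP_PORT on the remote VTEP.
header = vxlan_header(1001)
assert len(header) == 8
```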

[Test setup diagrams: Microsemi TimeProvider 5000 and Adva FSP 150SP-100 as grandmaster clocks; Ericsson SP 110, SP 310 and SP 415, Adva FSP 150SP-100 and Microsemi TimeProvider 2300 as boundary and slave clocks; Ixia IxNetwork and Spirent TestCenter as client emulators; Ixia Anue 3500 as analyzer/impairment tool; frequency and time/phase links over PTP EVCs.]


EANTC AG
European Advanced Networking Test Center
Salzufer 14
10587 Berlin, Germany
Tel: +49 30 3180595-0
Fax: +49 30 [email protected]
http://www.eantc.com

Upperside Conferences
54 rue du Faubourg Saint Antoine
75012 Paris, France
Tel: +33 1 53 46 63 80
Fax: +33 1 53 46 63 [email protected]
http://www.upperside.fr

This report is copyright © 2014 EANTC AG. While every reasonable effort has been made to ensure accuracy and completeness of this publication, the authors assume no responsibility for the use of any information contained herein. All brand names and logos mentioned here are registered trademarks of their respective companies in the United States and other countries.

20140310 v05