PUBLIC USE
HEMANT AGRAWAL
SOFTWARE ARCHITECT
FTF-DES-N1883
MAY 19, 2016
DEEP DIVE INTO ODP AND DPDK FOR QORIQ LS2088A AND LS1088A
AGENDA
• Session Summary
• NXP Userspace Offering
• ODP
− Overview
− Implementation on NXP Platform
• DPDK
− Overview
− Support on NXP Platform
• Q&A
Session Objective
SDN/NFV market drivers call for a single cross-platform API standard for efficient user-space packet processing.
ODP and DPDK are rapidly emerging as the open-source cross-platform API standards with a fully engaged ecosystem for ARM NFV.
NXP platforms provide implementations of both while taking advantage of NXP data-path acceleration for specific packet-processing elements (packet I/O, classification, scheduling, crypto, etc.).
We will also explore various possible API extensions to gain additional performance on NXP SoCs.
Linaro Open Data Plane – ODP
• Community effort
− Open Source, open contribution, BSD-3 licensed
− Vendor and platform neutral (depends only on C99)
− Application-centric: covers functional needs of data plane applications
− Ensures portability by specifying functional behaviour of ODP
− Defined jointly and openly by application writers and platform implementers
− Architected to be implementable on a wide range of platforms efficiently
− Sponsored, Governed, and Maintained by Linaro Networking Group (LNG)
• Main focus
− Define a common ODP API
− Implementations to be provided by hardware vendors (not linaro.org)
− Allows applications to use vendor-specific extensions
• Key features
− Run-time – timers, sync, memory, buffers
− Multiple Packet I/O modes
− Flexible queuing and scheduling
− Crypto offload – IPSec
− Classification and QoS
Source: “ODP Overview” by Bill Fischofer from linaro
Data Plane Development Kit - DPDK
• Originally an Intel initiative
− Targeted towards SDN and NFV
− Large developer/user community
− Efforts ongoing to make it part of the Linux Foundation
• Key Features
− Core run-time libraries
architecture optimized run-time services
EAL abstracts processing model
− Packet-access (I/O)
Ethernet device framework (PMD)
Intel NICs, virtual IO & 3rd Party NICs
− Classification & QoS
Leverages HW support + SW libraries
− Platform services
Kernel NW Interface, Power-mgmt
Shared-mem (inter-VM, app)
− Crypto API
• Open to other platforms
− Power8 as host
− ARMv8, SoC platforms
[Diagram: DPDK framework: sample applications and customer apps sit on the DPDK framework and Environment Abstraction Layer; components include buffer, ring, and queue management, network PMDs (including virtio), LPM, packet flow classification, crypto PMD, KNI, and IGB UIO, over target platforms IA, ARM, Tilera, and Power.]
NXP DPAA – compatible with DPDK & ODP since 2008
DPDK/ODP approach:
- Leverage hardware for ingress and egress processing
- ODP adds on HW scheduling offload
- Accelerators – crypto offload
- Complete user-space processing model
NXP approach:
- FMan offloads parsing, classification, distribution
- QMan, BMan offload scheduling, buffering
- Virtualized accelerators – SEC, PME, DCE
- User-space driver, threading model
- Doing all this since 2008 – now into the 3rd generation
[Diagram: DPAA data path: an ingress scheduler distributes frame queues (FQ) to ARMv8 cores; FMan/WRIOP parses, classifies, and distributes traffic from 1G/10G ports and virtual ports; accelerators (SEC crypto, PME) and an egress traffic-manager scheduler feed the ports back out, with QMan/BMan managed under Linux.]
QorIQ ODP Support
[Diagram: layered ODP architecture: ODP applications sit on the ODP API plus extended APIs; the API layer covers PKTIO, queue I/O, crypto, classifier, and runtime services (timers, buffers, sync, memory); the framework layer adds resource management, discovery, Ethernet configuration, crypto acceleration, AIOP management and communication, and NW services (KNI, L2/L3, routing, ARP); platform-specific layers target DPAA1 and DPAA2 (VFIO, LS-bus, SEC, PME, AIOP offload, queue/scheduler) on ARM.]
QorIQ ODP Support
• QorIQ HW is inherently aligned to ODP
− Classification and scheduling
− HW queue and buffer mgmt
− Crypto & other HW offloads
− ARM 64-bit cores
• Complete ODP-API coverage
− Queue and Scheduler API
− PKTIO and Classifier API
− Crypto API – algorithmic and protocol
− Runtime services incl. pkt-buffers
− Support for both DPAA1 & DPAA2 platforms
LS1043, LS1046
LS2088, LS1088
• QorIQ HW has additional capabilities
− Switching, demuxing
− Application level offloads
− Virtual networking and resource mgmt
− Provided as extensions to ODP-API
− Efforts underway to make them part of ODP
• Value-added ODP extensions
− Complete Ethernet capabilities
MAC/Phy, IPR/IPF, GRO/GSO, Smart-NIC
Physical and Virtual Ethernet ports
− NW services
Provide Linux network stack services, visibility
Network-devices (KNI), Routing, ARP
− Resource management
VFIO and VirtIO based assignment of resources.
Dynamic re-configuration and discovery
Multiple application support, flexible process model
PUBLIC USE9 #NXPFTF
QorIQ DPDK Support
[Diagram: DPDK architecture with NXP additions: DPDK applications over the DPDK API; the API layer includes Ethernet poll-mode drivers (Intel NICs, DPAA1, DPAA2, iNIC), crypto drivers (Intel QAT, SEC), the kernel network interface, and run-time services (EAL, timers, buffers, memory, sync); platform-specific resource management via VFIO and the LS-bus; architecture support for x86, ARMv8, Power8, and Tilera. Components are contributed by DPDK.org, NXP, and customers.]
QorIQ DPDK Support
• Basic Platform support
− DPAA1 Poll-mode driver
− DPAA2 Poll-mode driver
− iNIC Poll-mode driver
− Crypto offload to local SEC
− LS1043, LS1046
− LS2080, LS2088, LS1088
− Upstream in progress
• Architectural enhancements
− Support for SoC/platform drivers
− Hardware buffer management
− Optimizations for ARMv8 run-time
• Virtualization support
− OVS over DPDK in host user-space
− Vhost-user
− DPDK in guest/VM
− Virtio poll-mode driver
− DPAA2 VFIO poll-mode driver
• Future work
− Ingress scheduling, load balancing.
− Protocol aware crypto
− Egress scheduling, QoS offload.
ODP OVERVIEW
ODP API Components for Packet-processing
• ODP API aims to cover every aspect of packet processing pipeline
• Some components are already defined, while others are still at the proposal stage
[Diagram: generic packet-processing pipeline: Ingress Classification (parsing, classification, stats, load-balancing, policing, reassembly) → Ingress Processing (lookup, packet modification, protocol processing, accelerator offloads) → Application-Specific Processing → Traffic Manager (QoS, shaping, stats) → Egress (stats, fragmentation). ODP APIs covering the pipeline, used by application threads: PktIO, Classification, Queue, Scheduler, Timers, Packet, Event, ShMem, Sync, Crypto, and Buffers APIs; the Shaper, Stats, and Policer APIs are new for ODP 2.0.]
API and Concepts
• ODP loosely based on Event Machine, originally developed by NSN
− Generic framework for scalable multi-core programming, not limited to packet processing
− Event based abstraction and programming model for handling IO
− Supports packet flows, physical and virtual network interfaces, accelerators, SW endpoints, etc.
− Events represent different types of data: packets, timers, baseband data, HW notifications, SW messages...
• Supports scheduler based programming model (both HW & SW)
− Scheduling of IO events using different algorithms and knowledge of work in progress
− Implicit synchronisation and mutual exclusion between threads
• Supports different IO load balancing approaches
• Choose the best configuration for the traffic profile, latency/throughput requirements, and HW characteristics without changing the application
Work Scheduling and Load Distribution (Event Machine)
Packet Receive Options – Direct RX From Queue
• PktIO opened with in_mode = ODP_PKTIN_MODE_DIRECT
• Optional hashing on IPv4/v6 and TCP/UDP fields distributes packets across RX queues
• Associate a receive (pktin) queue with each receiver thread: odp_pktin_queue_config(), odp_pktin_recv()
[Diagram: Ethernet PktIO hashes incoming packets to per-thread queues; each thread receives directly from its own queue.]
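The direct-mode setup above can be sketched with the generic ODP API. This is a hedged illustration: the calls are standard Monarch-era ODP, but the interface name "eth0", the queue count, and the burst size are illustrative placeholders, and error handling is minimal.

```c
#include <odp_api.h>

/* Direct RX sketch: per-thread queues, no scheduler involved.
 * "eth0" and the counts below are illustrative, not NXP-specific. */
static int direct_rx_example(odp_pool_t pool)
{
    odp_pktio_param_t param;
    odp_pktin_queue_param_t qparam;
    odp_pktin_queue_t inq[4];
    odp_packet_t pkts[32];

    odp_pktio_param_init(&param);
    param.in_mode = ODP_PKTIN_MODE_DIRECT;

    odp_pktio_t pktio = odp_pktio_open("eth0", pool, &param);
    if (pktio == ODP_PKTIO_INVALID)
        return -1;

    /* One RX queue per receiver thread; optional hashing on L3/L4 fields */
    odp_pktin_queue_param_init(&qparam);
    qparam.num_queues = 4;
    qparam.hash_enable = 1;
    qparam.hash_proto.proto.ipv4_udp = 1;
    if (odp_pktin_queue_config(pktio, &qparam))
        return -1;
    odp_pktin_queue(pktio, inq, 4);
    odp_pktio_start(pktio);

    /* A receiver thread polls its own queue directly */
    int num = odp_pktin_recv(inq[0], pkts, 32);
    /* ... process num packets, then transmit or free ... */
    return num;
}
```

On DPAA2 these calls map onto DPNI frame queues dequeued through a QBMan software portal, so the poll stays entirely in user space.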
Packet Receive Options – Using Scheduler
• PktIO opened with in_mode = ODP_PKTIN_MODE_SCHED
• Optional hashing on IPv4/v6 and TCP/UDP fields
• Packets are distributed to multiple queues for scheduling: odp_pktin_queue_config()
[Diagram: Ethernet PktIO hashes incoming packets to queues feeding the scheduler, which dispatches events to threads.]
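In scheduled mode the per-queue polling above is replaced by odp_schedule(). A hedged sketch, with the same caveats as the direct-mode example; the atomic sync choice is one option, not a requirement:

```c
#include <odp_api.h>

/* Scheduled RX sketch: the scheduler, not the thread, picks the queue. */
static int sched_rx_example(odp_pool_t pool)
{
    odp_pktio_param_t param;
    odp_pktin_queue_param_t qparam;

    odp_pktio_param_init(&param);
    param.in_mode = ODP_PKTIN_MODE_SCHED;

    odp_pktio_t pktio = odp_pktio_open("eth0", pool, &param);
    if (pktio == ODP_PKTIO_INVALID)
        return -1;

    odp_pktin_queue_param_init(&qparam);
    qparam.num_queues = 4;
    qparam.hash_enable = 1;
    qparam.hash_proto.proto.ipv4_tcp = 1;
    qparam.queue_param.sched.sync = ODP_SCHED_SYNC_ATOMIC;
    if (odp_pktin_queue_config(pktio, &qparam))
        return -1;
    odp_pktio_start(pktio);

    /* Worker loop: the scheduler delivers the next event from any queue */
    for (;;) {
        odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);
        if (odp_event_type(ev) == ODP_EVENT_PACKET) {
            odp_packet_t pkt = odp_packet_from_event(ev);
            /* ... process pkt, then transmit or free ... */
            odp_packet_free(pkt);
        }
    }
    return 0;
}
```

On QorIQ the scheduling step itself is offloaded to QMan, so the odp_schedule() call dequeues from hardware rather than from a software run queue.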
Packet Receive Options – Using Classifier – Matching Rules
• PktIO opened with in_mode = ODP_PKTIN_MODE_SCHED
• Associate RX queues with the classifier input: odp_pktin_event_queue(), odp_queue_create()
• Packet matching rules (PMRs) map packets to a Class of Service (CoS): odp_cls_cos_create()
• Each CoS selects the packet storage pool and the queue used as input to the scheduler: odp_pktin_queue_config()
[Diagram: multiple Ethernet PktIOs feed the classifier; PMRs assign packets to CoS entries, each with its own pool and queue feeding the scheduler.]
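A classifier setup along these lines can be sketched as follows. Hedged: the PMR term, port value, and CoS name are hypothetical examples, and the PMR creation calls (odp_cls_pmr_param_init(), odp_cls_pmr_create()) should be checked against your ODP release, since the classification API was still evolving at this time.

```c
#include <odp_api.h>

/* Sketch: create a CoS with its own pool and scheduler-input queue, then
 * attach a PMR that steers UDP dport 4789 traffic from src_cos to it. */
static odp_cos_t cos_example(odp_pool_t pool, odp_queue_t queue,
                             odp_cos_t src_cos)
{
    odp_cls_cos_param_t cos_param;
    odp_cls_cos_param_init(&cos_param);
    cos_param.pool = pool;    /* packet storage for this class */
    cos_param.queue = queue;  /* input to the scheduler */
    odp_cos_t cos = odp_cls_cos_create("example-cos", &cos_param);
    if (cos == ODP_COS_INVALID)
        return ODP_COS_INVALID;

    /* Packet Matching Rule on the UDP destination port */
    uint16_t val = 4789, mask = 0xffff;
    odp_pmr_param_t pmr_param;
    odp_cls_pmr_param_init(&pmr_param);
    pmr_param.term = ODP_PMR_UDP_DPORT;
    pmr_param.match.value = &val;
    pmr_param.match.mask = &mask;
    pmr_param.val_sz = sizeof(val);

    /* Packets arriving on src_cos that match the rule move to the new CoS */
    odp_cls_pmr_create(&pmr_param, 1, src_cos, cos);
    return cos;
}
```

On DPAA hardware such rules compile down to FMan/WRIOP parse-classify-distribute entries rather than a software match loop.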
ODP Classification Functional Description
1. Receive a packet on a pktio interface
2. Apply the set of classification rules to the headers of the incoming packet; identify the header fields
3. Validate packet data integrity and store the validation results in packet meta-data
4. Compute an odp_cos (class of service) from step 2
5. Based on the odp_cos from step 4, select the odp_queue
6. Based on the odp_cos from step 4, optionally select the odp_buffer_pool to store the packet data and meta-data
7. Allocate a buffer from the odp_buffer_pool selected in step 6 and store the packet in the allocated buffer
8. Enqueue the buffer onto the odp_queue selected in step 5
[Diagram: pktio → internal buffer → parser producing meta-data (L2/L3 priority) → match rules → CoS selecting the packet buffer pool and queue.]
Packet Transmit Options: Direct Send
• PktIO opened with out_mode = ODP_PKTOUT_MODE_DIRECT
• Pktout queues created and configured with odp_pktout_queue_config()
[Diagram: threads send directly to per-thread pktout queues feeding the Ethernet PktIO.]
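The transmit side mirrors the direct receive case. A minimal hedged sketch; the pktio is assumed to have been opened with out_mode = ODP_PKTOUT_MODE_DIRECT, and the queue count is illustrative:

```c
#include <odp_api.h>

/* Direct TX sketch: per-thread output queues, send in bursts. */
static int direct_tx_example(odp_pktio_t pktio, odp_packet_t pkts[], int num)
{
    odp_pktout_queue_param_t qparam;
    odp_pktout_queue_t outq[4];

    odp_pktout_queue_param_init(&qparam);
    qparam.num_queues = 4;                 /* e.g. one queue per thread */
    if (odp_pktout_queue_config(pktio, &qparam))
        return -1;
    odp_pktout_queue(pktio, outq, 4);

    /* Returns the number of packets actually accepted for transmit */
    return odp_pktout_send(outq[0], pkts, num);
}
```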
Packet Transmit Options: Egress Traffic Manager
• PktIO opened with out_mode = ODP_PKTOUT_MODE_TM
• ODP Traffic Manager (coming in ODP Monarch): TM input queues → arbiters, shapers, etc. → TM output queues, driven by TM configuration APIs
[Diagram: threads enqueue to TM input queues; shaped and arbitrated output feeds Ethernet and loopback PktIOs, with loopback returning traffic to ingress.]
ODP Packet Buffer Format
ODP IMPLEMENTATION ON DPAA2
ODP APIs & Their Mapping to DPAA2
• ODP view of the hardware: PKTIOs with classifiers feed queues into the scheduler; the traffic manager and acceleration engines connect through further queues to the application.
• DPAA2 MC object model: DPNIs with classifiers feed frame queues (FQs) into a channel; the traffic manager and SEC connect through FQs to the application(s), with DPCI, DPCONC, DPSECI, and DPIO objects inside a DPRC container.
• DPAA2 also has other objects, such as DPAIOP, DPSW, and DPDEMUX, that are not available in ODP.
• DPAA2 provides abstraction to simplify resource management and configuration, including resource management across applications and VMs.
MC Logical Objects Examples
• Data Path Network Interface (DPNI) – a standard network interface (L2 and up), either basic or high-function ("smart"), as expected by standard network stacks and other network applications. Provides: configuration interface, connection point, and queues (ingress distribution width + 1 for egress) per traffic class (1 to 8).
• Data Path Buffer Pool (DPBP) – an abstraction of a BMan buffer pool. Provides: configuration interface.
• Data Path I/O (DPIO) – for performing enqueue/dequeue via QMan portals and receiving data-available notifications. Provides: configuration interface, GPP interface (QBMan software portal), and an optional notification channel (notifications from ingress queues or DPCON).
• Data Path Concentrator (DPCON) – scheduling object for advanced scheduling of ingress packets from multiple interfaces. Provides: configuration interface, priorities for scheduling ingress queues (2-8), and a QBMan channel for dequeue.
• SEC Accelerator Interface (DPSECI) – provides: configuration interface and queues (1 for ingress + 1 for egress) per priority (1 to 8).
• Data Path Communication Interface (DPCI) – allows communication between different software contexts through the QMan infrastructure, not limited to network packet format. Useful for IPC between two GPP software entities, or between GPP and AIOP entities; the communication protocol is user-defined. Provides: configuration interface, connection point (to a peer DPCI only), and queues (1 for ingress + 1 for egress) per priority (1 or 2).
Mapping ODP Ingress Processing to DPAA-2
DPIO
DPCONC
DPNIPkt-IO
Parser
Meta-Data
L2
Priorit
y
L3
Priorit
y
Match
Rules
Pkt
Buffe
rpool
CoS
Queues
CoreCoreCore
MAC
BMI
Parser
Classifier
Keygen
Policer
QMI
WRED
Enqueue
Dequeue
10GE GE GE GE GE
WRIOPDMA
Policer
Classifier
Parser
Keygen
Dist
QMI
BMI
Int.
Mem
ory
WQ
0
WQ
1
WQ
2
WQ
3
WQ
4
WQ
5
WQ
6
WQ
7
Core
QMAN
Distribution*
Scheduler
Thread Thread Thread
ODP Ingress processing DPAA 2 Ingress processing
Channel
Police
*
PUBLIC USE26 #NXPFTF
Mapping ODP Centralized Queuing/Scheduling to DPAA-2
DPNI
DPNI
DPSECID
PIO
DP
IO
DPCI
DPNI DP
CO
NC
while(1) {
dequeue();
process work;
…
enqueue();
}
while(1) {
dequeue();
process work;
…
enqueue();
}
FQ
FQ
FQ
FQ
FQ
FQ
Ch
an
ne
l
Porta
lP
orta
lOrder preservation
(atomic)
Order restoration
(ordered)
Crypto
Engine
(SEC)
FQ
FQ
Port 1
Port 2
Port 3
ODP queuing &
scheduling
DPAA queuing &
scheduling
FQ
PUBLIC USE27 #NXPFTF
Mapping ODP HW Flow Ordering and Synchronization
• There are two basic approaches:
− Order restoration (a.k.a. ODP 'ordered' queues)
Take note of the correct order (or sequence) of packets before processing starts, and restore the packets to that order before they are transmitted
− Order preservation (a.k.a. ODP 'atomic' queues)
Ensure that related packets are processed in order (and typically one at a time)
Order preservation can also provide "atomicity": atomic access to data used in processing the frame
• QMan requires that related frames (which must be transmitted in order) be placed on the same frame queue for both of these approaches
This does not mean that only related frames are placed on a given FQ; many sets of related frames can be placed on an FQ. WRIOP is responsible for achieving this placement.
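In ODP the two approaches surface as the queue's scheduler sync mode. A hedged sketch of creating a scheduled queue either way; the default priority and the all-threads group are illustrative choices, not requirements:

```c
#include <odp_api.h>

/* Sketch: a scheduled queue in either sync model.
 * ATOMIC: one thread at a time per queue, so order is preserved and the
 *         scheduler gives implicit mutual exclusion on per-flow state.
 * ORDERED: events are processed in parallel and order is restored by the
 *          scheduler at the next enqueue. */
static odp_queue_t make_flow_queue(const char *name, int atomic)
{
    odp_queue_param_t qp;

    odp_queue_param_init(&qp);
    qp.type = ODP_QUEUE_TYPE_SCHED;
    qp.sched.sync = atomic ? ODP_SCHED_SYNC_ATOMIC : ODP_SCHED_SYNC_ORDERED;
    qp.sched.prio = ODP_SCHED_PRIO_DEFAULT;
    qp.sched.group = ODP_SCHED_GROUP_ALL;
    return odp_queue_create(name, &qp);
}
```

On QorIQ both modes map directly onto QMan frame-queue semantics, so the application pays no software locking cost for either.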
[Diagram: frames F0P1, F0P2, F1P1 distributed to cores (each with D-cache, I-cache, L2 cache): ordered packets processed in parallel across cores, versus ordered packets queued temporally to a single processing element (atomic).]
Cryptographic Services
• Provides ciphering, authentication, and random number generation
• Basic concepts: sessions & operations
• Crypto sessions
− All packets/buffers processed by a session share the parameters that define the session
int odp_crypto_session_create(odp_crypto_session_params_t *params, odp_crypto_session_t *session, enum odp_crypto_ses_create_err *status)
− Session parameters: algorithms, keys, IVs, encode or decode, sync or async, output queue for async, etc.
• Crypto operations
− Performs the operations specified during session creation
int odp_crypto_operation(odp_crypto_op_params_t *params, odp_bool_t *posted, odp_crypto_op_result_t *result)
− Operation parameters: session, input/output packets/buffers, ranges in packets, operation context
− An async operation sets *posted
− A completion event is delivered from the session completion queue
− The completion event provides the result of the operation: status, output packet, context
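Put together, an async AES-CBC session plus one in-place operation look roughly like this, built directly on the signatures above. Hedged: the algorithm choice, key/IV lengths, and field names follow the ODP 1.x (Monarch-era) crypto header and may differ in later releases.

```c
#include <odp_api.h>
#include <string.h>

/* Sketch: create an async AES-128-CBC encode session delivering completions
 * to compl_q, then submit one in-place operation on pkt. */
static int crypto_example(odp_packet_t pkt, odp_queue_t compl_q,
                          uint8_t key[16], uint8_t iv[16])
{
    odp_crypto_session_params_t sp;
    odp_crypto_session_t session;
    enum odp_crypto_ses_create_err err;

    memset(&sp, 0, sizeof(sp));
    sp.op = ODP_CRYPTO_OP_ENCODE;
    sp.cipher_alg = ODP_CIPHER_ALG_AES128_CBC;
    sp.cipher_key.data = key;
    sp.cipher_key.length = 16;
    sp.iv.data = iv;
    sp.iv.length = 16;
    sp.pref_mode = ODP_CRYPTO_ASYNC;
    sp.compl_queue = compl_q;
    if (odp_crypto_session_create(&sp, &session, &err))
        return -1;

    odp_crypto_op_params_t op;
    odp_bool_t posted;

    memset(&op, 0, sizeof(op));
    op.session = session;
    op.pkt = pkt;
    op.out_pkt = pkt;          /* in-place */
    op.cipher_range.offset = 0;
    op.cipher_range.length = odp_packet_len(pkt);
    /* Async: *posted is set and a completion event arrives on compl_q */
    return odp_crypto_operation(&op, &posted, NULL);
}
```

On QorIQ the operation is dispatched to SEC through frame queues, so the completion arrives via the same scheduler path as received packets.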
ODP Crypto Processing…
[Diagram: packets scheduled from RX ports to a core (odp_schedule()) and submitted to SEC via odp_crypto_operation() under an odp_crypto_session_t with an ordered completion queue; completions are scheduled back to a core and enqueued for TX (odp_queue_enq()).]
IPSec Protocol Offload APIs
• Configure an existing crypto session for IPSec protocol offload
• Adds/removes IPSec headers/trailers and the tunnel header
• Internal maintenance of ESP sequence numbers
• Optional parameters, based on underlying hardware capabilities
PDCP Protocol Offload APIs
• Configure an existing crypto session for PDCP protocol offload
• Supports both control and data mode
• The IV needed for auth/cipher is generated internally
• Adds the ICV trailer / compares the ICV trailer
• HFN maintenance and HFN threshold notification
• HFN override per operation
ODP Extensions – Device Mgmt
• Step 0: NW objects created. The LS-bus driver scans the bus and adds devices to the kernel or a VFIO container.
• Step 1: App initializes ODP. The driver framework scans the VFIO group and initializes devices (DPNI, DPSECI, DPCONC, DPIO).
• Step 2: App has its own tools and configures devices via the Eth extension API.
• Step 3: App does I/O via queue or scheduler.
• Step 4: App is notified when data is available.
[Diagram: ODP API + extensions over a driver framework (discovery, reset, I/O, init, config, mapping, notifications, event dispatch) with Eth NI, Conc, SEC, DPIO, and AIOP command drivers; VFIO-LS maps LS-bus NW objects (Ethernet, SEC) managed by the MC resource manager; the Linux NW stack provides routing/ARP via netdev and cryptodev frameworks; QBMan FLib and MC FLib access QM/BM, WRIOP, and SEC hardware, with the AIOP app on its service layer.]
Kernel Network Interface
• The Kernel NIC Interface (KNI) is an ODP extension: a control-plane solution that allows user-space applications to exchange packets with the kernel networking stack. It creates a pseudo network interface in the kernel for any network interface in ODP (e.g. a WRIOP DPNI).
− The ODP extension creates a FIFO queue for packet ingress and egress to the KNI kernel module for each device created.
− KNI provides a copy-based mechanism between kernel and user-space components to send/receive control/management packets to the Linux network stack.
− It is destroyed on application exit.
[Diagram: an ODP-based application in user space owns dpni-1/2/3; the KNI kernel module registers corresponding net_devices (vDpni-2, vDpni-3) with the Linux NW stack via libkni and per-interface FIFO queues, so standard Linux applications (e.g. FTP, ifconfig) can use them.]
ODP Extensions – AIOP Mgmt & Communication
• Step 0: AIOP Mgmt and Cmd objects are created and assigned to the GPP app's VFIO container.
• Step 1: GPP app scans and finds the AIOP Mgmt & Cmd device.
• Step 2: GPP app loads the AIOP firmware using the AIOP Mgmt API.
• Step 3: MC loads the firmware on the AIOP; the AIOP service layer and app initialize.
• Step 4: GPP app uses the AIOP Cmd API to establish communication with the AIOP app.
• Step 5: GPP app sends commands to and receives responses from the AIOP app.
• Step 6: AIOP app sends async notifications and errors to the GPP via the MC.
[Diagram: the same driver-framework stack as the device-management slide, with DPNI, DPAIOP, DPCI, and DPIO objects in a DPRC container and the AIOP app running on its service layer over QM/BM, WRIOP, and SEC.]
DPDK OVERVIEW
Source: “A Look at Intel’s Dataplane Development Kit” by Dominik Scholz
DPDK Architecture
From DPDK Summit 2015 - Intel - Keith Wiles
http://www.slideshare.net/jstleger/dpdk-summit-2015-intel-keith-wiles
Core Components Architecture
Queue Manager
Source: “A Look at Intel’s Dataplane Development Kit” by Dominik Scholz
Memory Manager
Source: “A Look at Intel’s Dataplane Development Kit” by Dominik Scholz
Buffer Manager
Source: “A Look at Intel’s Dataplane Development Kit” by Dominik Scholz
Work In Progress: DPDK-AE (Acceleration Enhancements)
From DPDK Summit 2015 - Intel - Keith Wiles
http://www.slideshare.net/jstleger/dpdk-summit-2015-intel-keith-wiles
DPDK Crypto Subsystem
• Session-less Mode
− For each job, software defines:
The data to be operated upon (input buffers, lengths, offsets)
The output buffers to hold results
The cryptographic operations to be performed
Keys & context for the cryptographic operations
• Session Oriented Mode
− For each job, software defines:
The data to be operated upon (input buffers, lengths, offsets)
The output buffers to hold results
− Cryptographic operations, keys & context are defined at session establishment time, and referenced for each job
• Supports virtual and physical crypto devices
− Virtual Device (Software Implementation)
Intel AES-NI/vector operations
ARM NEON instructions *
− Physical Device (Hardware Accelerated)
QAT
DPAA-CAAM*
DPAA2-CAAM*
• Test Applications
− L2fwd with crypto
− IPSec forwarding application
DPDK Crypto APIs
• Device creation and configuration: rte_cryptodev_configure, rte_cryptodev_queue_pair_setup
• Device capabilities: rte_cryptodev_info_get
• Pool creation: rte_crypto_op_pool_create, rte_crypto_op_alloc
• Session management: rte_cryptodev_sym_session_create, rte_cryptodev_sym_session_free
• Packet operations: rte_cryptodev_enqueue_burst, rte_cryptodev_dequeue_burst
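The call sequence above can be sketched as follows. Hedged: the exact structs and signatures vary across DPDK releases (this roughly follows the DPDK 16.x cryptodev API, contemporary with this talk; rte_cryptodev_sym_session_create in particular changed shape in later releases), and the descriptor count and algorithm are illustrative.

```c
#include <rte_cryptodev.h>
#include <rte_crypto.h>

/* Sketch: bring up one crypto device with a single queue pair and an
 * AES-128-CBC encrypt session. */
static int cryptodev_example(uint8_t dev_id)
{
    struct rte_cryptodev_info info;
    rte_cryptodev_info_get(dev_id, &info);       /* query capabilities */

    struct rte_cryptodev_config conf = {
        .nb_queue_pairs = 1,
        .socket_id = 0,
    };
    if (rte_cryptodev_configure(dev_id, &conf))
        return -1;

    struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
    if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf, 0))
        return -1;

    /* Symmetric session: a single cipher transform */
    struct rte_crypto_sym_xform xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_CBC,
        },
    };
    struct rte_cryptodev_sym_session *sess =
        rte_cryptodev_sym_session_create(dev_id, &xform);
    if (sess == NULL)
        return -1;

    /* Ops carrying the session are then pushed/pulled in bursts:
     * rte_cryptodev_enqueue_burst(dev_id, 0, ops, n);
     * rte_cryptodev_dequeue_burst(dev_id, 0, ops, n); */
    return 0;
}
```

The same burst calls work whether dev_id names a software device (AES-NI, NEON) or a hardware one (QAT, SEC), which is the point of the cryptodev abstraction.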
DPDK IMPLEMENTATION ON NXP PLATFORM
QorIQ DPDK Support
[Diagram, repeated from the earlier "QorIQ DPDK Support" slide: DPDK applications over the DPDK API, with Ethernet poll-mode drivers (Intel NICs, DPAA1, DPAA2, iNIC), crypto drivers (Intel QAT, SEC), KNI, EAL and run-time services, VFIO/LS-bus resource management, and architecture support for x86, ARMv8, Power8, and Tilera.]
DPDK: Integration with DPAA2 Framework
[Diagram: a user application with threads Th1-Th8 runs on the Ethernet poll-mode driver framework (mempool/mbuf); each thread has its own QBMan software portal (A-H); DPNI #1 and DPNI #2 distribute traffic into frame queues consumed through the portals.]
DPDK – Device Management with VFIO
• Step 0: NW objects created. The LS-bus driver scans the bus and adds devices to the kernel or a VFIO container.
• Step 1: App initializes DPDK. The driver framework scans the VFIO group and initializes devices (DPNI, DPSECI, DPCONC, DPIO).
• Step 2: App has its own tools and configures devices via the Eth extension API.
• Step 3: App does I/O via queues.
[Diagram: DPDK API over a driver framework (discovery, reset, I/O, init, config, mapping) with Eth NI, SEC, and DPIO drivers; the MC bus maps LS-bus NW objects (Ethernet, SEC) managed by the MC resource manager; QBMan FLib and MC FLib access QM/BM, WRIOP, and SEC hardware; the Linux NW stack provides routing/ARP via netdev and cryptodev frameworks.]
APPLICATION INTEGRATION
Networking & Security Package
• Features
− Data-path
IPv4/v6 Routing/Forwarding
NAPT, Firewall, IPSec
− Control-Path
Network-Function API
Integrated with Linux NW stack
Seamless integration with open-source SW like StrongSwan, iptables, etc.
• Intended deployment
− SMB/Enterprise/WLAN Gateways
− Services cards
− Cell-site routers, Access concentrators.
− Network functions – vCPE, vAccess
• NXP value proposition
− Data-path entirely offloaded to HW, GPP free for customer apps
− Highest performance/watt – maximum leverage of NXP HW offloads
− Rich feature-set = fast deployment / time-to-market
− Easy-to-use NF API to integrate with customer stacks
[Diagram: Networking & Security Package architecture: the data path (NAPT, firewall, IPv4/v6 routing, IPSec, VLAN) runs on the AIOP behind the Network-Function API; AIOP management handles configuration, control packets, load/reset, logs, and errors; the GPP side runs the Linux NW stack (ARP/PPPoE, IPv4/v6 routing, iptables, conn-track, XFRM, TC) with KNI interfaces, offload services, and open-source tools (StrongSwan, racoon, setkey, ifconfig, ip, ethtool, tc, iptables); customer applications, network stacks, and control planes integrate via DPDK/ODP and the NF API over Ethernet and SEC interfaces. Legend: HW components, SDK components, ADK components, open-source SW, customer SW.]
Layerscape Platform NFV Solution Architecture
[Diagram: NFV stack on the Layerscape platform: vNFs (a vVPN with StrongSwan IKE on Ubuntu; a vRouter/firewall on OpenWRT with Open FastPath and DPDK; 3rd-party vNFs reused unmodified; NXP sample vNFs on CentOS using virtio-net/crypto or direct VFIO assignment of virtual NIC/SEC, running DPDK L3 forwarding and IPSec forwarding, netperf, and OpenSSL speed tests) run over compute virtualization (KVM, QEMU, libvirt, Docker, Ceph) and I/O and network virtualization (OVS-kernel; OVS-DPDK with virtio-net and vhost-user; HW-assisted OVS and service chaining via the packet engine), orchestrated by OP-NFV (OpenStack with Fuel/Apex installers, OpenContrail/OpenDaylight). NXP enablement for NFV is upstreamed to the community with competitive performance, plus NXP HW assists for extra performance.]
Component versions:
• OP-NFV: Brahmaputra
• DPDK: v2.2+
• OVS: v2.4/2.5 (OVS-DPDK, OVS packet engine)
• KVM: v2.2
• QEMU: v2.5
• libvirt: 1.2.20
• Linux: LTS kernel 4.1.2
• Orchestration: OpenDaylight
• Reference vNFs: open source – vRouter, vFW (iptables), vVPN (StrongSwan)
• Distro: UEFI, CentOS
Session Summary
• SDN and NFV are market drivers for cross-platform API standards for efficient user-space packet processing with hardware acceleration.
• ODP, originally from the ARM ecosystem, has rapidly emerged as a cross-platform API for SoCs, allowing leverage of a range of acceleration features, including crypto.
• DPDK, from the IA ecosystem, started as IA-centric APIs for a narrower range of accelerators, but with DPDK-AE it is beginning to support hardware accelerators.
• ODP and DPDK are rapidly emerging as the cross-platform API standards with a fully engaged ecosystem.
• NXP is actively engaged in, and is driving, the definition of the ODP API within Linaro.
• NXP is committed to providing efficient implementations of the ODP API across QorIQ platforms.
• NXP also provides an efficient implementation of DPDK across QorIQ platforms.
Resources
• Software support for LS208x – NXP SDK
• Open Data Plane (www.opendataplane.org)
• DPDK (www.dpdk.org)
Visit us in the Tech Lab – #246: Accelerate User-space Networking with QorIQ Processors
Visit us in the Tech Lab – #245: Leveraging DPDK for Virtualized Network Functions on DPAA2.0
Software Products and Services
Visit us in the Tech Lab – #247
Simplify Software Engagement with NXP – Accelerate Customer Time-to-Market
Deliver Commercial Software, Support, Services and Solutions – Create Success!
• Categories: Linux® Services, Integration Services, Development Tools, Solutions Reference, Runtime Products
• Offerings: Security Consulting, Hardened Linux, IoT Gateway, OpenWRT+, CodeWarrior, VortiQa Software Solutions, Commercial Support, Performance Tuning
Find us online at www.nxp.com/networking-services
ATTRIBUTION STATEMENT
NXP, the NXP logo, NXP SECURE CONNECTIONS FOR A SMARTER WORLD, CoolFlux, EMBRACE, GREENCHIP, HITAG, I2C BUS, ICODE, JCOP, LIFE VIBES, MIFARE, MIFARE Classic, MIFARE
DESFire, MIFARE Plus, MIFARE FleX, MANTIS, MIFARE ULTRALIGHT, MIFARE4MOBILE, MIGLO, NTAG, ROADLINK, SMARTLX, SMARTMX, STARPLUG, TOPFET, TrenchMOS, UCODE, Freescale,
the Freescale logo, AltiVec, C-5, CodeTEST, CodeWarrior, ColdFire, ColdFire+, C-Ware, the Energy Efficient Solutions logo, Kinetis, Layerscape, MagniV, mobileGT, PEG, PowerQUICC, Processor Expert,
QorIQ, QorIQ Qonverge, Ready Play, SafeAssure, the SafeAssure logo, StarCore, Symphony, VortiQa, Vybrid, Airfast, BeeKit, BeeStack, CoreNet, Flexis, MXC, Platform in a Package, QUICC Engine,
SMARTMOS, Tower, TurboLink, and UMEMS are trademarks of NXP B.V. All other product or service names are the property of their respective owners. ARM, AMBA, ARM Powered, Artisan, Cortex,
Jazelle, Keil, SecurCore, Thumb, TrustZone, and μVision are registered trademarks of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. ARM7, ARM9, ARM11, big.LITTLE, CoreLink,
CoreSight, DesignStart, Mali, mbed, NEON, POP, Sensinode, Socrates, ULINK and Versatile are trademarks of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. All rights reserved. Oracle and
Java are registered trademarks of Oracle and/or its affiliates. The Power Architecture and Power.org word marks and the Power and Power.org logos and related marks are trademarks and service marks
licensed by Power.org. © 2015–2016 NXP B.V.