Introduction to SDN
APRICOT2018/APNIC45
26 Feb 2018
Kathmandu
Tashi Phuntsho
Senior Network Analyst
tashi@apnic.net
Agenda
• Evolution of routers
• The Clean Slate project
• OpenFlow
• Emergence and evolution of SDN
• SDN architecture today
• Use cases
• Standards development
• Comparing and contrasting with NFV
• OpenFlow Demo
Routers
• Two key roles:
– Determining network paths
– Packet forwarding
Today’s router
[Figure: functional blocks of a modern router]
• Hardware: network interfaces, CPUs, ASICs/NPUs, switch fabric, control memory, (T)CAM, FIB, packet memory, traffic managers
• Network layer: routing protocols, RIB
• Services layer: IP, L2, L3
• Application layer: DPI etc.
• Management: CLI, SNMP, FCAPS, accounting
• High availability: resiliency protocols, hardware redundancy
• QoS: queue management, scheduling algorithms
• Security: AAA, CPU protection
Planes
Control plane: determines how packets should be switched/forwarded
• Developed by various SDOs
• Needs to be interoperable
• Strives to maintain backwards compatibility
• Sometimes takes years to achieve stability
Data plane: responsible for the actual forwarding of packets
• Hardware-dependent and closed
• Used by vendors to provide differentiation
• Can be fairly complicated, incorporating a number of inline functions e.g. ACLs, QoS, NAT
Management plane: FCAPS (Fault, Configuration, Accounting, Performance & Security)
• Uses a combination of standard (e.g. SNMP) and non-standard tools such as CLI
• Generally requires low-level operator input
[Figure: a forwarding device hosting the data plane, with the control and management planes above it and an element/network management system driving the management plane]
How did we get here?
Distribution of complexity
• ‘End-to-end principle’
• Better scaling
• Survivability; spreading of risk
Backwards compatibility
• “Flag days” not realistic
• Short-term, incremental evolution of technology; no major overhaul in the last 20 years
Unanticipated applications
• Networking is a victim of its own success
• New applications have been delivered on top of existing capabilities
Need for higher performance
• Tight coupling between the different planes seen as critical for delivering higher performance
Clean Slate Project (1)
Mission: Re-invent the Internet
Two research questions:
• With what we know today, if we were to start again with a clean slate, how would we design a global communications infrastructure?
• How should the Internet look in 20-30 years from now?
Clean Slate Project (2)
• One of the flagship projects was ‘Internet Infrastructure:
OpenFlow and Software Defined Networking’
• Seminal paper on OpenFlow (‘OpenFlow: Enabling Innovation in Campus Networks’, McKeown et al., 2008)…
... kicked off the SDN movement and the data
communications world would never be the same again
OpenFlow: The Problem
• Initial Problem:
– A mechanism was required
for researchers to run
experimental network
protocols.
– Open software platforms did
not provide the required
performance and
commercial solutions were
too closed and inflexible.
[Figure: a closed system, with software tightly coupled to hardware and only the functionality exposed by the vendor available]
Challenge: how do we influence packet forwarding behaviour?
OpenFlow: The Solution (1)
• Control plane: protocols and algorithms to calculate forwarding paths (routing/bridging protocols, RIBs, routing policy and logic)
• Data plane: forwarding frames/packets (forwarding tables) based on paths calculated by the control plane
[Figure: FROM devices with collocated control and data planes TO data-plane-only devices (secure channel + abstracted flow table) driven by an OpenFlow controller over the OpenFlow protocol]
OpenFlow: The Solution (2)
[Figure: OpenFlow controller driving an abstracted flow table on the network element, via the OpenFlow protocol over a secure channel]
The solution? A compromise:
• that allowed switching/routing
decisions to be influenced
without opening up network
software
– The control process would run on a
controller
– Decisions would be pushed down to
the data plane running on the network
element
OpenFlow: How it works (1)
[Figure: the OpenFlow controller adds, deletes and modifies entries in the switch's abstracted flow table via the OpenFlow protocol over a secure channel]
* Header fields: Ingress Port, Ethernet SA, Ethernet DA, VLAN ID, VLAN PCP, IP SA, IP DA, IP Proto, IP ToS, Source L4 Port, Dest L4 Port, etc.

Header Fields*   Actions               Counters
Flow 1           Forward to port 1/1   …
Flow 2           Drop                  …
Flow n           Send to controller    …

• The switch forwards traffic by matching packets against the header fields and executing the corresponding actions (a toy sketch follows)
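To make the match-plus-action idea concrete, here is a minimal, self-contained Python sketch of how a switch might evaluate such a flow table. It is purely illustrative: the field names, wildcard convention and action strings are stand-ins, not OpenFlow's actual wire format.

```python
# Toy flow table: each entry has match fields, an action and a packet counter.
# An empty match dict acts as the table-miss (wildcard) entry.
FLOW_TABLE = [
    {"match": {"ip_dst": "10.0.0.1", "tcp_dst": 80}, "action": "forward:1/1", "packets": 0},
    {"match": {"ip_dst": "10.0.0.2"},                "action": "drop",        "packets": 0},
    {"match": {},                                    "action": "controller",  "packets": 0},
]

def lookup(packet):
    """Return the action of the first entry whose match fields all agree with the packet."""
    for entry in FLOW_TABLE:
        if all(packet.get(field) == value for field, value in entry["match"].items()):
            entry["packets"] += 1          # per-entry counters, as in the slide's table
            return entry["action"]
    return "controller"                    # no match at all: punt to the controller

print(lookup({"ip_dst": "10.0.0.1", "tcp_dst": 80}))   # forward:1/1
print(lookup({"ip_dst": "10.0.0.2", "tcp_dst": 22}))   # drop
print(lookup({"ip_dst": "192.0.2.9"}))                 # controller (table-miss entry)
```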
OpenFlow: How it works (2)
[Figure: one OpenFlow controller speaking the OpenFlow protocol to the secure channel and abstracted flow table of Switch 1, Switch 2, … Switch n]
• One controller manages many switches
OpenFlow: Today
• Initially synonymous with SDN
• Today, relegated to being just a part of the greater
SDN architecture, with other protocols competing
(complementing) in the same space
• However, responsible for the most radical paradigm
shift in IP in recent times
OpenFlow: Implications
• Two primary implications:
The control plane (processes to determine how
traffic is handled) is physically decoupled from
the data plane (forwarding traffic according to
decisions passed down by the control plane).
The control plane is consolidated and
centralised: a single software control plane
controls multiple data planes
(previously a 1:1 correspondence).
Aside: data/control plane separation challenges
Scalability
The control element
now needs to be
scaled to support a
very large number
of forwarding
elements
Reliability
The controller can
NOT be a single
point of failure
Consistency
When multiple
controllers are used
(redundancy),
consistency has to be
assured across
multiple replicas
The birth of SDN
The separation of control and data plane was not an objective in
itself but was a consequence of the compromise approach taken
by OpenFlow
Ushered in a new era of programmability that has been
vastly enhanced with new architectures and capabilities
The term ‘SDN’ itself was coined in an article about the
OpenFlow project at Stanford
(http://www2.technologyreview.com/news/412194/tr10-software-defined-networking/)
Emergence and Evolution
• OpenFlow was a starting point…
– Ushered in an era of programmability
– But a complete decoupling of the control plane and data
plane is not practical:
• Difficult to solve all the problems the industry had spent decades
working on and refining: resiliency, scalability, convergence,
redundancy, etc.
• SDN architecture today
– Hybrid: some elements of the control plane still remain
distributed while others are centralised
– Many different architectural models
• All aspire to achieve the goals of agility and network programmability
Hybrid model of SDN
[Figure: spectrum of how much of the control plane is centralised, from 0% to 100%]
• Today's model (0% centralised): control plane is fully distributed (collocated with the data plane)
• Hybrid model: certain control plane functions are centralised while others continue to be distributed with the data plane
• OpenFlow model (100% centralised): control plane is completely de-coupled from the data plane
Defining SDN
ONF: "The physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices."
Too narrow…
SDN is a new approach to networking that provides greater agility and flexibility through:
• Enhanced programmability and open interfaces
• Dis-aggregation and abstraction
• Centralisation of network control with real-time network visibility
Objectives and benefits
• Agility: service provisioning; network provisioning
• Automation: service automation; quicker introduction of new services
• CAPEX/OPEX reduction: reduction in hardware and network operations costs
• Programmability: abstraction via simplified open interfaces
• Centralised control: end-to-end service and network management; end-to-end optimisation
SDN SDOs
SDN Architectural Framework (1)
ITU-T Y.3300:
[Figure: SDN applications on top, SDN controllers in the middle and network resources at the bottom, linked by the application control interface (applications to controllers) and the resource control interface (controllers to network resources)]
SDN Architectural Framework (2)
RFC 7426:
[Figure: application plane (applications and services) above a network services abstraction layer; control plane with its control abstraction layer (CAL) and management plane with its management abstraction layer (MAL), each exposing service interfaces; a device & resource abstraction layer (DAL) over the forwarding plane and operational plane of the network device, reached via the CP and MP southbound interfaces]
SDN Architectural Framework (3)
[Figure: an example SDN stack mapped across the planes]
• Application plane: application services such as topology discovery & management
• Northbound interfaces: REST / RESTCONF / NETCONF / XMPP
• Control plane (controller): traffic engineering, route selection & failover, resource management; network services abstraction layer; device & resource abstraction layer (DAL)
• Southbound interfaces: BGP-LS, PCEP, i2RS, SNMP MIBs, OpenFlow, NETCONF, YANG configuration, IPFIX
• Data plane (with some distributed control plane elements): network devices (IP/MPLS/transport) running BGP, PCC, RIBs/FIBs, Segment Routing, RSVP-TE; east/west-bound interfaces such as BGP
Note: designations of north-bound and south-bound are relative to the control plane ("controller")
Evolution NOT Revolution
• Despite the hype, SDN is an evolution of current
network technologies
• No one protocol that defines SDN
– it is a new architectural framework for data networks
• Protocols/technologies that enable:
– centralising control plane
– abstracting networks and topologies
– enhancing programmability via standard interfaces
are considered a part of the SDN framework of technologies
• Introduction of any of these can be considered to be
SDN-enabling the network
Enabling SDN
There is no one protocol that defines SDN… starting from today's network:
• Implement Segment Routing with a PCE → SDN ✓
• Implement OpenFlow → SDN ✓
• Implement NETCONF/YANG → SDN ✓
... all of these qualify as having implemented SDN in the network (a NETCONF/YANG example is sketched below)
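As an illustration of the NETCONF/YANG route, the sketch below uses the ncclient Python library to push a small configuration change. This is an assumption-laden example: the slides do not prescribe a tool, and the device address, credentials, interface name and YANG-modelled payload are placeholders that would differ per platform.

```python
# Hedged sketch: push a YANG-modelled configuration change over NETCONF using ncclient.
# The host, credentials, interface name and payload are placeholders, not a real device.
from ncclient import manager

CONFIG = """
<config>
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0/1</name>
      <description>Uplink to AS2 (managed by controller)</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    m.edit_config(target="running", config=CONFIG)  # some platforms need target="candidate" plus commit()
    print(m.get_config(source="running").data_xml[:200])
```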
Comparing/contrasting with NFV
[Figure: FROM tightly coupled software on purpose-built hardware TO virtualised software on COTS hardware]
SDN: decouples elements of the control plane from the data plane
NFV: decouples network software from closed, proprietary hardware
systems
Open source projects
• NOX/POX
• Beacon
• OpenDayLight (ODL)
• Open Network Operating System (ONOS)
• Ryu
• OpenContrail
• Floodlight
• ……………
What more?....
• OpenFlow is an interface between the control plane
and forwarding plane
– based on Match and Actions
• Instead of just manipulating the forwarding plane,
can we
– Implement Match+Action on the hardware itself?
– Better performance and greater flexibility
Reconfigurable Switches
• Current switches work on the MMT (multiple match tables) model
– Pipelined stages
• But only a small number of tables, whose size and
execution (pipeline) order are fixed at fabrication
– Limiting flexibility!
– Only a limited set of actions (forward, drop, tagging,
encapsulation)
• The idea of RMT (reconfigurable match tables), sketched after this list:
– Match fields can be modified or new ones added (reconfigurable parser)
– Match table sizes can be configured
– New actions (based on match) can be written
– Packets can be placed on specified output queues
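A rough Python sketch of the RMT idea (illustrative only; this is not how a real switch ASIC is programmed): table shapes, match fields and actions are supplied when the pipeline is configured rather than fixed at fabrication.

```python
# Illustrative sketch of the RMT idea: match fields, table sizes and actions are
# chosen when the pipeline is configured, not baked in at fabrication.

class MatchTable:
    def __init__(self, fields, size):
        self.fields, self.size, self.entries = fields, size, []

    def add(self, match, action):
        if len(self.entries) >= self.size:
            raise RuntimeError("table full")
        self.entries.append((match, action))

    def process(self, pkt):
        for match, action in self.entries:
            if all(pkt.get(f) == match[f] for f in self.fields if f in match):
                return action(pkt)
        return pkt                              # miss: pass the packet through unchanged

# Actions are plain functions over the packet's header fields.
def set_queue(q):   return lambda pkt: {**pkt, "queue": q}      # place on a specified output queue
def push_vlan(vid): return lambda pkt: {**pkt, "vlan": vid}     # a "new" action added at config time

# "Configuration time": build a two-stage pipeline with operator-chosen shapes.
pipeline = [MatchTable(fields=["ip_dst"], size=1024),
            MatchTable(fields=["tcp_dst"], size=256)]
pipeline[0].add({"ip_dst": "10.0.0.1"}, push_vlan(100))
pipeline[1].add({"tcp_dst": 443},       set_queue(7))

pkt = {"ip_dst": "10.0.0.1", "tcp_dst": 443}
for table in pipeline:
    pkt = table.process(pkt)
print(pkt)   # {'ip_dst': '10.0.0.1', 'tcp_dst': 443, 'vlan': 100, 'queue': 7}
```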
P4
• Programming Protocol-independent packet processors
• Language that programs switches (reconfigurable)
– Not constrained by fixed switch designs
• Three main goals:
– Reconfigurability: the controller should be able to redefine packet parsing and processing in the field
– Protocol independence: not limited to specific packet formats (network protocols) or a fixed pipeline of match+action tables
– Target independence: the controller need not know the underlying switch hardware (that is the compiler's job)
P4 vs OF
• P4 tells the switch what to do
– Instead of the switch dictating the limited set of things it can do
• P4 uses programmable parser
– New header fields can be defined, along with which headers a switch should recognise
– OF parsing is based on known header fields
• Match and Action can be in series or parallel in P4
– Match+Action in series in OF
• P4 is a language
– OF is a fixed protocol
Simple use case
Background
• Flows from R1 → R4 need to take the R2 → R3 path
• Flows from R5 → R8 need to take the R6 → R7 path
• The flows need disjoint paths!
[Figure: topology of routers R1–R8]
Problem (general)
• Currently:
– Not possible to manipulate the
forwarding table (only through the RIB)
[Figure: routers R1–R8, each performing distributed best path computation]
Solution – centralised controller
• With a controller:
– We can manipulate the forwarding table
to provision separate paths for the flows
[Figure: a centralised controller with network visibility programming routers R1–R8]
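A toy illustration of what global visibility enables: compute a path for the second flow while excluding the links already claimed by the first, then (in a real deployment) install the results into each device's forwarding table. The topology below is a made-up stand-in for the R1–R8 diagram.

```python
from collections import deque

# Hypothetical topology (symmetric adjacency list) standing in for the R1-R8 diagram.
LINKS = {
    "R1": ["R2", "R6"],
    "R2": ["R1", "R3", "R5", "R6"],
    "R3": ["R2", "R4", "R7"],
    "R4": ["R3", "R7"],
    "R5": ["R2", "R6"],
    "R6": ["R1", "R2", "R5", "R7"],
    "R7": ["R3", "R4", "R6", "R8"],
    "R8": ["R7"],
}

def shortest_path(src, dst, forbidden=frozenset()):
    """BFS shortest path that never crosses a link in `forbidden`."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in forbidden:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Pin the first flow to R1-R2-R3-R4, then keep the second flow off those links.
path_a = ["R1", "R2", "R3", "R4"]
used = {frozenset(edge) for edge in zip(path_a, path_a[1:])}
path_b = shortest_path("R5", "R8", forbidden=used)
print(path_a, path_b)   # ['R1', 'R2', 'R3', 'R4'] ['R5', 'R6', 'R7', 'R8']
```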
Use Case-1
Background
• AS1 is an internet service
provider to end-customers,
typically enterprises
• It peers with two upstream
providers, AS2 and AS3
• Distribution of traffic inside AS1:
– Multiple RSVP-TE LSPs are used on the inter-router links and traffic is load-balanced over them, providing crude traffic engineering to ensure better utilisation of links and prevent congestion
– To influence one set of links over
another, multiple parallel LSPs may be
created
[Figure: AS1 with routers R1–R4 and internal LSPs, two upstream peering/transit providers (AS2 and AS3), and customers 1 and 2]
Problem statement 1
• Uneven distribution of inbound
traffic at BGP peering point
– Typical case of unbalanced inbound
traffic with potential to overrun
capacity
– Typical BGP attributes are used to control inbound traffic into AS1, predominantly AS-path prepending
– Laborious, manual process: traffic
levels are monitored, traffic data is
analysed, BGP policies changed
and then applied. Frequently,
traffic patterns have changed by
the time the new policy is applied.
[Figure: the same AS1 topology: R1–R4, internal LSPs, upstreams AS2/AS3, customers 1 and 2]
Problem statement 2
• Unbalanced traffic on internal links, depending on the peering point where the bulk of inbound traffic enters AS1
• Traffic on R1 has to be
manually forced into LSPs to
provide better utilisation of
internal links and prevent
congestion
[Figure: the same AS1 topology: R1–R4, internal LSPs, upstreams AS2/AS3, customers 1 and 2]
Requirements
• Provide complete automation of current manual
process for influencing inbound traffic and balancing
traffic over internal links
• Monitor link utilisation of both internal and external
links
• When utilisation exceeds pre-defined (and
configurable) thresholds, automatically trigger
mechanisms to balance traffic flows:
– For external links, this will translate to influencing inbound traffic by
manipulating AS-path attribute length (will require intelligent analysis
to determine routes to which this will apply). For outbound traffic,
manipulation of LOCAL_PREF etc. will be required.
– For internal links, a mechanism is needed to provide intelligent traffic balancing (a simplified sketch of the overall loop follows).
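A highly simplified sketch of the requirement as an automation loop. The link data below is fabricated stand-in input; in practice utilisation would come from SNMP/telemetry, the top-N prefixes from IPFIX analysis, and the two "apply" functions would drive the BGP policy push and the traffic steering described in the following slides.

```python
# Simplified sketch of the monitoring/trigger loop. The link data below is
# fabricated; real input would come from SNMP/telemetry (utilisation) and
# IPFIX top-N analysis, and the "apply" functions would talk to the routers.

THRESHOLD = 0.80   # configurable utilisation threshold

links = [
    {"name": "R1-AS2", "external": True,  "util": 0.91,
     "top_prefixes": ["203.0.113.0/24", "198.51.100.0/24"]},
    {"name": "R1-R3",  "external": False, "util": 0.62, "top_prefixes": []},
]

def apply_prepend_policy(link, prefixes):     # placeholder for the BGP policy push
    print(f"{link}: prepend AS-path for {prefixes}")

def steer_flows(link, prefixes):              # placeholder for OpenFlow/LSP steering
    print(f"{link}: steer {prefixes} onto an alternate LSP")

for link in links:
    if link["util"] < THRESHOLD:
        continue                              # below threshold: nothing to do
    if link["external"]:
        apply_prepend_policy(link["name"], link["top_prefixes"])
    else:
        steer_flows(link["name"], link["top_prefixes"])
```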
Solution: inbound traffic
• Inbound traffic to AS1 from
upstream peers
– Monitor inbound traffic on links with
upstream peers (may be LAG or ECMP):
• Threshold crossing alerts (TCA)
• Flow stats with IPFIX
• Generate top-N lists based on
destination prefixes (to identify
subnets to be manipulated)
– When TCA event is triggered, initiate a
BGP policy update (AS-path prepending)
to apply to the top-N traffic contributors
• Once policy is constructed, it needs to
be pushed down to R1 and R2
[Figure: R1 and R2 peering with AS2 and AS3; step 1: monitor inbound traffic on the peering links, step 2: push the BGP policy to R1 and R2]
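To illustrate the top-N and prepend steps above, here is a small self-contained sketch that aggregates flow records by destination prefix and emits a policy snippet. The record format and the generated policy syntax are invented for illustration; they are not IPFIX export format or any vendor's CLI.

```python
from collections import Counter
from ipaddress import ip_network

# Fabricated flow records (destination IP, bytes); real input would come from IPFIX.
FLOWS = [("203.0.113.10", 9_000_000), ("203.0.113.77", 4_000_000),
         ("198.51.100.5", 7_500_000), ("192.0.2.33", 900_000)]

def top_n_prefixes(flows, n=2, prefix_len=24):
    """Aggregate bytes per destination /prefix_len and return the heaviest n prefixes."""
    totals = Counter()
    for dst, nbytes in flows:
        totals[str(ip_network(f"{dst}/{prefix_len}", strict=False))] += nbytes
    return [prefix for prefix, _ in totals.most_common(n)]

def prepend_policy(prefixes, prepends=3):
    """Emit an illustrative, vendor-neutral inbound policy for the chosen prefixes."""
    return "\n".join(
        f"policy inbound-balance term {i}: match {prefix} then as-path-prepend x{prepends}"
        for i, prefix in enumerate(prefixes, start=1))

print(prepend_policy(top_n_prefixes(FLOWS)))
```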
Solution: outbound traffic
• Outbound traffic from AS1 to
upstream peers
– Monitor outbound traffic on links with upstream peers:
• Threshold crossing alerts (TCA)
• Flow stats with IPFIX
• Generate top-N lists based on
destination prefixes (to identify
subnets to be manipulated)
– When TCA event is triggered, initiate a
BGP policy update (set LOCAL_PREF) to
apply to the top-N traffic contributors to
make congested next-hop less
preferable:
• Once policy is constructed, it needs
to be pushed down to R1 and R2
[Figure: AS1 with LSP1 and LSP2, upstreams AS2/AS3 and customers; step 1: monitor outbound traffic, step 2: push the LOCAL_PREF policy to R1 and R2]
Solution: intra-AS traffic
• Intra-AS inter-router links (in the
case that the external peering
links are not themselves
congested)
– Monitor traffic on internal inter-router
links:
• Threshold crossing alerts (TCA)
• Flow stats with IPFIX
• Generate top-N lists based on
destination prefixes (to identify
subnets to be manipulated)
– When TCA event is triggered, use
OpenFlow to steer flows off the
congested link onto an LSP on an
alternate physical link.
[Figure: AS1 internal links carrying LSP1 and LSP2; step 1: monitor the internal inter-router links, step 2: steer flows off the congested link onto an LSP on an alternate physical link]
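For the OpenFlow steering step, here is a hedged sketch of pushing a flow rule through a controller's REST API. The URL and JSON layout are modelled on what older OpenDaylight releases exposed via RESTCONF and should be treated as assumptions to verify against the deployed controller; the controller address, node id and port numbers are placeholders.

```python
# Assumption-heavy sketch: install a flow that steers traffic for a congested prefix
# out an alternate port, via the controller's REST API. The URL and JSON layout are
# modelled on older OpenDaylight RESTCONF releases; verify against the deployed
# controller version. Addresses, node id and port numbers are placeholders.
import requests

ODL = "http://192.0.2.100:8181"
FLOW_ID = "steer-203-0-113-0-24"

flow = {"flow-node-inventory:flow": [{
    "id": FLOW_ID,
    "table_id": 0,
    "priority": 200,
    "match": {
        "ethernet-match": {"ethernet-type": {"type": 2048}},   # IPv4
        "ipv4-destination": "203.0.113.0/24",
    },
    "instructions": {"instruction": [{
        "order": 0,
        "apply-actions": {"action": [{
            "order": 0,
            "output-action": {"output-node-connector": "3"},   # port towards the alternate LSP
        }]},
    }]},
}]}

url = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/node/openflow:1/"
       f"flow-node-inventory:table/0/flow/{FLOW_ID}")
resp = requests.put(url, json=flow, auth=("admin", "admin"),
                    headers={"Content-Type": "application/json"})
print(resp.status_code)
```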
How SDN can help
• The solution elements described for addressing this
use case are quite disparate and require co-
ordination between a number of different tasks:
– Link utilisation monitoring
– Generation of alerts on traffic threshold crossing
– Collection of flow information
– Analysis of flow information to identify "top talkers"
– Crafting of BGP policy to influence traffic
– Application of BGP policy to routers
– OpenFlow-based traffic steering
• In the absence of SDN, there are very few viable
solutions to address all of these in a holistic manner
Mapping to SDN
[Figure: the architectural framework from "SDN Architectural Framework (3)", annotated with where each solution element sits: link utilisation monitoring, flow info collection and analysis, BGP policy, application of BGP policy, and OpenFlow-based traffic steering]
Use Case-2
Eolo’s BLU Project
• Their own router
– TileGX (72 core CPU)
• Their own Router OS
– BLUos – based on 6WINDGate
– Customisation: RFC 3107 (labelled BGP) support in Quagga BGP
• Their own controller
– BLU-GW
BLU Project – Stage 1
• OpenFlow rules for MPLS label switching
• RFC3107 for traffic labelling (downstream)
• Problems:
– OpenFlow granularity issues
– A change to a single flow required all BLUs along the path to be reprogrammed
BLU Project – Stage 2
• Segment Routing + RFC3107 for traffic labelling
– MPLS dataplane
Contact them: blu@eolo.it
OpenFlow demo
• OpenDayLight Controller
• Mininet
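A minimal Mininet script in the spirit of the demo (an assumption about the setup, since the slides only name the tools): it builds a single-switch topology, attaches it to a remote OpenDaylight controller and runs a connectivity test. The controller IP and OpenFlow port are placeholders, and the script needs root privileges.

```python
#!/usr/bin/env python
# Minimal Mininet script: one switch, three hosts, handed to a remote controller
# (e.g. OpenDaylight). Run as root; controller IP and OpenFlow port are placeholders.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import SingleSwitchTopo
from mininet.cli import CLI

net = Mininet(topo=SingleSwitchTopo(k=3), switch=OVSSwitch,
              controller=None, autoSetMacs=True)
net.addController("c0", controller=RemoteController,
                  ip="192.0.2.100", port=6633)   # 6653 on newer OpenFlow setups
net.start()
net.pingAll()      # forwarding behaviour now depends on the flows the controller installs
CLI(net)           # drop into the Mininet CLI for manual testing
net.stop()
```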
THANK YOU
#apricot2018
19 – 28 February 2018, Kathmandu, Nepal
