Today’s traditional network paradigm: the control and data planes reside within the physical device.
Control plane: routing protocols, spanning tree, SYSLOG, AAA, NetFlow, CLI, SNMP. These are handled by the switch CPU; packets arrive on the order of thousands of packets per second.
Data plane: Layer 2 switching, Layer 3 switching, MPLS forwarding, VRF forwarding, QoS (marking, classification, policing), NetFlow flow collection, security access control lists. Dedicated hardware ASICs are available; millions or billions of packets per second.
Over the years this network paradigm remained mostly intact... until around 2012, when SDN emerged.
SDN definition: SDN is an approach to building computer networks that separates and abstracts elements of these systems. Here, separation refers to the separation of the control and data planes.
In the SDN paradigm, not all processing happens inside the same device: the control plane is pulled out of the physical device and runs somewhere else in the network. The device is still likely to retain a local control plane, but the two control planes serve different functions.
Where did SDN come from? Stanford University.
An important point to keep in mind: OpenFlow does not equal SDN. It is one tool in the SDN toolbox.
What is OpenFlow?
OpenFlow is a Layer 2 communication protocol that gives access to the forwarding plane of a network switch or router over the network.
1) Original Motivation:
– Research community’s desire to be able to experiment with new control paradigms
2) Base Assumption
– Providing a reasonable abstraction for control requires the control system topology to be decoupled from the physical network topology (as in the top-down approach)
Starting point: data-plane abstraction: separate the control plane from the devices that implement the data plane
3) OpenFlow was designed to facilitate separation of control and data planes in a standardized way
4) Current spec is both a device model and a protocol
– OpenFlow device model: an abstraction of a network element (switch/router); currently (version <= 1.3.0) focused on forwarding-plane abstraction.
– OpenFlow protocol: A communications protocol that provides access to the forwarding plane of an OpenFlow device.
The control and data planes communicate through the OpenFlow protocol.
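As a concrete illustration of that protocol framing, every OpenFlow message starts with the same 8-byte header (version, type, total length, transaction id). A minimal sketch in Python, building the OFPT_HELLO handshake message for OpenFlow 1.3:

```python
import struct

OFP_VERSION_1_3 = 0x04  # wire version byte for OpenFlow 1.3
OFPT_HELLO = 0          # message type used for the initial handshake

def ofp_hello(xid):
    """Build a minimal OpenFlow 1.3 OFPT_HELLO message.

    The common header is 8 bytes: version (1), type (1),
    total message length (2), transaction id (4), big-endian.
    """
    return struct.pack("!BBHI", OFP_VERSION_1_3, OFPT_HELLO, 8, xid)
```

A HELLO with no body is exactly the header, so its length field is 8; the controller and agent exchange these before any flow programming happens.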
Four Parts of OpenFlow
1) Controller: resides on a server and provides control-plane functions for the network
2) OpenFlow agent: resides on a network device (router/switch) and fulfils requests from the controller
3) Northbound APIs: enable applications to interface with the controller
4) OpenFlow protocol: the Layer 2 protocol that the controller and agent use to communicate
The controller acts as the abstraction layer.
Two OpenFlow switch models
1) OpenFlow-only switch: all control-plane functionality is pulled out of the switch and sits on the OpenFlow controller. The switch has two components: 1) a flow table and 2) OpenFlow interfaces. All computation happens on the OpenFlow controller, which pushes entries down to the flow table; when a packet comes in, the switch makes its forwarding decision based on the flow table.
Reactive flow instantiation:
1) The first packet of a flow triggers the controller to insert flow entries
2) Efficient use of the flow table
3) Every flow incurs a small additional flow-setup time
4) If the control connection is lost, the switch has limited utility
Proactive flow instantiation:
1) The controller pre-populates the flow table in the switch
2) Zero additional flow-setup time
3) Loss of the control connection does not disrupt traffic
4) Essentially requires aggregated (wildcard) rules
2) OpenFlow hybrid switch
Here we have an OpenFlow controller, but the switch also retains its own control plane, so some decisions are made locally by the switch. In this mode the switch has two types of interfaces: OF (OpenFlow) interfaces and regular switch interfaces.
Reactive Switch Operation
1) Data enters switch
2) Lookup key is compared to the flow table (the lookup key is created from the packet headers)
3) If Match, Forward to Switch forwarding Engine
4) If no Match, Forward to controller
5) Controller injects new Flow Entry
6) Switch Forwards Data
Proactive Switch Operation
1) OF controller programs switch Flow Table
2) Data Enters switch
3) Lookup Key Compared to Flow Table
4) If No match, DROP
5) If match, Forward to switch forwarding engine
6) Switch Forwards data.
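The two operation modes differ only in what happens on a table miss. A toy Python sketch of that logic (the flow table, lookup key, and action strings here are simplified assumptions, not the real OpenFlow structures):

```python
def handle_packet(flow_table, packet, mode, controller=None):
    """Reactive mode: a miss punts to the controller, which injects a new entry.
    Proactive mode: the table was pre-populated, so a miss means drop."""
    key = (packet["src_ip"], packet["dst_ip"])  # simplified lookup key
    if key in flow_table:
        return flow_table[key]          # match: hand to the forwarding engine
    if mode == "proactive":
        return "DROP"                   # proactive: no match means drop
    action = controller(packet)         # reactive: punt, controller decides...
    flow_table[key] = action            # ...and injects a new flow entry
    return action
```

After the first reactive punt, the new entry means subsequent packets of the same flow are forwarded without contacting the controller, which is exactly the flow-setup-time trade-off listed above.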
Each flow table entry has three fields:
1) Header fields: the match fields, such as source IP, destination IP, MAC address, VLAN ID, etc.
2) Counters: track packets and bytes per flow
3) Actions: what to do with matching packets
Who controls OpenFlow?
The Open Networking Foundation.
Deployment models in SDN terms...
1) Classic SDN
2) Hybrid SDN
Both models have a controller, but only in the hybrid model does the device keep its own control plane. When the controller talks to the data plane, in either model, the protocol is not only OpenFlow: there are multiple other protocols (PCEP, BGP-LS, etc.), and there can be vendor-specific protocols between the controller and the data plane as well (e.g. Cisco onePK).
Controller/agent: old concept, new apps...
1) Networking already leverages a great breadth of agents and controllers. Current agent-controller pairs always serve a specific task (or set of tasks) in a specific domain.
2) System design: there is a trade-off between agent-controller designs and fully distributed control; control-loop requirements differ per function/service and deployment domain.
Examples: 1) session border control 2) wireless LAN control 3) path computation.
Controllers and agents
1) Some network-delivered functionality benefits from logically centralized coordination across multiple network devices.
– Functionality is typically domain-, task-, or customer-specific; typically multiple controller/agent pairs are combined into a network solution
– Controller: a process on a device that interacts with a set of devices using a set of APIs or protocols, and offers a control interface/API
– Agent: a process on a device that delivers a task/domain-specific function
The aim of the OpenDaylight project is to create an open-source controller.
OpenDaylight project goals
1) Code: to create a robust, extensible, open-source code base that covers the major common components required to build an SDN solution
2) Acceptance: to gain broad industry acceptance among vendors and users
3) Community: to have a thriving and growing technical community contributing to the code base, using the code in commercial products, and adding value above, below, and around it.
Cisco's XNC is part of the OpenDaylight effort.
Beyond SDN: full network programmability
Fully distributed control plane: optimized for reliability.
Now we are moving towards a hybrid control plane: distributed control combined with logically centralized control for optimized behaviour (e.g. reliability and performance).
Apart from hybrid SDN and traditional SDN we have programmable APIs: a simple way to talk to a router/switch in different ways. In Cisco's case this is done through onePK. What does "talking to the device" mean? It means that through programmable APIs you can reach into, say, the routing protocol that is building the RIB and influence the RIB. OSPF may say to take route A from 1 to 2, but through the API you can tell the router/switch not to take that route and to take another route instead, because that route is more valuable given your business needs.
onePK for Rapid Application development
1) Development Environment
– Language of choice
– Programmatic interface
– Rich data delivery via API
2) Comprehensive service sets
– Better Apps
– New Service
– Monetization Opportunity
3) Flexible deployment options
– On a server blade
– On an external server
– Directly on the device
4) Consistent Platform Support
– IOS XR
What about NfV and overlay networks?
An overlay network can be created and torn down without changing the underlying physical network.
Network Functions Virtualization (NfV)
– The NfV initiative was announced at the SDN and OpenFlow World Congress.
– It leverages cloud technology to virtualize specific network functions.
Every component we have in the network should be able to be virtualized.
The idea here is to control the network through virtual switches. What does that really mean? You start with the physical switch network, and on top of that we add an overlay. The overlay provides the basis of a logical network, and on it we can build logical switch devices overlaying the physical network. The underlying physical network carries the data traffic for the overlay network. Multiple overlay networks can exist at the same time; overlays provide logical network constructs for different tenants.
Overlay encapsulations and forwarding
1) Virtual overlays in the SDN context usually refer to host-based encapsulation and forwarding
– Extended L2 connectivity and scalability
– Secure segmentation (multi-tenant environments, etc.)
2) Stateless tunnelling mechanisms
– No static tunnel setup required
– Frame formats are recognized by hosts and treated as tunnelled frames
3) Three popular hypervisor-based overlay technologies:
– Virtual Extensible LAN (VXLAN)
– Network Virtualization using Generic Routing Encapsulation (NVGRE)
– Stateless Transport Tunnelling (STT)
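Of the three, VXLAN is the most widely deployed. Its encapsulation is simply the original frame wrapped in UDP (destination port 4789) with an 8-byte VXLAN header carrying a 24-bit VNI (per RFC 7348). A sketch of building that header:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348:
    flags byte 0x08 (VNI-valid bit set), 24 reserved bits,
    then the 24-bit VNI followed by 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!II", 0x08 << 24, vni << 8)
```

The 24-bit VNI is what gives VXLAN its roughly 16 million segments, versus 4096 VLANs, which is the "scalable DC segmentation" point made elsewhere in these notes.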
…and how does OpenStack fit into SDN?
To understand OpenStack, let us first define cloud computing…
Cloud computing provides a set of resources and services through the internet.
What are these resources?
Which resources you manage inside the cloud defines the following…
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
How do these differ from one another?
The main differentiation criterion is: managed by you, or managed by the vendor?
With IaaS, compute, storage, networking, and virtualization resources are managed by the vendor (this defines them as an IaaS provider).
Where does OpenStack come into the picture? OpenStack lets the provider manage these resources.
Based on this, OpenStack has four components:
1) OpenStack Compute (Nova): allows the administrator to create and manage virtual machines using various machine images.
2) OpenStack Object Store (Swift): provides the ability to store objects; basically it is the component responsible for managing storage and reading/writing objects to that storage.
3) OpenStack Image Service (Glance): the component responsible for managing the different operating system images (Windows, Linux, etc.) that Nova uses to create virtual machines.
4) OpenStack Neutron service: allows the administrator to create and manage virtual networks. This is the piece that has relevance to our SDN story.
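Neutron exposes these virtual networks through a REST API. A hypothetical sketch of building the request body for `POST /v2.0/networks` (the field names follow the Neutron v2.0 API; the tenant id and network name are illustrative placeholders):

```python
import json

def make_create_network_body(name, tenant_id):
    """Build the JSON body Neutron expects when creating a virtual network."""
    return json.dumps({"network": {
        "name": name,              # display name of the virtual network
        "tenant_id": tenant_id,    # which tenant owns this network
        "admin_state_up": True,    # bring the network up on creation
    }})
```

A plugin behind this API (Nexus 1000v, OVS, etc.) then realizes the logical network on the actual overlay, which is where Neutron connects to the SDN story.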
Cisco ONE (Open Network Environment) framework:
SDN and OpenFlow talk about the control plane and data plane, but Cisco considers five aspects: 1) management and orchestration 2) network services 3) control plane 4) data plane 5) transport
We saw two models for how the control plane can be decoupled from the data plane:
1) Fully distributed control plane
2) Hybrid control plane: distributed control combined with logically centralized control for optimized behavior.
1) Platform APIs
2) Controllers and Agents
– Cisco XNC controller
– OpenDaylight
3) Virtual overlays
Full-duplex, multi-layer/multi-plane APIs
Management: workflow management, network configuration, and device models (network models – OMI)
Orchestration: L2 segments, L3 segments, service chains, multi-domain (WAN, LAN, DC) (OpenStack Quantum API)
Network services: topology, positioning, analytics, multi-layer path control, demand engineering (positioning (ALTO), path control (PCE))
Control: routing policy, discovery, VPN, subscriber, AAA/logging, switching, addressing, … (Interface to the Routing System, I2RS)
Forwarding: L2/L3 forwarding control, interfaces, tunnels, enhanced QoS, … (OpenFlow protocols)
Device/transport: device configuration, life-cycle management, monitoring, HA, … (network functions virtualization, NfV)
Not all networking APIs are created the same.
Classes of networking APIs, following their scope:
1) Classify networking APIs based on their scope
– API scopes: location independent; area; particular place; specific device
– Alternate approaches like device/network/service APIs are difficult to associate with use cases
– The location where an API is hosted can differ from the scope of the API
2) Different network planes could implement different flavours of APIs, based on the associated abstractions.
A few example API scopes:
Utility: covers authentication, location, etc.
Area/set: covers routing-related information
Element: get interface statistics, for example
onePK use case:
Custom routing: the customer asks to route based on their own metrics, say cost in dollars. With onePK you first do topology discovery and gather route information, then define your own path.
Challenges with the conventional approach
1) The high cost of conventional matrix switches makes scaling unaffordable
2) Filtering and forwarding are statically configured, not event-driven
3) Tool compatibility is limited to off-the-shelf products.
Hence, Cisco replaces the matrix network with Nexus 3000s, a controller, and the Monitor Manager controller application, with Cisco XNC collecting information from the network.
Nexus/catalyst —- vSwitch(Nexus1000v)
Wireless LAN Controller —vWLC
Video Cache— vVideoCache
Web Security WSA—vWSA
Network Analysis NAM—-vNAM
Virtual overlay network
1) Example: Virtual overlay networks and services with Nexus 1000v
2) Large scale L2 domains: Tens of thousands of virtual ports
3) Common APIs: including the OpenStack Quantum APIs for orchestration
4) Scalable DC segmentation and addressing: VXLAN
5) Virtual service appliances and service chaining/traffic steering
VSG, vWAAS, vPath
6) Multi-hypervisor platform support: ESX, Hyper-V, open-source hypervisors
7) Physical and virtual: VXLAN to VLAN gateway
Current industry approaches and challenges
Traditional network model: existing infrastructure model and existing application model.
Today's SDN model: 1) Lack of transparency and visibility
2) Virtual domains attempt to replicate physical network constructs, e.g. LAN emulation
3) Per-hypervisor integration overhead
4) Multiple management points
Application-centric: centralized automation, security, and application profiles
1) Simplification, complete network automation and programmability
2) Software flexibility with hardware-based performance and integrated visibility
3) Bypasses 1st-generation SDN limitations with an application-centric infrastructure
4) Extensible to storage and compute.
onePK is a software development kit that allows you to write code. You need APIs to communicate between the router and the application, so you can say onePK is a collection of APIs.
Where do you run onePK?
You can run it on the router/switch itself using process hosting. If you have a platform like the ISR with a blade, you can host onePK on that blade. You can also run it on a physical or virtual server.
1) Process hosting: low latency is the advantage.
2) Blade hosting: low latency is the advantage.
3) End-node hosting: supported by all platforms.
Configure IOS for onePK (unsecure mode)
#username user1 password pass1 // all applications must authenticate with a username and password
#onep // enter onePK config mode
#transport socket // socket = TCP 15001
#start // start onePK and activate the API history trail
Configure IOS for onePK (encrypted)
#username user1 password pass1
#onep
#transport tls localcert <trustpoint name> // TLS = TCP 15002; can use a local certificate or a certificate authority
#history on
#show onep status // check whether onePK is enabled
#show onep statistics session all // how much traffic onePK is generating
#show onep history session all // what onePK is doing
Additional onePK IOS commands
#session max 2 // specify the maximum number of sessions that can connect to a device (range 1 to 32)
#cpu threshold rising 60 falling 40 interval 10 // use the cpu threshold command to limit the CPU used by onep applications; when the rising threshold is exceeded, API requests are rejected with the error code ONEP_ERR_RESOURCE_BUSY
When CPU utilization falls back to the falling threshold, API requests are served again.
#onep stop session all // disconnect all onep applications on a network device.
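The rising and falling thresholds form a simple hysteresis loop: requests are rejected once CPU crosses the high mark and resume only after it drops back to the low mark. A sketch of that logic (illustrative only, not the actual IOS implementation):

```python
class CpuThrottle:
    """Hysteresis like `cpu threshold rising 60 falling 40`:
    reject API requests once CPU rises to 60% or above, and resume
    serving them only after CPU falls back to 40% or below."""

    def __init__(self, rising=60, falling=40):
        self.rising = rising
        self.falling = falling
        self.throttled = False

    def accept(self, cpu_pct):
        """Return True if an API request should be served at this CPU load."""
        if self.throttled and cpu_pct <= self.falling:
            self.throttled = False          # recovered: serve requests again
        elif not self.throttled and cpu_pct >= self.rising:
            self.throttled = True           # overloaded: start rejecting
        return not self.throttled
```

The gap between the two thresholds prevents flapping: a load hovering around 60% does not toggle the reject state on every sample.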
Using onePK APIs
onePK functions are grouped into service sets:
Data path: provides packet delivery services to the application; copy, punt, inject
Policy: provides filtering, classification, and actions; applies policies to interfaces on network elements
Routing: read RIB routes, add/remove routes, receive RIB notifications
Element: get element properties, CPU/memory statistics, network interfaces, element and interface events
Discovery: L2 topology and local service discovery
Developer: debug capability; CLI extension, which allows an application to extend/integrate its CLI with the network element
Punting and injecting packets (C)
A fragment of the data-path registration call:
ONEP_DPSS_ACTION_PUNT, // defines what happens to traffic of interest
encrypt_callback, // action to take on interesting traffic
&reg_handle), "register for packet"); // where traffic goes next
Example: custom encryption
Problem: customers want custom encryption on specific traffic types. Value proposition: punt traffic of interest, encrypt it, and re-inject it.
1) Policy APIs on the ingress router are set to punt telnet and syslog traffic to the app
2) The app encrypts the punted traffic and re-injects it into the data path
3) Policy APIs on the egress router punt telnet and syslog traffic to the app
4) The app decrypts the punted traffic and re-injects it into the data path
5) Traffic that does not match the policy passes through unencrypted.
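The punt/transform/re-inject pattern can be sketched as follows. The XOR "cipher" is a toy stand-in for the customer's real encryption (illustrative only), and the port set mirrors the telnet/syslog policy above:

```python
INTERESTING_PORTS = {23, 514}  # telnet and syslog, per the punt policy

def process(dst_port, payload, key=0x5A):
    """Apply the punt policy to one packet's payload.

    Matching traffic is 'punted' and transformed (here a symmetric XOR,
    so running it again decrypts); everything else passes through untouched.
    """
    if dst_port not in INTERESTING_PORTS:
        return payload                          # step 5: pass through unencrypted
    return bytes(b ^ key for b in payload)      # steps 1-2 / 3-4: punt + transform
```

Because XOR with the same key is its own inverse, the same function models both the ingress (encrypt) and egress (decrypt) applications.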
Revenue: pay-as-you-go QoS
1) Customer buys a pre-pay QoS package for a cloud service
2) The first packet of a new session appears on the ingress PE and is relayed to the master server
3) The master server verifies the pre-pay account and applies QoS
4) The ingress PE detects the end of the session and relays this to the server
5) The server removes the policy and bills the customer for the duration of the session.