Cisco Nexus 7000 Hardware Architecture
BRKARC-3470
Session Goal
To provide you with a thorough understanding of the Cisco Nexus 7000 switching architecture, supervisor, fabric, and I/O module design, packet flows, and key forwarding engine functions. This session will not examine Unified I/O, DCB, FCoE, NX-OS software architecture, or other Nexus platforms. Related sessions:
BRKARC-3471: Cisco NXOS Software Architecture
Presentation_ID
2010 Cisco and/or its affiliates. All rights reserved.
Cisco Public
Agenda
Chassis Architecture
Supervisor Engine Architecture
I/O Module Architecture
Forwarding Engine Architecture
Fabric Architecture
Layer 2 Forwarding
IP Forwarding
IP Multicast Forwarding
ACLs
QoS
NetFlow
Nexus 7010 Chassis
Integrated cable management with cover
Optional locking front doors
Locking ejector levers
Supervisor slots (5-6)
I/O module slots (1-4, 7-10)
System status LEDs
ID LEDs on all FRUs
Front-to-back airflow
Air exhaust
System fan trays Fabric fan trays
21RU
Two chassis per 7-foot rack
Crossbar fabric modules
Power supplies Air intake with optional filter
Front
N7K-C7010
Rear
Common equipment removes from rear
Supported in NX-OS release 4.1(2) and later
Nexus 7018 Chassis
Integrated cable management
System status LEDs
Optional front door
Side-to-side airflow
System fan trays
Supervisor slots (9-10)
25RU
Crossbar fabric modules
I/O module slots (1-8, 11-18)
Common equipment removes from rear
Power supply air intake
Power supplies
Front
N7K-C7018
Rear
Supervisor Engine
Performs control plane and management functions
Dual-core 1.66GHz Intel Xeon processor with 4GB DRAM
2MB NVRAM, 2GB internal bootdisk, compact flash slots
Out-of-band 10/100/1000 management interface
Always-on Connectivity Management Processor (CMP) for lights-out management
Console and auxiliary serial ports
USB ports for file transfer
N7K-SUP1
Faceplate callouts: ID LED, status LEDs, console port, AUX port, USB ports, management Ethernet, compact flash slots, CMP Ethernet, reset button.
Management Interfaces
Management Ethernet
10/100/1000 interface used exclusively for system management Belongs to dedicated management VRF
Prevents data plane traffic from entering/exiting from mgmt0 interface Cannot move mgmt0 interface to another VRF Cannot assign other system ports to management VRF
Connectivity Management Processor (CMP) Ethernet
Connects to standalone, always-on microprocessor on supervisor engine
Runs lightweight software with network stack Completely independent of NX-OS on main CPU
Provides lights-out remote management and disaster recovery via 10/100/1000 interface
Removes need for terminal servers
Supervisor Engine Architecture
[Block diagram: the supervisor connects to I/O modules and fabric modules through a Fabric ASIC with VOQs over n x 23G fabric channels, with arbitration paths from the Central Arbiter to the modules, a switched 1GE EOBC, and a 1GE inband interface. A System Controller ties the components together. The main CPU (1.66GHz dual-core, 4GB DRAM) attaches 2MB NVRAM, OBFL flash, 2GB internal CF, slot0: and log-flash: compact flash slots, three USB ports, console and AUX serial ports, and the 10/100/1000 mgmt Ethernet. The 266MHz CMP (128MB DRAM, 16MB flash) drives its own 10/100/1000 CMP Ethernet. A security processor provides link encryption.]
8-Port 10GE I/O Module
N7K-M108X2-12L
8-port 10G with X2 transceivers
80G full-duplex fabric connectivity
Two integrated forwarding engines (120Mpps)
Support for XL forwarding tables (licensed feature)
8 ports wire-rate L3 multicast replication 802.1AE LinkSec
N7K-M108X2-12L
8-Port 10G XL I/O Module Architecture
N7K-M108X2-12L
[Block diagram: a Fabric ASIC with VOQs connects the module to the fabric modules and central arbiter; the EOBC connects to the LC CPU. Two forwarding engines serve the ports, with replication engines between the forwarding path and the eight 10G MAC/LinkSec blocks on the front panel (one replication engine serving ports 1-6, another serving ports 7-8).]
32-Port 10GE I/O Module
N7K-M132XP-12
32-port 10G with SFP+ transceivers
80G full-duplex fabric connectivity
Integrated 60Mpps forwarding engine
Oversubscription option for higher density (up to 4:1)
8 ports wire-rate L3 multicast replication
802.1AE LinkSec
N7K-M132XP-12
Shared vs. Dedicated Mode
Shared mode (rate-mode shared, default)
Four interfaces in port group share 10G bandwidth
Port group: a group of contiguous even or odd ports that share 10G of bandwidth (e.g., ports 1, 3, 5, 7)
Dedicated mode (rate-mode dedicated)
First interface in port group gets 10G bandwidth
Other three interfaces in port group disabled
32-Port 10G I/O Module Architecture
N7K-M132XP-12
[Block diagram: a Fabric ASIC with VOQs connects to the fabric modules and central arbiter; the EOBC connects to the LC CPU. A single forwarding engine serves all ports; replication engines feed eight 10G MACs, each behind a 4:1 mux + LinkSec serving one port group (odd ports 1-31 on one half of the front panel, even ports 2-32 on the other).]
48-Port 1G I/O Modules
Three 1G I/O module options:
N7K-M148GT-11, N7K-M148GS-11, N7K-M148GS-11L
48 10/100/1000 RJ-45 ports (N7K-M148GT-11) 48 1G SFP ports (N7K-M148GS-11) 48 1G SFP ports with XL forwarding engine (N7K-M148GS-11L)
N7K-M148GT-11 Release 4.0(1) and later
Integrated 60Mpps forwarding engine
46G full-duplex fabric connectivity
Line rate on 48 ports with some local switching
N7K-M148GS-11 Release 4.1(2) and later
48 ports wire-rate L3 multicast replication 802.1AE LinkSec
N7K-M148GS-11L Release 5.0(2) and later
48-Port 1G I/O Modules Architecture
N7K-M148GT-11, N7K-M148GS-11, N7K-M148GS-11L
[Block diagram: a Fabric ASIC with VOQs connects to the fabric modules and central arbiter; the EOBC connects to the LC CPU. One forwarding engine and two replication engines serve four 12 x 1G MAC/LinkSec blocks covering front-panel ports 1-12, 13-24, 25-36, and 37-48.]
Forwarding Engine Hardware
Hardware forwarding engine(s) integrated on every I/O module
60Mpps per forwarding engine Layer 2 bridging with hardware MAC learning
60Mpps IPv4 and 30Mpps IPv6 unicast per forwarding engine
IPv4 and IPv6 multicast support (SM, SSM, bidir)
RACL/VACL/PACLs
Policy-based routing (PBR)
Unicast RPF check and IP source guard
QoS remarking and policing policies
Ingress and egress NetFlow (full and sampled)

Hardware table sizes:

Hardware Table                  M1 Modules   M1-XL without License   M1-XL with License
FIB TCAM                        128K         128K                    900K
Classification TCAM (ACL/QoS)   64K          64K                     128K
MAC Address Table               128K         128K                    128K
NetFlow Table                   512K         512K                    512K
Scalable Services License
Forwarding engines on M1-XL I/O modules always have XL capacity. Access to the additional capacity is controlled by presence of the Scalable Services license.
License applies to entire system (per-chassis)
N7K# show license usage
Feature                        Ins  Lic Count  Status  Expiry Date  Comments
----------------------------------------------------------------------------
SCALABLE_SERVICES_PKG          Yes  -          In use  Never
LAN_ADVANCED_SERVICES_PKG      Yes  -          In use  Never
LAN_ENTERPRISE_SERVICES_PKG    Yes  -          In use  Never
----------------------------------------------------------------------------
N7K#
Forwarding Engine Architecture
Forwarding engine chipset consists of two ASICs:
Layer 2 Engine
Ingress and egress SMAC/DMAC lookups Hardware MAC learning IGMP snooping and IP-based Layer 2 multicast constraint
Layer 3 Engine
IPv4/IPv6 Layer 3 lookups
ACL, QoS, NetFlow and other processing
Linear, pipelined architecture: every packet subjected to both ingress and egress pipeline
Enabling features does not affect forwarding engine performance
Forwarding Engine Pipelined Architecture
[Figure: forwarding engine (FE daughter card) pipeline. Packet headers arrive from the I/O module replication engine. Ingress pipeline: ingress MAC table lookups, IGMP snooping lookups, and IGMP snooping redirection in the Layer 2 Engine; ingress NetFlow collection, ingress ACL and QoS classification lookups, FIB TCAM and adjacency table lookups for Layer 3 forwarding, ECMP hashing, multicast RPF check, unicast RPF check, and ingress policing in the Layer 3 Engine. Egress pipeline: egress NetFlow collection, egress ACL and QoS classification lookups, and egress policing in the Layer 3 Engine; egress MAC lookups and IGMP snooping lookups in the Layer 2 Engine. The final lookup result returns to the I/O module replication engine.]
Crossbar Switch Fabric Module
Each fabric module provides 46Gbps per I/O module slot
Up to 230Gbps per slot with 5 fabric modules
Currently shipping I/O modules do not leverage full fabric bandwidth
Maximum 80G per slot with 10G module Future modules leverage additional available fabric bandwidth
Access to fabric controlled using QoS-aware central arbitration with VOQ
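The per-slot scaling above is simple arithmetic; a minimal sketch (the function name is illustrative, not an NX-OS API):

```python
# Per-slot fabric bandwidth: each installed fabric module adds 46 Gbps
# per I/O module slot, up to the chassis maximum of 5 fabric modules.
def per_slot_bandwidth_gbps(fabric_modules, gbps_per_module=46, max_modules=5):
    return min(fabric_modules, max_modules) * gbps_per_module
```

With one fabric module a slot gets 46Gbps; with all five installed it gets the full 230Gbps.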
N7K-C7010-FAB-1
N7K-C7018-FAB-1
Fabric Module Capacity
[Diagram (Nexus 7018): each fabric module's crossbar fabric ASICs provide 46Gbps per slot, delivered as 2 x 23G channels per I/O module slot and 1 x 23G channel per supervisor slot. Per-slot bandwidth scales with installed fabric modules: 46, 92, 138, 184, and 230Gbps for one through five modules.]
I/O Module Capacity
1G modules
Require 1 fabric for full bandwidth Require 2 fabrics for N+1 redundancy
[Diagram: per-slot bandwidth scales 46/92/138/184/230Gbps with one to five fabric modules; the 4th and 5th fabric modules provide additional redundancy and future-proofing.]
10G modules
Require 2 fabrics for full bandwidth Require 3 fabrics for N+1 redundancy
Access to Fabric Bandwidth
Access to fabric controlled using central arbitration
Arbiter ASIC on supervisor engine provides fabric arbitration
Bandwidth capacity on egress modules represented by Virtual Output Queues (VOQs) at ingress to fabric
I/O modules interface with arbiter to gain access to VOQs
What Are VOQs?
Virtual Output Queues (VOQs) on ingress modules represent bandwidth capacity on egress modules If VOQ available on ingress to fabric, capacity exists at egress module to receive traffic from fabric
Central arbiter determines whether VOQ is available for a given packet Bandwidth capacity represented by credits Credits are requested by I/O modules and granted by arbiter
VOQ is virtual because it represents EGRESS capacity but resides on INGRESS modules
It is still PHYSICAL buffer where packets are stored
Note: VOQ is not equivalent to ingress or egress port buffer or queues
Relates ONLY to ASICs at ingress and egress to fabric
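The request/grant behavior can be modeled as a toy credit pool. This is an illustrative sketch of the concept, not the arbiter ASIC's actual protocol; all names and pool sizes are assumptions:

```python
# Toy model of credit-based fabric arbitration: each egress destination
# exposes a pool of credits; an ingress module must be granted a credit
# before transmitting a packet across the fabric.
class Arbiter:
    def __init__(self, credits_per_dest):
        self.credits = dict(credits_per_dest)

    def request(self, dest):
        # Grant only if the egress module can absorb another packet;
        # otherwise the ingress module holds the packet in its VOQ.
        if self.credits.get(dest, 0) > 0:
            self.credits[dest] -= 1
            return True
        return False

    def release(self, dest):
        # Egress returns the credit once the packet leaves the fabric.
        self.credits[dest] += 1
```

Because a denied request leaves the packet queued in the ingress VOQ for that destination only, congestion at one egress module does not block traffic to other destinations.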
Benefits of Central Arbitration with VOQ
Ensures priority traffic takes precedence over best-effort traffic across fabric
Four levels of priority for each VOQ destination
Ensures fair access to bandwidth for multiple ingress ports transmitting to one egress port
Central arbiter ensures all traffic sources get appropriate access to fabric bandwidth, even with traffic sources on different modules
Prevents congested egress ports from blocking ingress traffic destined to other ports
Mitigates head-of-line blocking by providing independent queues for individual destinations across the fabric
In future, will provide lossless service for FCoE traffic across the fabric
Can provide strict priority and backpressure (blocking instead of dropping) for certain traffic classes, such as SAN traffic
Layer 2 Forwarding
MAC table is 128K entries (115K effective)
Hardware MAC learning
CPU not directly involved in learning
All modules have copy of MAC table
New learns communicated to other modules via hardware flood-to-fabric mechanism
Software process ensures continuous MAC table sync
Spanning tree (PVRST or MST) or Virtual Port Channel (VPC) ensures loop-free Layer 2 topology
Layer 2 Forwarding Architecture
Layer 2 Forwarding Manager (L2FM) maintains central database of MAC tables
L2FM keeps MAC table on all forwarding engines in sync
L2FM-Client process on I/O modules interfaces between L2FM and hardware MAC table
n7010# sh processes cpu | egrep PID|l2fm
PID    Runtime(ms)  Invoked    uSecs  1Sec  Process
3848   1106         743970580  0      0     l2fm
n7010# attach mod 9
Attaching to module 9 ...
To exit type 'exit', to abort type '$.'
Last login: Mon Apr 21 15:58:12 2009 from sup02 on pts/0
Linux lc9 2.6.10_mvl401-pc_target #1 Fri Mar 21 23:26:28 PDT 2009 ppc GNU/Linux
module-9# sh processes cpu | egrep l2fm
1544   6396   388173  16  0.0  l2fmc
module-9#
[Diagram: L2FM on the supervisor engine syncs with the L2FM-C process on each I/O module, which programs the hardware MAC table; hardware MAC learning updates flow back up.]
Hardware Layer 2 Forwarding Process
MAC table lookup in Layer 2 Engine based on {VLAN,MAC} pairs Source MAC and destination MAC lookups performed for each frame
Source MAC lookup drives new learns and refreshes aging timers Destination MAC lookup dictates outgoing switchport
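A minimal sketch of the {VLAN, MAC} lookup behavior described above (illustrative only; the real Layer 2 Engine does this in hardware, and port names here are hypothetical):

```python
# MAC table keyed on (vlan, mac) pairs, as described above.
mac_table = {}  # (vlan, mac) -> port

def l2_forward(vlan, smac, dmac, in_port, all_ports):
    # Source MAC lookup: learn a new station or refresh an existing entry.
    mac_table[(vlan, smac)] = in_port
    # Destination MAC lookup: known unicast goes to one port;
    # unknown unicast floods the VLAN (excluding the ingress port).
    out = mac_table.get((vlan, dmac))
    if out is None:
        return [p for p in all_ports if p != in_port]
    return [] if out == in_port else [out]
```

The first frame from an unknown destination floods; once the reply is seen, both stations are learned and traffic is unicast.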
L2 Packet Flow
[Figure: numbered Layer 2 packet walk (HDR = packet headers, DATA = packet data, CTRL = internal signaling). Ingress module 1: receive packet from wire on e1/1; LinkSec decryption; 1st-stage ingress port QoS; submit packet headers to the forwarding engine for lookup; L2 SMAC/DMAC lookups in the Layer 2 Engine with ACL/QoS/NetFlow lookups in the Layer 3 Engine; result returned; 2nd-stage ingress port QoS; VOQ arbitration and queuing; credit grant for fabric access from the central arbiter on the supervisor engine; transmit to fabric. Fabric modules: crossbar delivers the packet to the egress module; buffer credit returned and credit returned to pool. Egress module 2: receive from fabric; submit packet headers for egress L2 lookup (L2-only SMAC/DMAC); egress port QoS; LinkSec encryption; transmit packet on wire via e2/1.]
IP Forwarding
Nexus 7000 decouples control plane and data plane
Forwarding tables built on control plane using routing protocols or static configuration
OSPF, EIGRP, IS-IS, RIP, BGP for dynamic routing
Tables downloaded to forwarding engine hardware for data plane forwarding
IP Forwarding Architecture
Routing protocol processes learn routing information from neighbors
IPv4 and IPv6 unicast RIBs calculate routing/next-hop information
Unicast Forwarding Distribution Manager (UFDM) interfaces between URIBs on supervisor and IP FIB on I/O modules
IP FIB process programs forwarding engine hardware on I/O modules
FIB TCAM contains IP prefixes Adjacency table contains next-hop information
[Diagram: BGP, OSPF, IS-IS, RIP, and EIGRP feed the URIB/U6RIB on the supervisor engine; UFDM interfaces to the IP FIB process on each I/O module, which programs the hardware FIB TCAM and ADJ table.]

n7010# sh processes cpu | egrep ospf|PID
PID    Runtime(ms)  Invoked    uSecs  1Sec  Process
20944  93           33386880   0      0     ospf
n7010# sh processes cpu | egrep u.?rib
3573   117          44722390   0      0     u6rib
3574   150          34200830   0      0     urib
n7010# sh processes cpu | egrep ufdm
3836   1272         743933460  0      0     ufdm

module-9# sh processes cpu | egrep fib
1534   80042  330725  242  0.0  ipfib
module-9#
Hardware IP Forwarding Process
FIB TCAM lookup based on destination prefix (longest-match)
FIB hit returns adjacency; adjacency contains rewrite information (next-hop)
Pipelined forwarding engine architecture also performs ACL, QoS, and NetFlow lookups, affecting final forwarding result
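Longest-match lookup followed by an adjacency fetch can be sketched in a few lines. The FIB contents below reuse example prefixes and next-hops from this session and are otherwise hypothetical; real TCAMs match in hardware in a single pass:

```python
import ipaddress

# Illustrative FIB: prefix -> adjacency (next-hop IP, egress interface).
fib = {
    "10.0.0.0/8":    ("10.1.1.2", "Ethernet9/1"),
    "10.100.7.0/24": ("10.1.2.2", "Ethernet9/2"),
}

def fib_lookup(dst):
    # Longest-prefix match: among all covering prefixes, prefer the
    # most specific (largest prefix length).
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, adj in fib.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, adj)
    return best[1] if best else None
```

A destination in 10.100.7.0/24 hits the /24 even though the /8 also covers it; anything outside the table returns no adjacency (a miss).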
IPv4 FIB TCAM Lookup
[Figure: IPv4 FIB TCAM lookup. For an ingress unicast IPv4 packet header, the forwarding engine generates a TCAM lookup key from the destination IP address and compares it against FIB TCAM entries. A hit returns a result in FIB DRAM containing the adjacency index and number of next-hops; the adjacency index identifies the ADJ block in the adjacency table. For multipath routes, the load-sharing hash computed from the flow data supplies an offset that selects the exact next-hop entry; each adjacency entry contains the next-hop interface and MAC rewrite.]
Routing vs. Forwarding
Routing information refers to unicast RIB contents in supervisor control plane Forwarding information refers to FIB contents at I/O module
Displaying Routing and Forwarding Information
show routing [ipv4|ipv6] [<prefix>] [vrf <vrf>]
Displays software routing (URIB) information Can also use traditional show ip route command
show forwarding [ipv4|ipv6] route module <mod> [vrf <vrf>]
Displays hardware forwarding (FIB) information on permodule basis
show forwarding adjacency module <mod>
Displays hardware adjacency table information on per-module basis
Displaying Routing and Forwarding Information (Cont.)
n7010# sh routing ipv4 10.100.7.0/24
IP Route Table for VRF "default"
10.100.7.0/24, 1 ucast next-hops, 0 mcast next-hops
    *via 10.1.2.2, Ethernet9/2, [110/5], 00:02:30, ospf-1, type-1

n7010# show forwarding ipv4 route 10.100.7.0/24 module 9
IPv4 routes for table default/base
------------------+------------------+---------------------
Prefix            | Next-hop         | Interface
------------------+------------------+---------------------
10.100.7.0/24       10.1.2.2           Ethernet9/2

n7010# show forwarding adjacency 10.1.2.2 module 9
IPv4 adjacency information, adjacency count 1
next-hop         rewrite info     interface
---------------  ---------------  ----------
ECMP Load Sharing
Up to 16 hardware load-sharing paths per prefix
Use maximum-paths command in routing protocols to control number of load-sharing paths
Load-sharing is per-IP flow or per-packet
Use caution with per-packet load-balancing!
Configure load-sharing hash options with global ip load-sharing command:
Source and Destination IP addresses Source and Destination IP addresses plus L4 ports (default) Destination IP address and L4 port
Additional randomized number added to hash prevents polarization
Automatically generated or user configurable value
Configure per-packet load-sharing with interface ip load-sharing per-packet command
Ingress interface determines if load-sharing is per-flow or per-packet!
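Per-flow selection can be sketched as hashing the configured fields plus a randomized seed, then indexing into the next-hop list. The hash function and seed below are stand-ins, not the hardware's actual hash:

```python
import hashlib

# Per-flow ECMP path selection: hash src/dst IP plus L4 ports (the default
# field set) together with a randomized seed that prevents polarization.
def ecmp_select(next_hops, sip, dip, sport, dport, seed=0x1234):
    key = f"{seed}:{sip}:{dip}:{sport}:{dport}".encode()
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return next_hops[h % len(next_hops)]
```

Because the hash inputs are constant for a flow, every packet of that flow takes the same path; changing the seed redistributes flows, which is why neighboring routers with different seeds do not all polarize onto the same links.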
ECMP Prefix Entry Example
n7010# sh routing ipv4 10.200.0.0
IP Route Table for VRF "default"
10.200.0.0/16, 2 ucast next-hops, 0 mcast next-hops
    *via 10.1.1.2, Ethernet9/1, [110/5], 00:03:33, ospf-1, inter
    *via 10.1.2.2, Ethernet9/2, [110/5], 00:00:13, ospf-1, inter

n7010# sh forwarding ipv4 route 10.200.0.0 module 9
IPv4 routes for table default/base
------------------+------------------+---------------------
Prefix            | Next-hop         | Interface
------------------+------------------+---------------------
10.200.0.0/16       10.1.1.2           Ethernet9/1
                    10.1.2.2           Ethernet9/2
n7010#
Identifying the ECMP Path for a Flow
show routing [ipv4|ipv6] hash <sip> <dip> [<sport> <dport>] [vrf <vrf>]
n7010# sh routing hash 192.168.44.12 10.200.71.188
Load-share parameters used for software forwarding:
load-share type: 1
Randomizing seed (network order): 0xebae8b9a
Hash for VRF "default"
Hashing to path *10.1.2.2 (hash: 0x29), for route:
10.200.0.0/16, 2 ucast next-hops, 0 mcast next-hops
    *via 10.1.1.2, Ethernet9/1, [110/5], 00:14:18, ospf-1, inter
    *via 10.1.2.2, Ethernet9/2, [110/5], 00:10:58, ospf-1, inter
n7010#
Same hash algorithm applies to both hardware and software forwarding
L3 Packet Flow
[Figure: numbered Layer 3 packet walk (HDR = packet headers, DATA = packet data, CTRL = internal signaling). Ingress module 1: receive packet from wire on e1/1; LinkSec decryption; 1st-stage ingress port QoS; submit packet headers to the forwarding engine for lookup; L2 ingress and egress SMAC/DMAC lookups in the Layer 2 Engine; L3 FIB/ADJ lookup with ingress and egress ACL/QoS/NetFlow lookups in the Layer 3 Engine; result returned; 2nd-stage ingress port QoS; VOQ arbitration and queuing; credit grant for fabric access from the central arbiter on the supervisor engine; transmit to fabric. Fabric modules: crossbar delivers the packet to the egress module; buffer credit returned and credit returned to pool. Egress module 2: receive from fabric; submit packet headers for egress L2 lookup (L2-only SMAC/DMAC); egress port QoS; LinkSec encryption; transmit packet on wire via e2/1.]
IP Multicast Forwarding
Forwarding tables built on control plane using multicast protocols
PIM-SM, PIM-SSM, PIM-Bidir, IGMP, MLD
Tables downloaded to:
Forwarding engine hardware for data plane forwarding Replication engines for data plane packet replication
IP Multicast Forwarding Architecture
Multicast routing processes learn routing information from neighbors/hosts
IPv4 and IPv6 multicast RIBs calculate multicast routing/RP/RPF/OIL information
Multicast Forwarding Distribution Manager (MFDM) interfaces between MRIBs on supervisor and IP FIB on I/O modules
IP FIB process programs hardware:
FIB TCAM
Adjacency table
Multicast Expansion Table (MET)

[Diagram: PIM, IGMP, PIM6, ICMP6, BGP, and MSDP feed the MRIB/M6RIB on the supervisor engine; MFDM interfaces to the IP FIB process on each I/O module, which programs the hardware FIB TCAM, MET, and ADJ table.]

n7010# sh processes cpu | egrep pim|igmp|PID
PID    Runtime(ms)  Invoked    uSecs  1Sec  Process
3842   109          32911620   0      0     pim
3850   133          33279940   0      0     igmp
n7010# sh processes cpu | egrep m.?rib
3843   177          33436550   0      0     mrib
3847   115          47169180   0      0     m6rib
n7010# sh processes cpu | egrep mfdm
3846   2442         743581240  0      0     mfdm

module-9# sh processes cpu | egrep fib
1534   80153  330725  242  0.0  ipfib
module-9#
Hardware Programming
IP FIB process on I/O modules programs hardware:
FIB TCAM
Part of Layer 3 Engine ASIC on forwarding engine Consists of (S,G) and (*,G) entries as well as RPF interface
Adjacency Table (ADJ)
Part of Layer 3 Engine ASIC on forwarding engine Contains MET indexes
Multicast Expansion Table (MET)
Part of replication engine ASIC on I/O modules Contains output interface lists (OILs), i.e., lists of interfaces requiring replication
Multicast FIB TCAM Lookup
[Figure: multicast FIB TCAM lookup. For an ingress multicast packet header, the forwarding engine generates a TCAM lookup key from the source and group IP addresses (e.g., 10.1.1.10, 239.1.1.1) and compares it against FIB TCAM entries. A hit returns a result in FIB DRAM containing RPF information and an adjacency index; the adjacency entry identifies a MET index. The replication engine uses the MET index to find the OIF list in the MET block and replicates the packet for each OIF.]
Displaying Multicast Routing and Forwarding Information
show routing [ipv4|ipv6] multicast [vrf <vrf>] [<source-ip>] [<group-ip>] [summary]
Displays software multicast routing (MRIB) information Can also use traditional show ip mroute command
show forwarding [ipv4|ipv6] multicast route [source <ip>] [group <ip>] [vrf <vrf>] module <mod>
Displays hardware multicast forwarding (FIB) information on per-module basis
Displaying Multicast Routing and Forwarding Information (Cont)
n7010# sh routing multicast 10.1.1.2 239.1.1.1
IP Multicast Routing Table for VRF "default"
(10.1.1.2/32, 239.1.1.1/32), uptime: 00:40:31, ip mrib pim
  Incoming interface: Ethernet9/1, RPF nbr: 10.1.1.2, internal
  Outgoing interface list: (count: 2)
    Ethernet9/17, uptime: 00:05:57, mrib
    Ethernet9/2, uptime: 00:06:12, mrib

n7010# sh routing multicast 239.1.1.1 summary
IP Multicast Routing Table for VRF "default"
Total number of routes: 202
Total number of (*,G) routes: 1
Total number of (S,G) routes: 200
Total number of (*,G-prefix) routes: 1
Group count: 1, average sources per group: 200.0
Group: 239.1.1.1/32, Source count: 200
Source     packets   bytes       aps  pps   bit-rate  oifs
(*,G)      767       84370       110  0     0 bps     2
10.1.1.2   9917158   1269395810  127  4227  4 mbps    2
10.1.1.3   9917143   1269393890  127  4227  4 mbps    2
10.1.1.4   9917127   1269391824  127  4227  4 mbps    2
<...>
Displaying Multicast Routing and Forwarding Information (Cont.)
n7010# sh forwarding ipv4 multicast route group 239.1.1.1 source 10.1.1.2 module 9

(10.1.1.2/32, 239.1.1.1/32), RPF Interface: Ethernet9/1, flags:
  Received Packets: 10677845 Bytes: 1366764160
  Number of Outgoing Interfaces: 2
  Outgoing Interface List Index: 15
    Ethernet9/2 Outgoing Packets:432490865 Bytes:55358830720
    Ethernet9/17 Outgoing Packets:419538767 Bytes:53700962176
n7010#
Egress Replication
Distributes multicast replication load among replication engines of all I/O modules with OIFs
Input packets get a lookup on the ingress forwarding engine
For OIFs on the ingress module, the ingress replication engine performs the replication
For OIFs on other modules, the ingress replication engine replicates a single copy of the packet over the fabric to those egress modules
Each egress forwarding engine performs a lookup to drive replication
The replication engine on each egress module performs replication for its local OIFs
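The one-fabric-copy-per-egress-module rule can be sketched as follows (module numbering and the OIF map are hypothetical):

```python
# Egress replication: the ingress module sends at most ONE copy per
# egress module into the fabric; each egress module then replicates
# locally for its own OIFs.
def fabric_copies(oifs_by_module, ingress_module):
    copies = []
    for module, oifs in oifs_by_module.items():
        # Local OIFs are handled by the ingress replication engine,
        # so no fabric copy is needed for the ingress module itself.
        if module != ingress_module and oifs:
            copies.append(module)
    return copies
```

With OIFs on three modules, only two copies cross the fabric regardless of how many ports each egress module must serve, which is what spreads the replication load.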
[Diagram: the ingress module's replication engine (with MET) replicates to its local OIFs and sends a single fabric copy through the fabric module's crossbar to each egress module with OIFs; each egress module's replication engine (with MET) then replicates to its own local OIFs.]
L3 Multicast Packet Flow
[Figure: numbered Layer 3 multicast packet walk (HDR = packet headers, DATA = packet data). Ingress module 1: receive packet from wire on e1/1; LinkSec decryption; 1st-stage ingress port QoS; submit packet headers for lookup; L2 ingress snooping lookup in the Layer 2 Engine; L3 multicast FIB lookup and ingress ACL/QoS/NetFlow lookups in the Layer 3 Engine; MET result returned; replicate for local OIF delivery and for fabric delivery; 2nd-stage ingress port QoS; VOQ queuing; transmit multicast fabric distribution packet. Fabric modules: crossbar distributes to the egress modules. Egress module 2: dequeue multicast distribution copy from fabric; submit packet headers for egress lookups (egress ACL/QoS/NetFlow and L2 egress snooping lookups); replicate for local OIFs; egress port QoS; LinkSec encryption; transmit packet on wire via e2/1.]
Unicast vs. Multicast on Fabric
Fabric consists of two parallel fabric planes:
Unicast traffic: centrally arbitrated; round-robin load balanced over available fabric channels
Multicast traffic: locally arbitrated; load balanced over available fabric channels using hash
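The two load-balancing schemes can be contrasted in a short sketch. This is an illustrative model, not platform code: the channel names and the MD5-based hash are assumptions; the point is that round-robin spreads successive packets evenly, while a flow hash pins every packet of a flow to one channel, preserving per-flow ordering.

```python
import itertools
import hashlib

# Unicast: round-robin over the available fabric channels
def round_robin(channels):
    return itertools.cycle(channels)

# Multicast: hash of flow fields selects one channel per flow,
# so packets of a given (S,G) flow never reorder across channels
def hash_channel(channels, sip, dip, group):
    key = f"{sip}|{dip}|{group}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return channels[digest % len(channels)]

channels = ["fab1-ch0", "fab1-ch1", "fab2-ch0", "fab2-ch1"]
rr = round_robin(channels)
print([next(rr) for _ in range(5)])  # cycles evenly through channels
print(hash_channel(channels, "10.1.1.2", "239.1.1.1", "239.1.1.1"))
```

Every packet of the (10.1.1.2, 239.1.1.1) flow hashes to the same channel on every lookup.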
Agenda
Chassis Architecture Supervisor Engine Architecture I/O Module Architecture Forwarding Engine Architecture Fabric Architecture Layer 2 Forwarding IP Forwarding IP Multicast Forwarding ACLs QoS NetFlow
Security ACLs
Enforce security policies based on Layer 2, Layer 3, and Layer 4 information
Classification TCAM (CL TCAM) provides ACL lookups in forwarding engine: 64K hardware entries
Router ACLs (RACLs): enforced for all traffic crossing a Layer 3 interface in a specified direction (IPv4, ARP RACLs supported)
VLAN ACLs (VACLs): enforced for all traffic in the VLAN (IPv4, MAC VACLs supported)
Port ACLs (PACLs): enforced for all traffic input on a Layer 2 interface (IPv4, MAC PACLs supported)
ACL Architecture
ACL manager receives policy via configuration
ACL manager distributes policies to ACL/QoS clients on I/O modules
Clients perform ACL merge and program ACEs in Classification (CL) TCAM in forwarding engines
n7010# sh processes cpu | egrep aclmgr|PID
PID    Runtime(ms)  Invoked    uSecs  1Sec  Process
3589   1662         516430000  0      0     aclmgr
n7010#
module-9# sh processes cpu | egrep aclqos
1532   9885         671437     14     0.0   aclqos
module-9#
[Diagram: CLI/XML configuration enters the ACL Manager on the supervisor engine, which distributes policy to ACL/QoS clients on each I/O module; each client programs the hardware CL TCAM]
ACL CL TCAM Lookup
Packet header: SIP: 10.1.1.1, DIP: 10.2.2.2, Protocol: TCP, SPORT: 33992, DPORT: 80

Security ACL:
ip access-list example
  permit ip any host 10.1.2.100
  deny ip any host 10.1.68.44
  deny ip any host 10.33.2.25
  permit tcp any any eq 22
  deny tcp any any eq 23
  deny udp any any eq 514
  permit tcp any any eq 80
  permit udp any any eq 161

Forwarding engine generates TCAM lookup key from the packet headers (source/dest IPs, protocol, L4 ports, etc.):
SIP      | DIP      | Protocol | SPORT | DPORT
10.1.1.1 | 10.2.2.2 | 06       | 84C8  | 0050

The lookup key is compared against the CL TCAM entries (x = mask); a hit in the CL TCAM returns the associated result in the CL SRAM:

CL TCAM                                         CL SRAM
xxxxxxxx | 10.1.2.100 | xx | xxxx | xxxx        Permit
xxxxxxxx | 10.1.68.44 | xx | xxxx | xxxx        Deny
xxxxxxxx | 10.33.2.25 | xx | xxxx | xxxx        Deny
xxxxxxxx | xxxxxxxx   | 06 | xxxx | 0016        Permit
xxxxxxxx | xxxxxxxx   | 06 | xxxx | 0017        Deny
xxxxxxxx | xxxxxxxx   | 11 | xxxx | 0202        Deny
xxxxxxxx | xxxxxxxx   | 06 | xxxx | 0050  HIT!  Permit
xxxxxxxx | xxxxxxxx   | 11 | xxxx | 00A1        Permit

Result affects final packet handling
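The first-match semantics of the CL TCAM lookup can be modeled in software. This is a sketch, not hardware behavior: real TCAM compares all entries in parallel and returns the highest-priority hit, while the loop below models the same result order; entry encoding and the `implicit-deny` fallback name are assumptions for illustration.

```python
# Software model of the CL TCAM / CL SRAM pair shown above.
# Each entry: (sip, dip, proto, sport, dport) where None = masked ("x"),
# paired with the result that would live in CL SRAM.
TCAM = [
    ((None, "10.1.2.100", None, None, None), "permit"),
    ((None, "10.1.68.44", None, None, None), "deny"),
    ((None, "10.33.2.25", None, None, None), "deny"),
    ((None, None, 6,  None, 22),  "permit"),   # tcp eq 22
    ((None, None, 6,  None, 23),  "deny"),     # tcp eq 23
    ((None, None, 17, None, 514), "deny"),     # udp eq 514
    ((None, None, 6,  None, 80),  "permit"),   # tcp eq 80
    ((None, None, 17, None, 161), "permit"),   # udp eq 161
]

def cl_lookup(key):
    """Return the CL SRAM result for the first (highest-priority) hit."""
    for entry, result in TCAM:
        if all(e is None or e == k for e, k in zip(entry, key)):
            return result
    return "implicit-deny"

# SIP 10.1.1.1, DIP 10.2.2.2, TCP (6), sport 33992, dport 80
print(cl_lookup(("10.1.1.1", "10.2.2.2", 6, 33992, 80)))  # -> permit
```

The example packet misses the host entries, then hits the `tcp any any eq 80` entry, matching the HIT! row in the table above.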
Displaying Classification Resources
show hardware access-list resource utilization module <mod>
n7010# sh hardware access-list resource utilization module 9
         Hardware                Used    Free    Percent Utilization
-----------------------------------------------------
Tcam 0, Bank 0                   1       16383   0.000
Tcam 0, Bank 1                   4121    12263   25.000
Tcam 1, Bank 0                   4013    12371   24.000
Tcam 1, Bank 1                   4078    12306   24.000
LOU                              2       102     1.000
Both LOU Operands                0
Single LOU Operands              2
TCP Flags                        0       16      0.000
Protocol CAM                     4       3       57.000
Mac Etype/Proto CAM              0       14      0.000
Non L4op labels, Tcam 0          3       6140    0.000
Non L4op labels, Tcam 1          3       6140    0.000
L4 op labels, Tcam 0             0       2047    0.000
L4 op labels, Tcam 1             1       2046    0.000
n7010#
Agenda
Chassis Architecture Supervisor Engine Architecture I/O Module Architecture Forwarding Engine Architecture Fabric Architecture Layer 2 Forwarding IP Forwarding IP Multicast Forwarding ACLs QoS NetFlow
Quality of Service
Comprehensive LAN QoS feature set
Ingress and egress queuing and scheduling: applied in I/O module port ASICs
Ingress and egress mutation, classification, marking, policing: applied in I/O module forwarding engines
All configuration through Modular QoS CLI (MQC): all QoS features applied using class-maps/policy-maps/service-policies
QoS Architecture
QoS manager receives policy via configuration
QoS manager distributes policies to ACL/QoS clients on I/O modules
Clients perform ACL merge and program hardware:
- ACEs in Classification (CL) TCAM in forwarding engines
- Queuing policies in I/O module port ASICs
n7010# sh processes cpu | egrep qos|PID
PID    Runtime(ms)  Invoked   uSecs  1Sec  Process
3849   1074         66946870  0      0     ipqosmgr
n7010#
module-9# sh processes cpu | egrep aclqos
1532   9885         671437    14     0.0   aclqos
module-9#
[Diagram: CLI/XML configuration enters the QoS Manager on the supervisor engine, which distributes policy to ACL/QoS clients on each I/O module; each client programs the hardware CL TCAM and the I/O module port ASICs]
Port QoS 8-Port 10G Module
Buffers: 96MB ingress per port, 80MB egress per port
Queue structure: 8q2t ingress, 1p7q4t egress
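Queue notation such as 1p7q4t means one strict-priority queue plus seven standard queues, each with four drop thresholds. A toy model of how such an egress scheduler services its queues follows; the weights and the simplified DWRR-style credit scheme are assumptions for illustration, not the port ASIC's actual arbitration logic.

```python
from collections import deque

class Scheduler1p7q:
    """Toy 1p7q-style scheduler: strict priority first, then weighted
    round-robin over the seven standard queues (weights must be >= 1)."""
    def __init__(self, weights):
        self.pq = deque()                        # strict-priority queue
        self.queues = [deque() for _ in weights] # seven standard queues
        self.weights = weights
        self.credits = list(weights)

    def enqueue(self, q, pkt):
        (self.pq if q == "priority" else self.queues[q]).append(pkt)

    def dequeue(self):
        if self.pq:                              # priority always wins
            return self.pq.popleft()
        for i, q in enumerate(self.queues):      # simple weighted pass
            if q and self.credits[i] > 0:
                self.credits[i] -= 1
                return q.popleft()
        self.credits = list(self.weights)        # replenish credits
        return self.dequeue() if any(self.queues) else None

s = Scheduler1p7q([2, 1, 1, 1, 1, 1, 1])
s.enqueue(0, "bulk-1")
s.enqueue("priority", "voice-1")
print(s.dequeue())  # -> voice-1 (strict priority serviced first)
print(s.dequeue())  # -> bulk-1
```

Voice-class traffic in the priority queue is always drained before the weighted queues are considered, which is why the priority queue is sized for low-latency classes only.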
8-Port 10G Module Buffering
[Diagram: on the 8-port 10G module, each port has a 96MB ingress buffer with 8q2t queues and an 80MB egress buffer with 1p7q4t queues, implemented in the port ASIC alongside the replication engine]
Port QoS 32-Port 10G Module
Buffers:
Ingress (two-stage ingress buffering): dedicated mode: 1MB per port + 65MB per port; shared mode: 1MB per port + 65MB per port group
Egress: dedicated mode: 80MB per port; shared mode: 80MB per port group
Queue structure: 8q2t + 2q1t ingress, 1p7q4t egress
32-Port 10G Module Buffering Shared Mode
[Diagram: shared mode: each of ports 1,3,5,7 in the port group has a dedicated 1MB first-stage ingress buffer (8q2t); a 4:1 mux feeds a 65MB second-stage ingress buffer (2q1t, fixed) shared by the port group; the port group also shares an 80MB egress buffer (1p7q4t)]
32-Port 10G Module Buffering Dedicated Mode
[Diagram: dedicated mode: port 1 of the port group has a 1MB first-stage ingress buffer (8q2t), the full 65MB second-stage ingress buffer (2q1t, fixed), and the full 80MB egress buffer (1p7q4t); the 4:1 mux carries only port 1]
Port QoS 48-Port 1G Modules
Buffers: 7.56MB ingress per port, 6.15MB egress per port
Queue structure: 2q4t ingress, 1p3q4t egress
48-Port 1G Modules Buffering
[Diagram: on the 48-port 1G modules, each port ASIC serves 12 ports; every port has a 7.6MB ingress buffer (2q4t) and a 6.2MB egress buffer (1p3q4t)]
Marking and Policing
Classification uses CL TCAM in forwarding engine to match traffic
After classification, traffic can be marked or policed
Marking policies statically set QoS values for each class
Policing performs markdown and/or policing (drop)
Policers use classic token-bucket scheme; uses Layer 2 frame size when determining rate
Note: policing performed on per-forwarding-engine basis: shared interfaces (such as SVI/EtherChannel) and egress policies could be policed at <policing rate> * <number of forwarding engines>
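The classic token-bucket scheme can be sketched in two-rate (cir/pir) form, matching the "police cir ... pir ..." example shown later in this section. This is a simplified software model, not NX-OS internals: rates are in bytes per second, the conform/exceed/violate action names mirror the policy output, and, as noted above, the Layer 2 frame size is what gets charged against the buckets.

```python
import time

class TwoRatePolicer:
    """Simplified two-rate token-bucket policer (cir/bc + pir/be)."""
    def __init__(self, cir, bc, pir, be):
        self.cir, self.pir = cir, pir      # committed/peak rates (bytes/s)
        self.bc, self.be = bc, be          # bucket depths (bytes)
        self.tc, self.tp = bc, be          # buckets start full
        self.last = time.monotonic()

    def police(self, frame_len):
        now = time.monotonic()
        elapsed, self.last = now - self.last, now
        # refill both buckets, capped at their depths
        self.tc = min(self.bc, self.tc + self.cir * elapsed)
        self.tp = min(self.be, self.tp + self.pir * elapsed)
        if frame_len > self.tp:
            return "violate"               # e.g. action: drop
        self.tp -= frame_len
        if frame_len > self.tc:
            return "exceed"                # e.g. action: markdown
        self.tc -= frame_len
        return "conform"                   # e.g. action: transmit

p = TwoRatePolicer(cir=250_000, bc=1000, pir=500_000, be=1000)
print(p.police(500))  # -> conform (buckets start full)
```

A frame larger than the remaining peak bucket is a violation regardless of the committed bucket, which is why burst sizes (bc/be) matter as much as the rates themselves.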
QoS Classification ACLs
QoS CL TCAM Lookup
Packet header: SIP: 10.1.1.1, DIP: 10.2.2.2, Protocol: TCP, SPORT: 33992, DPORT: 80

QoS classification ACLs:
ip access-list police
  permit ip any 10.3.3.0/24
  permit ip any 10.4.12.0/24
ip access-list remark-dscp-32
  permit udp 10.1.1.0/24 any
ip access-list remark-dscp-40
  permit tcp 10.1.1.0/24 any
ip access-list remark-prec-3
  permit tcp any 10.5.5.0/24 eq 23

Forwarding engine generates TCAM lookup key (source/dest IPs, protocol, L4 ports, etc.):
SIP      | DIP      | Protocol | SPORT | DPORT
10.1.1.1 | 10.2.2.2 | 06       | 84C8  | 0050

The lookup key is compared against the CL TCAM entries (x = mask); a hit in the CL TCAM returns the associated result in the CL SRAM:

CL TCAM                                          CL SRAM
xxxxxxxx  | 10.3.3.xx  | xx | xxxx | xxxx        Policer ID 1
xxxxxxxx  | 10.4.12.xx | xx | xxxx | xxxx        Policer ID 1
10.1.1.xx | xxxxxxxx   | 11 | xxxx | xxxx        Remark DSCP 32
10.1.1.xx | xxxxxxxx   | 06 | xxxx | xxxx  HIT!  Remark DSCP 40
xxxxxxxx  | 10.5.5.xx  | 06 | xxxx | 0017        Remark IP Prec 3

Result affects final packet handling
Monitoring QoS Service Policies
show policy-map interface [[<interface>] [type qos|queuing]]|brief]
n7010# show policy-map interface e9/1
Global statistics status : enabled

Ethernet9/1

  Service-policy (qos) input:   mark
    policy statistics status:   enabled

    Class-map (qos):   udp-mcast (match-all)
      432117468 packets
      Match: access-group multicast
      set dscp cs4

    Class-map (qos):   udp (match-all)
      76035663 packets
      Match: access-group other-udp
      police cir 2 mbps bc 1000 bytes pir 4 mbps be 1000 bytes
        conformed 587624064 bytes, 3999632 bps action: transmit
        exceeded 293811456 bytes, 1999812 bps action: set dscp dscp table cir-markdown-map
        violated 22511172352 bytes, 153221133 bps action: drop
n7010#
Agenda
Chassis Architecture Supervisor Engine Architecture I/O Module Architecture Forwarding Engine Architecture Fabric Architecture Layer 2 Forwarding IP Forwarding IP Multicast Forwarding ACLs QoS NetFlow
NetFlow
NetFlow table is 512K entries (490K effective), shared between ingress/egress NetFlow
Hardware NetFlow entry creation: CPU not involved in NetFlow entry creation/update
All modules have independent NetFlow table
Full and sampled NetFlow supported by hardware
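Hardware flow creation amounts to keying a table on the flow tuple and bumping counters per packet with no CPU involvement. A toy software model (the function name, tuple layout, and counter pair are assumptions for illustration, not the hardware NetFlow table format):

```python
# Toy model of per-module hardware flow creation: the forwarding engine
# derives a flow key from the packet and updates packet/byte counters
# in the NetFlow table; the CPU never touches the per-packet path.

flows = {}  # models one I/O module's NetFlow table

def account(intf, sip, dip, proto, sport, dport, length):
    key = (intf, sip, dip, proto, sport, dport)
    pkts, byts = flows.get(key, (0, 0))     # new key = flow creation
    flows[key] = (pkts + 1, byts + length)  # hardware counter update

account("e9/1", "10.1.1.2", "10.1.2.2", 6, 1024, 1024, 128)
account("e9/1", "10.1.1.2", "10.1.2.2", 6, 1024, 1024, 128)
print(len(flows), flows[("e9/1", "10.1.1.2", "10.1.2.2", 6, 1024, 1024)])
# -> 1 (2, 256)
```

Two packets of the same flow produce one table entry with aggregated counters, which is what the `sh hardware flow ip` output later in this section displays per flow.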
NetFlow Architecture
NetFlow manager receives configuration via CLI/XML
NetFlow manager distributes configuration to NetFlow clients on I/O modules
NetFlow clients apply policy to hardware
n7010# sh processes cpu | egrep nfm|PID
PID    Runtime(ms)  Invoked    uSecs  1Sec  Process
24016  1463         735183570  0      0     nfm
n7010#
module-9# sh processes cpu | egrep nfp
1538   68842        424290     162    0.0   nfp
module-9#
[Diagram: CLI/XML configuration enters the NetFlow Manager on the supervisor engine, which distributes configuration to NetFlow clients (NF-C) on each I/O module; the clients program the hardware NF table, where flows are created in hardware]
Full vs. Sampled NetFlow
NetFlow configured per-direction and per-interface: ingress and/or egress on per-interface basis
Each interface can collect full or sampled flow data
Full NetFlow: accounts for every packet of every flow on interface, up to capacity of NetFlow table
Sampled NetFlow: accounts for M in N packets on interface, up to capacity of NetFlow table
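The "M in N" selection of sampled NetFlow can be sketched as follows. This is an illustrative model only (it samples M packets at random from each window of N; actual hardware samplers may pick packets differently within a window):

```python
import random

def sample_m_in_n(packets, m, n, seed=1):
    """Yield the sampled subset: from each window of n packets, pick m."""
    rng = random.Random(seed)
    for start in range(0, len(packets), n):
        window = packets[start:start + n]
        yield from sorted(rng.sample(window, min(m, len(window))))

packets = list(range(1000))
sampled = list(sample_m_in_n(packets, m=1, n=100))
print(len(sampled))  # -> 10 (1 of every 100 packets accounted)
```

Sampling cuts the flow-table and export load roughly by N/M at the cost of estimated rather than exact per-flow counters.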
Viewing NetFlow Records
show hardware flow ip [detail] module <mod>
n7010# sh hardware flow ip interface e9/1 module 9

D - Direction; IF - Intf/VLAN; L4 Info - Protocol:Source Port:Destination Port
TCP Flags: Ack, Flush, Push, Reset, Syn, Urgent

D IF    SrcAddr         DstAddr         L4 Info         PktCnt     TCP Flags
-+-----+---------------+---------------+---------------+----------+----------
I 9/1   010.001.001.002 010.001.002.002 006:01024:01024 0001403880 A . . . S .
I 9/1   010.001.001.003 010.001.002.003 006:01024:01024 0001403880 A . . . S .
I 9/1   010.001.001.004 010.001.002.004 006:01024:01024 0001403880 . . . . S .
<>
n7010# sh hardware flow ip interface e9/1 detail module 9

D - Direction; IF - Intf/VLAN; L4 Info - Protocol:Source Port:Destination Port
TCP Flags: Ack, Flush, Push, Reset, Syn, Urgent; FR - FRagment; FA - FastAging
SID - Sampler/Policer ID; AP - Adjacency/RIT Pointer
CRT - Creation Time; LUT - Last Used Time; NtAddr - NT Table Address

D IF    SrcAddr         DstAddr         L4 Info         PktCnt     TCP Flags
        ByteCnt       TOS FR FA SID   AP       CRT   LUT   NtAddr
-+-----+---------------+---------------+---------------+----------+----------
I 9/1   010.001.001.002 010.001.002.002 006:01024:01024 0001706722 A . . . S .
        0000218460416 000 N     0x000 0x000000 02168 02571 0x000331
n7010#
NetFlow Data Export
[Diagram: NetFlow Data Export. On each I/O module, the forwarding engine creates flows in the hardware NetFlow table; aged flows are passed to the LC CPU, which generates NetFlow v5 or v9 export packets. Export packets reach the NetFlow collector either inband (through the I/O module VOQs and fabric ASIC) or via the switched EOBC to the supervisor engine main CPU and out the mgmt0 management Ethernet interface.]
Conclusion
You should now have a thorough understanding of the Nexus 7000 switching architecture, I/O module design, packet flows, and key forwarding engine functions. Any questions?
Q and A
Complete Your Online Session Evaluation
Give us your feedback and you could win fabulous prizes. Winners announced daily. Receive 20 Passport points for each session evaluation you complete. Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.
Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.