Although you have several alternatives to STP that are available for your data
center, STP is still a fairly popular technology in the data center environment. It is a
Layer 2 control plane protocol that runs on switches to ensure that you do not create
topology loops when you have these redundant paths in the network. STP features
on the Cisco Nexus platform are similar to STP features on Cisco IOS platforms and
include a number of enhancements that improve the basic STP functionality.
Rapid PVST+ Overview
Rapid Per VLAN Spanning Tree Plus (Rapid PVST+) is an updated, faster
implementation of STP that allows you to create one spanning tree topology for each
VLAN. It is the default STP mode on Cisco Nexus switches. Rapid PVST+ is enabled
by default on the default VLAN (VLAN1) and on all newly created VLANs in software.
It provides for rapid recovery of connectivity following the failure of a switch, a switch port, or a LAN.
Rapid Spanning Tree Protocol (RSTP) IEEE 802.1w represents the evolution of the
IEEE 802.1D standard. The 802.1D terminology remains basically unchanged in
802.1w, as do most parameters, which makes it easier for users to configure the new
protocol.
Per VLAN Spanning Tree Plus (PVST+) allows the definition of one spanning-tree
instance per VLAN. Normal PVST+ relies on the use of the older 802.1D STP to re-
converge the STP domain in the case of link failures.
Rapid PVST+ allows the use of IEEE 802.1w with Cisco PVST to provide a much
faster convergence per VLAN.
Note
Cisco Nexus switches support Rapid PVST+ and Multiple Spanning Tree (MST). You
can run either Rapid PVST+ or MST on a switch, but not both simultaneously.
Note
Cisco Nexus switches do not run the non-rapid version of STP. However, these devices interoperate with switches that are running non-rapid STP. You will encounter this situation when you connect your Cisco Nexus switches to Cisco IOS switches that have the non-rapid version of STP enabled or, more often, when you connect to switches from other vendors. A Cisco Nexus switch that receives 802.1D Bridge Protocol Data Units (BPDUs) on a port reverts to the legacy 802.1D mode of operation on that port. It is recommended that you avoid using switches that can run only the non-rapid version of STP.
If the default spanning tree mode on the Cisco Nexus switch is changed, you can revert to Rapid PVST+ using the following commands, which also set the current switch as the primary root bridge for VLAN 1 and the odd-numbered VLANs 101 through 109, and as the secondary root bridge for the even-numbered VLANs 100 through 108:
spanning-tree mode rapid-pvst
!
spanning-tree vlan 1,101,103,105,107,109 root primary
spanning-tree vlan 100,102,104,106,108 root secondary
Use the following command to display detailed information for the current spanning
tree configuration:
Switch#show spanning-tree summary
Switch is in rapid-pvst mode
Root bridge for: VLAN0001, VLAN0101, VLAN0103, VLAN0105, VLAN0107,
VLAN0109
<... output omitted ...>
STP Extensions
Cisco has added extensions to STP that enhance loop prevention, protect against
user configuration errors, and provide better control over the protocol parameters.
Available STP extensions are: spanning-tree edge ports (previously known as
PortFast), BPDU filter, BPDU guard, loop guard, root guard, and bridge assurance.
All these extensions can be used with both Rapid PVST+ and MST.
STP Edge Port
Configuring a Layer 2 access port as a spanning-tree Edge Port causes the port to
bypass the spanning tree listening and learning states and move to the forwarding
state immediately. This feature was formerly known as PortFast.
Spanning-tree edge ports are typically deployed on Layer 2 access ports that are
connected to a single workstation or server. These ports can be considered safe
from topology loops. This design allows those connected devices to access the
network immediately without waiting for STP convergence to take place. Edge ports
also do not generate topology change BPDUs when the link state changes, which helps reduce STP processing. Edge ports can be either access ports or trunk ports.
Note
If you enable the spanning tree edge port feature on a port that is connected to a
switch, you might inadvertently create a bridging loop.
Use the following commands to configure an STP edge port on an interface:
interface Ethernet1/1
spanning-tree port type edge
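Because edge ports can also be trunk ports, for example toward a virtualization host that carries several VLANs, the trunk variant of the command is used in that case. A minimal sketch, with the interface name and scenario assumed:
interface Ethernet1/5
spanning-tree port type edge trunk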
Securing STP Edge Port with BPDU Guard
BPDU Guard protects the integrity of ports that are configured as STP edge ports. If
any BPDU is received on an STP edge port, that port is put into an error-disabled
state. The port is shut down and must be manually re-enabled or automatically
recovered through the error-disabled timeout function.
You can enable BPDU guard globally on all operational spanning tree edge ports
using the spanning-tree port type edge bpduguard default command. Global
BPDU guard is disabled by default.
Use the following commands to enable BPDU guard on an interface:
interface Ethernet1/1
spanning-tree bpduguard enable
You should always enable BPDU guard on all ports that are configured as STP edge ports. This implementation prevents someone from adding a switch to a switch port that is dedicated to an end device.
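If you prefer automatic recovery over manually re-enabling an error-disabled port, the switch can recover the port after a timeout. A minimal sketch, assuming the platform supports error-disable recovery for the BPDU guard cause (the 300-second interval is only an example value):
errdisable recovery cause bpduguard
errdisable recovery interval 300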
BPDUs are sent on all ports, including edge ports. You should always run STP to
prevent loops. But there are special cases where you need to prevent BPDUs from
being sent out.
You can achieve that by using the BPDU filter feature, following these guidelines:
● A BPDU filter allows you to avoid transmitting BPDUs on ports configured as STP edge ports that are connected to an end system.
● A BPDU filter is usually used as a workaround.
● Do not use a BPDU filter unless you absolutely need to use it.
One example where you would configure a BPDU filter is in company networks that have multiple administrators who do not want their networks to share BPDUs. This is a bad implementation practice; however, you will find it in real-life scenarios.
The following figure illustrates a topology that can benefit from the BPDU filter
option:
You can enable BPDU filtering globally on all operational spanning tree edge ports
using the spanning-tree port type edge bpdufilter default command. Global
BPDU filtering is disabled by default.
Use the following commands to enable BPDU filtering on an interface:
interface Ethernet1/1
spanning-tree bpdufilter enable
Protecting STP Topology with Root Guard
When the spanning tree topology is calculated, all switches determine a loop-free
best path toward the elected root bridge. You should choose the root bridge so that the maximum bandwidth (the highest-capacity links) remains available in the topology. In the figure, switches DSW1 and DSW2 are the core of the network.
DSW1 is the root bridge for VLAN 1. ASW is an access layer switch. The link
between DSW2 and ASW is blocking on the ASW side.
If ASW becomes the root bridge, the link between DSW1 and DSW2 would be
blocked and not passing traffic. This behavior is clearly unwanted, since all traffic
between the core switches must now go through ASW, an access layer switch.
When you enable Root Guard on a port, Root Guard does not allow that port to
become a root port. If a root guard-enabled port receives a BPDU that would make
this port a root port, then that port will be moved to a root-inconsistent state, and will
not forward traffic.
Use the following commands to enable root guard on an interface:
interface Ethernet1/1
spanning-tree guard root
Problem with Unidirectional Links
With bidirectional links, traffic flows in both directions (receive/transmit). If for some
reason one direction of traffic flow fails, the result is a unidirectional link. Because
this prevents spanning tree BPDUs from being properly propagated within the
topology, unidirectional links can cause a Layer 2 loop.
The following figure depicts what will happen if the transmit circuitry in a Gigabit Interface Converter (GBIC) or small form-factor pluggable (SFP) module fails:
Mechanisms for Loop Protection
There are three mechanisms that you will find on Cisco Nexus switches that you can
use to protect against inadvertent loops: loop guard, bridge assurance, and
UniDirectional Link Detection (UDLD).
Loop Guard detects if an active port is no longer receiving BPDUs, and moves that
port into the STP loop-inconsistent blocking state. When the port starts receiving
BPDUs again, indicating that the unidirectional link failure is no longer present, loop
guard removes the blocking state on the port.
You can enable loop guard globally on all spanning tree normal and network ports
using the spanning-tree loopguard default command. Global loop guard is
disabled by default.
Use the following commands to enable loop guard on an interface:
interface Ethernet1/1
spanning-tree guard loop
The Bridge Assurance feature, applicable with Rapid PVST+ and MST, is an
extension of the idea that is used by loop guard. When bridge assurance is activated
on an operational port, this port always sends BPDUs, regardless of the port role.
BPDUs essentially become a hello mechanism between pairs of interconnected
switches. A port that is configured with bridge assurance is required to receive
BPDUs. If a port does not receive BPDUs, it goes into the blocking state. Thus, both
ends of the link must have bridge assurance enabled.
Bridge assurance is enabled by default and you can only disable it globally. Also,
bridge assurance is enabled only on spanning tree network ports that are point-to-
point links. You can disable bridge assurance using the no spanning-tree bridge
assurance command.
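Bridge assurance operates on spanning tree network ports, so inter-switch links must be configured with that port type on both ends. A minimal sketch, with the interface name assumed:
interface Ethernet1/2
spanning-tree port type network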
Note
If your network devices support bridge assurance, use it instead of loop guard. Do
not use loop guard and bridge assurance at the same time. If a unidirectional problem exists before the link comes up, loop guard will not detect the issue, but bridge assurance will.
UniDirectional Link Detection (UDLD) is a Layer 2 protocol that works with Layer 1
mechanisms to determine the physical status of a link. The switch periodically transmits UDLD packets on UDLD-enabled interfaces. If the packets are not
echoed back within a specific time frame, the link is flagged as unidirectional and the
interface is error-disabled. Devices on both ends of the link must support UDLD for
the protocol to successfully identify and disable unidirectional links.
By default, the UDLD feature is disabled. To enable UDLD on fiber-optic LAN ports,
use the feature udld command.
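The following sketch enables the UDLD feature globally and explicitly enables UDLD on an individual port; the interface name is assumed for illustration:
feature udld
!
interface Ethernet1/3
udld enable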
Note
It is recommended that you use both UDLD and bridge assurance (or loop guard).
Bridge assurance will protect your network against STP failures and UDLD will
protect your network against mis-wiring. You can use bridge assurance in a
multivendor environment. UDLD is a proprietary protocol that you can only use with
Cisco equipment.
Port channels or aggregation of ports is one of the core technologies that you can
use in Ethernet-based networks. This technology enables you to bundle multiple
physical links into a single logical link, which improves resiliency and optimizes
bandwidth utilization.
However, classic port aggregation technology has a limitation: it only allows aggregation of links between two switches. To form a port channel from one device to two different devices, you can use an evolution of port channel technology called virtual port channel (vPC).
Port Channels
To add resiliency against link failures and to increase the available bandwidth
between two devices, you can provision multiple physical links between the devices.
However, without a port channel, control plane protocols, such as STP, or routing
protocols, such as Open Shortest Path First (OSPF), treat the links as individual
links. In the case of STP, this process results in blocked ports. Although the
additional links add resiliency and bandwidth, the bandwidth between the two
devices is not fully utilized. In the case of routing protocols, the routing protocol can
use additional links for load balancing. This process, however, requires a routing
adjacency to be formed for every link, which increases routing protocol overhead.
When the links are bundled into a port channel, control plane protocols (such as STP) and routing protocols treat the port channel as
a single link. Spanning tree will not block the links that are part of the port channel,
and routing protocols will only form a single routing adjacency across the port
channel.
Traffic that is switched or routed to a port channel interface is balanced across
individual physical links through a hashing mechanism. The hashing mechanism
uses a configurable selection of fields in the packet headers as input. For example,
you can choose to balance the load across the links by checking just the destination
MAC address, or the source and destination TCP port numbers.
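The hashing input is configured globally with the port-channel load-balance command. The exact keywords depend on the Cisco Nexus platform and software release; the following sketch assumes a Nexus 9000-style CLI and balances traffic on the source and destination IP addresses and Layer 4 ports:
port-channel load-balance src-dst ip-l4port
You can display the active algorithm with the show port-channel load-balance command.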
Note
Port channels, EtherChannels, and port aggregation all refer to the same group of
technologies that enables you to bond multiple physical links into a virtual one. While
port channels and aggregation of ports are general terms, EtherChannel is a Cisco
brand name for its implementation of this technology.
You can use the port channeling technology to bundle ports of the same type. You
can aggregate either Layer 2 ports, or Layer 3 ports. Layer 3 ports can be configured
on router platforms or multilayer switches. If you want to create a Layer 2 port channel, all member ports must be in the same switchport mode, either access or trunk.
In the next figure, port channel links are used to connect several switches, and pairs
of ports are used to create port channel bundles. Because a switch detects each port
channel as one logical connection, the switch will be able to use both ports of each
port channel link at the same time.
Note
The classic port channel technology has always been limited to the aggregation of
links that run between two devices, and has been a point-to-point technology.
Port channel benefits include the following:
● Optimized bandwidth usage
● Improved network convergence
● Spanning-tree mitigation
● Resiliency against physical link failures
The following figure illustrates the port channel physical and logical view:
Cisco Nexus Series Switches support the bundling of interfaces into a port channel.
The maximum number of ports in a channel depends on the exact switch hardware
and software combination. For example, on the Cisco Nexus 9000 Series Switches,
you can bundle up to 32 active links into a port channel.
On the Cisco Nexus 9000 platform, port channels can be configured on Layer 2 or
Layer 3 interfaces.
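As an illustration of the two options, the following sketch defines a Layer 2 trunking port channel and a Layer 3 routed port channel; the port channel numbers and the IP address are assumptions for the example:
interface port-channel 10
switchport
switchport mode trunk
!
interface port-channel 20
no switchport
ip address 10.1.1.1/30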
Port Channels and LACP
When you create a port channel, the default channel mode is set to on, which
defines a static port channel. Cisco Nexus series switches also support the Link
Aggregation Control Protocol (LACP) for negotiating link bundling.
Note
LACP is part of the IEEE 802.1AX specification. Because LACP is an IEEE standard,
you can use it to facilitate port channels in mixed switch environments. LACP checks
for configuration consistency and manages link additions and failures between two
switches. It ensures that when you create a port channel, all ports have the same
type of configuration: speed, duplex setting, and VLAN information. Any port modification after the creation of the channel also changes the other channel ports.
LACP packets are exchanged between ports in passive or active mode. Both modes
allow LACP to negotiate between ports to determine if they can form a port channel.
The successful negotiation is based on criteria such as the port speed and the trunking state. The passive mode is useful when you do not know whether the remote system or partner supports LACP.
The following table summarizes different port channel and LACP options:
Passive (LACP):
● Responds to LACP packets that it receives
● Does not initiate LACP negotiation
Active (LACP):
● Initiates negotiations with other ports by sending LACP packets
On (static):
● Does not send any LACP packets
● Does not join any LACP channel groups
● Becomes an individual link with that interface
Ports can form an LACP port channel when they are in different LACP modes, as
long as the modes are compatible, as described in the following table:
          Passive   Active   On
Passive   No        Yes      No
Active    Yes       Yes      No
On        No        No       Yes
The LACP feature is disabled by default, so you must enable LACP before you begin
LACP configuration. You cannot disable LACP while any LACP configuration is
present.
After you enable LACP, you can configure the channel mode for each individual link
in the LACP port channel as active or passive. This channel configuration mode
allows the link to operate with LACP. Use the following commands to enable LACP
and configure an interface to be in a port channel:
feature lacp
!
interface type slot/port
channel-group number mode { active | on | passive }
The number in the channel-group command specifies the port channel to which this
interface is associated. When you run port channels with no configured protocol, the
default port-channel mode is on.
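For example, the following sketch bundles two physical interfaces into port channel 10 using LACP active mode; the interface and port channel numbers are assumed, and you can verify the result with the show port-channel summary command:
feature lacp
!
interface Ethernet1/1-2
channel-group 10 mode active
!
interface port-channel 10
switchport mode trunk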
Port channels enable you to aggregate (bond) multiple interfaces, with traffic load-balanced across the individual physical links. The limitation is that when a port channel is established between a pair of switches, all links in the port channel must connect the same two devices. In modern data centers, a switch, a router, or a computing node is often connected to two different switches, which cannot be done with traditional port channels.
Virtual Port Channels
A pair of Cisco Nexus switches that use vPC present themselves to other network
devices as a single logical Layer 2 switch. However, the two switches remain as two
separately managed switches with independent management and control planes.
The vPC architecture includes modifications to the data plane of the switches to
ensure optimal packet forwarding. The vPC architecture also includes control plane
components to exchange state information between the switches and allow the two
switches to appear as a single logical Layer 2 switch to the downstream devices.
For control plane purposes, the two vPC peer switches should present themselves
as a single logical switch to the Layer 2 domain. For LACP, this appearance is
accomplished by generating the LACP system ID from a reserved pool of MAC
addresses, which is then combined with the vPC domain ID. If the downstream
device in the vPC did not see a single device on the remote end of the port channel,
the port channel would not form. For STP, the vPC primary switch is responsible for
generating and processing BPDUs and uses its own bridge ID for the BPDUs. The
vPC secondary switch relays BPDU messages but does not itself generate BPDUs
for the vPCs.
A vPC provides the following benefits:
● Allows a single device to use a port channel across two upstream devices
● Eliminates STP blocked ports
● Provides a loop-free topology
● Uses all available uplink bandwidth
● Provides fast convergence if either the link or a device fails
● Provides link-level resiliency
● Helps ensure high availability
Between the pair of vPC peer switches, an election is held to determine the primary
and secondary vPC device. The election is not preemptive. The vPC role determines
which of the two switches is responsible for the generation and processing of control
plane information for the vPCs. The election also controls the vPC operation in
failure scenarios.
The primary and secondary roles determine the behavior of the vPC peer switches in
certain failure scenarios; most notably, if there is a peer link failure. If the vPC peer
link fails, the vPC primary switch determines, through the peer-keepalive link, if the
vPC secondary peer switch is still operational. If the vPC secondary peer switch is
operational, the primary switch instructs the secondary switch to suspend all vPC
member ports. As a result, the secondary switch also shuts down all switch virtual
interfaces (SVIs) that are associated with any VLANs that are allowed on the vPC
peer link.
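You can verify which role a switch currently holds, together with its role priority and the vPC system MAC address, using the following command (output omitted):
show vpc role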
Cisco Fabric Services over Ethernet is the primary control plane protocol over the
vPC peer link. It performs several functions:
● Synchronizes MAC address table entries
● Synchronizes Internet Group Management Protocol (IGMP) snooping entries
● Communicates essential configuration information to ensure configuration
consistency between the vPC peer switches
● Tracks the vPC status on the peer
● Synchronizes ARP tables (for Layer 3 vPC peers)
vPC Building Blocks
The vPC architecture consists of these components:
● vPC peers: The core of the vPC architecture is a pair of Cisco Nexus
switches. This pair of switches acts as a single logical switch.
● vPC peer link: The vPC peer link is the most important connectivity element in
the vPC system. This link is used to create the illusion of a single control
plane by forwarding BPDUs and LACP packets to the primary vPC switch
from the secondary vPC switch. The peer link is also used to synchronize
MAC address tables between the vPC peers and to synchronize IGMP entries
for IGMP snooping. The peer link provides the necessary transport for
multicast traffic and for the traffic of orphaned ports. When a vPC device is
also a Layer 3 switch, the peer link also carries Hot Standby Router Protocol
(HSRP) packets.
● Cisco Fabric Services: The Cisco Fabric Services protocol is a reliable
messaging protocol that is designed to support rapid stateful configuration
message passing and synchronization. The vPC peers use the Cisco Fabric
Services protocol to synchronize data plane information and implement
necessary configuration checks. vPC peers must synchronize the Layer 2
forwarding table between the vPC peers. This way, if one vPC peer learns a
new MAC address, that MAC address is also programmed in the Layer 2 forwarding table of the other peer device. The Cisco Fabric
Services protocol travels on the peer link and does not require any
configuration by the user. To help ensure that the peer link communication for
Cisco Fabric Services over Ethernet is always available, spanning tree has
been modified to keep the peer-link ports always forwarding. You also use the
Cisco Fabric Services over Ethernet protocol to perform compatibility checks
to validate the compatibility of vPC member ports to form the channel, to
synchronize the IGMP snooping status, to monitor the status of the vPC
member ports, and to synchronize the ARP table.
● vPC peer keepalive link: The peer keepalive link is a logical link that often
runs over an out-of-band (OOB) network. The peer keepalive link provides a
Layer 3 communications path that vPC uses as a secondary test to determine
whether the remote peer is operating properly. The switch does not send data
or synchronization traffic over the vPC peer keepalive link—only IP packets
that indicate that the originating switch is operating and running vPC. The
peer keepalive status is used to determine the status of the vPC peer when
the vPC peer link goes down. In this scenario, it helps the vPC switch to
determine whether the peer link itself has failed or whether the vPC peer has
failed entirely.
● vPC: A vPC is a multichassis EtherChannel (MEC), a Layer 2 port channel
that spans the two vPC peer switches. The downstream device that is
connected on the vPC sees the vPC peer switches as a single logical switch.
The downstream device does not need to support vPC itself. The downstream
device then connects to the vPC peer switches using a regular port channel,
which can either be statically configured or negotiated through LACP.
● vPC domain: The vPC domain includes both vPC peer devices, the vPC peer
keepalive link, vPC peer link, and all port channels in the vPC domain that are
connected to the downstream devices. A numerical vPC domain ID identifies
the vPC. You can have only one vPC domain ID on each device.
● vPC member port: The port on one of the vPC peers that is a member of one
of the vPCs that are configured on the vPC peers.
● Orphan device: The term orphan device refers to any device that you
connected to a vPC domain using regular links instead of connecting it
through a vPC.
● Orphan port: The term orphan port refers to a switch port that you connected
to an orphan device. The term also means vPC ports whose members are all
connected to a single vPC peer. This situation can occur if a device that you
connected to a vPC loses all its connections to one of the vPC peers.
vPC configuration on the Cisco Nexus Switch includes these steps:
● Enable the vPC feature.
● Create a vPC domain and enter vpc-domain mode.
● Configure the vPC peer keepalive link between switches.
● (Optional) Configure system priority.
● (Optional) Configure vPC role priority.
● Create the vPC peer link.
● Move the port channel to the vPC.
Use the following commands to create a vPC on Cisco Nexus Switch:
feature vpc
!
vpc domain 10
peer-keepalive destination 192.168.21.101 source 192.168.21.100 vrf vPC_VRF
!
interface port-channel 1
vpc peer-link
!
interface port-channel 2
vpc 11
Through the vPC domain, you define vPC peer switches that participate in vPC.
When you enter the vPC domain ID, you enter the subconfiguration mode where you
can then configure additional global parameters for the vPC domain. The vPC
domain ID is a value between 1 and 1000 that uniquely identifies the vPC switch
pair.
The peer keepalive link provides an OOB heartbeat between the vPC peer switches and, in this example (which is the recommended approach), is established through a separate virtual routing and forwarding (VRF) instance created specifically for the vPC peer keepalives. The vPC peer link uses port channel 1, which bundles the links between the two switches.
Port channel 2 contains the interface to the downstream device and is moved to the vPC. This port channel must be associated with the port channel on the other vPC switch by using the same vPC number on its port channel interface. The vPC number is unique within the vPC domain and must be identical on the two peer switches.
Note
Keep in mind that the port channel number and vPC number can be different on the
same switch.
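After both peers are configured, you can verify the vPC status and the configuration consistency between the peers with commands such as the following (output omitted):
show vpc brief
show vpc consistency-parameters global
show vpc peer-keepalive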
vPC Guidelines
There are several guidelines and considerations that you need to be aware of when
you are designing vPCs.
● You must pair Cisco Nexus switches of the same type. For example, you can
deploy vPC on a pair of Cisco Nexus 5600 Series Switches or Cisco Nexus
9300 Platform Switches but not on a combination of them.
● A vPC peer link must consist of Ethernet ports with an interface speed of 10
Gbps or higher. It is recommended to use at least two 10-Gigabit Ethernet
ports in dedicated mode on two different I/O modules.
● The vPC peer keepalive link should not run across the vPC peer link.
● A vPC is a per-VDC function on the Cisco Nexus 7000 Series Switches. You
can configure a vPC in multiple VDCs, but the configuration is entirely
independent. A separate vPC peer link and vPC peer keepalive link are
required for each of the VDCs. vPC domains cannot be stretched across
multiple VDCs on the same switch, and all ports for a given vPC must be in
the same VDC.
● A vPC domain by definition consists of a pair of switches that are identified by
a shared vPC domain ID. It is not possible to add more than two switches or
VDCs to a vPC domain.
● You can configure only one vPC domain ID on a single switch or VDC. It is
not possible for a switch or VDC to participate in more than one vPC domain.
● A vPC is a Layer 2 port channel. vPC does not support the configuration of
Layer 3 port channels. Dynamic routing from the vPC peers to the routers that
are connected on a vPC is not supported. It is recommended that you
establish routing adjacencies on separate routed links.
● vPC supports static routing to First Hop Redundancy Protocol (FHRP)
addresses. The FHRP enhancements for vPC enable routing to a virtual
FHRP address across a vPC.
● You can use vPC as a Layer 2 link to establish a routing adjacency between
two external routers. The routing restrictions for vPCs only apply to routing
adjacencies between the vPC peer switches and routers that are connected
on a vPC.
Note
For more details, you should read the vPC Best Practices Design Guide at
http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
vPC Features
The vPC Peer-Gateway enhancement allows a vPC peer device to act as the active
gateway for packets that are addressed to the other peer device router MAC. This
feature enables local forwarding of packets, destined to the other peer device,
without the need to cross the vPC peer link.
The Peer-Gateway feature allows vPC interoperability with some network-attached
storage (NAS) devices or load balancers. These devices might have some
optimization features that allow the devices to avoid a typical default gateway ARP
request.
Note
Dell EMC or NetApp NAS devices are examples of such NAS servers.
In the figure, PEER-A is the default gateway in VLAN10. But because NAS uses
non-standard packet forwarding, it could use PEER-B’s MAC2 as the destination
MAC address to reach the IP gateway. The ACC-1 switch accepts this packet,
hashes it, and chooses to forward it through the port towards PEER-A. With peer-
gateway enabled, PEER-A will route the packets normally, and will not forward them
over the vPC peer link.
When you enable the vPC peer-gateway functionality, each vPC peer device will
locally replicate the MAC address of the interface VLAN that is defined on the other
vPC peer device with the G flag (Gateway flag). In the figure, PEER-A will program
MAC2 (the MAC address of interface VLAN 10) in its MAC table and set the G flag
for this MAC address. PEER-B will do the same for MAC1.
To activate the vPC peer-gateway capability, use the following commands (the peer-gateway command is entered in vPC domain configuration mode):
vpc domain domain-id
peer-gateway
Note
You need to configure both vPC peer devices with this command.
You should always enable vPC peer-gateway in the vPC domain. There is no impact on traffic or existing functionality when you activate the peer-gateway capability.
vPC Peer-Switch
vPC Peer-Switch is another enhancement in the context of STP in a vPC
environment.
The vPC peer-switch feature allows a pair of vPC peer devices to appear as a single
STP root in the Layer 2 topology (they have the same bridge ID). vPC peer-switch
must be configured on both vPC peer devices with the peer-switch command to become operational:
vpc domain domain-id
peer-switch
The main advantage of the vPC peer-switch feature is the improvement in terms of
convergence time during vPC primary peer device failure/recovery. These up/down
events do not cause any STP recalculations, so the traffic disruption can be lowered
to subsecond values.
This feature also simplifies the STP configuration by eliminating the need to pin the
STP root to the vPC primary switch.
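A minimal sketch of enabling peer-switch is shown below; it assumes that both vPC peers are also given the same spanning tree priority for the vPC VLANs, and the domain ID, VLAN range, and priority value are examples only:
vpc domain 10
peer-switch
!
spanning-tree vlan 1-100 priority 4096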