CCNP Data Center Application Centric Infrastructure 300-620 DCACI Official Cert Guide
by Ammar Ahmadi. Published by Cisco Press, 2021.
Chapter 7
Implementing Access Policies
This chapter covers the following topics:
Configuring ACI Switch Ports: This section addresses practical
implementation of ACI switch port configurations.
Configuring Access Policies Using Quick Start Wizards: This section shows
how to configure access policies using quick start wizards.
Additional Access Policy Configurations: This section reviews
implementation procedures for a handful of other less common access policies.
This chapter covers the following exam topics:
1.5 Implement ACI policies
1.5.a access
1.5.b fabric
Chapter 6, “Access Policies,” covers the theory around access policies and the
configuration of a limited number of objects available under the Access Policies
menu. This chapter completes the topic of access policies by covering the
configuration of all forms of Ethernet-based switch port connectivity available
in ACI.
“DO I KNOW THIS ALREADY?” QUIZ
The “Do I Know This Already?” quiz allows you to assess whether you should
read this entire chapter thoroughly or jump to the “Exam Preparation Tasks”
section. If you are in doubt about your answers to these questions or your own
assessment of your knowledge of the topics, read the entire chapter. Table 7-1
lists the major headings in this chapter and their corresponding “Do I Know
This Already?” quiz questions. You can find the answers in Appendix A,
“Answers to the ‘Do I Know This Already?’ Questions.”
Table 7-1 “Do I Know This Already?” Section-to-Question Mapping
Foundation Topics Section
Configuring ACI Switch Ports
Configuring Access Policies Using Quick Start Wizards
Additional Access Policy Configurations
Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you
do not know the answer to a question or are only partially sure of the answer, you should
mark that question as wrong for purposes of the self-assessment. Giving yourself credit
for an answer you correctly guess skews your self-assessment results and might provide
you with a false sense of security.
1. An administrator has configured a leaf interface, but it appears to have the
status out-of-service. What does this mean?
1. The port has a bad transceiver installed.
2. The server behind the port has failed to PXE boot, and the port has been
shut down.
3. This status reflects the fact that access policies have been successfully
deployed.
4. The port has been administratively disabled.
2. Where would you go to configure a vPC domain in ACI?
1. Fabric > Access Policies > Policies > Switch > Virtual Port Channel
default
2. Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups
3. Fabric > Access Policies > Policies > Switch > VPC Domain
4. Fabric > Fabric Policies > Policies > Switch > Virtual Port Channel
default
3. True or false: To configure an LACP port channel, first create a leaf access
port policy group and then add a port channel policy to the interface policy
group.
1. True
2. False
4. True or false: To forward traffic destined to an endpoint behind a vPC,
switches within the fabric encapsulate each packet twice and forward a copy
separately to the loopback 0 tunnel endpoint of each vPC peer.
1. True
2. False
5. True or false: The only way to enable CDP in ACI is through the use of
interface overrides.
1. True
2. False
6. True or false: The Configure Interface wizard in ACI can be used to make
new port assignments using preconfigured interface policy groups.
1. True
2. False
7. True or false: To configure a fabric extender (FEX), you first create a FEX
profile and then configure an access port selector from the parent leaf down to
the FEX with the Connected to FEX checkbox enabled.
1. True
2. False
8. Which of the following are valid steps in implementing MCP on all 20
VLANs on a switch? (Choose all that apply.)
1. Enable MCP at the switch level.
2. Ensure that MCP has been enabled on all desired interfaces through
interface policies.
3. Select the Enable MCP PDU per VLAN checkbox.
4. Enable MCP globally by toggling the Admin State to Enabled and
defining a key.
9. True or false: With dynamic port breakouts, a port speed can be lowered, but
a dramatic loss occurs in the forwarding capacity of the switch.
1. True
2. False
10. True or false: ACI preserves dot1q CoS bits within packets by default.
1. True
2. False
FOUNDATION TOPICS
CONFIGURING ACI SWITCH PORTS
Put yourself in the shoes of an engineer working at a company that has decided
to deploy all new applications into ACI. Looking at a platform with an initial
focus on greenfield deployments as opposed to the intricacies of migrations can
often lead to better logical designs that fully leverage the capabilities of the
solution.
Imagine as part of this exercise that you have been asked to accommodate a
newly formed business unit within your company, focusing on multiplayer
gaming. This business unit would like to be able to patch its server operating
systems independently and outside of regular IT processes and to have full
autonomy over its applications with close to zero IT oversight beyond
coordination of basic security policies. The business unit thinks it can achieve
better agility if it is not bound by processes dictated by IT. Aside from whether
deploying a shadow environment alongside a production environment is even
desirable, is a setup like this even feasible with ACI? By thinking about this
question while reading through the following sections, you may gain insights
into how access policies can be used to share underlying infrastructure among
tenants in ACI.
Configuring Individual Ports
This section shows how to deploy access policies for two new multiplayer
gaming servers. Assume that each of these new servers has a single 10 Gbps
network card and does not support port channeling. Let’s say that the network
engineers configuring switch ports for connectivity to these servers want to
enable LLDP and CDP to have visibility into host names, if advertised by the
servers. They also decide to auto-detect speed and duplex settings to reduce the
need for their team to have to coordinate network card upgrades with the
business unit.
Note
This chapter demonstrates a wide variety of common port configurations through
examples. The examples are not meant to imply that implementation of auto-negotiation,
LLDP, and CDP toward servers outside an organization’s administrative control is a best
practice. Where the intent is to convey that something is a best practice, this book
explicitly says so.
To configure an interface policy with LLDP enabled, navigate to Fabric >
Access Policies > Policies > Interface, right-click LLDP Interface, and select
Create LLDP Interface Policy. Figure 7-1 shows an interface policy with LLDP
enabled bidirectionally.
Figure 7-1 Configuring an LLDP Interface Policy
It is often good practice to use explicit policies. Auto-negotiation of port speed
and duplex settings can be achieved by using a link level policy. To create a link
level policy, navigate to Fabric > Access Policies > Policies > Interface, right-
click Link Level, and select Create Link Level Policy.
Figure 7-2 shows the settings for a link level policy. By default, Speed is set to
Inherit, and Auto Negotiation is set to On to allow the link speed to be
determined by the transceiver, medium, and capabilities of the connecting
server. The Link Debounce Interval setting delays reporting of a link-down
event to the switch supervisor. The Forwarding Error Correction (FEC) setting
determines the error correction technique used to detect and correct errors in
transmitted data without the need for data retransmission.
Figure 7-2 Configuring a Link Level Interface Policy
To create a policy with CDP enabled, navigate to Fabric > Access Policies >
Policies > Interface, right-click CDP Interface, and select CDP Interface
Policy. Figure 7-3 shows an interface policy with CDP enabled.
Figure 7-3 Configuring a CDP Interface Policy
In addition to interface policies, interface policy groups need to reference a
global access policy (an AAEP) for interface deployment. AAEPs can often be
reused. Figure 7-4 shows the creation of an AAEP named Bare-Metal-Servers-AAEP.
By associating the physical domain named phys, as shown in Figure 7-4, you allow
EPGs to be mapped to any switch port configured with this AAEP using VLAN IDs
300 through 499.
Figure 7-4 Configuring an AAEP
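For readers who prefer to automate this step, the same AAEP-to-domain association can be expressed as a REST payload. The sketch below builds the JSON body an APIC expects when an AAEP is posted to /api/mo/uni/infra.json. The class names infraAttEntityP and infraRsDomP come from the ACI object model; the AAEP and domain names mirror the example above, and the payload should be verified against your APIC version before use.

```python
# Sketch: build the JSON body for creating an AAEP that references a
# physical domain. Names mirror the chapter's example; verify against
# your APIC version before posting.
import json

def aaep_payload(aaep_name: str, domain_name: str) -> dict:
    """Return an infraAttEntityP object with one domain association."""
    return {
        "infraAttEntityP": {
            "attributes": {"name": aaep_name},
            "children": [
                {
                    "infraRsDomP": {
                        # tDn points at the physical domain created earlier
                        "attributes": {"tDn": f"uni/phys-{domain_name}"}
                    }
                }
            ],
        }
    }

payload = aaep_payload("Bare-Metal-Servers-AAEP", "phys")
print(json.dumps(payload, indent=2))
```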
With interface policies and global policies created, it is time to create an
interface policy group to be applied to ports.
To create an interface policy group for individual (non-aggregated) switch
ports, navigate to Fabric > Access Policies > Interfaces > Leaf Interfaces >
Policy Groups, right-click the Leaf Access Port option, and select Create Leaf
Access Port Policy Group.
Figure 7-5 shows the association of the interface policies and AAEP created
earlier with an interface policy group. Because policy groups for individual
ports are fully reusable, a generic name not associated with any one server
might be most beneficial for the interface policy group.
Figure 7-5 Configuring a Leaf Access Port Policy Group
Next, the interface policy group needs to be mapped to switch ports. Let’s say a
new switch has been procured and will be dedicated to multiplayer gaming
servers for the business unit. The switch, which has already been commissioned,
has node ID 101 and a switch profile. An interface profile has also been linked
with the switch profile.
To associate an interface policy with ports, navigate to the desired interface
profile, click on the Tools menu, and select Create Access Port Selector, as
shown in Figure 7-6.
Figure 7-6 Navigating to the Create Access Port Selector Window
Figure 7-7 demonstrates the association of the new interface policy group with
ports 1/45 and 1/46. Since this is a contiguous block of ports, you can use a
hyphen to list the ports. After you click Submit, the interface policy group is
deployed on the selected switch ports on all switches referenced by the interface
profile.
Figure 7-7 Mapping Ports to an Interface Policy Group
Back under the leaf interface profile, notice that an entry should be added in the
Interface Selectors view (see Figure 7-8).
Figure 7-8 Port Mappings Added to the Interface Selector View
Double-click the entry to view the Access Port Selector page. As shown
in Figure 7-9, ports that are mapped to an interface policy group as a contiguous
block cannot be individually deleted from the port block. This might pose a
problem if a single port that is part of a port block needs to be deleted and
repurposed at some point in the future. Therefore, use of hyphens to group ports
together is not always suitable.
Figure 7-9 Ports Lumped Together in a Port Block
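The trade-off is easy to see if you model port blocks directly: a hyphenated range becomes one block whose members cannot be removed individually, whereas separate entries become one block per port. The hypothetical helper below expands an interface range such as 1/45-46 into single-port entries, the shape that keeps each port independently deletable later.

```python
def expand_port_range(spec: str) -> list[str]:
    """Expand '1/45-46' into ['1/45', '1/46'] so each port can be
    configured as its own single-port block. Hypothetical helper; ACI
    itself stores a hyphenated range as one inseparable block."""
    slot, ports = spec.split("/")
    if "-" in ports:
        start, end = (int(p) for p in ports.split("-"))
        return [f"{slot}/{p}" for p in range(start, end + 1)]
    return [spec]

print(expand_port_range("1/45-46"))  # → ['1/45', '1/46']
```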
In the GUI, the operational state of ports can be verified under Fabric >
Inventory > Pod number > Node Name > Interfaces > Physical Interfaces.
According to Figure 7-10, the newly configured ports appear to have the Usage
column set to Discovery.
Figure 7-10 Verifying the Status of Physical Interfaces in the ACI GUI
Example 7-1 shows how to verify the operational status of ports in the switch
CLI.
Example 7-1 Verifying Port Status via the ACI Switch CLI
LEAF101# show interface ethernet 1/45-46 status
-------------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
-------------------------------------------------------------------------------------
Eth1/45 -- out-of-ser trunk full 10G 10Gbase-SR
Eth1/46 -- out-of-ser trunk full 10G 10Gbase-SR
What does the status “out-of-service” actually mean? When this status appears
for operational fabric downlink ports, it simply means that tenant policies have
not yet been layered on top of the configured access policies.
Table 7-2 summarizes port usage types that may appear in the GUI.
Table 7-2 Port Usages

Blacklist: The port has been disabled, either by an administrator or because ACI has
detected anomalies with the port. Anomalies can include wiring errors or switches
with nonmatching fabric IDs connecting to the fabric.

Controller: ACI detects an APIC controller attached to the port.

Discovery: The port is not forwarding user traffic because no tenant policies have
been enabled on the port. This can be due to the lack of an EPG mapping or routing
configuration. This is the default state for all fabric downlinks.

EPG: At least one EPG has been correctly associated with the port. This is a valid
state even if the port is disabled.

Fabric: The port functions, or can potentially function, as a fabric uplink providing
connectivity between leaf and spine switches. By default, fabric ports have Usage set
to both Fabric and Fabric External until cabling is attached or a configuration change
takes place.

Fabric External: The port functions as an L3Out, peering with some device outside
the fabric. By default, fabric ports have Usage set to both Fabric and Fabric External
until cabling is attached or a configuration change takes place.

Infra: The port is trunking the infrastructure (overlay) VLAN.
The APIC CLI commands shown in Example 7-2 are the equivalent of the
configurations completed via the GUI. Notice that the LLDP setting does not
appear in the output. This is because not all commands appear in the output of
the APIC CLI running configuration. Use the command show running-config
all to see all policy settings, including those that reflect default values for a
parameter.
Example 7-2 APIC CLI Configurations Equivalent to the GUI Configurations Demonstrated
APIC1# show run
(...output truncated for brevity...)
template policy-group Multiplayer-Gaming-PolGrp
cdp enable
vlan-domain member phys type phys
exit
leaf-interface-profile LEAF101-IntProfile
leaf-interface-group Multiplayer-Gaming-Servers
interface ethernet 1/45-46
policy-group Multiplayer-Gaming-PolGrp
exit
exit
Note
Switch port configurations, like all other configurations in ACI, can be scripted or
automated using Python, Ansible, Postman, or Terraform or using workflow
orchestration solutions such as UCS Director.
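As a concrete illustration of the note above, the sketch below shows how a Python script might push an interface policy group to the APIC REST API. The login and POST URLs follow the documented /api/aaaLogin.json and /api/mo/... pattern, and infraAccPortGrp is the class behind a leaf access port policy group; the hostname, credentials, and policy-group name are placeholders, and the snippet builds requests without error handling, so treat it as a sketch rather than production code.

```python
# Sketch: build the REST requests for authenticating to an APIC and
# creating a leaf access port policy group. Hostname, credentials, and
# names are placeholders; adapt before real use.
import json
import urllib.request

APIC = "https://apic.example.com"  # placeholder APIC address

def login_body(user: str, pwd: str) -> bytes:
    return json.dumps(
        {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}
    ).encode()

def policy_group_body(name: str) -> bytes:
    # infraAccPortGrp is the class behind a leaf access port policy group
    return json.dumps(
        {"infraAccPortGrp": {"attributes": {"name": name}}}
    ).encode()

def post(path: str, body: bytes) -> urllib.request.Request:
    """Build (but do not send) the POST request for a given API path."""
    return urllib.request.Request(APIC + path, data=body, method="POST")

login = post("/api/aaaLogin.json", login_body("admin", "password"))
create = post("/api/mo/uni/infra.json",
              policy_group_body("Multiplayer-Gaming-PolGrp"))
print(login.full_url, create.full_url)
```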
Configuring Port Channels
Let’s say that the business unit running the multiplayer project wants a server
deployed using LACP, but it has purchased only a single leaf switch, so dual-
homing the server to a pair of leaf switches is not an option. Before LACP port
channels can be deployed in ACI, you need to configure an interface policy with
LACP enabled. To do so, navigate to Fabric > Access Policies > Policies >
Interface, right-click Port Channel, and select Create Port Channel Policy. The
window shown in Figure 7-11 appears.
Figure 7-11 Configuring a Port Channel Interface Policy with LACP Enabled
The function of the Mode setting LACP Active should be easy to
understand. Table 7-3 details the most commonly used Control settings
available for ACI port channels.
Table 7-3 Common Control Settings for ACI Port Channel Configuration

Fast Select Hot Standby Ports: Enables fast select for hot standby ports. Enabling
this feature allows the faster selection of a hot standby port when the last active
port in the port channel is going down.

Graceful Convergence: Ensures optimal failover of links in an LACP port channel
when the port channel or virtual port channel configured with this setting connects
to Nexus devices.

Suspend Individual Port: With this setting configured, LACP suspends a bundled
port if it does not receive LACP packets from its peer port. When this setting is not
enabled, LACP moves the port to the Individual state.

Symmetric Hashing: With this setting enabled, bidirectional traffic is forced to use
the same member links in both directions, and each physical interface in the port
channel is effectively mapped to a consistent set of flows. When an administrator
creates a policy with Symmetric Hashing enabled, the GUI displays a new field for
selection of a hashing algorithm.
After you create a port channel interface policy, you can create a port channel
interface policy group for each individual port channel by navigating to Fabric
> Access Policies > Policies > Interface > Leaf Interfaces > Policy Groups,
right-clicking PC Interface, and selecting Create PC Interface Policy
Group. Figure 7-12 shows the grouping of several policies to create a basic port
channel interface policy group.
Figure 7-12 Configuring a Port Channel Interface Policy Group
You use an access selector to associate the interface policy group with the
desired ports. If the intent is to configure ports 1/31 and 1/32 without lumping
these ports into a single port block, it might make sense to first associate a
single port with the port channel interface policy group and then add the next
port as a separate port block. Figure 7-13 demonstrates the association of port
1/31 on Leaf 101 with the interface policy group.
Figure 7-13 Mapping Ports to a Port Channel Interface Policy Group
To add the second port to the port channel, click on the + sign in the Port
Blocks section, as shown in Figure 7-14, to create a new port block.
Figure 7-14 Navigating to the Create Access Port Block Page
Finally, you can add port 1/32 as a new port block, as shown in Figure 7-15.
Figure 7-15 Adding a New Port Block to an Access Port Selector
Example 7-3, taken from the Leaf 101 CLI, verifies that Ethernet ports 1/31 and
1/32 have indeed been bundled into an LACP port channel and that they are up.
Why was there no need to assign an ID to the port channel? The answer is that
ACI itself assigns port channel IDs to interface bundles.
Example 7-3 Switch CLI-Based Verification of Port Channel Configuration
LEAF101# show port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
S - Switched R - Routed
U - Up (port-channel)
M - Not in use. Min-links not met
F - Configuration failed
-------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
-------------------------------------------------------------------------------
1 Po1(SU) Eth LACP Eth1/6(P) Eth1/8(P)
2 Po2(SU) Eth LACP Eth1/31(P) Eth1/32(P)
You have already learned that port channel interface policy groups should
ideally not be reused, especially on a single switch. But why is this the
case? Figure 7-16 shows that an administrator has created a new interface
selector and has mistakenly associated the same port channel interface policy
group with ports 1/35 and 1/36. Note in this figure that using commas to
separate the interface IDs leads to the creation of separate port blocks.
Figure 7-16 Multiple Interface Selectors Referencing a Port Channel Interface
Policy Group
The setup in Figure 7-16 would lead to the switch CLI output presented
in Example 7-4.
Example 7-4 Interfaces Bundled Incorrectly Due to PC Interface Policy Group Reuse
LEAF101# show port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
S - Switched R - Routed
U - Up (port-channel)
M - Not in use. Min-links not met
F - Configuration failed
-------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
-------------------------------------------------------------------------------
1 Po1(SU) Eth LACP Eth1/6(P) Eth1/8(P)
2 Po2(SU) Eth LACP Eth1/31(P) Eth1/32(P) Eth1/35(D)
Eth1/36(D)
To the administrator’s surprise, ports 1/35 and 1/36 have been added to the
previously created port channel. The initial assumption may have been that
because a different interface selector name was selected, a new port channel
would be created. This is not the case.
Example 7-5 shows the CLI-equivalent configuration of the port channel
interface policy group and the assignment of the policy group to ports on Leaf
101.
Example 7-5 APIC CLI Configuration for the Port Channel Interfaces
template port-channel Multiplayer-Gaming-1-PC-PolGrp
cdp enable
vlan-domain member phys type phys
channel-mode active
speed 10G
no negotiate auto
exit
leaf-interface-profile LEAF101-IntProfile
leaf-interface-group Multiplayer-Server-1
interface ethernet 1/31
interface ethernet 1/32
channel-group Multiplayer-Gaming-1-PC-PolGrp
exit
exit
There is nothing that says you cannot reuse port channel or virtual port channel
interface policy groups in new interface selector configurations if the intent
truly is to bundle the new interfaces into a previously created port channel or
virtual port channel. You may still question whether a port channel interface
policy group or a vPC interface policy group can be reused on a different switch
or vPC domain. As a best practice, you should avoid reuse of port channel and
vPC interface policy groups when creating new port channels and vPCs to
minimize the possibility of configuration mistakes.
Note
You may not have noticed it, but the Control settings selected in the port channel
interface policy shown earlier are Suspend Individual Ports, Graceful Convergence, and
Fast Select Hot Standby Ports (refer to Figure 7-11). These settings are the default
Control settings for LACP port channel interface policy groups in ACI. Unfortunately,
these default Control settings are not always ideal. For example, LACP graceful
convergence can lead to packet drops during port channel bringup and teardown when
used to connect ACI switches to servers or non-Cisco switches that are not closely
compliant with the LACP specification. As a general best practice, Cisco recommends
keeping LACP graceful convergence enabled on port channels connecting to Nexus
switches but disabling this setting when connecting to servers and non-Nexus switches.
Configuring Virtual Port Channel (vPC) Domains
When configuring switch ports to servers and appliances, it is best to dual-home
devices to switches to prevent total loss of traffic if a northbound switch fails.
Some servers can handle failover at the operating system level very well and
may be configured using individual ports from a switch point of view, despite
being dual-homed. Where a server intends to hash traffic across links dual-
homed across a pair of switches, virtual port channeling needs to be configured.
vPC technology allows links that are physically connected to two different
Cisco switches to appear to a downstream device as coming from a single
device and part of a single port channel. The downstream device can be a
switch, a server, or any other networking device that supports Link Aggregation
Control Protocol (LACP) or static port channels.
Standalone Nexus NX-OS software does support vPCs, but there are fewer
caveats to deal with in ACI because ACI does not leverage peer links. In ACI,
the keepalives and cross-switch communication needed for forming vPC
domains all traverse the fabric.
Note
One limitation around vPC domain configuration in ACI that you should be aware of is
that two vPC peer switches joined into a vPC domain must be of the same switch
generation. This means you cannot form a vPC domain between a first-generation switch
suffixed with TX and a newer-generation switch suffixed with EX, FX, or FX2. ACI
does allow migration of first-generation switches that are in a vPC domain to higher-
generation switches, but it typically requires 10 to 20 seconds of downtime for vPC-
attached servers.
The business unit running the multiplayer gaming project has purchased three
additional switches and can now make use of vPCs in ACI. Before configuring
virtual port channels, vPC domains need to be identified.
To configure a vPC domain, navigate to Fabric > Access Policies > Policies >
Switch, right-click Virtual Port Channel Default, and select Create VPC
Explicit Protection Group. Figure 7-17 shows how to navigate to the Create
VPC Explicit Protection Group wizard.
Figure 7-17 Navigating to the Create VPC Explicit Protection Group Wizard
Figure 7-18 shows how you can pair together two switches with node IDs 101
and 102 into vPC domain 21 by populating the Name, ID, Switch 1, and Switch
2 fields. Even though populating the Name field is mandatory, it has little
impact on the configuration.
Figure 7-18 Configuring a vPC Domain
The only vPC failover parameter that can be tweaked in ACI at the time of
writing is the vPC peer dead interval, which is the amount of time a leaf switch
with a vPC secondary role waits following a vPC peer switch failure before
assuming the role of vPC master. The default peer dead interval in ACI is 200
seconds. This value can be tuned between 5 and 600 seconds through
configuration of a vPC domain policy, which can then be applied to the vPC
explicit protection group.
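If you script this, the peer dead interval lives on the vPC domain policy object. The sketch below builds such a payload and range-checks the value against the 5-to-600-second window described above; the class name vpcInstPol and attribute deadIntvl are taken from the ACI object model as the author understands it, so verify them against your APIC version.

```python
def vpc_domain_policy(name: str, dead_interval: int = 200) -> dict:
    """Build a vPC domain policy payload with a validated peer dead
    interval (sketch; confirm class/attribute names on your APIC)."""
    if not 5 <= dead_interval <= 600:
        raise ValueError("peer dead interval must be 5-600 seconds")
    return {
        "vpcInstPol": {
            # deadIntvl is carried as a string, like other ACI attributes
            "attributes": {"name": name, "deadIntvl": str(dead_interval)}
        }
    }

print(vpc_domain_policy("Gaming-vPC-DomPol", 30))
```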
Note
As a best practice, vPC domain IDs should be unique across each Layer 2 network.
Problems can arise when more than one pair of vPC peer switches attached to a common
Layer 2 network have the same vPC domain ID. This is because vPC domain IDs are a
component in the generation of LACP system IDs.
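To make the note concrete: on Nexus platforms the auto-generated vPC system MAC embeds the domain ID in its last octet under a reserved OUI, and that MAC feeds into the LACP system ID. The sketch below reproduces that NX-OS-style derivation for illustration only; whether ACI derives it identically is an assumption, but the collision risk is the same either way: two vPC pairs sharing a domain ID on one Layer 2 network present the same LACP system ID downstream.

```python
def vpc_system_mac(domain_id: int) -> str:
    """Derive an NX-OS-style auto-generated vPC system MAC for a given
    domain ID (illustrative, covering domain IDs up to 255; ACI
    internals may differ)."""
    if not 1 <= domain_id <= 255:
        raise ValueError("illustration covers domain IDs 1-255")
    return f"00:23:04:ee:be:{domain_id:02x}"

# Two vPC pairs sharing domain ID 21 would present the same system MAC,
# and therefore the same LACP system ID, to a downstream switch.
print(vpc_system_mac(21))  # → 00:23:04:ee:be:15
```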
The CLI-based equivalent of the vPC domain definition completed in this
section is the command vpc domain explicit 21 leaf 101 102. Example 7-6
shows CLI verification of Leaf 101 and Leaf 102 having joined vPC domain
ID 21. Note that the vPC peer status indicates that the peer adjacency with Leaf
102 has been formed, but the vPC keepalive status displays as Disabled. This is
expected output from an operational vPC peering in ACI.
Example 7-6 Verifying a vPC Peering Between Two Switches
LEAF101# show vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 21
Peer status : peer adjacency formed ok
vPC keep-alive status : Disabled
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary, operational secondary
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Enabled (timeout = 240 seconds)
Operational Layer3 Peer : Disabled
vPC Peer-link status
-------------------------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ ---------------------------------------------------------------
1 up -
vPC status
-------------------------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ------ -------- -------------- --------- ----------------
From a forwarding perspective, the result of creating a vPC explicit protection
group is that ACI assigns a common virtual IP address to the loopback 1
interface on the two vPC peers. This new IP address functions as a tunnel
endpoint within the fabric, enabling all other switches in the fabric to forward
traffic to either of the two switches via equal-cost multipathing. For this to
work, the two vPC switches advertise reachability of vPC-attached endpoints
using the loopback 1 interface, and traffic toward all endpoints that are not vPC
attached continues to be forwarded to the tunnel IP addresses of the loopback 0
interfaces.
Note
A vPC domain is a Layer 2 construct. ACI spine switches do not function as connection
points for servers and non-ACI switches at Layer 2. Therefore, vPC is not a supported
function for spine switches.
Configuring Virtual Port Channels
Let’s say you want to configure a resilient connection to a new multiplayer
gaming server that does not support LACP but does support static port
channeling. The first thing you need to do is to create a new interface policy that
enables static port channeling. Figure 7-19 shows such a policy.
Figure 7-19 Configuring an Interface Policy for Static Port Channeling
Next, you can move onto the configuration of a vPC interface policy group by
navigating to Fabric > Access Policies > Policies > Interface > Leaf
Interfaces > Policy Groups, right-clicking VPC Interface, and selecting Create
VPC Interface Policy Group. Figure 7-20 shows the configuration of a vPC
interface policy group.
Figure 7-20 Configuring a vPC Interface Policy Group
Next, you need to associate the vPC interface policy group with interfaces on
both vPC peers. The best way to associate policy to multiple switches
simultaneously is to create an interface profile that points to all the desired
switches.
Figure 7-21 shows that the process of creating an access port selector for a vPC
is the same as the process of configuring access port selectors for individual
ports and port channels.
Figure 7-21 Applying vPC Access Port Selectors to an Interface Profile for
vPC Peers
The show vpc and show port-channel summary commands verify that the
vPC has been created. As indicated in Example 7-7, vPC IDs are also auto-
generated by ACI.
Example 7-7 Verifying the vPC Configuration from the Switch CLI
LEAF101# show vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 21
Peer status : peer adjacency formed ok
vPC keep-alive status : Disabled
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary, operational secondary
Number of vPCs configured :2
Peer Gateway : Disabled
Dual-active excluded VLANs :-
Graceful Consistency Check : Enabled
Auto-recovery status : Enabled (timeout = 240 seconds)
Operational Layer3 Peer : Disabled
vPC Peer-link status
-------------------------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ ---------------------------------------------------------------
1 up -
vPC status
-------------------------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ -------------- -------- ---------------
685 Po3 up success success -
LEAF101# show port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
S - Switched R - Routed
U - Up (port-channel)
M - Not in use. Min-links not met
F - Configuration failed
-------------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
-------------------------------------------------------------------------------------
1 Po1(SU) Eth LACP Eth1/6(P) Eth1/8(P)
2 Po2(SU) Eth LACP Eth1/31(P) Eth1/32(P)
3 Po3(SU) Eth NONE Eth1/38(P)
Example 7-8 shows the APIC CLI configurations equivalent to the GUI-based
vPC configuration performed in this section.
Example 7-8 Configuring a vPC Using the APIC CLI
template port-channel Multiplayer-Gaming-3-VPC-PolGrp
cdp enable
vlan-domain member phys type phys
speed 10G
no negotiate auto
exit
leaf-interface-profile LEAF101-102-vPC-IntProfile
leaf-interface-group Multiplayer-Server-3
interface ethernet 1/38
channel-group Multiplayer-Gaming-3-VPC-PolGrp vpc
exit
exit
The static port channel policy setting does not show up in the configuration. As
shown in Example 7-9, by adding the keyword all to the command, you can
confirm that the setting has been applied.
Example 7-9 Using all to Include Defaults Not Otherwise Shown in the APIC CLI
APIC1(config)# show running-config all template port-channel
Multiplayer-Gaming-3-VPC-PolGrp
(...output truncated for brevity...)
template port-channel Multiplayer-Gaming-3-VPC-PolGrp
no description
lldp receive
lldp transmit
cdp enable
vlan-domain member phys type phys
channel-mode on
lacp min-links 1
lacp max-links 16
no lacp symmetric-hash
exit
mcp enable
spanning-tree bpdu-filter disable
spanning-tree bpdu-guard disable
speed 10G
no negotiate auto
exit
Configuring Ports Using AAEP EPGs
Even seasoned ACI engineers are often under the impression that EPG
assignments can only be made under the Tenants menu. This is not true. Figure
7-22 shows the mapping of an EPG to VLAN 302. The mappings in this view
require that users prefix the VLAN ID with vlan-.
Figure 7-22 Mapping One or More EPGs to All Ports Leveraging a Specific
AAEP
Figure 7-23 shows that after this change is made, the newly configured ports,
which all reference the AAEP, transition from a Usage status of Discovery to
EPG.
Figure 7-23 Ports with AAEP Assignment Transitioned to the EPG State
As shown in Example 7-10, the ports are no longer out of service. This indicates
that tenant policies have successfully been layered on the access policies by
using the AAEP.
Example 7-10 Operational Ports with an Associated EPG Transition to Connected Status
LEAF101# show interface ethernet 1/45-46, ethernet 1/31-32, ethernet 1/38 status
-------------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
-------------------------------------------------------------------------------------
Eth1/31 -- connected trunk full 10G 10Gbase-SR
Eth1/32 -- connected trunk full 10G 10Gbase-SR
Eth1/38 -- connected trunk full 10G 10Gbase-SR
Eth1/45 -- connected trunk full 10G 10Gbase-SR
Eth1/46 -- connected trunk full 10G 10Gbase-SR
So, what are AAEP EPGs, and why use this method of EPG-to-VLAN
assignment? Static path mappings in the Tenants view associate an EPG with a
single port, port channel, or vPC. If a set of 10 EPGs needs to be associated with
10 servers, a total of 100 static path assignments are needed. On the other
hand, if exactly the same EPG-to-VLAN mappings are required for the 10
servers, the 10 assignments can be made once to an AAEP, allowing all switch
ports referencing the AAEP to inherit the EPG-to-VLAN mappings. This
reduces administrative overhead in some environments and eliminates
configuration drift in terms of EPG assignments across the servers.
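The arithmetic behind this comparison can be sketched quickly. This is a hypothetical illustration; the function names are invented and are not part of any ACI API:

```python
# Compare the number of EPG-to-VLAN mapping operations needed with
# per-path static assignments versus a shared AAEP.
def static_path_mappings(num_epgs: int, num_ports: int) -> int:
    # Each port (or port channel/vPC) must individually receive
    # every EPG-to-VLAN mapping.
    return num_epgs * num_ports

def aaep_mappings(num_epgs: int) -> int:
    # The mappings are made once on the AAEP; every switch port that
    # references the AAEP inherits them automatically.
    return num_epgs

print(static_path_mappings(10, 10))  # 100 static path assignments
print(aaep_mappings(10))             # 10 AAEP-level assignments
```

The gap widens linearly with the number of servers, which is why AAEP EPGs appeal to environments with many identically configured hosts.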
Note
Some engineers strictly stick with EPG-to-VLAN mappings that are applied under the
Tenants menu, and others focus solely on AAEP EPGs. These options are not mutually
exclusive. The method or methods selected should be determined based on the business
and technical objectives. Methods like static path assignment result in large numbers of
EPG-to-VLAN mappings because each port needs to be individually assigned all the
desired mappings, but the number of AAEPs in this approach can be kept to a minimum.
In environments that solely center around AAEP EPGs, there are as many AAEPs as
there are combinations of EPG-to-VLAN mappings. Therefore, the number of AAEPs in
such environments is higher, but tenant-level mappings are not necessary. In
environments in which automation scripts handle the task of assigning EPGs to hundreds
of ports simultaneously, there may be little reason to even consider AAEP EPGs.
However, not all environments center around scripting.
Implications of Initial Access Policy Design on
Capabilities
What are some of the implications of the configurations covered so far in this
chapter? The EPG trunked onto the object Bare-Metal-Servers-AAEP resides in
the tenant Production. This particular customer wants to manage its own
servers, so would it make more sense to isolate the customer’s servers and
applications in a dedicated tenant? The answer most likely is yes.
If a new tenant were built for the multiplayer gaming applications, the business
unit could be provided not just visibility but configuration access to its tenant.
Tasks like creating new EPGs and EPG-to-port mappings could then be
offloaded to the business unit.
In addition, what happens if this particular customer wants to open up
communication between the tenant and a specific subnet within the campus? In
this case, a new external EPG may be needed to classify traffic originating from
the campus subnet. Creating a new external EPG for L3Outs in already
available VRF instances in the Production tenant could force a reevaluation of
policies to ensure continuity of connectivity for other applications to the
destination subnet. Sometimes, use of a new tenant can simplify the external
EPG design and the enforcement of security policies.
Finally, what are the implications of the AAEP and domain design? If central IT
manages ACI, there’s really nothing to worry about. However, if all bare-metal
servers in a fabric indeed leverage a common AAEP object as well as a
common domain, how would central IT be able to prevent the gaming business
unit from mistakenly mapping an EPG to a corporate IT server? How could
central IT ensure that an unintended VLAN ID is not used for the mapping? The
answer is that it cannot. This highlights the importance of good AAEP and
domain design.
In summary, when the configuration of a server environment within ACI needs
to be offloaded to a customer or an alternate internal organization, or when
there are requirements for complete segmentation of traffic between one
environment (for example, production) and a new server environment, it often
makes sense to use separate tenants, separate physical domains, and separate
non-overlapping VLAN pools. By enforcing proper role-based access control
(RBAC) and scoping customer configuration changes to a specific tenant and
relevant domains, central IT can then ensure that any configuration changes
within the tenant do not impact existing operations in other server
environments (tenants).
CONFIGURING ACCESS POLICIES USING
QUICK START WIZARDS
All the configurations performed in the previous section can also be done using
quick start wizards. There are two such wizards under the Access Policies view:
the Configure Interface, PC, and vPC Wizard and the Configure Interface
Wizard.
The Configure Interface, PC, and VPC Wizard
Under Fabric > Access Policies > Quick Start, click on Configure Interface,
PC, and VPC. The page shown in Figure 7-24 appears. Everything from a
switch profile-to-node ID association to interface policies and mapping
configurations can be done in this simple view.
Figure 7-24 The Configure Interface, PC, and VPC Wizard
The Configure Interface Wizard
Under Fabric > Access Policies > Quick Start, notice the Configure Interface
wizard. Click it to see the page shown in Figure 7-25. This page provides a
convenient view for double-checking previously configured interface policy
group settings before making port assignments.
Figure 7-25 View of the Configure Interface Wizard
ADDITIONAL ACCESS POLICY
CONFIGURATIONS
The access policies covered so far in this chapter apply to all businesses and
ACI deployments. The sections that follow address the implementation of less
common access policies.
Configuring Fabric Extenders
Fabric extenders (FEX) are a low-cost solution for low-bandwidth port
attachment to a parent switch. Fabric extenders are less than ideal for high-
bandwidth and low-latency use cases and hold little appeal in ACI due to
feature deficiencies, such as the lack of analytics capabilities.
Note
Ideally, new ACI deployments should not leverage fabric extenders. This book includes
coverage of FEX because it is a topic that can appear on the Implementing Cisco
Application Centric Infrastructure DCACI 300-620 exam and because not all companies
are fortunate enough to be able to remove fabric extenders from their data centers when
first migrating to ACI.
Fabric extenders attach to ACI fabrics in much the same way they attach to NX-
OS mode switches. However, ACI does not support dual-homing of fabric
extenders to leaf switch pairs in an active/active FEX design. Instead, to make
FEX-attached servers resilient to the loss of a single server uplink in ACI, you
need to dual-home the servers to a pair of fabric extenders. Ideally, these fabric
extenders connect to different upstream leaf switches that form a vPC domain.
In such a situation, you can configure vPCs from the servers up to the fabric
extenders to also protect server traffic against the failure of a single leaf switch.
There are two steps involved in implementing a fabric extender:
Step 1.Configure a FEX profile.
Step 2.Associate the FEX profile with the parent switch by configuring access
policies down to the fabric extender.
After these two steps have been completed, you can configure FEX downlinks
to servers by configuring access port selectors on the newly deployed FEX
profile.
Let’s say you want to deploy a fabric extender to enable low-bandwidth CIMC
connections down to servers. To do so, navigate to Fabric > Access Policies >
Interfaces > Leaf Interfaces, right-click Profiles, and select Create FEX
Profile. The page shown in Figure 7-26 appears. The FEX Access Interface
Selectors section is where the CIMC port mappings need to be implemented.
Enter an appropriate name for the FEX interface profile and click Submit.
Figure 7-26 Configuring a FEX Profile
Next, navigate to the interface profile of the parent leaf and configure an
interface selector. In Figure 7-27, ports 1/11 and 1/12 on Leaf 101 connect to
uplink ports on the new fabric extender. To expose the list of available FEX
profiles, enable the Connected to FEX checkbox and select the profile of the
FEX connecting to the leaf ports.
Figure 7-27 Associating a FEX Profile with a Parent Switch
After you click Submit, ACI bundles the selected ports into a static port
channel, as indicated by the output NONE in the Protocol column in Example 7-
11. The FEX eventually transitions through several states before moving to the
Online state.
Example 7-11 Verifying FEX Association with a Parent Leaf Switch
LEAF101# show port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
S - Switched R - Routed
U - Up (port-channel)
M - Not in use. Min-links not met
F - Configuration failed
-------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
-------------------------------------------------------------------------------
1 Po1(SU) Eth LACP Eth1/6(P) Eth1/8(P)
2 Po2(SU) Eth LACP Eth1/31(P) Eth1/32(P)
3 Po3(SU) Eth NONE Eth1/38(P)
4 Po4(SU) Eth NONE Eth1/11(P) Eth1/12(P)
LEAF101# show fex
FEX FEX FEX FEX
Number Description State Model Serial
-----------------------------------------------------------------------------------
101 FEX0101 Online N2K-C2248TP-1GE XXXXX
When the FEX has been operationalized, access policies are still needed for
FEX port connectivity down to CIMC ports. You can navigate to the FEX
profile and configure an interface selector for these ports. Figure 7-28 shows
connectivity for 24 FEX ports being prestaged using a newly created interface
policy group for non-aggregated ports.
Figure 7-28 Configuring FEX Downlinks to Servers via FEX Interface Profiles
Example 7-12 shows how fabric extenders might be implemented via the APIC
CLI.
Example 7-12 Configuring a FEX and Downstream Connectivity via the APIC CLI
APIC1# show running-config leaf-interface-profile LEAF101-IntProfile
(...output truncated for brevity...)
leaf-interface-profile LEAF101-IntProfile
leaf-interface-group Port-Channel-to-FEX101
interface ethernet 1/11-12
fex associate 101 template FEX101
exit
exit
APIC1# show running-config fex-profile FEX101
fex-profile FEX101
fex-interface-group Multiplayer-Gaming-CIMC
interface ethernet 1/1-24
policy-group Server-CIMC-PolGrp
exit
exit
Note
Not all ACI leaf switches can function as FEX parents.
Configuring Dynamic Breakout Ports
Cisco sells ACI leaf switches like the Nexus 93180YC-FX that are optimized
for 10 Gbps/25 Gbps compute attachment use cases. It also offers switch
models like the Nexus 9336C-FX2, whose 36 ports each support speeds of up to
100 Gbps.
Port speeds on platforms like the Nexus 9336C-FX2 can be lowered by seating
a CVR-QSFP-10G adapter into a port, along with a supported 10 Gbps or 1
Gbps transceiver. Purchasing a platform like this and using CVR adapters to
lower port speeds, however, could turn out to be an expensive approach when
calculating the per-port cost because this approach makes suboptimal use of the
forwarding capacity of the switch. This approach may still be deemed
economical, however, if only a fraction of ports are “burned” this way.
Another approach is to dynamically split ports into multiple lower-speed
connections. With dynamic breakout ports, a 40 Gbps switch port can be split
into four independent and logical 10 Gbps ports. Likewise, a 100 Gbps port can
be split into four independent and logical 25 Gbps ports. This does require
special breakout cabling, but it allows customers to use a greater amount of the
forwarding capacity of high-bandwidth ports.
Let’s say you have just initialized two new high-density Nexus switches with
node IDs 103 and 104. These two switches both support dynamic breakout
ports. Imagine that you have been asked to deploy 12 new servers, and each
server needs to be dual-homed to these new switches using 25 Gbps network
cards. Since the ports on these particular switches are optimized for 100 Gbps
connectivity, implementation of dynamic breakout ports can help. Splitting
three 100 Gbps ports on each switch, in this case, yields the desired twelve 25
Gbps connections from each leaf to the servers.
To deploy dynamic breakout ports, create a new interface selector on the
interface profiles bound to each switch and select Create Leaf Breakout Port
Group from the Interface Policy Group drop-down box, as shown in Figure 7-
29.
Figure 7-29 Navigating to the Create Leaf Breakout Port Group Page
On the Create Leaf Breakout Port Group page, select a name for the new
interface selector and select an option from the Breakout Map drop-down
box. Figure 7-30 shows the option 25g-4x being selected, which implies that a
100 Gbps port will be broken out into four 25 Gbps ports.
Figure 7-30 Configuring Dynamic Port Breakouts
When implementing the equivalent breakouts for any additional nodes, you may
find that you can reuse the interface policy group that references the breakout
map.
Once breakouts have been implemented on the desired ports on the switches,
you can configure the desired access policies for the resulting subports. These
subports resulting from dynamic breakouts need to be referenced using the
numbering module/port/subport. Figure 7-31 illustrates access policies being
applied to subports of interface 1/1—namely, logical ports 1/1/1, 1/1/2, 1/1/3,
and 1/1/4.
Figure 7-31 Implementing Access Policies for Subports Within a Breakout Port
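The module/port/subport numbering scheme can be illustrated with a short sketch. The helper function below is hypothetical and not part of any ACI tooling:

```python
# Generate the logical subport identifiers that result from breaking out
# a set of parent ports (e.g., 25g-4x splits each port into four lanes).
def breakout_subports(parent_ports, lanes=4):
    """Each broken-out parent port yields `lanes` subports, numbered
    module/port/subport (for example, 1/1 -> 1/1/1 through 1/1/4)."""
    return [f"{port}/{lane}"
            for port in parent_ports
            for lane in range(1, lanes + 1)]

# Three 100 Gbps ports split 25g-4x yield twelve 25 Gbps subports per leaf.
subports = breakout_subports(["1/1", "1/5", "1/6"])
print(len(subports))  # 12
print(subports[:4])   # ['1/1/1', '1/1/2', '1/1/3', '1/1/4']
```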
Example 7-13 shows the APIC CLI commands that are equivalent to the GUI-
based dynamic breakout port configurations implemented in this section.
Example 7-13 Implementing Dynamic Breakout Ports via the APIC CLI
leaf-interface-profile LEAF103-IntProfile
leaf-interface-group Breakout-Ports
interface ethernet 1/1
interface ethernet 1/5
interface ethernet 1/6
breakout 25g-4x
exit
leaf-interface-group Multiplayer-Servers-25G
interface ethernet 1/1/1-4
interface ethernet 1/5/1-4
interface ethernet 1/6/1-4
policy-group Multiplayer-Gaming-PolGrp
exit
exit
Configuring Global QoS Class Settings
Quality of service (QoS) allows administrators to classify network traffic and
prioritize and police the traffic flow to help avoid congestion in the network.
To gain an understanding of QoS in ACI, the behavior of the platform can be
analyzed in four key areas:
Traffic classification: Traffic classification refers to the method
used for grouping traffic into different categories or classes. ACI
classifies traffic into priority levels. Current ACI code has six user-
configurable priority levels and several reserved QoS groups. ACI allows
administrators to classify traffic by trusting ingress packet headers, such
as Differentiated Services Code Point (DSCP) or Class of Service (CoS).
Administrators can also assign a priority level to traffic via contracts or
by manually assigning an EPG to a priority level.
Policing: The term policing refers to enforcement of controls on
traffic based on classification. Even though there should be no
oversubscription concerns in ACI fabrics, there is still a need for
policing. Suppose backup traffic has been trunked on the same link to a
server as data traffic. In such cases, administrators can police traffic to
enforce bandwidth limits on the link for the backup EPG. ACI policing
can be enforced on an interface or on an EPG. If traffic exceeds
prespecified limits, packets can be either marked or dropped. Policing
applies both in the inbound direction and in the outbound direction.
Marking: Once a switch classifies traffic, it can also mark traffic
by setting certain values in the Layer 3 header (DSCP) or in the Layer 2
header (Class of Service [CoS]) to notify other switches in the traffic
path of the desired QoS treatment. Under default ACI settings, marking
takes place on ingress leaf switches only.
Queuing and scheduling: Once a platform assigns packets to a
QoS group, outbound packets are queued for transmission. Multiple
queues can be used based on packet priority. A scheduling algorithm
determines which queue’s packet should be transmitted next. Scheduling
and queuing, therefore, collectively refer to the process of prioritization
of network packets and scheduling their transmission outbound on the
wire. ACI uses the Deficit Weighted Round Robin (DWRR) scheduling
algorithm.
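As a rough illustration of how a DWRR scheduler services multiple queues, here is a minimal sketch. The queue names, packet sizes, and quanta are invented for illustration and do not reflect actual ACI internals:

```python
from collections import deque

def dwrr(queues, quanta, rounds=1):
    """Deficit Weighted Round Robin sketch.
    queues: dict of name -> deque of packet sizes (bytes).
    quanta: dict of name -> bytes credited to the queue each round.
    Returns the (queue, packet) transmissions in order."""
    deficits = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficits[name] = 0  # empty queues carry no credit forward
                continue
            deficits[name] += quanta[name]
            # Transmit while the accumulated deficit covers the head packet.
            while q and q[0] <= deficits[name]:
                pkt = q.popleft()
                deficits[name] -= pkt
                sent.append((name, pkt))
    return sent

# A queue with a larger quantum drains more bytes per scheduling round.
queues = {"level1": deque([1500, 1500]), "level3": deque([1500])}
quanta = {"level1": 3000, "level3": 1500}
print(dwrr(queues, quanta))
```

The quantum plays the role of the per-class bandwidth weight: doubling a queue's quantum roughly doubles its share of the link during congestion.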
Which aspects of QoS relate to access policies? Global QoS class settings
govern priority levels and other fabricwide aspects of QoS applicable to
treatment of server and other endpoint traffic and therefore fall under access
policies.
To review the global QoS class settings or make changes, navigate to Fabric >
Access Policies > Policies > Global > QoS Class.
Let’s say that at some point in the future, your company intends to connect its
Cisco Unified Computing System (UCS) domains to the ACI fabric in an effort
to gradually migrate all workloads into ACI. Default gateways will move into
the fabric at a later time, and legacy data center infrastructure is expected to
remain in the network for a long time. UCS server converged network adapters
(CNA) tag certain critical traffic with CoS values, and the production network
currently honors markings from UCS servers. The IT organization wants to
ensure that ACI preserves these CoS values and restores them as these packets
leave the fabric so that the legacy network can act on these markings. After
reviewing the settings in the Global - QoS Class page, you might learn that ACI
preserves DSCP markings by default but does not preserve CoS markings. You
can enable the Dot1p Preserve setting to address this requirement, as shown
in Figure 7-32.
Figure 7-32 Enabling the Dot1p Preserve Checkbox on the Global - QoS Class
Page
Notice in the figure the six user-configurable QoS priority levels in ACI and the
bandwidth allocation for each of them. Any traffic that cannot be otherwise
classified into a priority level gets assigned to the default class (Level 3).
Note
Reserved QoS groups in ACI consist of APIC controller traffic, control plane protocol
traffic, Switched Port Analyzer (SPAN) traffic, and traceroute traffic. ACI places APIC
and control plane protocol traffic in a strict priority queue; SPAN and traceroute traffic
are considered best-effort traffic.
Configuring DHCP Relay
In ACI, if a bridge domain has been configured to allow flooding of traffic and
a DHCP server resides within an EPG associated with the bridge domain, any
endpoints within the same EPG can communicate with the DHCP server
without needing DHCP relay functionality.
When flooding is not enabled on the bridge domain or when the DHCP server
resides in a different subnet or EPG than endpoints requesting dynamic IP
assignment, DHCP relay functionality is required.
To define a list of DHCP servers to which ACI should relay DHCP traffic, a
DHCP relay policy needs to be configured. There are three locations where a
DHCP relay policy can be configured in ACI:
In the Access Policies view: When bridge domains are placed in
user tenants and one or more DHCP servers are expected to be used
across these tenants, DHCP relay policies should be configured in the
Access Policies view.
In the common tenant: When bridge domains are placed in the
common tenant and EPGs reside in user tenants, DHCP relay policies are
best placed in the common tenant.
In the infra tenant: When DHCP functionality is needed for
extending ACI fabric services to external entities such as hypervisors and
VMkernel interfaces need to be assigned IP addresses from the infra
tenant, DHCP relay policies need to be configured in the infra tenant.
This option is beyond the scope of the DCACI 300-620 exam and,
therefore, this book.
Once DHCP relay policies have been configured, bridge domains can reference
these policies.
Let’s say that you need to configure a DHCP relay policy referencing all DHCP
servers within the enterprise network. In your environment, your team has
decided that bridge domains will all be configured in user tenants. For this
reason, DHCP relay policies should be configured under Fabric > Access
Policies > Policies > Global > DHCP Relay. Figure 7-33 shows how you
create a new DHCP policy by entering a name and adding providers (DHCP
servers) to the policy.
Figure 7-33 Configuring a New DHCP Relay Policy in the Access Policies
View
Figure 7-34 shows how you define each DHCP server by adding its address
and the location where it resides. If a DHCP server resides within the fabric,
select Application EPG, define the tenant, application profile, and EPG in
which the server resides, and then click OK. Then add any redundant DHCP
servers to the policy and click Submit.
Figure 7-34 Configuring a Provider Within a DHCP Relay Policy
Chapter 8, “Implementing Tenant Policies,” covers assignment of DHCP relay
policies to EPGs and DHCP relay caveats in ACI.
Configuring MCP
A Layer 2 loop does not impact the stability of an ACI fabric because ACI can
broadcast traffic at line rate with little need to process the individual packets.
Layer 2 loops, however, can impact the ability of endpoints to process important
traffic. For this reason, mechanisms are needed to detect loops resulting from
miscabling and misconfiguration. One of the protocols ACI uses to detect such
externally generated Layer 2 loops is MisCabling Protocol (MCP).
MCP is disabled in ACI by default. To enable MCP, you must first enable MCP
globally and then ensure that it is also enabled at the interface policy level. As
part of the global enablement of MCP, you define a key that ACI includes in
MCP packets sent out on access ports. If ACI later receives an MCP packet with
the same key on any other port, it knows that there is a Layer 2 loop in the
topology. In response, ACI can either attempt to mitigate the loop by disabling
the port on which the MCP protocol data unit was received or generate a
system message to notify administrators of the issue.
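The key-matching idea MCP relies on can be sketched conceptually as follows. The names and values are invented; this is a simplified model, not how ACI is implemented internally:

```python
# A fabric originates MCP probes carrying its key. If a probe carrying the
# fabric's own key arrives back on another port, a Layer 2 loop exists
# outside the fabric on that port.
FABRIC_KEY = "my-mcp-key"  # hypothetical key configured globally

def handle_mcp_frame(received_key, ingress_port, loop_protection="port-disable"):
    if received_key != FABRIC_KEY:
        return "ignore"  # not one of this fabric's own probes
    # The fabric's own probe came back: a loop exists via ingress_port.
    if loop_protection == "port-disable":
        return f"disable {ingress_port}"  # mitigate by disabling the port
    return f"syslog loop via {ingress_port}"  # log/notify only

print(handle_mcp_frame("my-mcp-key", "eth1/10"))  # disable eth1/10
print(handle_mcp_frame("other-key", "eth1/10"))   # ignore
```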
To enable MCP globally, navigate to Fabric > Access Policies > Policies >
Global > MCP Instance Policy Default. As shown in Figure 7-35, you can
then enter a value in the Key field, toggle Admin State to Enabled, check the
Enable MCP PDU per VLAN checkbox, select the desired Loop Prevention
Action setting, and click Submit.
Figure 7-35 Enabling MCP Globally Within a Fabric
Table 7-4 describes these settings.
Table 7-4 Settings Available in Global MCP Policy
Admin State: This setting determines whether MCP is globally enabled or
disabled. The default value for this field is Disabled.
Enable MCP PDU per VLAN: By default, ACI only sends MCP packets on the
native VLAN on a port, which makes MCP useless in detecting Layer 2 loops
when an EPG has been trunked over the port with a VLAN tag. To ensure
that loops behind tagged ports can also be detected, the Enable MCP PDU
per VLAN option needs to be checked. If this option is checked, ACI sends
MCP packets on up to 256 VLANs per interface. If more than 256 VLANs
have been mapped to a port, the first 256 VLAN IDs are chosen.
Key: This is a string that ACI includes in MCP packets to uniquely identify the
fabric, with the intent to be able to later validate whether it has been the
originator of a received MCP packet.
Loop Detect Multiplication Factor: This is the number of self-originated
continuous MCP packets ACI needs to receive on a port before it declares a
loop. The default value for this setting is 3. With default settings, it takes
ACI approximately 7 seconds to detect a loop.
Loop Protection Action: This is the response ACI takes after receiving the
specified number of self-originated MCP packets on a port. If the Port
Disable option is checked, ACI disables the port on which the MCP packets
have been received and logs the incident. If the Port Disable option is not
checked, ACI just logs the incident, which can be forwarded to a syslog
server for administrators to take action.
Initial Delay: This is the delay time, in seconds, before MCP begins taking
action. By default, this option is set to 180 seconds, but it can be tuned down.
Transmission Frequency: This is the frequency for transmission of MCP
packets, in seconds and milliseconds.
To enable MCP on a port-by-port basis, create an explicit MCP interface policy
by navigating to Fabric > Access Policies > Policies > Interface, right-clicking
MCP Interface, and selecting Create MisCabling Protocol Interface Policy.
Assign a name to the policy and toggle Admin State to Enabled, as shown
in Figure 7-36.
Figure 7-36 Creating an Interface Policy with MCP Enabled
Then you can apply the policy on relevant interface policy groups, as shown
in Figure 7-37.
Figure 7-37 Applying an MCP Interface Policy to an Interface Policy Group
Configuring Storm Control
Storm control is a feature that enables ACI administrators to set thresholds for
broadcast, unknown unicast, and multicast (BUM) traffic so that traffic
exceeding user-defined thresholds within a 1-second interval can be suppressed.
Storm control is disabled in ACI by default.
Say that for the multiplayer gaming business unit, you would like to treat all
multiplayer servers with suspicion due to lack of IT visibility beyond the
physical interfaces of these servers. Perhaps these servers may someday have
malfunctioning network interface cards and might possibly trigger traffic
storms. If the servers never need to push more than 30% of the bandwidth
available to them in the form of multicast and broadcast traffic, you can enforce
a maximum threshold for multicast and broadcast traffic equivalent to 30% of
the bandwidth of the server interfaces. Figure 7-38 shows the settings for such a
storm control interface policy, configured by navigating to Fabric > Access
Policies > Policies > Interface, right-clicking Storm Control, and selecting
Create Storm Control Interface Policy.
Figure 7-38 Configuring a Storm Control Interface Policy
As indicated in Figure 7-38, thresholds can be defined using bandwidth
percentages or the number of packets traversing a switch interface (or
aggregates of interfaces) per second.
The Rate parameter determines either the percentage of total port bandwidth or
number of packets allowed to ingress associated ports during each 1-second
interval. The Max Burst Rate, also expressed as a percentage of total port
bandwidth or the number of packets entering a switch port, is the maximum
accumulation of rate that is allowed when no traffic passes. When traffic starts,
all the traffic up to the accumulated rate is allowed in the first interval. In
subsequent intervals, traffic is allowed only up to the configured rate.
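The interaction between Rate and Max Burst Rate described above can be modeled per 1-second interval. This is a simplified sketch using packet counts; the function and values are hypothetical:

```python
# Model storm control admission per interval: while no BUM traffic arrives,
# unused allowance accumulates up to max_burst; once traffic starts, the
# first busy interval may use the accumulated allowance, and subsequent
# intervals are limited to the configured rate.
def storm_control_allowance(offered, rate, max_burst):
    """offered: list of BUM packet counts per 1-second interval.
    Returns the packets admitted in each interval."""
    allowance = rate
    admitted = []
    for pkts in offered:
        admitted.append(min(pkts, allowance))
        if pkts == 0:
            allowance = min(allowance + rate, max_burst)  # accumulate while idle
        else:
            allowance = rate  # after traffic starts, only the rate applies
    return admitted

# Two idle seconds let the allowance build to the burst cap (300); the first
# busy interval admits up to the burst, and later intervals only the rate.
print(storm_control_allowance([0, 0, 500, 500], rate=100, max_burst=300))
```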
The Storm Control Action setting determines the action ACI takes if packets
continue to exceed the configured threshold for the number of intervals
specified in the Storm Control Soak Count setting. In the configuration shown
in Figure 7-38, the Storm Control Soak Count has been kept at its default value
of 3, but Storm Control Action has been set to Shutdown. This ensures that any
port or port channel configured with the specified interface policy is shut down
after a third consecutive 1-second interval in which it receives BUM traffic
exceeding the configured rate. Storm Control Soak Count can be configured to
between 3 and 10 intervals.
Figure 7-39 shows that once created, a storm control interface policy needs to
be applied to an interface policy group before it can be enforced at the switch
interface level.
Figure 7-39 Applying Storm Control to an Interface Policy Group
Note
In the configurations presented in Figure 7-38 and Figure 7-39, the assumption is that the
L2 Unknown Unicast setting on the bridge domains associated with the servers will be
configured using the Hardware Proxy setting, which enables use of spine-proxy
addresses for forwarding within a fabric when a destination is unknown to leaf switches.
If the L2 Unknown Unicast setting for relevant bridge domains were configured
to Flood, it would be wise to also set a threshold for unknown unicast traffic.
With Hardware Proxy enabled on the pertinent bridge domains, however, the
storm control threshold for unknown unicast traffic does not really come into
play.
Configuring CoPP
Control Plane Policing (CoPP) protects switch control planes by limiting the
amount of traffic for each protocol that can reach the control processors. A
switch applies CoPP to all traffic destined to the switch itself as well as
exception traffic that, for any reason, needs to be handled by control processors.
CoPP helps safeguard switches against denial-of-service (DoS) attacks
perpetrated either inadvertently or maliciously, thereby ensuring that switches
are able to continue to process critical traffic, such as routing updates.
ACI enforces CoPP by default but also allows for tuning of policing parameters
both at the switch level and at the interface level. Supported protocols for per-
interface CoPP are ARP, ICMP, CDP, LLDP, LACP, BGP, Spanning Tree
Protocol, BFD, and OSPF. CoPP interface policies apply to leaf ports only.
Switch-level CoPP can be defined for both leaf switches and spine switches and
supports a wider range of protocols.
Let’s say you need to ensure that multiplayer gaming servers can send only a
limited number of ICMP packets to their default gateways. They should also be
allowed to send only a limited number of ARP packets. This can be
accomplished via a CoPP interface policy. As indicated in Figure 7-40, the
relevant interface policy wizard can be accessed by navigating to Fabric >
Access Policies > Interface, right-clicking CoPP Interface, and selecting Create
per Interface per Protocol CoPP Policy.
Figure 7-40 Configuring a CoPP Interface Policy
The columns Rate and Burst in Figure 7-40 refer to Committed Information
Rate (CIR) and Committed Burst (BC), respectively. The Committed
Information Rate indicates the desired bandwidth allocation for a protocol,
specified as a bit rate or a percentage of the link rate. The Committed Burst is
the size of a traffic burst that can exceed the CIR within a given unit of time and
not impact scheduling.
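The relationship between CIR and BC can be sketched as a token bucket, the textbook policer model: tokens accumulate at the CIR up to a maximum depth of BC, and a packet conforms only if enough tokens are available. This is a generic illustration with hypothetical parameter values, not the exact hardware policer ACI uses.

```python
# Generic single-rate token-bucket policer illustrating CIR and BC
# (textbook model; values are hypothetical).
class TokenBucket:
    def __init__(self, cir_bps, bc_bits):
        self.cir = cir_bps      # Committed Information Rate (bits/sec)
        self.bc = bc_bits       # Committed Burst: bucket depth (bits)
        self.tokens = bc_bits   # bucket starts full
        self.last = 0.0

    def conforms(self, now, packet_bits):
        # Replenish tokens for the elapsed time, capped at BC.
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # within CIR/BC: forward
        return False      # exceeds the contract: police (drop)

tb = TokenBucket(cir_bps=8000, bc_bits=4000)
print(tb.conforms(0.0, 3000))  # True: burst fits in the full bucket
print(tb.conforms(0.0, 3000))  # False: only 1000 tokens remain
print(tb.conforms(0.5, 3000))  # True: 0.5 s refills the bucket (capped at BC)
```

The cap at BC is what makes the burst "committed": a protocol can momentarily exceed the CIR, but only by the bucket depth, after which it is held to the configured rate.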
For the CoPP interface policy to take effect, it needs to be applied to the
interface policy groups of the multiplayer gaming servers, as shown in Figure 7-
41.
Figure 7-41 Applying a CoPP Interface Policy to Interface Policy Groups
What do the default settings for CoPP on leaf switches look like? Example 7-
14 displays the result of the command show copp policy on a specific leaf
switch.
Example 7-14 Default CoPP Settings on a Leaf Switch
LEAF101# show copp policy
COPP Class     COPP proto     COPP Rate      COPP Burst
lldp           lldp           1000           1000
traceroute     traceroute     500            500
permitlog      permitlog      300            300
nd             nd             1000           1000
icmp           icmp           500            500
isis           isis           1500           5000
eigrp          eigrp          2000           2000
arp            arp            1360           340
cdp            cdp            1000           1000
ifcspan        ifcspan        2000           2000
ospf           ospf           2000           2000
bgp            bgp            5000           5000
tor-glean      tor-glean      100            100
acllog         acllog         500            500
mcp            mcp            1500           1500
pim            pim            500            500
igmp           igmp           1500           1500
ifc            ifc            7000           7000
coop           coop           5000           5000
dhcp           dhcp           1360           340
ifcother       ifcother       332800         5000
infraarp       infraarp       300            300
lacp           lacp           1000           1000
glean          glean          100            100
stp            stp            1000           1000
To modify the CoPP settings applied on a leaf, navigate to Fabric > Access
Policies > Policies > Switch, right-click Leaf CoPP, and select Create Profiles
for CoPP to be Applied at the Leaf Level. Notice that there are options to define
custom values for each protocol, apply default CoPP values on a per-platform
basis, apply permissive CoPP values, enforce strict CoPP values, and apply
values between permissive and strict. Figure 7-42 shows the selection of strict
CoPP settings. Strict values can potentially impact certain operations, such as
upgrades.
Figure 7-42 Creating a CoPP Switch Policy That Uses Aggressively Low
Values
Switch CoPP policies need to be applied to a switch policy group before they
can be associated with switch profiles. Figure 7-43 shows the creation and
application of switch CoPP policies to a new switch policy group.
Figure 7-43 Applying a CoPP Switch Policy to a Switch Policy Group
Finally, you can allocate the CoPP switch policy group to leaf selectors
referencing the intended switches as shown in Figure 7-44.
Figure 7-44 Applying a CoPP Switch Policy Group to Desired Leaf Selectors
Verification of the current CoPP settings indicates that the application of the
strict CoPP policy has dramatically lowered the CoPP values to those that
appear in Example 7-15.
Example 7-15 Switch CoPP Settings Following Application of Strict CoPP Values
LEAF101# show copp policy
COPP Class     COPP proto     COPP Rate      COPP Burst
lldp           lldp           10             10
traceroute     traceroute     10             10
permitlog      permitlog      10             10
nd             nd             10             10
icmp           icmp           10             10
isis           isis           10             10
eigrp          eigrp          10             10
arp            arp            10             10
cdp            cdp            10             10
ifcspan        ifcspan        10             10
ospf           ospf           10             10
bgp            bgp            10             10
tor-glean      tor-glean      10             10
acllog         acllog         10             10
mcp            mcp            10             10
pim            pim            10             10
igmp           igmp           10             10
ifc            ifc            7000           7000
coop           coop           10             10
dhcp           dhcp           10             10
ifcother       ifcother       10             10
infraarp       infraarp       10             10
lacp           lacp           10             10
glean          glean          10             10
stp            stp            10             10
Note
IFC stands for Insieme Fabric Controller. Even strict CoPP policies keep IFC values
relatively high. This is important because IFC governs APIC communication with leaf
and spine switches.
Another CoPP configuration option in ACI is to implement CoPP leaf and spine
prefilters. CoPP prefilter switch policies are used on spine and leaf switches to
filter access to authentication services based on specified sources and TCP ports
with the intention of protecting against DDoS attacks. When these policies are
deployed on a switch, control plane traffic is denied by default, and only the
traffic specified by CoPP prefilters is permitted. Misconfiguration of CoPP
prefilters, therefore, can impact connectivity within multipod configurations, to
remote leaf switches, and in Cisco ACI Multi-Site deployments. For these
reasons, CoPP prefilter entries are not commonly modified.
Modifying BPDU Guard and BPDU Filter Settings
Spanning Tree Protocol bridge protocol data units (BPDUs) are critical to
establishing loop-free topologies between switches. However, servers and
appliances rarely have a legitimate reason to send BPDUs into an ACI fabric or
to receive BPDUs from the network. It is therefore best to implement BPDU
Guard and BPDU Filter on all server-facing and appliance-facing ports unless
such devices genuinely need to participate in Spanning Tree Protocol.
Although ACI does not itself participate in Spanning Tree Protocol, this idea
still applies to ACI. When a BPDU arrives on a leaf
port, the fabric forwards it on all ports mapped to the same EPG on which the
BPDU arrived. This behavior ensures that non-ACI switches connecting to ACI
at Layer 2 are able to maintain a loop-free topology.
When applied on a switch port, BPDU Filter prevents Spanning Tree Protocol
BPDUs from being sent outbound on the port. BPDU Guard, on the other hand,
disables a port if a Spanning Tree Protocol BPDU arrives on the port.
If you were concerned that a group of servers might one day be hacked and used
to inject Spanning Tree Protocol BPDUs into the network with the intent of
triggering changes in the Spanning Tree Protocol topology outside ACI, it
would make a lot of sense to implement BPDU Filter and BPDU Guard on all
ACI interfaces facing such servers.
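The difference between the two features can be summarized in a small state model. This is an illustration of the behavior just described, not ACI code, and the class and attribute names are hypothetical: BPDU Filter suppresses BPDUs leaving the port, while BPDU Guard error-disables the port when a BPDU arrives.

```python
# Minimal model of BPDU Filter vs. BPDU Guard on a port
# (illustrative only; names are hypothetical).
class Port:
    def __init__(self, bpdu_filter=False, bpdu_guard=False):
        self.bpdu_filter = bpdu_filter
        self.bpdu_guard = bpdu_guard
        self.state = "up"

    def transmit(self, frame):
        # BPDU Filter: never send BPDUs out this port.
        if frame == "bpdu" and self.bpdu_filter:
            return None
        return frame

    def receive(self, frame):
        # BPDU Guard: an inbound BPDU error-disables the port.
        if frame == "bpdu" and self.bpdu_guard:
            self.state = "error-disabled"
            return None
        return frame

p = Port(bpdu_filter=True, bpdu_guard=True)
print(p.transmit("bpdu"))  # None: BPDU filtered outbound
print(p.receive("data"))   # data: normal traffic unaffected
p.receive("bpdu")
print(p.state)             # error-disabled: guard triggered
```

With both features enabled on a server-facing port, the fabric neither leaks BPDUs toward the server nor tolerates BPDUs injected by it.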
To implement BPDU Filter and BPDU Guard, you first create a Spanning Tree
Protocol interface policy with these features enabled (see Figure 7-45).
Figure 7-45 Creating a Spanning Tree Interface Policy
The policy should then be associated with interface policy groups for the
intended servers (see Figure 7-46).
Figure 7-46 Applying a Spanning Tree Interface Policy to Interface Policy
Groups
Note that FEX ports enable BPDU Guard by default, and this behavior cannot
be changed.
Modifying the Error Disabled Recovery Policy
When administrators set up features like MCP and BPDU Guard and determine
that ports should be error disabled as a result of ACI loop-detection events, the
error disabled recovery policy can be used to control whether the fabric
automatically reenables such ports after a recovery interval.
ACI can also move a port into an error-disabled state if an endpoint behind the
port moves between ports at a high frequency. The reasoning is that a high
rate of endpoint moves can be symptomatic of a loop.
To modify the error disabled recovery policy in a fabric, navigate to Fabric >
Access Policies > Policies > Global > Error Disabled Recovery
Policy. Figure 7-47 shows a configuration with automatic recovery of ports that
have been disabled by MCP after a 300-second recovery interval.
Figure 7-47 Editing the Error Disabled Recovery Policy
To configure whether ACI should disable ports due to frequent endpoint moves
in the first place, navigate to System > System Settings > Endpoint Controls
> Ep Loop Protection.
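The automatic-recovery behavior described above can be modeled as a simple timer check. The sketch below is hypothetical (function, field, and cause names are illustrative); the 300-second interval matches the example configuration, and only causes for which recovery is enabled are re-enabled.

```python
# Sketch of error disabled recovery: a port disabled by MCP is
# automatically re-enabled once the recovery interval elapses,
# provided recovery is enabled for that cause (illustrative model).
def recover_ports(ports, now, recovery_interval=300, recover_causes=("mcp",)):
    """ports: list of dicts with 'state', 'cause', and 'disabled_at'."""
    for port in ports:
        if (port["state"] == "error-disabled"
                and port["cause"] in recover_causes
                and now - port["disabled_at"] >= recovery_interval):
            port["state"] = "up"
            port["cause"] = None
    return ports

ports = [
    {"name": "eth1/10", "state": "error-disabled", "cause": "mcp",
     "disabled_at": 0},
    {"name": "eth1/11", "state": "error-disabled", "cause": "bpdu-guard",
     "disabled_at": 0},
]
recover_ports(ports, now=301)
print(ports[0]["state"])  # up: MCP recovery enabled and interval elapsed
print(ports[1]["state"])  # error-disabled: recovery not enabled for this cause
```

Ports disabled by causes not selected in the policy stay down until an administrator intervenes.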
Configuring Leaf Interface Overrides
A leaf interface override policy allows an interface that already has an
interface policy group assignment to apply an alternate interface policy group
instead.
Imagine that a group of ports have been configured on Node 101, using a
specific interface policy group. One of the interfaces connects to a firewall, and
security policies dictate that LLDP and CDP toward the firewall need to be
disabled on all firewall-facing interfaces. It might be impossible to modify the
interface policy group associated with the port because it might be part of a port
block. In this case, a leaf interface override can be used to assign an alternative
interface policy group to the port of interest.
To implement such a leaf interface override, you create a new interface policy
group with the desired settings. Then you navigate to Fabric > Access Policies
> Interfaces > Leaf Interfaces, right-click Overrides, and select Create Leaf
Interface Overrides. Set Path Type and Path to identify the desired switch
interface and the new policy group that needs to be applied to the
interface. Figure 7-48 shows a leaf interface override configuration.
Figure 7-48 Configuring a Leaf Interface Override
With this configuration, LLDP and CDP have been disabled on firewall-facing
interface 1/16.
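The override logic amounts to a simple precedence rule, which can be sketched as follows. The function and the policy group names here are hypothetical, chosen only to mirror the firewall scenario above: when an override exists for an interface, it wins over the policy group applied through the port block.

```python
# Sketch of how a leaf interface override takes precedence over the
# interface policy group applied through a port block (illustrative;
# names are hypothetical).
def effective_policy_group(interface, block_assignments, overrides):
    """Return the policy group in effect for an interface.

    block_assignments: {interface: policy_group} from interface selectors
    overrides:         {interface: policy_group} from leaf interface overrides
    """
    # An override, when present, wins over the port-block assignment.
    return overrides.get(interface, block_assignments.get(interface))

# Ports 1/11-1/20 share one policy group via a port block; 1/16 faces
# the firewall and is overridden with a CDP/LLDP-disabled group.
block = {f"eth1/{n}": "servers-polgrp" for n in range(11, 21)}
override = {"eth1/16": "firewall-no-cdp-lldp-polgrp"}

print(effective_policy_group("eth1/15", block, override))  # servers-polgrp
print(effective_policy_group("eth1/16", block, override))  # firewall-no-cdp-lldp-polgrp
```

The other ports in the block are untouched, which is the point of the override: one interface gets different settings without breaking up the shared port block.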
Configuring Port Channel Member Overrides
When an override needs to be applied to one or more links that are part of a port
channel or vPC but not necessarily the entire port channel or vPC, a port
channel member override can be used. Examples of port channel member
overrides include the implementation of LACP fast timers and the modification
of LACP port priorities.
To configure a port channel member override, first configure an interface policy
that will be used to override the configuration of one or more member ports.
Create a port channel member policy by navigating to Fabric > Access Policies
> Policies > Interface, right-clicking Port Channel Member, and selecting
Create Port Channel Member Policy. Figure 7-49 shows a policy that enables
LACP fast timers.
Figure 7-49 Configuring a Port Channel Member Policy
Note that the port priority setting in this policy has not been modified from its
default. The Priority setting can be used to determine which ports should be put
in standby mode and which should be active when there is a limitation
preventing all compatible ports from aggregating. A higher port priority value
means a lower priority for LACP.
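The active/standby selection can be illustrated with a short sketch of generic LACP behavior (not ACI-specific code; port names and priority values are hypothetical): when more compatible ports exist than can be aggregated, ports are ranked by priority value, lower values win, and the surplus goes to standby.

```python
# Sketch of LACP active/standby selection by port priority: a lower
# numeric priority value means a higher LACP priority (generic model).
def select_active(ports, max_active):
    """ports: list of (name, priority) tuples.
    Return (active, standby) name lists."""
    ranked = sorted(ports, key=lambda p: p[1])  # lower value = preferred
    active = [name for name, _ in ranked[:max_active]]
    standby = [name for name, _ in ranked[max_active:]]
    return active, standby

# eth1/33 carries a low priority value (100), so it is preferred over
# the two ports left at the default of 32768.
ports = [("eth1/31", 32768), ("eth1/32", 32768), ("eth1/33", 100)]
active, standby = select_active(ports, max_active=2)
print(active)   # ['eth1/33', 'eth1/31']
print(standby)  # ['eth1/32']
```

Real LACP breaks priority ties by port number; the stable sort over the input order approximates that here.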
Example 7-16 shows the current timer configuration for port 1/32 on Node 101.
This port has been configured as part of a port channel along with port 1/31.
Example 7-16 Ports 1/31 and 1/32 Both Default to Normal LACP Timers
LEAF101# show port-channel summary interface port-channel 2
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
-------------------------------------------------------------------------------
Group  Port-        Type     Protocol  Member Ports
       Channel
-------------------------------------------------------------------------------
2      Po2(SD)      Eth      LACP      Eth1/31(P)  Eth1/32(P)
LEAF101# show lacp interface ethernet 1/31 | egrep -A8 "Local" | egrep "Local|LACP"
Local Port: Eth1/31  MAC Address= 00-27-e3-15-bd-e3
  LACP_Activity=active
  LACP_Timeout=Long Timeout (30s)
LEAF101# show lacp interface ethernet 1/32 | egrep -A8 "Local" | egrep "Local|LACP"
Local Port: Eth1/32  MAC Address= 00-27-e3-15-bd-e3
  LACP_Activity=active
  LACP_Timeout=Long Timeout (30s)
To apply the port channel member policy, you first associate the policy to the
desired port channel or vPC interface policy group in the form of an override
policy group (see Figure 7-50).
Figure 7-50 Adding a Port Channel Member Policy to an Interface Policy
Group
Next, you determine specifically which ports the override policy applies
to. Figure 7-51 shows the application of the policy to port 1/32. After you shut
down and reenable the port, it appears to have LACP fast timers implemented.
This can be confirmed in the output displayed in Example 7-17.
Figure 7-51 Applying an Override to a Member of a Port Channel
Example 7-17 Port 1/32 Overridden Using Fast LACP Timers
LEAF101# show lacp interface ethernet 1/31 | egrep -A8 "Local" | egrep "Local|LACP"
Local Port: Eth1/31  MAC Address= 00-27-e3-15-bd-e3
  LACP_Activity=active
  LACP_Timeout=Long Timeout (30s)
LEAF101# show lacp interface ethernet 1/32 | egrep -A8 "Local" | egrep "Local|LACP"
Local Port: Eth1/32  MAC Address= 00-27-e3-15-bd-e3
  LACP_Activity=active
  LACP_Timeout=Short Timeout (1s)
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in the Introduction, you
have a couple of choices for exam preparation: Chapter 17, “Final Preparation,”
and the exam simulation questions in the Pearson Test Prep Software Online.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with the Key Topic icon
in the outer margin of the page. Table 7-5 lists these key topics and the page
number on which each is found.
Table 7-5 Key Topics for Chapter 7

Key Topic Element   Description
Figure 7-1          Shows the settings that are available when configuring an LLDP interface policy
Figure 7-2          Shows the settings that are available when configuring a link level interface policy
Paragraph           Clarifies that the leaf access port policy group needs to be used when configuring non-aggregated ports
Figure 7-5          Shows a sample configuration of a leaf access port policy group
Figure 7-7          Shows how to map an interface policy group to switch ports via switch and port selectors
Figure 7-11         Shows a sample configuration of an interface policy group with LACP enabled
Table 7-3           Details the common control options available for configuration of port channel interface policies
Paragraph           Addresses the extent of reusability of port channel and vPC interface policy groups
Figure 7-18         Shows how to configure vPC domains in ACI
Paragraph           Describes the result of defining a vPC explicit protection group from a forwarding perspective
Figure 7-19         Shows the configuration of a port channel interface policy with static port channeling enabled
Figure 7-20         Shows the creation of a vPC interface policy group
Paragraph           Describes the use case and benefits of AAEP EPGs
List                Lists the steps necessary for deploying a new fabric extender in ACI
Paragraph           Explains the function of dynamic breakout ports
Figure 7-30         Shows a sample configuration of a leaf port breakout group
Figure 7-31         Depicts the implementation of access policies for dynamic breakout subports and the resulting port numbering convention
Paragraph           Describes a common use case for implementing the Dot1p Preserve setting
Figure 7-32         Shows how to enable the Dot1p Preserve setting
Figure 7-33         Shows configuration of a DHCP relay policy in the Access Policies view
Figure 7-34         Shows the addition of a DHCP server to a DHCP relay policy in the Access Policies view
Paragraph           Provides an understanding of the steps needed to implement MCP
Paragraph           Explains where MCP can be globally enabled
Table 7-4           Describes the configuration settings available when implementing MCP globally
Paragraph           Reinforces the idea that MCP needs to be enabled both globally and at the interface level
Paragraph           Describes the use case for storm control and its default configuration state in ACI
Paragraph           Describes CoPP in ACI and how CoPP can be configured
Figure 7-47         Shows how to edit the error disabled recovery policy
COMPLETE TABLES AND LISTS FROM MEMORY
Print a copy of Appendix C, “Memory Tables” (found on the companion
website), or at least the section for this chapter, and complete the tables and lists
from memory. Appendix D, “Memory Tables Answer Key” (also on the
companion website), includes completed tables and lists you can use to check
your work.
DEFINE KEY TERMS
Define the following key terms from this chapter and check your answers in the
glossary:
link debounce interval
vPC peer dead interval
dynamic breakout port
leaf interface override
port channel member override