APIC Failure Testing

I had an interesting conversation the other day about failure of the APICs within an ACI fabric. This particular customer had been burned by another vendor’s fabric solution with regards to failure or upgrade scenarios. I mentioned that the APICs could all blow up and the data plane of the fabric would keep chugging along. This customer said, “I want to see it. I want to pull the power to the controllers and see it.”

I realized that I had been saying this scenario was possible for a while now, but I had never explicitly tested it. I figured I had better ensure this behavior before I have my customer come out and pull the plug on my APICs.

Here is the setup. I have two EPGs on either side of an F5 LTM. This LTM is doing a simple round robin load balancing of some apache web servers. In my test, I am pinging the virtual server on the LTM. I also tested that I could get to the web pages served by the pool defined in the LTM.



I am advertising the bridge domain subnet associated with the F5 Outside EPG via an L3 Out.

Starting a ping to all three controllers and the VS on the LTM appliance, I can ping all four addresses. I can also access web pages from servers in the Web EPG that are load balanced by the LTM.

I logged into the CIMC of all three controllers and did a reboot. At this point I lost pings to all three controllers as expected. I did, however, keep my pings to the VS and I was still able to access web pages.


Now I have proof that the APIC failures do not affect data plane traffic. I can rest easy the next time a customer questions that statement.

UCS and ACI VLAN Creation (Secret San Jose)

To follow up on the previous post about the UCS Python tool, here is the script I wrote to create the VLAN pool within UCS and ACI.


The script takes an argument for a VLAN range (for example 100-199).

It takes that VLAN range and creates the VLAN pool in ACI. It also creates those VLANs in UCSM.

It then takes those VLANs and applies them to the vNIC templates for the A fabric and B fabric of UCS. The template names would need to be modified to match your environment.
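A minimal sketch of the range-parsing and ACI pool-creation steps described above. This is not the author's actual script; the pool name, function names, and the choice to build a raw XML payload (rather than use the UCS/ACI SDKs) are my assumptions for illustration. The fvnsVlanInstP/fvnsEncapBlk element names follow the standard ACI object model.

```python
# Sketch only: parse a "100-199" style argument and build the XML payload
# for an ACI VLAN pool. Pool name "UCS_Pool" is a hypothetical example.

def parse_vlan_range(arg: str) -> tuple:
    """Turn '100-199' into (100, 199), validating the bounds."""
    start, end = (int(v) for v in arg.split("-"))
    if not (1 <= start <= end <= 4094):
        raise ValueError("invalid VLAN range: " + arg)
    return start, end

def vlan_pool_xml(name: str, start: int, end: int, alloc: str = "static") -> str:
    """Build the fvnsVlanInstP payload ACI expects for a VLAN pool."""
    return (
        f'<fvnsVlanInstP name="{name}" allocMode="{alloc}" '
        f'dn="uni/infra/vlanns-[{name}]-{alloc}">'
        f'<fvnsEncapBlk from="vlan-{start}" to="vlan-{end}"/>'
        f"</fvnsVlanInstP>"
    )

start, end = parse_vlan_range("100-199")
print(vlan_pool_xml("UCS_Pool", start, end))
```

The same (start, end) pair could then be fed to the UCSM side to create the matching VLANs and update the vNIC templates.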

Updating ACI Python SDK

Leaving this here for future reference.

After the ACI fabric is upgraded, you must update the egg files in your acicobra SDK.

The egg files are available on your upgraded controller.



Download the egg files to wherever you store them on your system. Mine are in /home/bob/Cobra.

Once the files are downloaded, uninstall the current acicobra package and install the new egg versions.

bob@UbuntuBob:~/Cobra$ pip uninstall acicobra
bob@UbuntuBob:~/Cobra$ sudo easy_install -Z ./acimodel-2.1_1h-py2.7.egg
bob@UbuntuBob:~/Cobra$ sudo easy_install -Z ./acicobra-2.1_1h-py2.7.egg
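After reinstalling, it can be handy to confirm which SDK version Python actually sees. This is a generic standard-library sketch (assuming Python 3.8+, not the SDK itself); whether easy_install-installed eggs register their metadata can vary, so treat a "not installed" result as a prompt to check sys.path rather than proof.

```python
# Report the installed version of the SDK packages; the package names
# (acicobra, acimodel) match the egg files installed above.
from importlib.metadata import version, PackageNotFoundError

def sdk_version(pkg: str) -> str:
    """Return the installed version of pkg, or 'not installed'."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

for pkg in ("acicobra", "acimodel"):
    print(pkg, sdk_version(pkg))
```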




Scenario: My Customer is an ISP who has deployed an ACI fabric as a backbone for their different customer interconnects. They want to re-use the same VLAN IDs across leaf switches. They want these different connections to be different EPGs even though they are using the same VLAN ID.

There is an L2 setting within ACI that changes the VLAN scope from global to local. Utilizing this setting, we can have overlapping VLAN IDs associated with static bindings on different EPGs.

However, it is not as simple as just changing this one setting. There are some order-of-operations or design caveats. Here is how I have configured this per-port VLAN functionality.

  1. VLAN Pools – You must have a different VLAN pool for each instance. Even though the same VLAN IDs will be in each pool, the pools themselves must be unique.
  2. Physical Domains – You must have different physical domains for each instance where you will deploy the overlapping VLANs in EPGs.
  3. Create the AEP associated with the previously created physical domain.
  4. Create the interface policy for the per-port VLAN characteristic (Interface Policies > Policies > L2 Interface). Create a policy using the Port Local scope option.
  5. Create the policy group and include the previously created attributes.
  6. Create the leaf profiles and include the interface selectors.
  7. Create the switch policy profile.
  8. You can then create the ANP and EPGs using the static binding to associate to the same VLAN and interface on the leaf switch.

In summary, the two requirements I ran into during testing are that you must configure unique VLAN pools and unique physical domains for per-port VLANs to work.
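The two caveats above (one VLAN pool and one physical domain per instance, even with identical VLAN IDs) can be sketched as payload generation. The instance and object names here are hypothetical; the fvnsVlanInstP, physDomP, and infraRsVlanNs element names follow the standard ACI object model.

```python
# Sketch: build one unique VLAN pool plus physical domain pair per
# customer instance, even though each pool holds the same VLAN IDs.

def instance_objects(instance: str, start: int, end: int) -> str:
    """Return the pool + physical domain XML for one customer instance."""
    pool = f"{instance}_Pool"
    return (
        f'<fvnsVlanInstP name="{pool}" allocMode="static" '
        f'dn="uni/infra/vlanns-[{pool}]-static">'
        f'<fvnsEncapBlk from="vlan-{start}" to="vlan-{end}"/>'
        f"</fvnsVlanInstP>"
        f'<physDomP name="{instance}_PhysDom" dn="uni/phys-{instance}_PhysDom">'
        f'<infraRsVlanNs tDn="uni/infra/vlanns-[{pool}]-static"/>'
        f"</physDomP>"
    )

# Same VLAN 100 in both instances, but distinct pools and domains.
for customer in ("CustA", "CustB"):
    print(instance_objects(customer, 100, 100))
```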


There are a few different ways to interact with ACI: the GUI, the API, or some sort of orchestration/automation system such as UCS Director. The GUI is confusing and can be cumbersome. Speaking only for myself, I learned the concepts of ACI through the GUI. Understanding the necessary policies and groups helped me know what I needed to configure via the API. I have played around with the Python SDK, but I don't have enough background in programming or interacting with APIs to tell you how it compares to other vendors' APIs, especially when it comes to documentation.

What I wanted to document here is using the post functionality from the GUI. This is an easy way to deploy pieces of an ACI configuration.

In this example we are going to configure a VLAN Pool using the XML functionality.

If we right-click on any existing VLAN pool we can pull the XML or JSON configuration for that object.


I am using XML for this example. Choose configuration only and subtree.


Save that XML file and open it to edit. We then edit the XML file with our new information. In this case I am creating a new pool with the name "BL_New_Pool" and a VLAN range of 2000-2001.

<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
	<fvnsVlanInstP allocMode="dynamic" descr="" dn="uni/infra/vlanns-[BL_New_Pool]-dynamic" name="BL_New_Pool" ownerKey="" ownerTag="">
		<fvnsEncapBlk allocMode="dynamic" descr="" from="vlan-2000" name="" to="vlan-2001"/>
	</fvnsVlanInstP>
</imdata>

Save this edited file. Right-click on any object within ACI and choose "Post." In the Parent DN field, remove everything except the base uni/, and then choose your file to import.


This will create your new object within ACI.
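The same post can be scripted against the REST API instead of the GUI. This is a sketch only: the host name and token are placeholders, and I am building the request without sending it. The /api/mo/<dn>.xml path and the APIC-cookie header are the standard APIC REST conventions.

```python
# Sketch of posting the edited XML to the APIC via the REST API.
# "apic1.example.com" and "TOKEN" are placeholders.
import urllib.request

def mo_url(apic: str, parent_dn: str = "uni") -> str:
    """Build the APIC endpoint for posting config under a parent DN."""
    return f"https://{apic}/api/mo/{parent_dn}.xml"

def post_xml(apic: str, token: str, payload: str) -> urllib.request.Request:
    """Build (but don't send) the POST; send it with urllib.request.urlopen."""
    return urllib.request.Request(
        mo_url(apic),
        data=payload.encode(),
        headers={"Cookie": f"APIC-cookie={token}"},
        method="POST",
    )

req = post_xml("apic1.example.com", "TOKEN", "<fvnsVlanInstP name='BL_New_Pool'/>")
print(req.full_url)
```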


This is a quick and easy way to create new objects within ACI without all of the clicking.

One foot in the door, the other one in the gutter

The changing role of the network engineer. The devops future of network engineering. Network developers instead of network engineers. A python script to rule the network. Abstraction. Virtualization. Overlays. Underlays.

In the networking industry, are the changes happening quickly or slowly? Sometimes it seems like SDN is right around the corner; sometimes it seems like it is still a dream far away, reserved for academics and people with more time than me.

I am very interested in the future of network engineering. I've been in IT for most of my adult life, and I have been dedicated to the network side of things for the past ten or so years. I am only 40, so it's not like I can retire soon, not that I'd want to. I get excited about technology, and I want to be part of this new wave of datacenter architecture. But…

I read blogs, I learn python, I run ODL and mininet on my laptop. All the while my day job keeps me installing and maintaining “traditional” networks. My knowledge of Nexus 7Ks is what pays my mortgage. So how do I keep my foot in the door of the new datacenter while selling the old?

I work for a VAR. We sell gear and services. There has been a push internally, for as long as I have been here, to emphasize the services over the hardware. But it's hard to break old habits, I suppose. The services, 90 percent of the time, are tied to a purchase of really expensive gear. I still think it is good gear, but until we have something in the SDN realm to sell, I'm stuck with traditional networking. It will be interesting for me to see where Cisco's Nexus 9K and ACI go this year. Will my customers want it? Will my sales force want to sell it?

The future of the Network Engineer

As any engineer or follower of the world of networking knows, SDN is a prevalent topic with regard to the future of networking. Pretty much since I started hearing about SDN concepts and how they apply to our industry, people have been asking, and trying to answer, the question: what does this mean for the network engineers of today? And the real answer, obviously, is that nobody knows for sure. Undoubtedly our jobs will change, but nobody can put a date on a calendar saying when the job you have now will be irrelevant. As many people have said before me: change is the constant in the world of technology.

With all of this in mind, I was looking through the session catalog for this year's Cisco Live U.S. There are many sessions addressing Cisco's specific solution for the general concept of SDN: Application Centric Infrastructure. There are also some sessions on other areas of the SDN spectrum; DevOps and OpenStack are two other topics I saw. There is also a session on the changing role of the engineer. I think it might be interesting to see where Cisco thinks the role of the engineer is going versus what else has been written.

BRKCRT-1601 – Evolution of the Job Roles with Introduction of Open Networks: Network Programmability is changing the way IT professionals are operating and how applications can be integrated into the infrastructure. Learn how Cisco is addressing the challenges and opportunities of creating the new workforce facing new technologies like Cloud and Network Programmability. We will be sharing the evolution of the Data Center Certification portfolio to support Partners and Customers in the journey towards the cloud and the new paradigms of network programmability.

It looks like the certification track of the data center will also be discussed. The CCIE Data Center has only been around for about two years, so in my opinion, it seems a little soon for Cisco to announce a version 2 of the exam. But that is just my opinion, I don’t know anything.

VSS Pairs connected via a layer-3 link

Recently I installed two pairs of 4500-X switches, each pair set up as a VSS. VSS allows two switches to share a control and management plane; in essence, to the administrator, it looks like one switch. The two VSS pairs were installed at separate sites (we'll call them HQ and Colo for the sake of this post). The connection was an L2 link from a service provider. We created an L3 adjacency between the two sites running EIGRP as the routing protocol. Previously there was a standalone 3560 on the Colo side and a 3750G stack on the HQ side.

To setup VSS you must define the VSL ports (virtual switch link) and the virtual switch domain. For example:

int po60
 description VSL to Switch 2
 switch virtual link 1
 no shut
switch virtual domain 100
 switch 1
 switch 1 priority 200
int range te1/15-16
 switchport mode trunk
 description part of VSL po60
 channel-group 60 mode on
 no shut

A similar configuration would be put on the second switch; you would then enter the command switch convert mode virtual, and you have a VSS pair.

The issue I ran into, and the point of this post, is when I tried to turn up the second site, HQ in this example, the L3 link would not come up. I could see my neighbor in CDP, but my L3 interface showed down/down.

The answer lies in the switch virtual domain ID number. It turns out when you try to connect two VSS via an L3 link, if the switch virtual domain ID is the same, the link will not come up.


By changing the domain ID on the HQ pair, L3 came up and EIGRP was happy; I had routes everywhere. To change the ID, you don't have to wipe out your existing VSS-related config; you just need to change the ID and issue the switch convert command again.

switch virtual domain 101
switch convert mode virtual


I’d love to know why this is, but unfortunately I wasn’t able to grab a packet capture while I was at the customer site. If I get the chance, I will definitely try to figure it out.
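Since the collision is easy to miss until the link stays down, the check above can be scripted. This is a toy sanity check of my own (not a Cisco tool), assuming you have both sites' running configs as text; the sample config strings are hypothetical.

```python
# Toy sanity check: pull the "switch virtual domain" ID out of each
# site's config text and flag the collision described above.
import re
from typing import Optional

def virtual_domain(config: str) -> Optional[int]:
    """Return the VSS virtual domain ID found in a config blob, if any."""
    m = re.search(r"switch virtual domain (\d+)", config)
    return int(m.group(1)) if m else None

hq = "switch virtual domain 100\n switch 1\n"
colo = "switch virtual domain 100\n switch 1\n"

if virtual_domain(hq) == virtual_domain(colo):
    print("domain ID collision: change one pair before bringing up the L3 link")
```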

Enable Jumbo Frames on Nexus 5000

Enabling jumbo frames in the Nexus world differs from the IOS world. On the 5Ks this is a global setting applied with a policy map. Once it is in place, all interfaces are capable of sending and receiving jumbo frames.

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

Cisco Documentation is here.