
Inside the Cisco Nexus 6004 Switch

The Cisco Nexus 6004 is a 4U L2/L3 switch with up to 96 ports of 40-Gbps Ethernet, or 384 ports of 10-Gbps Ethernet. The ports are QSFP, which means a breakout cable is required to support 10 Gbps.

The primary target market for this switch is the data center that requires a non-blocking aggregation layer running 40-Gbps uplinks, although the 6004 could conceivably play a core role in certain deployments.

An obvious use case for the 6004 is as part of the backbone in a leaf-spine network topology, where access-layer leaf switches are uplinked to a series of spine-layer aggregation switches. This design keeps hosts topologically close to one another, minimizing hop counts and latency.

The 6004 ships with 48 fixed ports (sans optics), and offers four linecard expansion module (LEM) slots. Only one LEM is available as of this writing, with at least two more reportedly being considered by Cisco.

The N6K-C6004-M12Q, which is currently shipping, offers 12 ports of 40-Gbps Ethernet or FCoE. A Unified Port LEM (tentative availability in the second half of 2013) would offer native 2/4/8-Gbps Fibre Channel support, as well as 1/10-GbE SFP+ ports.

Cisco is considering a 100 GbE LEM, which would most likely include FCoE support. Tentative availability is the first half of 2014.

The 6004 is not a feature match for the Nexus 5K line, in that it doesn't offer native Fibre Channel. Then again, the 6004 isn't really positioned in that space. The 6004's focus is that of a monster 40 GbE switch with line-rate forwarding capacity on all ports.

40 GbE isn't a bandwidth requirement for SAN fabrics just yet, and designers requiring storage at the outset have the option of FCoE, even multihop FCoE. A lack of native Fibre Channel seems a minor issue that's likely to be addressed by Cisco within the next 12 months.

Power consumption is a possible consideration for those evaluating the 6004. While I'm not aware of a direct competitor to the 6004, given its port density, I compared it against a few other (as it happens, fixed-configuration) 40 GbE switches using numbers published on the vendors' websites.

After a nominal analysis of per-port power draw based on the maximum wattage each vendor specifies, the 6004 does look like a bit of an electron chewer; the arithmetic is worked through after the list below.

• The Arista Networks 7050Q has 16 40 GbE ports and is rated for 303W max, which equals 18.94W max per port.

• The Juniper Networks QFX3600 has 16 40 GbE ports and is rated for 345W max, which equals 21.56W max per port.

• The Dell Force10 Z9000 has 32 40 GbE ports and is rated for 800W max, which equals 25W max per port.

• The Cisco Nexus 6004 has 96 40 GbE ports (when fully populated) and is rated for 3300W max, which equals 34.375W max per port.
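
For reference, the per-port figures above are simply each switch's maximum rated wattage divided by its port count. A quick Python script of mine (using the vendor numbers cited above) reproduces the math, rounded to two decimals:

    # Per-port power draw, from the vendor-published maximum wattages above.
    switches = {
        "Arista 7050Q":       (303, 16),   # (max watts, 40 GbE ports)
        "Juniper QFX3600":    (345, 16),
        "Dell Force10 Z9000": (800, 32),
        "Cisco Nexus 6004":   (3300, 96),  # fully populated with LEMs
    }

    for name, (watts, ports) in switches.items():
        print(f"{name}: {watts / ports:.2f}W max per port")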

Cisco states that the 6004 forwards at line rate using any combination of 10-Gbps and 40-Gbps ports, at a latency of 1 microsecond. That 1-microsecond latency is consistent even when interfaces are loaded with functionality such as security and QoS policies. To understand how the switch accomplishes this feat, I'll review its internal architecture.

Overall Fabric Architecture

The job of any switch is to accept chunks of data flowing into it, determine where those chunks of data should go and send them. Going forward, I'll refer to these chunks as "packets," although that could mean L2 frames or L3 packets.

The key underlying components of the Nexus 6004 that create the non-blocking architecture are two unique Cisco ASICs that a packet flows through on its trip through the 6004:

• Unified Port Controller (UPC)

• Crossbar Switch Fabric

Let's take a look at each. I'll examine the UPC at both ingress and egress.

Ingress Unified Port Controller

As implied by the name "ingress," this is where the packet flows into the switch. There's one UPC for every three 40 GbE (or twelve 10 GbE) ports in the Nexus 6004. Whether traffic is flowing into or out of the UPC, it is a busy chip, with four important functions (sketched in code after the list below).

1. Media access control is a lower-level function that handles things like Ethernet framing and flow control.

2. The forwarding controller determines *if* a packet is to be forwarded, *where* it will go and *what* it will look like when it gets there. Policy is applied here (access lists and so on). Functions such as tunnel encapsulation/decapsulation and header rewrites happen here, as well.

3. The buffer manager handles queuing and dequeuing of packets. The need for buffers might seem counterintuitive in a non-blocking fabric, but buffers help manage contention when multiple packets try to reach a single egress port during the same clock cycle.

4. The queuing subsystem manages the virtual output and egress queues themselves, as opposed to the packets in the queues (which is what the buffer manager does).
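
To make the division of labor concrete, here's a toy Python model of a packet passing through those four functions in order. It's strictly my own illustration, not Cisco's implementation; the access list, forwarding table and packet format are all invented for the sketch:

    from collections import defaultdict, deque

    ACL_DENY = {"10.9.9.9"}    # hypothetical access list
    FIB = {"10.1.2.3": 7}      # hypothetical forwarding table: destination -> egress port

    def mac_layer(frame):
        # 1. Media access control: Ethernet framing and flow control
        #    (both elided here; the frame is simply handed up).
        return frame

    def forwarding_controller(pkt):
        # 2. Decide IF the packet forwards, WHERE it goes, and WHAT it will
        #    look like: policy, lookup and header rewrites all happen here.
        if pkt["dst"] in ACL_DENY:
            return None                       # dropped by policy
        pkt["egress_port"] = FIB[pkt["dst"]]  # forwarding lookup
        pkt["ttl"] -= 1                       # one example of a header rewrite
        return pkt

    voqs = defaultdict(deque)                 # keyed by (egress port, priority)

    def buffer_manager(pkt):
        # 3. Queue and dequeue the packets themselves.
        voqs[(pkt["egress_port"], pkt["prio"])].append(pkt)

    def queuing_subsystem():
        # 4. Manage the queues (decide which one is serviced next),
        #    as opposed to the packets inside them.
        return next((q for q in voqs.values() if q), None)

    pkt = forwarding_controller(mac_layer({"dst": "10.1.2.3", "ttl": 64, "prio": 0}))
    if pkt:
        buffer_manager(pkt)
    print(queuing_subsystem())                # the queue now holding our packet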

Once the packet has made it into the UPC, had policy applied to it, and been rewritten or encapsulated if necessary, the UPC determines the egress port. The packet is then buffered in the appropriate queue to prepare for its journey to the egress queue via the crossbar switch fabric. In the ingress UPC, the packet is buffered in a virtual output queue (VOQ) if necessary. Let's look in more detail at VOQs, a common feature of input-queued switches.


In the 6004, every ingress interface gets eight VOQs (one per 802.1p priority class) per egress interface. With a maximum of 384 physical interfaces in the system, that translates to 3,072 VOQs per ingress interface.

Practically speaking, that means that as traffic flows into a switch port, the switch determines which port it should egress on and tries to forward it there. But if the egress port is busy, the ingress port doesn't have to stall all the other incoming packets. Instead, the ingress port can service traffic flowing into the other VOQs and send it along.

Therefore, VOQs eliminate what's known as "head of line blocking," where the packet at the front of the line holds up all the packets queued behind it. The 6004 has enough VOQs to be able to forward from any ingress port to any egress port on any traffic class. Note that the 3,072 VOQs I refer to are for unicast traffic. Multicast traffic gets 8,192 VOQs of its own, and 32 more VOQs are dedicated to SPAN traffic (that is, port mirroring).
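
Here's a minimal Python sketch of the idea. The queue-selection order is a naive first-match scan of my own invention (Cisco doesn't document the 6004's scheduler at this level), but it shows why a busy egress port doesn't stall unrelated traffic:

    from collections import deque

    EGRESS_PORTS = 384    # maximum physical interfaces in a full 6004
    CLASSES = 8           # one VOQ per 802.1p priority class

    # One VOQ per (egress port, priority class) pair on each ingress interface.
    voqs = {(port, cls): deque()
            for port in range(EGRESS_PORTS) for cls in range(CLASSES)}
    print(len(voqs))      # 3072, matching the unicast VOQ count above

    def enqueue(pkt):
        # On ingress, each packet is sorted into the VOQ for its destination.
        voqs[(pkt["egress_port"], pkt["prio"])].append(pkt)

    def service(busy_ports):
        # A single FIFO would stall every packet behind one destined for a
        # busy egress port. With VOQs, the scheduler simply skips queues
        # whose egress port is busy and services the rest.
        for (port, cls), queue in voqs.items():
            if queue and port not in busy_ports:
                return queue.popleft()
        return None

    enqueue({"egress_port": 5, "prio": 0, "id": "A"})  # egress port 5 is busy
    enqueue({"egress_port": 9, "prio": 0, "id": "B"})
    print(service(busy_ports={5}))   # packet B forwards; A waits without blocking B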

When a packet has been assigned to a VOQ, it's ready to make its trip across the crossbar fabric.
