Cisco Nexus B22HP FEX Deployment Guide
It seems that Cisco hasn’t completely ticked off HP with its UCS line of servers, because the two companies were able to collaborate on a fabric extender for the HP c-Class chassis: the Nexus B22HP. A fabric extender is a Cisco Nexus switch (typically a Nexus 2000) that is managed via an upstream Nexus 5500 switch. It allows you to scale out your top-of-rack (TOR) switches while still managing them from a pair of end-of-row (EOR) switches such as the Nexus 5500. Cisco and HP took that same logic and created the Nexus B22HP for 10 Gigabit Ethernet / Fibre Channel over Ethernet (FCoE) connectivity in the HP C3000 and C7000 chassis. It lets your network administrators manage the HP chassis switch from the familiar NX-OS command line of the Nexus 5500s. It also allows unified cabling via FCoE instead of separate LAN and SAN cabling, both for ease of deployment (manage LAN and SAN on one device, the Nexus 5500) and for cleaner cabling. The Nexus B22HP has 8 external 10 Gigabit Ethernet interfaces for uplinks and 16 internal 10 Gigabit Ethernet interfaces, one for each blade in the chassis. The diagram below illustrates typical deployment scenarios of the Nexus B22HP as well as the Nexus 2000 (in the right-most two racks):
Since we’re utilizing FCoE, we followed the second design from the left in the diagram above. This keeps the two SAN fabrics isolated while still utilizing virtual port-channels (vPCs) for the blade servers’ data traffic. The mapping of the C7000 blade adapters to the IO Module (IOM) bays is as follows:
LAN on Motherboard (LOM) = IOM Bays 1 & 2
Mezzanine Card 1 = IOM Bays 3 & 4
Mezzanine Card 2 = IOM Bays 5 & 6
Our deployment required that we insert the Nexus B22HPs into IOM Bays 5 & 6 since they were empty; we verified that the converged network adapter (CNA) in each blade was in mezzanine slot 2. After installing the Nexus B22HPs and connecting them to the upstream Nexus 5548UPs, we were ready to configure them! For reference, I used this Cisco guide to deploy the Nexus B22HPs. The diagram below shows the FEX numbers, port-channel numbers, VLANs and VSANs that we used:
First, configure the FEX connectivity between the Nexus 5500s and the Nexus B22HPs:
N5K-CORE01:
fex 101
pinning max-links 1
description B22_IN_HP7000_SLOT5
!
interface port-channel101
description B22_IN_HP7000_SLOT5
switchport mode fex-fabric
fex associate 101
!
interface Ethernet1/17
switchport mode fex-fabric
fex associate 101
channel-group 101
interface Ethernet1/18
switchport mode fex-fabric
fex associate 101
channel-group 101
interface Ethernet1/19
switchport mode fex-fabric
fex associate 101
channel-group 101
interface Ethernet1/20
switchport mode fex-fabric
fex associate 101
channel-group 101
N5K-CORE02:
fex 102
pinning max-links 1
description B22_IN_HP7000_SLOT6
!
interface port-channel102
description B22_IN_HP7000_SLOT6
switchport mode fex-fabric
fex associate 102
interface Ethernet1/17
switchport mode fex-fabric
fex associate 102
channel-group 102
interface Ethernet1/18
switchport mode fex-fabric
fex associate 102
channel-group 102
interface Ethernet1/19
switchport mode fex-fabric
fex associate 102
channel-group 102
interface Ethernet1/20
switchport mode fex-fabric
fex associate 102
channel-group 102
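With both fabrics configured, the FEXes should come online once they download their software image from the parent switch. A quick sanity check from either Nexus 5548UP (sample verification commands; FEX 101 shown, use 102 on N5K-CORE02):
show fex
show fex 101 detail
show interface port-channel101 fex-intf
The FEX state should read Online, and the internal host interfaces (Ethernet101/1/x) should appear once the FEX registers.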
Second, configure the vPC peer link between the Nexus 5500s, if you haven’t already done so; this is required before you can run vPCs down to the blade servers:
N5K-CORE01:
interface mgmt0
ip address 10.1.255.90/23
!
vpc domain 1
role priority 4096
peer-keepalive destination 10.1.255.91 source 10.1.255.90
delay restore 150
peer-gateway
auto-recovery
!
interface port-channel1
switchport mode trunk
spanning-tree port type network
speed 10000
vpc peer-link
!
interface Ethernet1/1
switchport mode trunk
channel-group 1 mode active
interface Ethernet1/2
switchport mode trunk
channel-group 1 mode active
interface Ethernet1/3
switchport mode trunk
channel-group 1 mode active
interface Ethernet1/4
switchport mode trunk
channel-group 1 mode active
N5K-CORE02:
interface mgmt0
ip address 10.1.255.91/23
!
vpc domain 1
peer-keepalive destination 10.1.255.90 source 10.1.255.91
delay restore 150
peer-gateway
auto-recovery
!
interface port-channel1
switchport mode trunk
spanning-tree port type network
speed 10000
vpc peer-link
!
interface Ethernet1/1
switchport mode trunk
channel-group 1 mode active
interface Ethernet1/2
switchport mode trunk
channel-group 1 mode active
interface Ethernet1/3
switchport mode trunk
channel-group 1 mode active
interface Ethernet1/4
switchport mode trunk
channel-group 1 mode active
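Before moving on, it’s worth verifying that the peer-keepalive and peer link are healthy and that the global configuration is consistent between the two peers (sample verification commands):
show vpc peer-keepalive
show vpc consistency-parameters global
show vpc
The peer status should show peer adjacency formed ok, and the consistency parameters should all report success before any blade-facing vPCs are added.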
Third, we configure the vPCs for blade server NIC teaming:
N5K-CORE01:
interface port-channel205
description HP7000_BLADE5
switchport mode trunk
vpc 205
interface Ethernet101/1/5
description HP7000_BLADE5
switchport mode trunk
channel-group 205
!
interface port-channel206
description HP7000_BLADE6
switchport mode trunk
vpc 206
interface Ethernet101/1/6
description HP7000_BLADE6
switchport mode trunk
channel-group 206
!
interface port-channel213
description HP7000_BLADE13
switchport mode trunk
vpc 213
interface Ethernet101/1/13
description HP7000_BLADE13
switchport mode trunk
channel-group 213
!
interface port-channel214
description HP7000_BLADE14
switchport mode trunk
vpc 214
interface Ethernet101/1/14
description HP7000_BLADE14
switchport mode trunk
channel-group 214
N5K-CORE02:
interface port-channel205
description HP7000_BLADE5
switchport mode trunk
vpc 205
interface Ethernet102/1/5
description HP7000_BLADE5
switchport mode trunk
channel-group 205
!
interface port-channel206
description HP7000_BLADE6
switchport mode trunk
vpc 206
interface Ethernet102/1/6
description HP7000_BLADE6
switchport mode trunk
channel-group 206
!
interface port-channel213
description HP7000_BLADE13
switchport mode trunk
vpc 213
interface Ethernet102/1/13
description HP7000_BLADE13
switchport mode trunk
channel-group 213
!
interface port-channel214
description HP7000_BLADE14
switchport mode trunk
vpc 214
interface Ethernet102/1/14
description HP7000_BLADE14
switchport mode trunk
channel-group 214
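Once the blade-facing vPCs are configured on both switches, each vPC should come up with one member port from each FEX (a sample check for blade 5; the other blades are identical):
show vpc 205
show port-channel summary
The vPC status should show up on both peers, with the port-channel flagged (SU) and its member port (P) in the summary output.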
These blade servers had VMware vSphere installed on them and were configured for IP Hash NIC teaming on the vSwitch. If you’re doing a vPC, you need to use IP Hash, since it assumes the NICs are bound in a port-channel on the other end (which they are, since you’re using a vPC for the NICs).
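If you prefer the CLI over the vSphere Client, the teaming policy can also be set per vSwitch with esxcli (a sketch using ESXi 5.x syntax; vSwitch0 is an assumed vSwitch name for your environment):
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
The get command lets you confirm the load balancing field now reads iphash.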
Fourth, we’ll configure FCoE for our blades. You need to go into the BIOS of each blade and enable FCoE on the CNA under System Options > NIC Personality Options > FCoE:
When you enable the FCoE feature in NX-OS (“feature fcoe”), it should automatically create the QoS policies for FCoE traffic. If for some reason it doesn’t, apply these commands:
system qos
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type qos input fcoe-default-in-policy
service-policy type network-qos fcoe-default-nq-policy
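You can confirm the FCoE system classes actually took effect with (sample verification; Ethernet1/17 is one of our FEX fabric uplinks):
show policy-map system
show queuing interface ethernet1/17
Look for class-fcoe with no-drop behavior and an MTU large enough for FC frames (2158 bytes by default).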
Once FCoE is enabled and the QoS policy created, create the virtual fibre-channel (vfc) interfaces and add them to the VSAN database:
N5K-CORE01:
interface vfc5
bind interface port-channel205
no shutdown
interface vfc6
bind interface port-channel206
no shutdown
interface vfc13
bind interface port-channel213
no shutdown
interface vfc14
bind interface port-channel214
no shutdown
!
vsan database
vsan 5 interface vfc5
vsan 5 interface vfc6
vsan 5 interface vfc13
vsan 5 interface vfc14
!
N5K-CORE02:
interface vfc5
bind interface port-channel205
no shutdown
interface vfc6
bind interface port-channel206
no shutdown
interface vfc13
bind interface port-channel213
no shutdown
interface vfc14
bind interface port-channel214
no shutdown
!
vsan database
vsan 6 interface vfc5
vsan 6 interface vfc6
vsan 6 interface vfc13
vsan 6 interface vfc14
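At this point the vfc interfaces exist but won’t carry traffic until the FCoE VLAN is created and trunked. Once that’s done, you can verify the bindings and fabric logins (sample verification commands):
show interface vfc5
show vsan membership
show flogi database
Each vfc should show as trunking in its VSAN, and the flogi database should list the CNA’s WWPN once the blade logs into the fabric.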
Next, we create the FCoE VLANs and isolate them: VLAN 805 is carried only on N5K-CORE01 and VLAN 806 only on N5K-CORE02, keeping the two SAN fabrics separate. To understand how vPC works with FCoE VLAN isolation while still aggregating data VLANs, read one of my earlier blog posts.
N5K-CORE01:
vlan 805
fcoe vsan 5
!
interface port-channel205
switchport trunk allowed vlan except 806
interface port-channel206
switchport trunk allowed vlan except 806
interface port-channel213
switchport trunk allowed vlan except 806
interface port-channel214
switchport trunk allowed vlan except 806
N5K-CORE02:
vlan 806
fcoe vsan 6
!
interface port-channel205
switchport trunk allowed vlan except 805
interface port-channel206
switchport trunk allowed vlan except 805
interface port-channel213
switchport trunk allowed vlan except 805
interface port-channel214
switchport trunk allowed vlan except 805
Lastly, we zone the WWPNs of the CNAs (these can be found in the HP Onboard Administrator for each blade) to the SAN and verify the blades can see the storage.
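A minimal zoning sketch for blade 5 on N5K-CORE01 follows (the zone and zoneset names and both WWPNs are placeholders; substitute the CNA WWPN from Onboard Administrator and your storage array’s target WWPN):
zone name HP7000_BLADE5 vsan 5
member pwwn 20:00:00:00:00:00:00:05
member pwwn 50:00:00:00:00:00:00:01
!
zoneset name FABRIC_A vsan 5
member HP7000_BLADE5
!
zoneset activate name FABRIC_A vsan 5
The same zoning is repeated on N5K-CORE02 in VSAN 6 with that fabric’s WWPNs. A quick show flogi database on each switch confirms the CNAs have logged into the fabric before you zone.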