Upgrading the Nexus 1000v to SV1(3)

I performed my first Nexus 1000v upgrade last week, upgrading from SV1(2) to SV1(3). It was relatively painless, but it's worth knowing what's involved before you start. Read on…

Prerequisites:

  • download Nexus 1000v 4.0(4)SV1(3) software
  • save the config (copy run start)
  • back up the config off the VSM
  • clone primary VSM virtual machine
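
The first two prerequisites can be handled from the VSM CLI itself; a minimal sketch, assuming an SCP-reachable backup server named `server` (the server name and destination filename are hypothetical):

```
N1kv-DR# copy running-config startup-config
N1kv-DR# copy running-config scp://root@server/n1kv-backup.cfg
```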

Upgrade the VSMs

Do this on the primary VSM (it will automatically be synced to the standby):

  • Copy bin files to primary VSM:
  • copy scp://root@server/nexus-1000v-mz.4.0.4.SV1.3.bin bootflash:
  • copy scp://root@server/nexus-1000v-kickstart-mz.4.0.4.SV1.3.bin bootflash:
  • dir bootflash:
    N1kv-DR# dir bootflash:
          38  Mar 12 15:23:00 2010  .ovfconfigured
       77824  Jul 14 15:24:57 2010  accounting.log
       16384  Dec 09 18:56:23 2009  lost+found/
    21408768  Dec 09 18:57:21 2009  nexus-1000v-kickstart-mz.4.0.4.SV1.2.bin
    21283328  Jul 14 14:40:18 2010  nexus-1000v-kickstart-mz.4.0.4.SV1.3.bin
    73068811  Dec 09 18:57:32 2009  nexus-1000v-mz.4.0.4.SV1.2.bin
    81982425  Jul 14 14:38:44 2010  nexus-1000v-mz.4.0.4.SV1.3.bin
  • Install the new NX-OS:
  • install all system bootflash:nexus-1000v-mz.4.0.4.SV1.3.bin kickstart bootflash:nexus-1000v-kickstart-mz.4.0.4.SV1.3.bin
  • Verify the boot statements were updated to boot from the new NX-OS:
  • show running-config | include boot
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.0.4.SV1.3.bin sup-1
    boot system bootflash:/nexus-1000v-mz.4.0.4.SV1.3.bin sup-1
    boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.0.4.SV1.3.bin sup-2
    boot system bootflash:/nexus-1000v-mz.4.0.4.SV1.3.bin sup-2
  • Save the config:
  • copy run start
  • Reload the standby VSM (verify the module number of the standby VSM by entering show module):
  • reload module 2
  • Verify standby VSM was upgraded:
  • show module
    Mod  Ports  Module-Type                       Model        Status
    ---  -----  --------------------------------  -----------  ------------
    1    0      Virtual Supervisor Module         Nexus1000V   active *
    2    0      Virtual Supervisor Module         Nexus1000V   ha-standby
    3    248    Virtual Ethernet Module           NA           ok
    4    248    Virtual Ethernet Module           NA           ok
    5    248    Virtual Ethernet Module           NA           ok
    6    248    Virtual Ethernet Module           NA           ok

    Mod  Sw             Hw
    ---  -------------  ------
    1    4.0(4)SV1(2)   0.0
    2    4.0(4)SV1(3)   0.0
    3    4.0(4)SV1(2)   1.9
    4    4.0(4)SV1(2)   1.9
    5    4.0(4)SV1(2)   1.9
    6    4.0(4)SV1(2)   1.9
  • Failover to the standby VSM:
  • system switchover
  • You will be disconnected if you're connected via SSH; just restart your session to connect to the standby VSM. Enter show module to verify the standby VSM is now the primary VSM and that the old primary VSM was upgraded.
  • Fail the standby VSM back over (to restore the original primary/standby status of the VSMs):
  • system switchover
  • If, during all this, the primary VSM becomes disconnected from vCenter (you’ll know by entering show svs connection), simply enter the following:
  • configure terminal
  • svs connection _________
  • connect
  • If the VSM still doesn't connect to vCenter, try reloading the VSM; that did the trick for me.

Upgrade the VEMs

You have two options for updating the VEMs on the hosts:

  1. Place each host into maintenance mode and use VMware Update Manager to update the VEM. Done!
  2. Copy the relevant .vib file to each host and manually install it.

I tried VMware Update Manager on one of the hosts and it did update the VEM successfully. However, it didn't install the correct VEM version according to the Cisco Nexus 1000v Compatibility Matrix. I didn't test extensively to confirm whether the VUM-installed version caused any issues; instead I exercised caution on our customer's network and opted to manually install the correct version. My notes below assume you are upgrading the VEMs manually.
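
Picking the right .vib means matching the host's ESX build number against the compatibility matrix. A minimal shell sketch; the sample string below mirrors the `vmware -v` output format on ESX 4.0 (on a live host you would use `ver=$(vmware -v)` instead):

```shell
# Extract the build number from the version string so it can be
# looked up in the Cisco Nexus 1000v Compatibility Matrix.
ver="VMware ESX 4.0.0 build-208167"
build=${ver##*build-}   # strip everything up to and including "build-"
echo "ESX build: $build"
```

With the sample string above, this prints `ESX build: 208167`, which is the number to look up in the matrix.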

  • Place host into maintenance mode and ensure no VMs are running on host (safety first).
  • SSH to host and copy the relevant .vib file to the host. Refer to the compatibility matrix for the appropriate VEM version to use based on the ESX build number.
  • Install the VEM (this will automatically remove the old VEM and install the new one). If you run into any issues and need to manually remove the VEM, just follow this page at Cisco’s website.
  • esxupdate -b /tmp/cross_cisco-vem-v120-4.0.4.1.3.0.0-1.9.2.vib update
  • Verify the VEM installed successfully:
  • [root@server ~]# vem version
    Running esx version -208167 x86_64
    VEM Version: 4.0.4.1.3.0.0-1.9.2
    VSM Version: 4.0(4)SV1(3)
    System Version: VMware ESX 4.0.0 Releasebuild-208167
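
That version check can be scripted across hosts. A minimal sketch, fed here from a saved sample of the output above (on a host you would pipe the live `vem version` command instead; the `expected` value is the version for this particular upgrade):

```shell
# Confirm the expected VEM version appears in `vem version` output.
sample='VEM Version: 4.0.4.1.3.0.0-1.9.2
VSM Version: 4.0(4)SV1(3)'
expected='4.0.4.1.3.0.0-1.9.2'

if printf '%s\n' "$sample" | grep -q "VEM Version: $expected"; then
  echo "VEM version OK"
else
  echo "VEM version MISMATCH"
fi
```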

Upgrade the Feature Level

I almost missed this part, as it is completely non-intuitive. SV1(3) comes with a few new features, such as a configurable uplink MTU, ERSPAN Type III format, Cisco Network Analysis Module support, and others detailed here. You need to tell the VSM to use those features.

  • Verify the current level of feature support:
  • show system vem feature level
    current feature level: 4.0(4)SV1(2)
  • List the feature levels available based on the current VEM versions; if ALL of the VEMs haven't been upgraded, nothing will be listed, since you can't upgrade the feature level yet:
  • system update vem feature level
    1 4.0(4)SV1(3)
  • Change the feature level to the desired level from the list displayed in the last step:
  • system update vem feature level 1
    Old feature level: 4.0(4)SV1(2)
    New feature level: 4.0(4)SV1(3)
  • Verify the updated level of feature support:
  • show system vem feature level
    current feature level: 4.0(4)SV1(3)
  • Verify the VEMs are still connected; if any are not, they will be listed as “not-inserted (upgrade)” and will need to be upgraded:
  • show mod
    Mod  Ports  Module-Type                       Model        Status
    ---  -----  --------------------------------  -----------  ------------
    1    0      Virtual Supervisor Module         Nexus1000V   ha-standby
    2    0      Virtual Supervisor Module         Nexus1000V   active *
    3    248    Virtual Ethernet Module           NA           ok
    4    248    Virtual Ethernet Module           NA           ok
  • Save the config:
  • copy run start