Optimising vSphere path selection for Nimble Storage

THIS POST IS NOW OBSOLETE. Use Nimble Connection Manager instead.

This post is mostly for documentation: a quick how-to for changing the default path selection policy (PSP) to one better suited to Nimble arrays.

By default, ESXi 5.1 claims Nimble Storage volumes with the ALUA SATP (VMW_SATP_ALUA) and a PSP of Most Recently Used (MRU).

Nimble recommends that this be altered to round robin.

From the CLI of each ESXi host (or from the vMA):

esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR

After this point any volume presented from the Nimble array and configured as a VMFS datastore will appear with round robin load balancing and I/O active on all paths.

[Screenshot: datastore path details showing Round Robin (VMware) with all paths Active (I/O)]
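If you would rather not SSH to every host, the same default can be pushed out from PowerCLI through Get-EsxCli. The snippet below is only a sketch (untested): it assumes a PowerCLI build with the -V2 esxcli interface, and the argument keys (satp, defaultpsp) are assumed to map to the esxcli long options --satp and --default-psp.

#Sketch: set VMW_PSP_RR as the default PSP for VMW_SATP_ALUA on every connected host
#Argument keys are assumed to follow the esxcli long option names
foreach ($esx in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $esx -V2
    $esxcli.storage.nmp.satp.set.Invoke(@{ satp = "VMW_SATP_ALUA"; defaultpsp = "VMW_PSP_RR" })
}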

If you added volumes before changing the default PSP, you can switch all existing Nimble volumes to round robin with PowerCLI:

#Get all hosts and store in a variable I have called $esxihosts

$esxihosts = Get-VMHost

#Gather all SCSI LUNs presented to the hosts where the vendor string equals "Nimble" and set the multipathing policy

Get-ScsiLun -VmHost $esxihosts | Where-Object {$_.Vendor -eq "Nimble"} | Set-ScsiLun -MultipathPolicy "RoundRobin"
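To confirm the change took (or to spot any Nimble LUNs still sitting on MRU), the same pipeline works read-only. A quick check, reusing the $esxihosts variable from above, should list the policy per host and LUN:

#Read-only check of the current multipathing policy on all Nimble LUNs
Get-ScsiLun -VmHost $esxihosts | Where-Object {$_.Vendor -eq "Nimble"} | Select-Object VMHost, CanonicalName, MultipathPolicy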

Nimble also make a recommendation around altering the IOPS-per-path setting for round robin; by default, ESXi sends 1,000 IOPS down a path before switching to the next.

The official recommendation is to set it to zero (0) for each volume on each host.

Nimble support confirms that this ignores IOPS per path and bases path selection on queue depth instead.

So from esxcli on each host, run the following, which finds every Nimble device presented to the host, pulls out its device identifier, and loops through them setting IOPS to zero:

for i in `esxcli storage nmp device list | grep 'Device Display Name.*Nimble' | sed 's/.*(\(.*\)).*/\1/'` ; do esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=0 --device=$i; done

I haven’t tested or implemented this as of yet.
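In the meantime, it is easy enough to read the current setting back from PowerCLI and see what a host's Nimble devices are doing today. The snippet below is a sketch only: the host name is a placeholder, and the property names (DeviceDisplayName, Device, PathSelectionPolicyDeviceConfig) are assumed from Get-EsxCli's usual mapping of the esxcli output fields.

#Sketch: read back the PSP and its device config for the Nimble devices on one host
#"esx01.example.local" is a placeholder host name
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.local") -V2
$esxcli.storage.nmp.device.list.Invoke() | Where-Object { $_.DeviceDisplayName -match "Nimble" } | Select-Object Device, PathSelectionPolicy, PathSelectionPolicyDeviceConfig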

I believe Nimble's internal support community has a single PowerCLI script to cover everything here.

It would not be much work to move the per-host esxcli steps into PowerCLI with Get-EsxCli and aggregate it all together; a rough sketch of that follows.
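For what it is worth, here is a rough (untested) sketch of that aggregation. It leans on the Get-EsxCli -V2 interface; the property names (DeviceDisplayName, Device) and argument keys (device, type, iops) are assumptions based on the esxcli output fields and long options, so treat it as a starting point rather than a finished script.

#Rough sketch: set IOPS to 0 on the round robin PSP for every Nimble device on every host
#Assumes an existing PowerCLI connection to vCenter and the Get-EsxCli -V2 interface
foreach ($esx in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $esx -V2

    #All NMP-claimed devices on this host whose display name contains "Nimble"
    $nimbleDevices = $esxcli.storage.nmp.device.list.Invoke() | Where-Object { $_.DeviceDisplayName -match "Nimble" }

    foreach ($dev in $nimbleDevices) {
        #Equivalent of: esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=0 --device=<id>
        $setArgs = $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.CreateArgs()
        $setArgs.device = $dev.Device
        $setArgs.type = "iops"
        $setArgs.iops = 0
        $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.Invoke($setArgs)
    }
}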

10 thoughts on “Optimising vSphere path selection for Nimble Storage”

  1. Thanks for that, just wondering why the IOPS is recommended to be changed to 0 instead of 1? Does this apply to other storage as well, or is it just this storage?

    1. Most vendors specify 1 IO per path as a tweak (HP, EMC).
      Not 100% sure what the rationale is behind Nimble's setting – support's comment was “to accommodate burst performance”.
      I will do some testing under high IO load and see if there is a significant difference.

  2. It’ll be easy enough to put into a single script, but like Barrie, I’m also interested in finding out why the IO path setting is 0 and what the overall effect is, especially when compared to an IO setting of 1.

  3. I thought I was told that Nimble Arrays are Active / Standby. Why would all 4 paths show as Active?

    1. The controllers do indeed run Active/Standby; however, this particular system has 4 individual data ports per controller, so you’ll always see the 4 connections no matter which controller is servicing I/O.

      1. In this case, it’s two active 10g ports on the Nimble controller, but vSphere has two paths to each controller port – so four in total from the perspective of the ESXi host.

  4. Thanks Barrie. Since there is a standby controller, what would happen if the power cord is physically removed from the active controller? Would the controller go down and the standby controller become active, or is there some type of inter-controller architecture that will pass power from the surviving controller?

    1. The controllers are independently powered (as far as I can tell). Fail-over from a vSphere perspective is seamless, as the standby controller has an identical config to its partner (including all mgmt, data and iSCSI discovery IPs).
