Jumbo frames performance with 10GbE iSCSI

This is yet another voice to add to the opinion that jumbo frames are worth the effort.

This is especially true in a 10GbE iSCSI environment doing a hardware refresh of hosts, storage and switches or a pure greenfield deployment.

The ease of configuring jumbo frames across your environment is helped by a couple of things:

Most 10GbE switches ship with an MTU of 9214 or 9216 out of the box, so they are ready for hosts and storage to be tuned to match. The Arista 7150 switches I tested are configured this way by default.

Changing vSphere virtual switches and VMkernel interfaces to a large MTU size is now very easy via the vSphere client and Web Client.
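
For those who prefer scripting, here is a minimal PowerCLI sketch of the same change on a standard vSwitch (the host name, vSwitch name and VMkernel port below are placeholders for illustration):

#Raise the MTU on a standard vSwitch and its iSCSI VMkernel port to 9000
$esx = Get-VMHost "esx01.lab.local"
Get-VirtualSwitch -VMHost $esx -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false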

On with the testing.

Hardware I tested with:

vSphere host: Cisco UCSC-C220-M3S, 16-core, 196GB Memory, Intel X520 Dual Port 10Gb SFP+ Adapter. This was running ESXi 5.1 Update 1.

Switch: Arista 7150S-24 (24 SFP+ ports)

Storage: Nimble Storage CS260G (10GbE iSCSI)

This was not meant to be an all-encompassing test across all known workloads, just some synthetic sequential benchmarks to see what difference existed between the default MTU of 1500 bytes and 9000 bytes for iSCSI.

I set up a VM with Windows Server 2012, 4 vCPUs, 8GB RAM, a 100GB system disk and a 100GB data disk (for IOMeter). This was on a 1TB iSCSI VMFS5 datastore presented from the Nimble array.

Using the good old IOMeter, I set up a basic sequential read test of a 2GB file with 32 outstanding I/Os and ran 4K, 8K and 32K tests for 5 minutes each: once with the ESXi vSwitches/VMkernel interfaces and the Nimble's data interfaces at 1500 MTU, and once with them set to 9000 MTU. The Arista was set to an MTU of 9214 across all ports at all times (the default).


The results showed a consistent increase of 10% or more in both IOPS and throughput.

It would be difficult not to recommend jumbo frames as a default configuration for new 10GbE iSCSI or NAS environments.

For more great in-depth reading on jumbo frames and vSphere, make sure to check out Michael Webster's (@vcdxnz001) posts on his blog:

http://longwhiteclouds.com/2012/02/20/jumbo-frames-on-vsphere-5/

http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/


iSCSI configuration on Extreme Networks switches

This is a production configuration I have used with 1GbE and 10GbE Extreme Networks switches for iSCSI storage, specifically a Nimble Storage CS260G with 1GbE management interfaces and 10GbE iSCSI data interfaces.

10GbE iSCSI is on a stacked pair of Extreme x670s and the 1GbE management interfaces are on an x460 stack.

[Diagram: Nimble CS260G iSCSI data connections to the x670 stack and management connections to the x460 stack]

This is the ExtremeXOS configuration I have for both networks.

#iSCSI data ports (10GbE) Extreme Networks x670 (stacked pair)

configure ports 1:1, 1:2, 2:1, 2:2 display-string "Nimble01-Prod-iSCSI"
enable flow-control tx-pause ports 1:1, 1:2, 2:1, 2:2
enable flow-control rx-pause ports 1:1, 1:2, 2:1, 2:2
configure vlan "iSCSI_Data" tag 140
configure vlan "iSCSI_Data" ipaddress 10.190.201.250 255.255.255.0
configure vlan "iSCSI_Data" add ports 1:1, 1:2, 2:1, 2:2

#Mgmt ports (1GbE) Extreme Networks x460 (stacked pair)

configure ports 1:1, 1:2, 2:1, 2:2 display-string "Nimble01-Prod-Mgmt"
configure vlan "Storage_Mgmt" tag 130
configure vlan "Storage_Mgmt" ipaddress 10.201.201.1 255.255.255.0
enable ipforwarding vlan "Storage_Mgmt"
configure vlan "Storage_Mgmt" add ports 1:1, 1:2, 2:1, 2:2

I have no spanning tree or rate limiting (also known as broadcast storm control) configured on any iSCSI or management ports; neither is recommended on iSCSI ports, for performance reasons.

The important architectural points are:

  • iSCSI and management have dedicated VLANs for isolation, performance and security/policy.
  • The switch ports carrying iSCSI data have send/receive flow control enabled, for performance consistency.
  • Switches carrying management and iSCSI data are stacked for redundancy.

http://www.extremenetworks.com/products/summit-all.aspx

http://www.nimblestorage.com/products/specifications.php


What makes up a Nimble Storage array?

I have been asked a couple of times about what is inside a Nimble Storage array in terms of hardware.

People have assumed there must be proprietary components in a high-performance array, but Nimble's architecture is built entirely on a commodity chassis, off-the-shelf drives and multi-core Intel CPUs.

Most of the hardware is best illustrated with some photos:

First up – the chassis. Nimble uses a SuperMicro unit with hot swap controllers, power supplies and fans: http://www.supermicro.com/products/nfo/sbb.cfm


This is the front view of the chassis (in this case a Nimble CS260G). It has twelve 3TB Seagate Enterprise NL-SAS drives (model: Constellation ES.2)
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/enterprise-capacity-3-5-hdd/

It also has four 300GB Intel 320 SSDs for read cache purposes.

The rear of the chassis, showing the redundant power modules and controllers (with PCIe 10GbE adapters) and dual 1GbE NICs.

Looking at the controller itself:


It's a Supermicro board (X8DTS-F) with 12GB of Kingston ECC DDR3 memory and what appears to be an Intel 55xx Nehalem-family CPU (four cores/eight threads). At the far back are the PCIe slots; on the far left is an LSI SAS adapter.


In the top PCIe slot sits the NVRAM card (used for fast write caching of inbound I/O). It appears to be an NVvault 1GB DDR2 module built by Netlist:
http://www.netlist.com/products/vault-data-protection/nvvault-ddr2/

In the bottom PCIe slot sits the dual-port 10GbE network adapter, which from the model number appears to be the PE210G2SPi9 from Silicom:
http://www.silicom-usa.com/Networking_Adapters/PE210G2SPi9-Dual_Port_10_Gigabit_Ethernet_PCI_Express_Server_Adapter_Intel_based_49

That about wraps up the commodity components inside a Nimble array.

What is really apparent is that cost-effective, reliable, high-performance and high-capacity commodity disks, multi-core CPUs and standard chassis have enabled array vendors like Nimble (and others) to concentrate their engineering on the software architecture: data services, file systems, user interfaces and seamless array metrics collection that enables support and data-mining opportunities.



Renaming vSphere Datastore device display names

In vCenter, storage devices or volumes/LUNs present themselves with a unique identifier usually prefixed with eui. or naa.

These are Network Address Authority (NAA) identifiers and Extended Unique Identifiers (EUI), part of the FC and iSCSI naming standards for LUNs.


However, when looking at them in vSphere and via esxcli, these unique identifiers are not much help in identifying which VMFS datastore you are dealing with when examining path selection and so on.

For that reason, I usually rename each of these via the vSphere client as I add them.
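
If you prefer to script the rename, here is a rough PowerCLI sketch that calls the vSphere API method UpdateScsiLunDisplayName (the host name, canonical name and display name below are hypothetical placeholders):

#Rename a storage device's display name via the host's StorageSystem view
$esx = Get-VMHost "esx01.lab.local"
$lun = Get-ScsiLun -VmHost $esx -CanonicalName "eui.32f1939a65a2397c"
$storSys = Get-View $esx.ExtensionData.ConfigManager.StorageSystem
$storSys.UpdateScsiLunDisplayName($lun.ExtensionData.Uuid, "VMFSProd")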


This also helps when filtering NMP devices with esxcli.

If you have connected to the command line of an ESXi host, you can grep for the renamed display name of a storage device:


esxcli storage nmp device list | grep -A 9 VMFSProd

You can see that it picks up the Device Display Name set via the vSphere client, as well as returning the 9 rows of available information about that NMP device.
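
A rough PowerCLI equivalent of the same filtering might look like this (the host name is a placeholder):

#List Nimble LUNs whose display name contains the renamed string
Get-ScsiLun -VmHost (Get-VMHost "esx01.lab.local") |
    Where-Object { $_.ExtensionData.DisplayName -match "VMFSProd" } |
    Select-Object CanonicalName, MultipathPolicy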


Hyper-V converged network configuration with two NICs

Just a script I put together for my Server 2012 Hyper-V hosts that have only two 10GbE physical ports, where I want to separate the various network functions of Hyper-V onto separate virtual network adapters.

It also lets me use MPIO with two virtual network adapters dedicated for iSCSI.

This is useful for iSCSI storage vendors who recommend using Windows MPIO with their arrays, e.g. HP StoreVirtual and Nimble Storage.

With older versions of Hyper-V (Windows 2008 R2), vendor NIC teaming with iSCSI was not recommended, so you were left dedicating two physical NICs just to an iSCSI MPIO configuration. That would normally mean at least four NICs in your host (two for iSCSI plus two for the other Hyper-V virtual switch functions).


#Create new NIC team using two existing physical adapters in host, set LB and teaming mode
New-NetLbfoTeam -Name NICTeam01 -TeamMembers pNIC1,pNIC2 -LoadBalancingAlgorithm TransportPorts -TeamingMode SwitchIndependent

#Create new virtual switch using the NIC team created prior. Set to not allow hyper-v mgmt
New-VMSwitch -Name ConvergedHyperSwitch -NetAdapterName NICTeam01 -AllowManagementOS $False -MinimumBandwidthMode Weight

#Create five virtual adapters for the various functions (management, live migration, CSV, 2 x iSCSI)
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedHyperSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedHyperSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedHyperSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI01" -SwitchName "ConvergedHyperSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI02" -SwitchName "ConvergedHyperSwitch"

#If required, set VLAN access for your virtual adapters
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI01" -Access -VlanId 140
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI02" -Access -VlanId 140

#Set minimum bandwidth in weighting (0-100)
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI01" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI02" -MinimumBandwidthWeight 20

#Set IP addresses/subnet on interfaces 
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.201.201.201 -PrefixLength "16" -DefaultGateway 10.201.1.1 
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 10.200.3.21, 10.200.3.96 
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 10.201.201.202 -PrefixLength "16" 
New-NetIPAddress -InterfaceAlias "vEthernet (CSV)" -IPAddress 10.201.201.203 -PrefixLength "16" 
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI01)" -IPAddress 10.190.201.120 -PrefixLength "24" 
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI02)" -IPAddress 10.190.201.121 -PrefixLength "24" 
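
As a follow-on, here is a rough sketch of how the two iSCSI virtual adapters could then be used for MPIO sessions to the array, assuming the Microsoft iSCSI initiator and the MPIO feature; the target portal address (10.190.201.10) is a hypothetical placeholder:

#Install the MPIO feature and have the Microsoft DSM claim iSCSI devices (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

#Start the Microsoft iSCSI initiator service
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

#Discover the target portal and connect one session per iSCSI virtual adapter
New-IscsiTargetPortal -TargetPortalAddress 10.190.201.10 -InitiatorPortalAddress 10.190.201.120
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.190.201.120 -TargetPortalAddress 10.190.201.10
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.190.201.121 -TargetPortalAddress 10.190.201.10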

Optimising vSphere path selection for Nimble Storage

This post is mostly for documentation or a quick how-to for changing the default path selection policy (PSP) so it is more suitable for Nimble arrays.

For Nimble Storage, ESXi 5.1 by default uses the ALUA SATP with a PSP of Most Recently Used (MRU).

Nimble recommends that this be altered to round robin.

From the CLI of each ESXi host (or from the vMA):

esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR

After this point any volume presented from the Nimble array and configured as a VMFS datastore will appear with round robin load balancing and I/O active on all paths.


If you had already added volumes before changing the default PSP, you can change all existing Nimble volumes to round robin with PowerCLI:

#Get all hosts and store in a variable I have called $esxihosts

$esxihosts = Get-VMHost

#Gather all SCSI LUNs presented to the hosts where the vendor string in the LUN description equals "Nimble" and set the multipathing policy

Get-ScsiLun -VmHost $esxihosts | where {$_.Vendor -eq "Nimble"} | Set-ScsiLun -MultipathPolicy "roundrobin"

Nimble also makes a recommendation about altering the IOPS-per-path setting for round robin; by default, the PSP switches data paths after 1000 IOPS.

The official recommendation is to set it to zero (0) for each volume on each host.

Nimble support confirms that this ignores IOPS per path and bases path selection on queue depth instead.

So from esxcli on each host, run the following, which searches for the string Nimble across all storage devices presented to the host, then loops through them and sets IOPS to zero:

for i in `esxcli storage nmp device list | grep Nimble | awk '{print $7}' | sed -e 's/(//g' -e 's/)//g'` ; do esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=0 --device=$i; done

I haven’t tested or implemented this as of yet.

I believe Nimble's internal support community has a single PowerCLI script to cover everything here.

It would not be much work to take the per-host esxcli steps into PowerCLI with Get-EsxCli and aggregate it all together.
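
As a rough, untested sketch of that idea (this uses the Get-EsxCli -V2 argument style, so it assumes a newer PowerCLI release than the 5.1-era tooling above):

#For every host, set IOPS=0 in the round robin device config of each Nimble device
foreach ($esx in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $esx -V2
    $nimbleLuns = Get-ScsiLun -VmHost $esx | Where-Object { $_.Vendor -eq "Nimble" }
    foreach ($lun in $nimbleLuns) {
        $setArgs = $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.CreateArgs()
        $setArgs.device = $lun.CanonicalName
        $setArgs.iops = 0
        $setArgs.type = "iops"
        $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.Invoke($setArgs)
    }
}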


Storage Efficiency with Nimble Compression

In my last post I talked about performance of the Nimble Storage platform I was implementing.

In a contended virtualised environment, storage is often the performance sore point, or Achilles heel, due to insufficient IOPS and poor latency in the storage design.

The other major consideration is how efficiently the storage platform manages the available capacity and what can be done to "do more with less".

With the Nimble platform I am using, this problem is addressed by:

  • In-line compression on all workloads

Nimble uses a variable-block compression algorithm that they say can yield a 30-70% saving without affecting performance or latency.

  • Thin provisioning

Thin provisioning is ubiquitous across most array vendors, although variations in block size can result in differences in efficiency.

  • VAAI UNMAP for space reclamation

Nimble supports the VAAI UNMAP primitive for block reclamation. This is done from the command line of a vSphere host as per this VMware KB:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014849

For testing I’ll concentrate on the compression and how it fares for a SQL server in a vSphere 5.1 environment.

I took a test SQL server running Windows 2008/SQL 2008 that had disk utilisation of around 88% (130GB of data on 149GB of available disk).


I then performed a storage vMotion migration from the original VMFS datastore on an HP LeftHand volume to a 200GB thin provisioned VMFS datastore presented from the Nimble array.

The result was a 2.25x saving in space (compressed from 130GB to 56GB).


Other VMs such as file/print, remote desktop hosts, infrastructure VMs in the test environment have anywhere between 1.7x to 2.0x compression levels.

In addition to the savings on primary volumes, compression is also applied to snapshots.

Who can say no to close to double the effective capacity on all virtualised workloads with no extra datacenter footprint?
