PowerShell/PowerCLI – vMotion VMs between clusters based on Name and IP

A quick post where I threw together a script to vMotion all VMs of a particular wildcard name and a certain IP range between two vSphere clusters (5.0 and 5.5) under the same vCenter Server.

The script assumes your source and destination clusters are using Standard vSwitches (VSS) with identical port group names on each.

In a migration scenario, perhaps your source cluster is using a Standard vSwitch, but the new destination cluster has a vSphere Distributed Switch (VDS).

In this case, I would implement a temporary standard vSwitch with a single NIC on the new cluster with identical port group names as the source cluster.

This makes vMotion migration simple. Once VMs are on the new cluster, vNICs can be bounced from standard to VDS port groups.
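
As a rough sketch of that last step, assuming PowerCLI with the distributed switch cmdlets available and a hypothetical port group name that exists on both the temporary standard vSwitch and the VDS:

#Hypothetical example - move vNICs from the temporary standard port group to the VDS port group of the same name
$vdpg = Get-VDPortgroup -Name "VM-Network-101"
Get-Cluster "MyNewCluster" | Get-VM "PROD*" | Get-NetworkAdapter | where { $_.NetworkName -eq "VM-Network-101" } | Set-NetworkAdapter -Portgroup $vdpg -Confirm:$false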

Anyway – on with the script. The PowerCLI version used was 6.0 Release 1.

#vMotion VMs based on name and IP address between clusters under one vCenter
#Assumes standard vSwitch networking and consistent port group names between clusters
#Source cluster (Std vSwitch), destination cluster (temp migration Std vSwitch) with same port group names. Make sure the log folder is created ahead of time

Import-Module VMware.VimAutomation.Core

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope Session -Confirm:$false

#Change your variables here:
$vcenter = "MYvCenterServer"
$sourcecluster = "MyOldCluster"
$destcluster = "MyNewCluster"
$VMnameFilter = "PROD*"
$IPFilter = "10.1.3.*"
$logfolder = "c:\MigrationLogs"

Connect-VIServer $vcenter

$getvms = Get-Cluster $sourcecluster | Get-VM | where name -Like $VMnameFilter | Select Name, @{ L = "IP Address"; E = { ($_.guest.IPAddress[0]) } } | where "IP Address" -Like $IPFilter | select -ExpandProperty name

foreach ($vm in $getvms)
{
    #Clear the error collection so the success/failure check reflects only this VM
    $error.Clear()

    Move-VM -VM $vm -Destination $destcluster -VMotionPriority High -Confirm:$false -ErrorAction SilentlyContinue

    If (!$error)
    { Write-Output $vm | Out-File -Append -FilePath "$logfolder\success.log" }
    Else
    { Write-Output $vm, $error[0] | Out-File -Append -FilePath "$logfolder\failed.log" }
}

Jumbo frames performance with 10GbE iSCSI

This is yet another voice to add to the opinion that jumbo frames are worth the effort.

This is especially true in a 10GbE iSCSI environment doing a hardware refresh of hosts, storage and switches or a pure greenfield deployment.

The ease of configuring jumbo frames across your environment is helped by a couple of things:

Most 10GbE switches ship with an MTU of 9214 or 9216 out of the box, so they are ready for hosts and storage to be tuned to match. The Arista 7150 switches I have tested are configured this way out of the box.

Changing vSphere virtual switches and VMkernel interfaces to a large MTU size is now very easy via the vSphere client and Web Client.
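
It is just as quick from PowerCLI. A minimal sketch, assuming a standard vSwitch and using example host/vSwitch/VMkernel names:

#Raise the MTU on a standard vSwitch and an iSCSI VMkernel port to 9000 (names are examples)
$esx = Get-VMHost "esxi01.lab.local"
Get-VirtualSwitch -VMHost $esx -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false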

On with the testing.

Hardware I tested with:

vSphere host: Cisco UCSC-C220-M3S, 16-core, 196GB Memory, Intel X520 Dual Port 10Gb SFP+ Adapter. This was running ESXi 5.1 Update 1.

Switch: Arista 7150S-24 (24 SFP+ ports)

Storage: Nimble Storage CS260G (10GbE iSCSI)

This was not meant to be an all-encompassing test across all known workloads, just some synthetic sequential benchmarks to see what difference existed between the default MTU of 1500 bytes and 9000 bytes for iSCSI.

I set up a VM with Windows Server 2012, 4 x vCPU, 8GB RAM, 100GB system disk and a 100GB data disk (for iometer). This was on a 1TB iSCSI VMFS5 datastore presented from the Nimble array.

Using the good old IOMeter, I set up a basic sequential read test of a 2GB file with 32 outstanding I/Os, then ran 4K, 8K and 32K tests for 5 minutes each (once with the ESXi vSwitches/VMkernel interfaces and the Nimble's data interfaces at 1500 MTU, and once with them set to 9000 MTU). The Arista was set to an MTU of 9214 across all ports at all times (the default).


The results showed a consistent 10%+ performance increase in both throughput and bandwidth.

It would be difficult not to recommend jumbo frames as a default configuration for new 10GbE iSCSI or NAS environments.

For more great in-depth reading on jumbo frames and vSphere, make sure to check out Michael Webster's (@vcdxnz001) posts on his blog:

http://longwhiteclouds.com/2012/02/20/jumbo-frames-on-vsphere-5/

http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/

Renaming vSphere Datastore device display names

In vCenter, storage devices or volumes/LUNs present themselves with a unique identifier usually prefixed with eui. or naa.

These are Network Address Authority (naa.) identifiers and Extended Unique Identifiers (eui.), part of the FC and iSCSI naming standards for LUNs.


However, when looking at them in vSphere or via esxcli, these unique identifiers are less than useful for identifying which VMFS datastore you are dealing with when reviewing path selection and so on.

For that reason, I usually rename each of these via the vSphere client as I add them.


This also helps when filtering NMP devices with esxcli.

If you are connected to the command line of an ESXi host, you can grep for the renamed display name of a storage device:


esxcli storage nmp device list | grep -A 9 VMFSProd

You can see that it picks up the Device Display Name set via the vSphere client, as well as returning the 9 lines of information that follow for that NMP device.
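
If renaming by hand doesn't appeal, a quick PowerCLI sketch along these lines can map VMFS datastore names to their underlying device identifiers instead (treat the property path as an assumption to verify in your environment):

#List each VMFS datastore with the canonical name of its first extent
Get-Datastore | where { $_.Type -eq "VMFS" } | Select Name, @{ N = "DeviceId"; E = { $_.ExtensionData.Info.Vmfs.Extent[0].DiskName } }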

Optimising vSphere path selection for Nimble Storage

THIS POST IS NOW OBSOLETE. Use Nimble Connection Manager instead

This post is mostly for documentation or a quick how-to for changing the default path selection policy (PSP) so it is more suitable for Nimble arrays.

By default, ESXi 5.1 claims Nimble Storage volumes with the ALUA SATP and a PSP of Most Recently Used (MRU).

Nimble recommends that this be altered to round robin.

From the CLI of each ESXi host (or from the vMA):

esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR

After this point any volume presented from the Nimble array and configured as a VMFS datastore will appear with round robin load balancing and I/O active on all paths.


If you added volumes before changing the default PSP, you can switch all existing Nimble volumes to round robin with PowerCLI:

#Get all hosts and store in a variable I have called $esxihosts

$esxihosts = Get-VMHost

#Gather all SCSI LUNs presented to the hosts where the vendor string in the LUN description equals "Nimble" and set the multipathing policy

Get-ScsiLun -VmHost $esxihosts | where {$_.Vendor -eq "Nimble"} | Set-ScsiLun -MultipathPolicy "roundrobin"
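
To confirm the change took effect, a quick check along these lines should report RoundRobin against every Nimble device:

#Verify the multipathing policy on all Nimble LUNs across the hosts
Get-ScsiLun -VmHost $esxihosts | where { $_.Vendor -eq "Nimble" } | Select VMHost, CanonicalName, MultipathPolicy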

Nimble also makes a recommendation around altering the IOps-per-path setting for round robin. By default, 1000 IOps are sent down a path before the PSP switches to the next one.

The official recommendation is to set it to zero (0) for each volume on each host.

Nimble support confirms that this ignores IOps per path and bases path selection on queue depth instead.

So from esxcli on each host, run the following, which finds devices whose display name still contains 'Nimble iSCSI Disk' (i.e. they have not been renamed), extracts their device IDs and then loops through them setting IOps to zero:

for i in `esxcli storage nmp device list | grep 'Nimble iSCSI Disk' | awk '{print $NF}' | sed -e 's/(//' -e 's/)//'` ; do esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=0 --device=$i ; done

I haven’t tested or implemented this as of yet.

I believe Nimble's internal support community has a single PowerCLI script that covers everything here.

It would not be much work to take the per-host esxcli steps into PowerCLI with Get-EsxCli and aggregate it all together.
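
Untested, but something along these lines is how I would sketch it with Get-EsxCli (the v1 Get-EsxCli interface takes its arguments positionally, which I believe is alphabetical order for this command: bytes, cfgfile, device, iops, type, useano):

#For each host, find Nimble devices via esxcli and set the round robin IOps value to 0
foreach ($esx in Get-VMHost)
{
    $esxcli = Get-EsxCli -VMHost $esx
    $nimbledevices = $esxcli.storage.nmp.device.list() | where { $_.DeviceDisplayName -like "Nimble*" }
    foreach ($dev in $nimbledevices)
    {
        $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set($null, $null, $dev.Device, 0, "iops", $null)
    }
}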

Storage Efficiency with Nimble Compression

In my last post I talked about performance of the Nimble Storage platform I was implementing.

In a contended virtualised environment, storage is often the performance sore point or Achilles heel, due to insufficient IOps and poor latency in the storage design.

The other major consideration is how efficient the storage platform is at managing the available capacity, and what can be done to "do more with less".

With the Nimble platform I am using, this problem is addressed by:

  • In-line compression on all workloads

Nimble uses a variable block compression algorithm they say can yield a 30-70% saving without altering performance or latency.

  • Thin provisioning

Thin provisioning is ubiquitous across most array vendors, although variations in block size can result in differences in efficiency.

  • VAAI UNMAP for space reclamation

Nimble supports the VAAI UNMAP primitive for block reclamation. This is run from the command line of a vSphere host as per this VMware KB (a quick sketch follows the link):

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014849
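
The exact procedure depends on the ESXi version (check the KB for the build you are running); as a sketch with a made-up datastore name:

#ESXi 5.0 U1 / 5.1 - reclaim is run from inside the datastore with vmkfstools, here against 60% of free space
cd /vmfs/volumes/MyNimbleDatastore
vmkfstools -y 60

#ESXi 5.5 and later moved this to esxcli
esxcli storage vmfs unmap -l MyNimbleDatastore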

For testing I’ll concentrate on the compression and how it fares for a SQL server in a vSphere 5.1 environment.

I took a test SQL server running Windows 2008/SQL 2008 that had disk utilisation of around 88% (130GB of data on 149GB of available disk).


I then performed a storage vMotion migration from the original VMFS datastore on an HP LeftHand volume to a 200GB thin provisioned VMFS datastore presented from the Nimble array.

The result was a 2.25x saving in space (compressed from 130GB to 56GB).


Other VMs in the test environment, such as file/print, remote desktop hosts and infrastructure VMs, show compression levels anywhere between 1.7x and 2.0x.

In addition to the savings on primary volumes, compression is also applied to snapshots.

Who can say no to close to double the effective capacity on all virtualised workloads with no extra datacenter footprint?

Initial Performance Impressions of Nimble Storage

Recently I added a Nimble CS240G to my vSphere production storage environment.


The Nimble is affordable and packs some serious features and performance into a 3U enclosure.

The CS240G has 24TB of raw capacity, comprised of twelve 2TB 7200rpm Western Digital enterprise RE4 SATA drives and four Intel 160GB SSDs (Intel 320)


It has redundant controllers, one active, one on standby (mirrored). There are dual 10GbE ports on each controller as well as dual 1GbE.

You can choose to have the data and management sharing one network or split it out. In my case I went with a redundant management setup on the two 1GbE ports on one VLAN and iSCSI data on the two 10GbE ports on another VLAN.

Nimble does smart things around its ingestion of data. The general process is:

  • Inbound writes are cached in NVRAM and mirrored to the standby controller; write acknowledgement is given to the host at this point (latency measured in microseconds)
  • The write I/Os are compressed inline in memory and a 4.5MB stripe is built (from possibly many thousands of small I/Os)
  • This large stripe is then written sequentially to free space on the NL-SAS disks, at a cost to the array of only around 11 IOps. This is far more efficient than writing random small-block data to the same type of disks.
  • The stripe is analysed by Nimble's caching algorithm and, if required, is also written sequentially to the SSDs for future random read cache purposes.

For a more in-depth look at the tech, Nimble has plenty of resources here:
http://www.nimblestorage.com/resources/datasheets.php
http://www.nimblestorage.com/resources/videos-demos.php

In summary, Nimble uses the NL-SAS disks for large-block sequential writes and the SSDs as a random read cache, and compresses data in-line with no performance loss.

To get a solid idea of how the Nimble performed, I fired up a Windows 2008 R2 VM (2 vCPU/4GB RAM) on vSphere 5.1 backed by a 200GB thin provisioned volume from the Nimble over 10GbE iSCSI.

Using the familiar IOMeter, I did a couple of quick benchmarks:

All done with one worker process, tests run for 5 minutes.

80/20 Write/Read, 75% Random, 4K Workload using a 16GB test file (32000000 sectors) and 32 outstanding I/Os per target

24,600 IOps, averaging 1.3ms latency and 95MB/s

50/50 Write/Read, 50% Random, 8K Workload using a 16GB test file (32000000 sectors) and 32 outstanding I/Os per target

19,434 IOps, averaging 1.6ms latency and 152MB/s

50/50 Write/Read, 50% Random, 32K Workload using a 16GB test file (32000000 sectors) and 32 outstanding I/Os per target

9,370 IOps, averaging 3.6ms latency and 289MB/s

My initial thought: impressive throughput while keeping latency incredibly low.

That's all I have had time to play with at this point. If anyone has any suggestions for benchmarks or other questions, I am quite happy to take a look.

Configuring iSCSI with ESXCLI in vSphere 5.0

This is mostly me learning my way around some of the new namespaces in esxcli as part of vSphere 5.0

If you want to know what’s new in esxcli in vSphere 5.0, please read these two posts from Duncan Epping and William Lam.

I wanted to see what needed to be done to configure a load balanced iSCSI connection with two VMkernel portgroups.

So here goes; all done against just a standard vSwitch.

Enable software iSCSI on the ESXi host
~ # esxcli iscsi software set --enabled=true

Add a portgroup to my standard vswitch for iSCSI #1
~ # esxcli network vswitch standard portgroup add -p iSCSI-1 -v vSwitch0

Now add a vmkernel nic (vmk1) to my portgroup
~ # esxcli network ip interface add -i vmk1 -p iSCSI-1

Repeat for iSCSI #2
~ # esxcli network vswitch standard portgroup add -p iSCSI-2 -v vSwitch0
~ # esxcli network ip interface add -i vmk2 -p iSCSI-2

Set the VLAN for both my iSCSI VMkernel port groups - in my case VLAN 140
~ # esxcli network vswitch standard portgroup set -p iSCSI-1 -v 140
~ # esxcli network vswitch standard portgroup set -p iSCSI-2 -v 140

Set the static IP addresses on both VMkernel NICs as part of the iSCSI network 
~ # esxcli network ip interface ipv4 set -i vmk1 -I 10.190.201.62 -N 255.255.255.0 -t static
~ # esxcli network ip interface ipv4 set -i vmk2 -I 10.190.201.63 -N 255.255.255.0 -t static

Set a manual override failover policy so each iSCSI VMkernel portgroup has one active physical vmnic
~ # esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic0
~ # esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3

Bond each of the VMkernel NICs to the software iSCSI HBA
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk1
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk2

Add the IP address of your iSCSI array or SAN as a dynamic discovery sendtarget
~ # esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.190.201.102

Re-scan your software iSCSI hba to discover volumes and VMFS datastores
~ # esxcli storage core adapter rescan --adapter vmhba33
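
Finally, a quick sanity check that both VMkernel ports show as bound to the software iSCSI adapter (output will vary by environment)
~ # esxcli iscsi networkportal list -A vmhba33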