Bulk creating VMs using Nutanix Acropolis ACLI

I have been playing around with the fantastic Nutanix Community Edition (CE) now for a week or more.

Despite the availability of all KVM VM actions in the HTML5 Prism GUI, I wanted to quickly create any number of VMs for testing.

The best way to do this for me was from the shell of the Nutanix controller VM.

I SSH’d into the CVM on my Nutanix CE node, then from the bash shell ran my script which calls the Acropolis CLI commands for creating and modifying VMs.

The script loops for 10 iterations, creating VMs with 2GB of memory and 1 virtual CPU each, and attaches a 20GB disk to each VM on my NDFS container called CNTR1:

for n in {1..10}
do
  acli vm.create MyVM$n memory=2048 num_vcpus=1
  acli vm.disk_create MyVM$n create_size=20G container=CNTR1
done

This is just the tip of the iceberg really; in the script you could also mount an ISO file from your container to each VM and power them on if you wanted to (a rough sketch follows below). Better still would have been to clone from an existing VM and create the 10 VMs that way, but the ACLI does not appear to have vm.clone yet, as shown in the Nutanix Bible or the Acropolis Virtualization Administration docs.
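
As a rough sketch of extending that loop (hedged: the AHV network name VLAN0 is an assumption, ACLI options can differ between releases, and I have left out the ISO attach since its exact parameters vary – check acli help on your build first), you could also attach a NIC and power each VM on:

for n in {1..10}
do
  acli vm.create MyVM$n memory=2048 num_vcpus=1
  acli vm.disk_create MyVM$n create_size=20G container=CNTR1
  # attach a NIC to an existing AHV network (network name is an assumption)
  acli vm.nic_create MyVM$n network=VLAN0
  # power the VM on
  acli vm.on MyVM$n
done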

The ACLI commands present in the Community Edition (currently version 2015.06.08) are missing vm.clone. Make sure you check out the new “Book of Acropolis” section in the Nutanix Bible: http://stevenpoitras.com/the-nutanix-bible/#_Toc422750852

Posted in Uncategorized

PowerShell/PowerCLI – vMotion VMs between clusters based on Name and IP

A quick post where I threw together a script to vMotion all VMs matching a particular wildcard name and IP range between two vSphere clusters (5.0 and 5.5) under the same vCenter Server.

The script assumes your source and destination clusters are using Standard vSwitches (VSS), with identical port group names on each.

In a migration scenario, perhaps your source cluster is using a Standard vSwitch, but the new destination cluster has a vSphere Distributed Switch (VDS).

In this case, I would implement a temporary standard vSwitch with a single NIC on the new cluster, with the same port group names as the source cluster.
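
For illustration, a hedged PowerCLI sketch of that temporary setup (the host name, uplink and port group name/VLAN are assumptions – match them to your own source cluster):

#Create a temporary standard vSwitch with one uplink and a matching port group on a destination-cluster host
$vmhost = Get-VMHost "newesx01.lab.local"
$tempVss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitchTemp" -Nic "vmnic5"
New-VirtualPortGroup -VirtualSwitch $tempVss -Name "PROD-VM-Network" -VLanId 103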

This makes vMotion migration simple. Once VMs are on the new cluster, vNICs can be bounced from standard to VDS port groups.
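
Once the VMs have landed, something like this hedged PowerCLI sketch will bounce the vNICs over (requires the VMware.VimAutomation.Vds module; the cluster, VM filter and port group names are assumptions):

#Move matching VM vNICs from the temporary standard port group to the VDS port group of the same name
$vdsPortgroup = Get-VDPortgroup -Name "PROD-VM-Network"
Get-Cluster "MyNewCluster" | Get-VM "PROD*" | Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq "PROD-VM-Network" } |
    Set-NetworkAdapter -Portgroup $vdsPortgroup -Confirm:$false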

Anyway – on with the script. PowerCLI version used was 6.0 Release 1

#vMotion VMs based on name and IP address between clusters under one vCenter
#Assumes standard vSwitch networking and consistent port group names between clusters
#Source cluster (Std vSwitch), destination cluster (temporary migration Std vSwitch) with the same port group names. Make sure the log folder is created ahead of time

Import-Module VMware.VimAutomation.Core

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope Session -Confirm:$false

#Change your variables here:
$vcenter = "MYvCenterServer"
$sourcecluster = "MyOldCluster"
$destcluster = "MyNewCluster"
$VMnameFilter = "PROD*"
$IPFilter = "10.1.3.*"
$logfolder = "c:\MigrationLogs"

Connect-VIServer $vcenter

$getvms = Get-Cluster $sourcecluster | Get-VM | where Name -Like $VMnameFilter |
    Select Name, @{ L = "IP Address"; E = { $_.Guest.IPAddress[0] } } |
    where "IP Address" -Like $IPFilter | select -ExpandProperty Name

foreach ($vm in $getvms)
{
    #Clear the error collection so the success/failure check reflects only this Move-VM
    $Error.Clear()
    Move-VM -VM $vm -Destination $destcluster -VMotionPriority High -Confirm:$false -ErrorAction SilentlyContinue

    If (!$Error)
    { Write-Output $vm | Out-File -Append -FilePath "$logfolder\success.log" }
    Else
    { Write-Output $vm, $Error[0] | Out-File -Append -FilePath "$logfolder\failed.log" }
}
Posted in PowerShell, VMware vSphere

Using Azure PowerShell to deploy Windows Server Technical Preview (Windows Server 2015)

I whipped up a script so I didn’t have to use the Azure portal wizard to spin up Windows Server 10 VMs for further testing.

It’s mostly self-explanatory; the Azure PowerShell cmdlets have great help, and online resources are plentiful as well.
http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/

Basically, it connects to Azure with your provided credentials, then grabs your existing subscription ID and an existing storage account (helpful to have these sorted in the Azure portal first).

You need to provide a username and password for your new VM, plus a cloud service name, in the script (marked *Change Me*).

It will present Out-GridView prompts to select an existing private virtual network in Azure (again, helpful to have created one already).

It also prompts for the VM instance size – I use Basic A0 for testing (no availability set).

It also prompts you to pick the Azure datacenter; the VM must be created in the same datacenter as the storage account. For New Zealand customers, West US seems fastest – no hard testing behind that opinion.

It automatically picks the Windows Server Technical Preview image, but you can change that to whatever image takes your fancy really. You can get a list using Get-AzureVMImage.
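
For example, a quick (hedged) way to narrow that list down to Windows Server images:

#List available Windows Server images in the gallery
Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server*" } | Select-Object ImageName, Label, PublishedDate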

Here is the script – I’m no super PS guru, so probably tons of tweaks and optimisations to do, but it works for my basic testing. Comments and suggestions are very welcome.

#Connect and auth PS session against your Azure creds
Add-AzureAccount

#Get subscription ID of your azure account 
$MySubID = Get-AzureSubscription | select -expand subscriptionid

#Select existing storage account - VM must be created in same DC as storage account 
$MyStorageAccountName = Get-AzureStorageAccount | out-gridview -passthru | select -expand StorageAccountName

#Name of your VM 
$MyVMName = "*Change Me*"

#Username for your Windows VM 
$MyVMAdminUsername = "*Change Me*"

#Password for your Windows VM 
$MyVMAdminPassword = "*Change Me*"

#New Cloud Service name 
$MYCloudServiceName = "*Change Me*"

#Select internal Azure virtual network and subnet for the VM (must exist already). Comment out these lines if you want to use an automatic public connection
$MyVMNetwork = Get-AzureVNetSite | Out-GridView -PassThru | select -expand name

#Select the subnet within the chosen virtual network
$MyVMSubnet = Get-AzureVNetSite | Out-GridView -PassThru | Select-Object -ExpandProperty Subnets | select -expand name

#Select VM Instance size from list 
$MyInstanceSize = Get-AzureRoleSize | Out-GridView -passthru | select -expand instancesize

#Select Datacenter location from list 
$MyVMDCLocation = Get-AzureLocation | Out-GridView -PassThru | select -expand name

#Set the Azure Subscription and the Storage account to be used 
Set-AzureSubscription -SubscriptionId $MySubID -CurrentStorageAccountName $MyStorageAccountName

#Select Windows Server Technical Preview - i.e. Server 10 
$MyImageName = Get-AzureVMImage | where {$_.Label -eq "Windows Server Technical Preview"} | select -expand imagename

#Build VM 
New-AzureQuickVM -Windows -ServiceName $MYCloudServiceName -Name $MyVMName -ImageName $MyImageName -AdminUsername $MyVMAdminUsername -Password $MyVMAdminPassword -Location $MyVMDCLocation -InstanceSize $MyInstanceSize -VNetName $MyVMNetwork -SubnetNames $MyVMSubnet
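
Once the build completes, you can check the new VM and (optionally) jump straight into an RDP session from the same PowerShell session – a quick sketch using the variables above:

#Check the provisioning status of the new VM
Get-AzureVM -ServiceName $MYCloudServiceName -Name $MyVMName

#Download the RDP file for the VM and launch the connection
Get-AzureRemoteDesktopFile -ServiceName $MYCloudServiceName -Name $MyVMName -Launch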
Posted in Azure, Cloud, PowerShell

Jumbo frames performance with 10GbE iSCSI

This is yet another voice to add to the opinion that jumbo frames are worth the effort.

This is especially true in a 10GbE iSCSI environment doing a hardware refresh of hosts, storage and switches or a pure greenfield deployment.

The ease of configuring jumbo frames across your environment is helped by a couple of things:

  • Most 10GbE switches ship with an MTU of 9214 or 9216 out of the box, so they are ready for the host and storage ends to be tuned to match. The Arista 7150 switches I have tested are configured this way out of the box.
  • Changing vSphere virtual switches and VMkernel interfaces to a larger MTU is now very easy via the vSphere Client and Web Client (see the PowerCLI sketch below).
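
For reference, a minimal PowerCLI sketch of those changes (the host, vSwitch and VMkernel names are assumptions for illustration):

#Set a standard vSwitch to a 9000-byte MTU
Get-VirtualSwitch -VMHost "esx01.lab.local" -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false

#Set the iSCSI VMkernel interface to a 9000-byte MTU
Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VMKernel -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false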

On with the testing.

Hardware I tested with:

vSphere host: Cisco UCSC-C220-M3S, 16-core, 196GB Memory, Intel X520 Dual Port 10Gb SFP+ Adapter. This was running ESXi 5.1 Update 1.

Switch: Arista 7150S-24 (24 SFP+ ports)

Storage: Nimble Storage CS260G (10GbE iSCSI)

This was not meant to be an all-encompassing test across all known workloads – just some synthetic sequential benchmarks to see what difference existed between the default MTU of 1500 bytes and 9000 bytes for iSCSI.

I set up a VM with Windows Server 2012, 4 x vCPU, 8GB RAM, 100GB system disk and a 100GB data disk (for iometer). This was on a 1TB iSCSI VMFS5 datastore presented from the Nimble array.

Using the good old IOMeter, I set up a basic sequential read test of a 2GB file with 32 outstanding I/Os, and then ran 4K, 8K and 32K tests for 5 minutes each – once with the ESXi vSwitches/VMkernel interfaces and the Nimble’s data interfaces at 1500 MTU, and once with them set to 9000 MTU. The Arista was set to an MTU of 9214 across all ports at all times (the default).

The results showed a consistent 10%+ performance increase in both throughput and bandwidth.

It would be difficult not to recommend jumbo frames as a default configuration for new 10GbE iSCSI or NAS environments.

For more great in-depth reading on jumbo frames and vSphere, make sure to check out Michael Webster’s (@vcdxnz001) posts on his blog:

http://longwhiteclouds.com/2012/02/20/jumbo-frames-on-vsphere-5/

http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/

Posted in iSCSI Storage, Nimble Storage, VMware vSphere

iSCSI configuration on Extreme Networks switches

This is a production configuration I have used with 1GbE and 10GbE Extreme Networks switches for iSCSI storage, specifically a Nimble Storage CS260G with 1GbE management interfaces and 10GbE iSCSI data interfaces.

The 10GbE iSCSI interfaces are on one stacked pair of Extreme x670s, and the 1GbE management interfaces are on an x460 stack.

This is the ExtremeOS configuration I have for both networks.

#iSCSI data ports (10GbE) Extreme Networks x670 (stacked pair)

configure ports 1:1,1:2,2:1,2:2 display-string "Nimble01-Prod-iSCSI"
enable flow-control tx-pause ports 1:1,1:2,2:1,2:2
enable flow-control rx-pause ports 1:1,1:2,2:1,2:2
configure vlan "iSCSI_Data" tag 140
configure vlan "iSCSI_Data" ipaddress 10.190.201.250 255.255.255.0
configure vlan "iSCSI_Data" add ports 1:1,1:2,2:1,2:2

#Mgmt ports (1GbE) Extreme Networks x460 (stacked pair)

configure ports 1:1,1:2,2:1,2:2 display-string "Nimble01-Prod-Mgmt"
configure vlan "Storage_Mgmt" tag 130
configure vlan "Storage_Mgmt" ipaddress 10.201.201.1 255.255.255.0
enable ipforwarding vlan "Storage_Mgmt"
configure vlan "Storage_Mgmt" add ports 1:1,1:2,2:1,2:2

I have no spanning tree or rate-limiting (also known as broadcast storm control) configured on any iSCSI or management ports; neither is recommended on iSCSI ports, for performance reasons.

The important architectural points are:

  • iSCSI and management traffic have dedicated VLANs for isolation, performance and security/policy reasons.
  • The switch ports carrying iSCSI data have send/receive flow control enabled, for performance consistency.
  • Switches carrying management and iSCSI data are stacked for redundancy.

http://www.extremenetworks.com/products/summit-all.aspx

http://www.nimblestorage.com/products/specifications.php

Posted in Extreme Networks, iSCSI Storage, Nimble Storage

What makes up a Nimble Storage array?

I have been asked a couple of times about what is inside a Nimble Storage array in terms of hardware.

People have assumed that there may be proprietary components in a high-performance array, but Nimble’s architecture is built entirely on a commodity chassis, off-the-shelf drives and multi-core Intel CPUs.

Most of the hardware is best illustrated with some photos:

First up – the chassis. Nimble uses a SuperMicro unit with hot swap controllers, power supplies and fans: http://www.supermicro.com/products/nfo/sbb.cfm

This is the front view of the chassis (in this case a Nimble CS260G). It has twelve 3TB Seagate Enterprise NL-SAS drives (model: Constellation ES.2)
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/enterprise-capacity-3-5-hdd/

It also has four 300GB Intel 320 SSDs for read cache purposes.

The rear of the chassis shows the redundant power modules and the controllers (with their PCIe 10GbE adapters) and dual 1GbE NICs.

Looking at the controller itself:

It’s a Supermicro board (X8DTS-F), with 12GB of Kingston ECC DDR3 memory and what appears to be an Intel 55xx Nehalem-family CPU (four cores/eight threads). On the far back side are the PCIe slots; far left is an LSI SAS adapter.

In the top PCIe slot sits the NVRAM card (used for fast write caching of inbound I/O). It appears to be an NVvault 1GB DDR2 module built by Netlist:
http://www.netlist.com/products/vault-data-protection/nvvault-ddr2/

In the bottom PCIe slot sits the dual-port 10GbE network adapter, which from the model number would appear to be the PE210G2SPi9 from Silicom:
http://www.silicom-usa.com/Networking_Adapters/PE210G2SPi9-Dual_Port_10_Gigabit_Ethernet_PCI_Express_Server_Adapter_Intel_based_49

That about wraps up the commodity components inside a Nimble array.

What is really apparent is that cost-effective, reliable, high-performance/high-capacity commodity disks, multi-core CPUs and standard chassis have enabled array vendors like Nimble (and others) to concentrate their engineering on the software architecture: data services, file systems, user interfaces and seamless array metrics collection that enables support and data-mining opportunities.

Posted in Hardware, iSCSI Storage, Nimble Storage

Renaming vSphere Datastore device display names

In vCenter, storage devices or volumes/LUNs present themselves with a unique identifier usually prefixed with eui. or naa.

These are known as Network Address Authority (NAA) identifiers and Extended Unique Identifiers (EUI), part of the FC and iSCSI naming standards for LUNs.

However, when looking at them in vSphere and via esxcli, these unique identifiers are less than useful for working out which VMFS datastore you are dealing with when reviewing path selection and so on.

For that reason, I usually rename each of these via the vSphere client as I add them.
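
If you would rather script the rename, here is a minimal PowerCLI sketch (the host name and canonical name are placeholders) that calls the host’s storage system API to set the display name:

#Rename a storage device display name via the host StorageSystem API
$vmhost = Get-VMHost "esx01.lab.local"
$lun = Get-ScsiLun -VmHost $vmhost -CanonicalName "naa.xxxxxxxxxxxxxxxx"
$storSys = Get-View -Id $vmhost.ExtensionData.ConfigManager.StorageSystem
#UpdateScsiLunDisplayName takes the LUN UUID and the new display name
$storSys.UpdateScsiLunDisplayName($lun.ExtensionData.Uuid, "VMFSProd")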

This also helps when filtering NMP devices from esxcli.

If you have connected to the command line of an ESXi host, you can grep for your renamed display name on a storage device:

esxcli storage nmp device list | grep -A 9 VMFSProd

You can see that it picks up the Device Display Name set via the vSphere client, as well as returning the nine rows of available information about that NMP device.

Posted in ESXi, iSCSI Storage, Nimble Storage, VMware vSphere