Cleaning up “Already used” desktops in VMware View

The “Already used” error is a common problem when using floating (non-persistent) desktops in a completely stateless way (i.e. the pool is set to refresh desktops on log-off).

However, a user can (in some circumstances) shut down or reset their floating desktop, and this does not give the View management server a chance to refresh the desktop back to its “known good” state, ready for another user.

This leaves the desktop in an error state, which previously required manual admin intervention or scheduled scripts to resolve.


Of course, using AD Group Policy you can remove the user’s ability to shut down the PCoIP Windows desktop, leaving them with just the ability to log off.

But in reality I have still seen the “Already used” error crop up, whether due to administrators haphazardly and incorrectly resetting floating desktops, or power and individual host issues causing pools on local disk to come up in this state.

With View 5.1.2, VMware have added an LDAP attribute called pae-DirtyVMPolicy.

This is a per-pool setting that allows the administrator to control how “Already used” desktops are treated.

There are three policy settings:

pae-DirtyVMPolicy=0 – This is the default behaviour: the desktop is left in the error state and is not available for use.

pae-DirtyVMPolicy=1 – This allows desktops that were not cleanly logged off to be made available again in the pool for another user, without being refreshed.

pae-DirtyVMPolicy=2 – This setting will automatically refresh a desktop in the “already used” state and make it available again in the pool.

To change the setting on a pool, fire up ADSI Edit on a View Connection Server and connect to dc=vdi,dc=vmware,dc=int (see screenshot)


Browse down to Server Groups and you will see your desktop pools in the right-hand pane


Double-click a pool name to edit its attributes


Find the LDAP attribute pae-DirtyVMPolicy in the list and change its value from <not set> to 1 or 2, depending on your requirements
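If you prefer to script the change rather than click through ADSI Edit, the same attribute can be pushed in with ldifde on the Connection Server. Treat this as a sketch only – the pool DN below (a hypothetical pool called FloatPool01 under ou=Server Groups) is an assumption based on where the pools appear in ADSI Edit, so verify the exact DN in your own environment first.

   # set-dirtyvmpolicy.ldf – FloatPool01 is a placeholder pool ID; check the real DN in ADSI Edit
   dn: cn=FloatPool01,ou=Server Groups,dc=vdi,dc=vmware,dc=int
   changetype: modify
   replace: pae-DirtyVMPolicy
   pae-DirtyVMPolicy: 2
   -

Then import it against the local View ADAM instance:

   ldifde -i -f set-dirtyvmpolicy.ldf -s localhost -t 389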

Hopefully this can help ease some of the administrative burden of large floating desktop pools and the small issues they can have.

For more information on VMware View 5.1.2, see the release notes here:

http://www.vmware.com/support/view51/doc/view-512-release-notes.html

Running Veeam Backup 5 in a VM

One of the best features of most modern virtual backup products is that they support the vStorage APIs for Data Protection (VADP), including Changed Block Tracking (CBT) and hot-add.

More here on VADP: kb.vmware.com/kb/1021175

To take advantage of the vSphere hot-add ability, the backup server itself must be a VM with access to the same storage and datastores as the VMs it is backing up.

Veeam calls this virtual appliance mode, and similar techniques are used by other products from PHD Virtual and VMware (VDR). Ultimately it means backups are very fast, LAN-free and “localised” to the backup server.

Veeam fully supports this configuration, but depending on how many VMs you are backing up, the capacity required and the backup window, it does have some small challenges.

Firstly, we all know the storage Achilles heel of vSphere 4 is its 2TB (minus 512 bytes) limit on VMDKs or raw disks passed through to the VM.

If you plan to keep a month of compressed and de-duped backups of 100 server VMs, 2TB may well be a drop in the ocean of data.

In my environment, backing up 80+ VMs and keeping versions for 30 days results in around 12TB of data – this includes SQL Server, Exchange, file and content management servers, and the usual suspects in a corporate Windows environment.

So, do we present multiple 2TB VMDKs or raw disks to the Veeam Backup VM and then spread jobs across the drives? Perhaps aggregate and stripe them using Windows Disk Management? Both of these options will work, but I preferred a simpler and cleaner single disk/volume solution.

Using the Windows iSCSI initiator inside the VM allows me to present huge volumes to the 64-bit Windows Server VM directly from the iSCSI SAN – avoiding those nasty 32-bit SCSI-2 limits that vSphere currently has.

So, on to some technical aspects of how I have this configured.

The Windows VM itself is Server 2008 R2.
It is configured with 6 vCPUs and 8GB of RAM; as I said previously, depending on your workload this may be overkill, or perhaps not enough.

I find that two running jobs can saturate a 6 vCPU VM running on top of a Westmere-based blade server.

Presented to this VM is a 20TB volume from an HP P4000 G2 SAN (iSCSI).

The VM has two dedicated vNICs (VMXNET3) for iSCSI; each is separated onto its own dedicated virtual machine port group on the ESX host side, on the iSCSI VLAN (see the two images below).

Each of the port groups has been configured with a manual fail-over order to set the preferred active physical adapter: iSCSI Network 1 using vmnic2 and iSCSI Network 2 using vmnic3 (see the two images below).

This means that the two virtual NICs in the VM traverse separate physical NICs coming out of the ESX host.
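For reference, the port group side of this can be scripted from the ESX service console. A rough sketch – vSwitch1 and VLAN 100 are placeholders for whatever your iSCSI vSwitch and VLAN actually are, and the per-port-group failover order (vmnic2/vmnic3 active) still has to be set in the vSphere Client, as esxcfg-vswitch does not expose NIC teaming order:

   # Create the two dedicated VM port groups on the iSCSI vSwitch and tag them with the iSCSI VLAN
   esxcfg-vswitch -A "iSCSI Network 1" vSwitch1
   esxcfg-vswitch -v 100 -p "iSCSI Network 1" vSwitch1
   esxcfg-vswitch -A "iSCSI Network 2" vSwitch1
   esxcfg-vswitch -v 100 -p "iSCSI Network 2" vSwitch1

   # Check the result
   esxcfg-vswitch -l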

Then I use the iSCSI initiator software inside Windows 2008 R2 and install the HP DSM driver for iSCSI multipathing (MPIO) to the P4000 (other iSCSI vendors should also have MPIO packages for Windows).

Docs on how to configure this in Windows are available from HP:
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01865547/c01865547.pdf

This allows Windows to use both NICs and perform round-robin load balancing to the volume on the iSCSI storage array. In the two images below, you can see the iSCSI Targets tab and both paths to the volume.
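The same initiator setup can also be driven from an elevated command prompt rather than the GUI, which is handy if you ever rebuild the backup VM. This is only a sketch – the portal IP and target IQN are placeholders, and the HP DSM itself is still installed from the HP MPIO package:

   rem Make sure the Microsoft iSCSI service is running, then add the P4000 cluster VIP as a target portal
   sc config msiscsi start= auto
   net start msiscsi
   iscsicli QAddTargetPortal 10.0.0.10

   rem List the discovered targets, then log in to the backup volume (placeholder IQN shown)
   iscsicli ListTargets
   iscsicli QLoginTarget iqn.2003-10.com.lefthandnetworks:mgmtgroup:99:veeam-repo

   rem Confirm which MPIO DSM has claimed the disk and its load-balancing policy
   mpclaim -s -d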

The multi-pathed volume appears in device manager like this:

Data traverses both paths evenly: (VMXNET3 drivers show as 10Gbps)

The end result is a gigantic multi-pathed, multi-terabyte volume for Veeam Backup 5 to drop its backups onto, with all the advantages of running in your virtual infrastructure (HA, DRS, etc.).

More info on Veeam Backup and Replication at their site:
http://www.veeam.com/vmware-esx-backup.html

Running XenServer 5.6 on VMware Workstation

With XenServer 5.6 just released, it was time to fire it up inside my virtual lab at home.

VMware Workstation 7.1 makes a great performing platform for a virtual XenServer test lab.

In my case, I have two virtual XenServer 5.6 instances in a pool and a virtual iSCSI SAN appliance (latest 8.5 HP P4000 VSA) running on top of Workstation 7.1 (on Windows 7 x64).

This setup gives me shared storage for my XenServer pool and full live migration of VMs (XenMotion)

To get this up and running is quite simple:

Your host PC on which you run VMware Workstation needs a bit of grunt – but that comes cheap these days.

A quad-core Intel or six-core Phenom CPU, 8GB of RAM and one or two cheap 40GB Intel V-series SSDs would give you enough cores, memory and IOPS for quite a large virtual home lab.

Here we go – kick off the New Virtual Machine wizard in VMware Workstation

Point it at the XenServer install ISO you have downloaded from Citrix

As a guest OS, make sure to select Linux and the version as 2.6.x 64-bit

Give it a name and a location for the virtual machine

Give your virtual XenServer one CPU and core (unless you have an insane number of cores at your disposal)

Give the VM at least 1GB of memory – this is the minimum for XenServer to install (you can give more if you have it)

Choose your network type for your VM – I use bridged so the VMs can access the network of my underlying host PC

Leave the SCSI controller as LSI Logic (Default choice)

Leave the disk type as SCSI

Make the disk at least 14GB to successfully install XS 5.6 (the official XenServer docs say 16GB is the minimum disk size)

Then click Next through the rest of the wizard, power on the VM and install from the ISO file (the key lines from the resulting .vmx are shown below for reference)
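For those who like to sanity-check the VM config by hand, these are roughly the lines you should end up with in the .vmx file once the wizard finishes – a sketch only, with the disk file name, memory size and network type matching whatever you chose above. The guestOS value "other26xlinux-64" is what Workstation uses for the "Other Linux 2.6.x kernel 64-bit" selection.

   guestOS = "other26xlinux-64"
   memsize = "1024"
   numvcpus = "1"
   scsi0.virtualDev = "lsilogic"
   scsi0:0.fileName = "XenServer56.vmdk"
   ethernet0.connectionType = "bridged"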

Booting up after install/config

Two functional XenServer 5.6 VMs

Add the HP P4000 virtual iSCSI SAN appliance (www.hp.com/go/tryvsa) for shared storage

If your XenServer and HP VSA VMs are bridged onto your host PC network, you can then install the XenCenter client as well as the HP CMC (SAN management) on your host OS and access them directly.

XenCenter client managing a pool of two virtual XenServer 5.6 hosts, shared iSCSI storage provided by the HP virtual SAN appliance.
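If you prefer the command line to XenCenter for hooking up the shared storage, the equivalent iSCSI SR can be created from one of the XenServer consoles with the xe CLI. A rough sketch – the VSA’s IP address, the target IQN and the SCSI ID below are placeholders you would take from the probe output:

   # Probe the VSA for available iSCSI targets (run it again with device-config:targetIQN=... to get the SCSIid)
   xe sr-probe type=lvmoiscsi device-config:target=192.168.1.50

   # Create the shared SR on the pool using the IQN and SCSIid returned by the probes
   xe sr-create name-label="P4000 VSA iSCSI" shared=true type=lvmoiscsi \
      device-config:target=192.168.1.50 \
      device-config:targetIQN=iqn.2003-10.com.lefthandnetworks:vsa-group:7:xen-sr \
      device-config:SCSIid=36000eb3953900000000000000000000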

Be aware that this setup will only allow you to run nested Linux VMs – not Windows, as hardware virtualisation assist (CPU support) is not passed through when XenServer is virtualised on top of VMware Workstation.

BTW, vSphere/ESX(i) runs equally well on this setup.

All in all, a good setup for a home testing/training/certification lab

More P4000 G2 IOMeter Benchmarking (with vSphere RoundRobin tweak)

Here we go with some more IOMeter madness – this time using a much-debated vSphere round-robin tweak.

Debate has gone on in the blogosphere in recent weeks about the IOPS-per-path tweak that can be changed from the ESX command line.

The debate is simply: does it work?

By default, using the Round Robin path selection policy, 1000 I/Os go down the first path and then the next 1000 down the next path.

From the command line of the ESX host we can set the Round Robin parameters:

esxcli nmp roundrobin setconfig --device LUNID --iops 3 --type iops

Where LUNID is your volume identifier in vSphere, e.g. naa.6000eb39539d74ca0000000000000067

This setting effectively means that the path will switch every 3 I/Os.
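To confirm the change took (or to check what a device is currently set to), the matching getconfig command can be used – shown here against the example device ID from above:

   # Show the current Round Robin settings for a device
   esxcli nmp roundrobin getconfig --device naa.6000eb39539d74ca0000000000000067

   # List all devices and their path selection policies if you need to find the IDs
   esxcli nmp device list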

It appears to greatly help throughput-intensive workloads, i.e. large-block sequential reads (restores and file archives), where the iSCSI load balancing of the HP P4000 nodes (ALB) can take effect.

It appears that small blocks (4K, 8K) are not really helped by the change – so I am only looking at larger blocks (32K and 256K)

The Setup
Three-node P4500 G2 cluster, with twelve 450GB 15K SAS drives in each node. All running SAN/iQ 8.5. Node NICs using ALB.

Windows Server 2003 R2 VM Guest (Single vCPU, 2GB RAM) with a virtual disk on 50GB Volume @ Network RAID-10 (2-way replica)

ESX 4.0 Update 1, with two VMkernel ports on a single vSwitch (two Intel Pro/1000 NICs) bound to the software iSCSI adapter with Round Robin multi-path selection

IOMeter Settings

One worker
64 Outstanding requests (mid-heavy load)
8000000 sectors (4GB file)
5 Minute test runs

The Results

(100% Sequential, 100% Read, 32K Transfer size)

Default IOPS setting: 110 MB/s sustained with 3500 IOPS
Tweaked IOPS setting: 163 MB/s sustained with 5200 IOPS

(100% Sequential, 100% Read, 256K Transfer size)

Default IOPS setting: 113 MB/s sustained with 449 IOPS
Tweaked IOPS setting: 195 MB/s sustained with 781 IOPS

The NMP tweak does indeed seem to make a difference for larger block read operations from iSCSI volumes, where the array can do its outbound load balancing magic.
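If you want to apply the tweak to every P4000 volume on a host rather than one device at a time, something along these lines works from the ESX service console. It is only a sketch – it assumes all of your LeftHand LUNs share the naa.6000eb prefix (as in the example above), so check the output of the first command before letting the loop run:

   # List the NAA IDs of the LeftHand devices presented to this host
   esxcli nmp device list | grep -o 'naa.6000eb[0-9a-f]*' | sort -u

   # Set the Round Robin IOPS value to 3 on each of them
   for dev in $(esxcli nmp device list | grep -o 'naa.6000eb[0-9a-f]*' | sort -u); do
       esxcli nmp roundrobin setconfig --device $dev --iops 3 --type iops
   done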

I would be interested to see if others can get similar results on the HP/LeftHand arrays, as well as on other iSCSI arrays (EqualLogic, EMC, NetApp).

The real question is: does this help in the real world, or only under IOMeter “simulation” conditions?


HP P4000 G2 IOMeter Performance

Some quick benchmarks on a three-node P4500 G2 cluster, with twelve 450GB 15K SAS drives in each node. All running SAN/iQ 8.5

The Setup:
Windows Server 2003 R2 VM Guest (Single vCPU, 2GB RAM) with a virtual disk on 50GB Volume @ Network RAID-10 (2-way replica)

ESX 4.0 Update 1, with two VMkernel ports on a single vSwitch (two Intel Pro/1000 NICs) bound to the software iSCSI adapter with Round Robin multi-path selection (shown below)

More info on the vSphere iSCSI config can be found on Chad Sakac’s blog or in the PDF on HP’s P4000 site

IOMeter Settings

One worker
64 Outstanding requests (mid-heavy load)
8000000 sectors (4GB file)

The Results

Max Read: (100% Sequential, 100% Read, 64K Transfer size)
156 MB/s sustained

Max Write: (100% Sequential, 100% Write, 64K Transfer size)
105 MB/s sustained

Max IOPS (100% Sequential, 100% Read, 512-byte Transfer size)
31785 IOPS

Harsh Test IOPS (50/50 Random/Sequential, 50/50 Read/Write, 8k Transfer size)
3400 IOPS and 34MB/s

Some new “real world” tests
(50/50 Random/Sequential, 50/50 Read/Write, 4k Transfer size)
3200 IOPS and 23MB/s

(50/50 Random/Sequential, 50/50 Read/Write, 32k Transfer size)
2700 IOPS and 78MB/s

Overall:

Excellent throughput and IO for three 1Gb iSCSI nodes on a software initiator, with replicated, fault-tolerant volumes.

And remember, IOPS scale linearly with each extra node added. And 10GbE NICs are available should you really want to get that throughput flying.

SAN/iQ 8.5 Best Practice Analyzer

This is a new feature of 8.5 that alerts the user to any less than ideal configurations in their environment.

Firing up the 8.5 Central Management Console (CMC) and looking in the left pane, you will see a Configuration Summary option.

Clicking this will present you with a Best Practices summary of your HP/Lefthand environment.

It assesses five areas of node and cluster health:

1. Are all nodes correctly protected at a hardware RAID level?
2. Do you have enough nodes for redundancy? (i.e. more than one)
3. Are the volumes correctly set up with some level of redundancy? (i.e. not Network RAID-0)
4. Are the correct number of managers running in the cluster so a quorum can be maintained in the event of node failure?
5. Are all NICs on all nodes correctly running at 1Gbps full duplex? (essential) – with Adaptive Load Balancing and Flow Control highly recommended

If all is good, then the status shows as healthy with green checkmarks

However, if there is something less than ideal – things change.

An example of this is when creating a volume. It is highly recommended to use Network RAID to protect it. The default for a new volume is Network RAID-10 (2-way).

If you then decide to create the volume without any replication protection (Network RAID-0), the best practice analyzer will alert you to the problem (in this case with yellow exclamation marks).

Clicking the blue help (?) button will advise you what the best practice is and what to do to correct the situation.
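As an aside, volumes can also be created from the SAN/iQ CLIQ command line rather than the CMC, which makes it easy to keep the replication level consistent and avoid the Network RAID-0 warning in the first place. Treat the following as a sketch only – the management group IP, credentials and volume/cluster names are placeholders, and the exact parameter names should be checked against the CLIQ user guide for your SAN/iQ release:

   # Create a 50GB volume with 2-way replication (Network RAID-10) – parameter names per the CLIQ guide
   cliq createVolume volumeName=VEEAM-REPO clusterName=CLUSTER01 size=50GB replication=2 login=10.0.0.10 userName=admin passWord=secret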

In and Around the HP P4500 G2 SAN

This is a quick look around the latest generation of virtualised iSCSI P4000 G2 storage nodes from HP.

The HP P4500 G2 is a 2U rack-mount unit with styling very typical of an HP DLxxx series server (charcoal grey with the port wine-coloured hot-swap SAS disks). Standard blue UID indicator lights are lit in the picture below.

Popping the top of the unit reveals nothing too out of the ordinary for a rack server – you can see the twelve 450GB 15K SAS disks at the front, the fans, the power supplies (top right) and the black air shroud covering the Intel CPU. The left side is where the HP P410 Smart Array controller lives, along with a rear-mounted DVD-ROM drive.

The G2 nodes are Intel-based, packing a single Xeon 5520 (Nehalem) running at 2.26 GHz with 8MB cache and 4 cores + HT (giving 8 threads). A lone 4GB DDR3 1333 MHz ECC memory module sits in the first channel, and a second, empty LGA 1366 CPU socket rounds out the view.

Another view showing the internal layout of the components in the HP P4500 G2. You can see the CPU and RAM bottom centre, the power supplies on the right, the Smart Array controller on the left and the DVD-ROM at the rear.

The back of the P4500 G2 has mostly what you would expect of a 2U rack server.

Dual hot-swap 750-watt power supplies, USB, serial, VGA, an iLO-2 management port and the all-important dual Intel Gigabit network interfaces used for iSCSI traffic and central management of the nodes. These NICs use an Intel 82576 chipset integrated onto the motherboard.

There’s room for a couple of x16 PCI Express cards – for those awesome 10GbE NIC upgrades (the current CX4 variety, or the much cheaper SFP+ based versions coming soon).

Finally, a DVD-ROM drive is installed at the back of this server.

Each unit weighs in at around 20kg and is very easy to rack-mount (as you would expect for any modern enterprise server or storage).

That about wraps it up for the hardware.

Where these units get really interesting is in the SAN/iQ software – the topic of future posts.