This is yet another voice adding to the opinion that jumbo frames are worth the effort, especially in a 10GbE iSCSI environment undergoing a hardware refresh of hosts, storage, and switches, or in a pure greenfield deployment.
The ease of configuring jumbo frames across your environment is helped by a couple of things:
Most 10GbE switches ship with an MTU of 9214 or 9216 out of the box, so they are ready for hosts and storage to be tuned to match. The Arista 7150 units I tested are configured this way by default.
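As an illustration, on Arista EOS the port MTU can be checked or set with something like the following (a sketch only; the interface range is an example, so verify it against your own port map):

```shell
switch# show interfaces Ethernet1 | include MTU
switch# configure terminal
switch(config)# interface Ethernet1-24
switch(config-if-Et1-24)# mtu 9214
```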
Changing vSphere virtual switches and VMkernel interfaces to a larger MTU is now very easy via the vSphere Client and Web Client.
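The same change can also be scripted from the ESXi shell with esxcli, and verified end to end with vmkping (a sketch, assuming a standard vSwitch; `vSwitch1`, `vmk1`, and the target IP are placeholder names for your iSCSI vSwitch, VMkernel port, and storage array interface):

```shell
# Raise the MTU on the standard vSwitch carrying iSCSI traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the iSCSI VMkernel interface
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify jumbo frames pass unfragmented end to end:
# -d sets the don't-fragment bit; 8972 = 9000 minus 28 bytes of IP/ICMP headers
vmkping -d -s 8972 192.168.100.50
```

If the vmkping fails while a default-size ping succeeds, something in the path (host, switch port, or array interface) is still at 1500.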
Hardware I tested with:
vSphere host: Cisco UCSC-C220-M3S, 16-core, 196GB Memory, Intel X520 Dual Port 10Gb SFP+ Adapter. This was running ESXi 5.1 Update 1.
Switch: Arista 7150S-24 (24 SFP+ ports)
Storage: Nimble Storage CS260G (10GbE iSCSI)
This was not meant to be an all-encompassing scientific test across all known workloads, just some synthetic sequential benchmarks to see what difference existed between a default MTU of 1500 bytes and an MTU of 9000 bytes for iSCSI.
I set up a VM with Windows Server 2012, 4 vCPUs, 8GB RAM, a 100GB system disk, and a 100GB data disk (for IOMeter). This sat on a 1TB iSCSI VMFS5 datastore presented from the Nimble.
Using the good old IOMeter, I set up a basic sequential read test of a 2GB file with 32 outstanding I/Os, then ran 4K, 8K, and 32K tests for 5 minutes each (once with the ESXi vSwitches/VMkernel interfaces and the Nimble's data interfaces at an MTU of 1500, and once with them set to 9000). The Arista stayed at its default MTU of 9214 across all ports throughout.
The results showed a consistent performance increase of more than 10% in both IOPS and throughput.
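The direction of that result matches simple protocol-overhead arithmetic (a back-of-the-envelope sketch, assuming plain TCP/IP headers with no options and no VLAN tag):

```shell
# Payload efficiency per Ethernet frame = TCP payload / bytes on the wire
# Inside the MTU: 20B IP header + 20B TCP header
# Outside the MTU: 18B Ethernet header + FCS
for mtu in 1500 9000; do
  awk -v mtu="$mtu" 'BEGIN {
    payload = mtu - 40
    frame   = mtu + 18
    printf "MTU %d: %.2f%% payload efficiency\n", mtu, 100 * payload / frame
  }'
done
```

The framing math alone only accounts for a few percent (roughly 96% vs 99% efficiency), so the remainder of the measured gain presumably comes from moving fewer, larger packets: fewer interrupts and less per-packet CPU and iSCSI processing on host and array.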
It would be difficult not to recommend jumbo frames as a default configuration for new 10GbE iSCSI or NAS environments.
For more great in-depth reading on jumbo frames and vSphere, make sure to check out Michael Webster's (@vcdxnz001) posts on his blog: