10GigE – changing VMware vSwitch best practices

As 10GbE adapter prices (for rack and blade servers) and per port switch prices plummet, this tech is fast becoming the easy choice when implementing new servers and storage.

In the past few years, best practice for ESX vSwitches has been to separate traffic types onto different vSwitches, each with dedicated, aggregated 1GigE links.

Most ESX hosts would have a setup similar to this:

Service Console/vMotion on one vSwitch with 2 x 1GigE NICs

LAN/VM Network traffic on another vSwitch with 2 x 1GigE NICs

iSCSI/VMkernel traffic on a third vSwitch with 2 x 1GigE NICs

This meant that you had at least six Ethernet cables snaking their way around your rack into structured cabling or top of rack switches.

With an armada of rack servers or a few blade chassis, cabling and network management can soon become an issue.

Enter affordable 10GigE adapters, cheap SFP+ direct-attach cables (up to 8.5 m) and falling prices on 10GigE switch ports: cable mayhem is history.

So now the networking configuration of your average ESXi host might look like this:

Two fat 10Gig pipes into a single vSwitch, everything separated via port groups and VLANs. Simple, secure and manageable with plenty of performance.
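As a rough sketch of that layout, the single-vSwitch setup could be built with `esxcli` along these lines. The vmnic names, port group names and VLAN IDs here are placeholders, not a recommendation; substitute whatever matches your environment:

```shell
# Sketch: one vSwitch, two 10GigE uplinks, everything split by port group/VLAN.
# vmnic4/vmnic5 and the VLAN IDs below are example values only.
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic5

# One port group per traffic type, separated by VLAN tag
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Management"
esxcli network vswitch standard portgroup set --portgroup-name="Management" --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="vMotion"
esxcli network vswitch standard portgroup set --portgroup-name="vMotion" --vlan-id=20
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="iSCSI"
esxcli network vswitch standard portgroup set --portgroup-name="iSCSI" --vlan-id=30
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="VM Network"
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=40
```

The upstream switch ports would need to trunk the same VLANs for this to pass traffic.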

Add some NIC team override settings and you can have different traffic nicely split across both paths.
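One way to express those overrides, again as a hedged example with placeholder names: set a per-port-group failover policy so different traffic types prefer different uplinks, with the other NIC as standby for failover:

```shell
# Sketch: per-port-group active/standby overrides so traffic is split
# across both 10GigE uplinks. vmnic4/vmnic5 are example NIC names.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management" --active-uplinks=vmnic4 --standby-uplinks=vmnic5
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="iSCSI" --active-uplinks=vmnic5 --standby-uplinks=vmnic4
```

Under normal operation each traffic type stays on its preferred pipe; if an uplink fails, everything converges onto the survivor.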

Of course one config may not suit all, but I would wager that this would cover most ESX workloads out there.

This is easily my new best practice for ESX vSwitch configuration.

2 thoughts on “10GigE – changing VMware vSwitch best practices”

  1. Hi

    Good article

    So how does the use of a DVS (distributed vSwitch) impact this design? You then have the management and kernel connections in the DVS, which isn't best practice.

    The cost of the additional 10G connections needed to get that separation would be high, though.

    But like you say, this isn't for every design.

