As 10GigE adapter prices (for rack and blade servers) and per-port switch prices plummet, this tech is fast becoming the easy choice when implementing new servers and storage.
For the past few years, best practice for ESX vSwitches has been to separate traffic types onto different vSwitches, each with dedicated, aggregated 1GigE links.
Most ESX hosts would have a setup similar to this:
Service Console/vMotion on one vSwitch with 2 x 1GigE NICs
LAN/VM Network traffic on another vSwitch with 2 x 1GigE NICs
iSCSI/VMkernel traffic on a third vSwitch with 2 x 1GigE NICs
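For reference, a build like this was typically scripted on the Service Console with the esxcfg-vswitch commands. A rough sketch; the vSwitch names, vmnic numbering and port group names here are my own assumptions, not a fixed convention:

```shell
# vSwitch0: Service Console + vMotion on two dedicated 1GigE uplinks
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0

# vSwitch1: LAN/VM Network traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# vSwitch2: iSCSI/VMkernel traffic
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "iSCSI" vSwitch2
```

Three vSwitches, six uplinks, six cables per host.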
This meant that you had at least six Ethernet cables snaking their way around your rack into structured cabling or top-of-rack switches.
With an armada of rack servers or a few blade chassis, cabling and network management can soon become an issue.
Enter affordable 10GigE adapters, cheap SFP+ direct-attach cables (up to 8.5 m) and dropping prices on 10GigE switch ports: cable mayhem is history.
So now the networking configuration of your average ESXi host might look like this:
All traffic types (Management/vMotion, VM Network and iSCSI/VMkernel port groups) on a single vSwitch with 2 x 10GigE NICs
Add some NIC teaming override settings at the port group level and you can have the different traffic types nicely split across both paths, with each falling back to the other NIC on failure.
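On ESXi these overrides can also be set from the command line. A sketch using the esxcli syntax from later ESXi releases; the port group names and the vmnic0/vmnic1 split are assumptions for illustration:

```shell
# One vSwitch with both 10GigE uplinks attached
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Per-port-group failover overrides: steer management/vMotion down one
# 10GigE path and iSCSI down the other, each with the opposite NIC on standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="vMotion" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="iSCSI" --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

Under normal operation each 10GigE link carries a distinct traffic mix; lose either NIC and everything converges onto the survivor.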
Of course one config may not suit all, but I would wager that this would cover most ESX workloads out there.
This is easily my new best practice for ESX vSwitch configuration.