Virtual Switch on Teamed NICs = slow VMs?


 

I wonder if anyone has seen this before or might be able to suggest some troubleshooting pointers.

 

I have a Hyper-V evaluation system. The hardware is a Dell PowerEdge 2950: 2 x dual-core 3.0 GHz Xeons, 32 GB RAM, 2 internal Broadcom NetXtreme II NICs, 1 Intel PRO/1000 PT 4-port NIC, and an external SAS array (dual channels, 14-disk RAID 10).

 

The OS is RTM W2K8 Standard x64, running the in-the-box RTM version of Hyper-V.

 

I had been running with one of the internal NICs as the system's NIC and the other as the NIC the virtual switch was bound to. Once Intel released W2K8 drivers, I installed them, set the Intel card up as a 4-port team, created a new virtual switch, and bound it to the Intel team.
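In case it helps with sanity-checking that step, here is a minimal sketch of how the result could be inspected from the parent partition. It is my own illustration rather than part of the original setup: it assumes the third-party Python "wmi" package is installed on the host, and it only reads from the Server 2008 root\virtualization namespace, listing the virtual switches Hyper-V knows about and the external Ethernet ports it can see (the Intel team should show up by its friendly name).

```python
# Minimal sketch, assuming the third-party "wmi" package is available on the
# W2K8 parent partition.  It only enumerates what Hyper-V's WMI provider
# reports, so you can confirm the new switch exists and see which physical
# or teamed adapters the host exposes.
import wmi

# Hyper-V on Server 2008 publishes its objects under root\virtualization.
virt = wmi.WMI(namespace=r"root\virtualization")

print("Virtual switches:")
for switch in virt.Msvm_VirtualSwitch():
    # ElementName is the friendly name shown in Virtual Network Manager.
    print("  ", switch.ElementName)

print("External Ethernet ports visible to Hyper-V:")
for port in virt.Msvm_ExternalEthernetPort():
    # The Intel team (or the individual NICs) should appear here by name.
    print("  ", port.ElementName)
```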

 

I then fired up the VMs and found that the VMs with multiple virtual NICs were unresponsive. Clicks take forever to be responded to, windows open like molasses, etc. The VMs respond to pings without issue, though (~4000 sent, 1 lost, <1 ms response time).
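One thing worth keeping in mind here: ICMP echo is answered very low in the network stack, so pings can look perfect while everything above them crawls. A rough way to put a number on the sluggishness is to time full TCP connects to a service on an affected VM instead of pinging it. The sketch below uses only the Python standard library; the VM address and port 3389 (RDP) are placeholders I picked for illustration, not values from the setup above.

```python
# Rough sketch: time TCP connection setup to a VM instead of relying on ping.
# The address and port are placeholders -- point them at an affected VM and a
# service it actually runs (3389 = RDP here).
import socket
import time

VM_ADDRESS = "192.168.1.50"   # hypothetical VM IP
PORT = 3389                   # RDP, as an example of traffic above ICMP
SAMPLES = 20

times_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((VM_ADDRESS, PORT), timeout=5):
            pass
    except OSError as exc:
        print("connect failed:", exc)
        continue
    times_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.5)

if times_ms:
    print("TCP connect latency over %d samples: min %.1f ms, avg %.1f ms, max %.1f ms"
          % (len(times_ms), min(times_ms),
             sum(times_ms) / len(times_ms), max(times_ms)))
```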

 

The VMs in question are running W2K8 Standard x64. (These dual-NIC'd VMs are for testing an Exchange 2007 rollout, in case you're curious -- HT servers, MB servers, and CAS servers...)

 

These configs worked fine with the solo-NIC virtual switch. Any ideas on where to look? Have I forgotten to provide any useful clues?

 

Thanks in advance for any advice or smacks upside the head!

 

=== mf

 

 

Generally speaking, if things function correctly without the teaming driver present but do not work with the teaming driver present, it is 99% likely a bug in the teaming driver. All we do on our side is use the standard NDIS interfaces under Windows.

 

Cheers,

Ben


