Sharing iSCSI NIC with Live Migration Traffic


We've got a blade server connected to an iSCSI SAN. The blade has six total ports (4 x 1 Gb and 2 x 10 Gb), and the 10 Gb ports are used for iSCSI connectivity, configured with MPIO. The blade is running Server 2012 with Hyper-V.
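
For reference, the MPIO side is set up roughly like this (a minimal sketch; the round-robin policy is just an example, your SAN vendor may recommend otherwise):

```powershell
# Install the MPIO feature (reboot may be required)
Install-WindowsFeature Multipath-IO

# Let the in-box DSM automatically claim iSCSI-attached disks for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Example policy: round-robin across both 10 Gb paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```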

I'd like to use the 10 Gb network for both iSCSI and "shared nothing" live migration (we're not looking at failover clustering quite yet) and am wondering if anyone else has a similar setup and whether you'd advise for or against it. I understand it's best practice to dedicate physical NICs to iSCSI when possible, but the blade is a bit "port challenged" and I'd love to take advantage of the 10 Gb speed for VM transfers. If that's not advisable, I'll team the four 1 Gb ports and route LM transfers over the team along with normal guest/OS traffic (roughly as sketched below).
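
If I go the teaming route, I'm picturing something like this (team, switch, and adapter names are placeholders):

```powershell
# Team the four 1 Gb ports (adapter names are placeholders)
New-NetLbfoTeam -Name "Team1Gb" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind a Hyper-V virtual switch to the team, shared with the management OS
New-VMSwitch -Name "GuestSwitch" -NetAdapterName "Team1Gb" -AllowManagementOS $true
```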

Thanks in advance. -jb

Well, it's a little less complicated to set up than you might think (especially in 2012), and there are great benefits: live migration with shared storage is faster than without, you get a central management point for your VMs (cluster manager), or better yet, you can start testing out VMM. As for concerns about the 10GbE setup, you can use it for other traffic, but I'd suggest implementing QoS policies so performance doesn't grind to a halt if something you're doing somehow chokes the pipe (unlikely) :)
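
For example, something along these lines using the built-in QoS filters; the bandwidth weights and subnet are just illustrative, so tune them for your environment:

```powershell
# Reserve the lion's share of the 10GbE links for iSCSI (built-in filter, TCP/UDP 3260)
New-NetQosPolicy -Name "iSCSI" -iSCSI -MinBandwidthWeightAction 60

# Give live migration (built-in filter, TCP 6600) a healthy slice without starving storage
New-NetQosPolicy -Name "Live Migration" -LiveMigration -MinBandwidthWeightAction 30

# Enable live migration and pin it to the 10 Gb subnet (subnet is a placeholder)
Enable-VMMigration
Add-VMMigrationNetwork 10.0.10.0/24

# Kerberos needs constrained delegation configured between hosts; CredSSP is the default
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
```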
