VMs running on compute nodes and storage on Storage Spaces Direct


Hi,

I'm not sure if this is the correct forum, as the question crosses two areas. I understand that you build a Storage Spaces Direct cluster and present a Scale-Out File Server on top of it for VHD storage, and I gather it's recommended to have RDMA-capable 10 Gbps network adapters between the storage nodes for synchronization of storage between the nodes. What I'm not sure about is the network speed needed between the compute nodes, which run the VMs (memory and CPU), and the storage cluster, which hosts the VHDs. Does that link need to be 10 Gbps, or can I get away with a few 1 Gbps adapters in a team? What kind of traffic goes between the compute and storage clusters? I'm assuming it needs to be 10 Gbps, because when a VM issues a storage IO request, the speed of the disk system is limited by the network link. If the storage operates at 10 Gbps but the link from compute to storage is only 1 Gbps, will every data transfer be limited to 1 Gbps, or is only the request sent over that link, with the actual transfer occurring at 10 Gbps?

Many thanks



Hi Steve,

The network between the servers that make up the Scale-Out File Server should be 10 Gbps or better. This network is used for things like writing out data in multiple copies to ensure resiliency against a node being down, data reconstruction when a drive fails, and data synchronization when a node has been offline for servicing or similar.
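As a rough back-of-envelope illustration of why that east-west network matters (my own sketch, not an official formula; it assumes three-way mirroring and that the receiving node keeps one copy locally, which simplifies how Storage Spaces Direct really places data):

```python
def mirror_network_bytes(write_bytes: int, copies: int = 3) -> int:
    """Rough estimate of storage-network traffic generated by one write.

    Assumption: the node that receives the write keeps one copy locally
    and forwards the remaining copies to other nodes over the storage
    network. Real data placement in Storage Spaces Direct is more
    involved, but the multiplier is the point.
    """
    return write_bytes * (copies - 1)


one_gib = 1024 ** 3
# A 1 GiB write with 3-way mirroring pushes roughly 2 GiB across the
# storage network before it is fully resilient.
print(mirror_network_bytes(one_gib) / one_gib)  # → 2.0
```

So the inter-node network carries a multiple of the front-end write traffic, which is why 10 Gbps (ideally with RDMA) is recommended there.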

The network between the Hyper-V servers and the Scale-Out File Server dictates how fast storage IO is for the VMs on the Hyper-V servers. Technically you can use a 1 Gbps network, but you will find it slow unless your storage IO requirements are modest.
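The questioner's assumption is correct: end-to-end throughput is capped by the slowest hop in the path, not by the fastest. A minimal sketch of that arithmetic (the speeds and the ~125 MB/s figure are raw line rates, ignoring protocol overhead):

```python
def effective_throughput_gbps(*link_speeds_gbps: float) -> float:
    """End-to-end throughput is bounded by the slowest link in the path."""
    return min(link_speeds_gbps)


def gbps_to_mbytes_per_sec(gbps: float) -> float:
    """Convert a raw line rate in Gbit/s to MByte/s (no protocol overhead)."""
    return gbps * 1000 / 8


# A 10 Gbps storage backend reached through a 1 Gbps compute-to-storage link:
cap = effective_throughput_gbps(1.0, 10.0)
print(cap)                           # → 1.0 (Gbps)
print(gbps_to_mbytes_per_sec(cap))   # → 125.0 (MB/s)
```

In other words, the data itself flows over the compute-to-storage link, so a 1 Gbps hop limits VM storage IO to roughly 125 MB/s no matter how fast the storage cluster's internal network is.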

You can also run VMs on the same nodes that have the storage physically attached. The nodes should have sufficient CPU and memory resources to run both the storage and the VMs, and should be equipped with 10 Gbps or better networking. RDMA is optional. Many of our partners offer ready-made server configurations that have been tested extensively for this scenario.

Cheers,

Clausjor [MSFT]





