Hardware involved in Direct-connect (switchless) 2-node Storage Spaces Direct Cluster

1) You don't need any special (crossover) Ethernet cabling for a direct connection; since the 1 GbE era NICs support auto MDI-X, so the link will auto-negotiate on its own (a quick way to sanity-check the link is sketched after the link below).

https://en.wikipedia.org/wiki/Medium-dependent_interface
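
If you want to confirm the back-to-back ports actually negotiated the speed you expect, a minimal PowerShell check could look like this (the adapter names are placeholders, swap in whatever your NICs are called):

```powershell
# Sketch: verify the directly connected ports came up at the expected speed.
# "SLOT 3 Port 1" / "SLOT 3 Port 2" are placeholder adapter names.
Get-NetAdapter -Name "SLOT 3 Port 1","SLOT 3 Port 2" |
    Select-Object Name, Status, LinkSpeed, FullDuplex

# Optional: look at the speed-related driver settings on one of the ports.
Get-NetAdapterAdvancedProperty -Name "SLOT 3 Port 1" |
    Where-Object DisplayName -like "*Speed*" |
    Select-Object DisplayName, DisplayValue
```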

2) The Broadcom NICs you picked are a poor choice; expect performance and stability issues with them. Go with the Mellanox ConnectX-4 Lx family instead, it's far better (and you can verify the interconnect is healthy with the sketch below).
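
One reason those Mellanox cards are the usual pick for direct-connect S2D is RDMA; the original post doesn't mention RDMA explicitly, but if your NICs support it, it's worth confirming it's actually enabled before building the cluster. A minimal check, with placeholder adapter names, might be:

```powershell
# Sketch: confirm RDMA is present and enabled on the cluster interconnect ports.
# "SMB1" / "SMB2" are placeholder adapter names, rename them to match your setup.
Get-NetAdapterRdma -Name "SMB1","SMB2" | Select-Object Name, Enabled

# Enable RDMA if the driver exposes it but it is currently switched off.
Enable-NetAdapterRdma -Name "SMB1","SMB2"

# Once the cluster is up, check which interfaces SMB sees as RDMA-capable.
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable
```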

3) 10GBASE-T has higher latency than SFP+ (DAC or fiber) connections. Don't go with -T, especially if you want to cross-connect the nodes and avoid a switch.

http://www.fiber-optic-cable-sale.com/10g-technology-10gbase-t-technology-vs-sfp-plus-technology.html

4) Two-node S2D is fragile: while one node is down for its weekly patch/reboot cycle, you have no protection against a second disk or node fault on the remaining node (at minimum, make sure all repair jobs have finished before taking the other node down; see the sketch after the link below). Consider StarWind VSAN instead (there is a free edition), which is both cheaper and noticeably faster than the Microsoft solution.

https://www.starwindsoftware.com/starwind-virtual-san
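
Whatever you pick, if you do run 2-node S2D, at least check that the pool is fully healthy and no repair jobs are still running before you reboot the second node. This is my own habit rather than an official procedure; a minimal pre-reboot check could be:

```powershell
# Sketch: pre-reboot sanity check before patching the *other* node.
# Only proceed when everything reports Healthy/OK and no storage jobs are running.
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-StorageJob | Where-Object JobState -ne Completed

# Then drain the roles off the node you are about to patch ("NODE2" is a placeholder).
Suspend-ClusterNode -Name "NODE2" -Drain -Wait
```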


For 10GBASE-T networking you need Cat6a or Cat7 cable; plain Cat6 only carries 10GBASE-T up to about 55 m, while Cat6a and Cat7 are rated for the full 100 m. However, as already mentioned, SFP+ remains the better option for latency, stability, and overall throughput.

Year after year (ever since S2D reached GA) I've tried to configure it for 2 nodes, and it keeps falling short: overall stability is poor, the number of failures it can tolerate is low, and Storage Spaces itself is the weak link. Never use parity Storage Spaces on HDDs. Recovery is also painful when a disk fails in Storage Spaces Direct (or plain Storage Spaces); you end up wrestling with clunky PowerShell commands (a rough outline of that workflow is sketched below).
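
For reference, the disk-replacement path I mean goes roughly like this in PowerShell. This is a rough sketch under typical assumptions, not a full runbook; the disk serial number and pool name are placeholders for your environment:

```powershell
# Sketch: retire and replace a failed disk in a Storage Spaces (Direct) pool.
# The serial number and pool name below are placeholders.
$failed = Get-PhysicalDisk | Where-Object SerialNumber -eq "PLACEHOLDER-SERIAL"

# Mark the disk as retired so new allocations avoid it.
Set-PhysicalDisk -UniqueId $failed.UniqueId -Usage Retired

# Kick off repair of the virtual disks and watch the rebuild jobs.
Get-VirtualDisk | Repair-VirtualDisk -AsJob
Get-StorageJob

# Once repairs have completed, remove the dead disk from the pool.
# (On S2D the replacement drive is auto-pooled; otherwise use Add-PhysicalDisk.)
Remove-PhysicalDisk -PhysicalDisks $failed -StoragePoolFriendlyName "S2D on Cluster1"
```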

If you are looking to cluster 2 Hyper-V servers, take a look at other SDS solutions such as StarWind vSAN or HPE StoreVirtual. They are much more stable in a 2-node switchless configuration and come with a convenient GUI (and support both Storage Spaces and hardware RAID as the underlying storage).