Scale-Out File Server with Storage Spaces Direct

Your scenario is fine if you are OK with paying loads of money for licensing. You will also need an interconnect fabric and switches that support PFC in order to use the SMB Direct feature (I assume you would like to use it because of your RDMA-capable Mellanox NICs, which are awesome).
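
As a rough illustration of the PFC/DCB side of that, here is a minimal sketch. It assumes you tag SMB traffic with priority 3 and that the adapter names ("SLOT 2 Port 1" / "SLOT 2 Port 2") are placeholders for your Mellanox ports; your switch configuration has to match whatever priority and bandwidth reservation you pick:

```powershell
# Install DCB and verify the NICs actually expose RDMA before touching QoS.
Install-WindowsFeature -Name Data-Center-Bridging
Get-NetAdapterRdma | Format-Table Name, Enabled

# Tag SMB (TCP 445) with priority 3 and enable PFC for that priority only.
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB and apply QoS on the relevant adapters (names are placeholders).
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 2 Port 1", "SLOT 2 Port 2"

# Confirm SMB Direct will actually be used on these interfaces.
Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, RdmaCapable, RssCapable
```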

Your drives allow you to create a parity-based capacity tier and add some SSD-based caching (or a faster tier) on top of it, which is good. Still, I would not expect exceptional performance from this setup unless your working set fits entirely within the SSD cache/tier: the software parity RAID/RAIN used in S2D still sucks in terms of performance.
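
For illustration only, this is roughly what carving such a mixed volume out of an S2D pool could look like. The pool name, tier names, and sizes below are placeholders; check what Get-StorageTier actually reports on your pool before reusing them:

```powershell
# See which tiers S2D created ("Performance"/"Capacity" are the usual defaults).
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName

# One volume spanning a small mirrored SSD tier and a large parity HDD tier.
New-Volume -StoragePoolFriendlyName "S2D on S2DCLUSTER" `
           -FriendlyName "MixedVolume01" `
           -FileSystem CSVFS_ReFS `
           -StorageTierFriendlyNames "Performance", "Capacity" `
           -StorageTierSizes 500GB, 8TB
```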

There is a lot of information on how to plan an S2D cluster on TechNet: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview

Step-by-step guidance that covers your particular scenario can be found here: https://www.starwindsoftware.com/blog/microsoft-storage-spaces-direct-4-node-setup-2
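
At its core, the S2D part of those guides boils down to a handful of commands. A minimal sketch, assuming four nodes named NODE1-NODE4 (names are placeholders) and that cluster validation passes:

```powershell
# Validate the prospective nodes, including the S2D-specific tests.
Test-Cluster -Node NODE1, NODE2, NODE3, NODE4 `
             -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without auto-adding disks to classic cluster storage.
New-Cluster -Name S2DCLUSTER -Node NODE1, NODE2, NODE3, NODE4 -NoStorage

# Claim the local drives of every node into a single software-defined pool.
Enable-ClusterStorageSpacesDirect -CimSession S2DCLUSTER

# Put the Scale-Out File Server role on top for the SoFS scenario.
Add-ClusterScaleOutFileServerRole -Name SOFS01 -Cluster S2DCLUSTER
```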


There's very little to zero sense in building a SoFS based on S2D. Here's why:

1) Datacenter edition everywhere. It's expensive ($6K+ per node), and while in the hyperconverged scenario you at least get licensed Windows Server VMs for that money, with SoFS you pay for... nothing! There are no VMs to run!

2) A dual-head (two-node) configuration has been possible since TP5 AFAIK, but at two nodes there are no local reconstruction codes (read: the cluster isn't tolerant to a double disk failure, which is NONSENSE in the storage array world!), no erasure coding, and no multi-resilient virtual disks. Going to more heads fixes those issues, but that's (4 * Datacenter) licenses and... did you see many quad-active storage controllers in storage arrays?! Yup, they come in pairs. Sometimes threes (Infinidat?).
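
If you want to check the fault tolerance of a given setup yourself, it is visible right on the virtual disk objects; a quick sketch:

```powershell
# PhysicalDiskRedundancy = how many simultaneous drive failures a virtual disk tolerates.
# A two-way mirror on a two-node S2D cluster reports 1, i.e. a double disk failure loses data.
Get-VirtualDisk |
    Format-Table FriendlyName, ResiliencySettingName, NumberOfDataCopies, PhysicalDiskRedundancy
```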

So stick with Clustered Storage Spaces as the SoFS back end (a super mature solution), or use some replication between the nodes, for much less than $12K in software alone.
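
For completeness, a rough sketch of the Clustered Storage Spaces route (shared SAS JBOD behind a SoFS). The pool, disk, share, and group names are placeholders, and the disk initialization/formatting/CSV steps are omitted for brevity:

```powershell
# Pool the shared SAS disks that every node can see.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SharedPool" `
                -StorageSubSystemFriendlyName "Clustered*" `
                -PhysicalDisks $disks

# Carve out a mirrored space (initialize, format, and add to CSV afterwards).
New-VirtualDisk -StoragePoolFriendlyName "SharedPool" -FriendlyName "SoFSDisk01" `
                -ResiliencySettingName Mirror -UseMaximumSize

# Add the SoFS role and publish a continuously available share for Hyper-V/SQL workloads.
Add-ClusterScaleOutFileServerRole -Name SOFS01
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
             -ContinuouslyAvailable $true -FullAccess "DOMAIN\Hyper-V Hosts"
```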