Synology and VMware with 4-way MPIO: slow iSCSI speeds

  1. It might be accelerated by the VAAI ZERO primitive (I can't tell exactly on your outdated vSphere version), but it's a sequential write either way. It also depends on how you created your iSCSI target: newer DSM versions create Advanced LUNs by default, which sit on top of a file system, while older versions defaulted to using LVM disks directly and performed much worse.
  2. ~400 MB/s should be achievable
  3. 400 MB/s is not a problem, provided the target can deliver the IO
  4. If you're looking at pure sequential throughput, then dd on the Linux side or a simple CrystalDiskMark run on Windows will do (see the sketch after this list).
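
A minimal sketch of both checks, assuming a Linux VM sitting on the iSCSI datastore and SSH access to the ESXi host; the file path, size and the naa.* device ID are placeholders for your own values:

  # Sequential write from inside a Linux guest; oflag=direct bypasses the guest page cache
  dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=8192 oflag=direct conv=fsync

  # Sequential read of the same file
  dd if=/mnt/test/ddfile of=/dev/null bs=1M iflag=direct

  # On the ESXi host: check whether VAAI primitives (incl. Zero Status) are supported for the LUN
  esxcli storage core device vaai status get -d naa.XXXXXXXXXXXXXXXX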

LAGs and iSCSI usually don't mix. Disable bonding on the Synology, configure the ports as separate interfaces, and enable multi-initiator iSCSI on the target. Unfortunately I don't have a Synology at hand for exact instructions.

Configure vSphere like this (a rough port-binding sketch follows the list):

  • vSphere initiator --> Synology target IP/port 1
  • vSphere initiator --> Synology target IP/port 2
  • vSphere initiator --> Synology target IP/port 3
  • vSphere initiator --> Synology target IP/port 4
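
With the software iSCSI adapter that mapping comes from port binding. A sketch only, assuming four vmkernel ports vmk1-vmk4 (each with a single active uplink), a software adapter named vmhba64 and a Synology portal at 192.168.10.1; swap in your own names and addresses:

  # Bind each iSCSI vmkernel port to the software iSCSI adapter
  esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
  esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk3
  esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk4

  # Point dynamic discovery at the Synology portal, then rescan
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.1:3260
  esxcli storage core adapter rescan --adapter=vmhba64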

Disable unnecessary paths, keeping one vSphere source IP to one Synology IP (path commands below); vSphere supports only 8 paths per target on iSCSI, though this isn't enforced. I don't remember if you can limit target access per source on the Synology side, likely not. You already have enough paths for reliability, and more will not help since you're likely bandwidth limited.
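
You can list and disable paths from the host CLI as well as the client; in this sketch the device ID and the path's runtime name are placeholders you'd read from the list output first:

  # Show all paths to the Synology LUN
  esxcli storage core path list --device=naa.XXXXXXXXXXXXXXXX

  # Turn off a redundant path (runtime name taken from the listing above)
  esxcli storage core path set --state=off --path=vmhba64:C0:T1:L1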

Change the Round Robin IOPS limit to a lower value, see https://kb.vmware.com/s/article/2069356. Otherwise 1000 IOs go down one path before a path change occurs.
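
The KB boils down to something like this, with the naa.* device ID as a placeholder:

  # Make sure the LUN uses Round Robin, then switch paths after every IO instead of every 1000
  esxcli storage nmp device set --device=naa.XXXXXXXXXXXXXXXX --psp=VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXXXXXXXXXX --type=iops --iops=1

  # Verify the setting took
  esxcli storage nmp psp roundrobin deviceconfig get --device=naa.XXXXXXXXXXXXXXXX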

Keep using jumbo frames. It's roughly a 5% win on bandwidth alone, and on gigabit you can easily become bandwidth starved.
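
MTU 9000 has to match end to end: vSwitch, the iSCSI vmkernel ports, the physical switch and the Synology interfaces. A quick sketch, with vSwitch1, vmk1 and the Synology IP as assumptions:

  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000

  # Confirm 9000-byte frames pass unfragmented (8972 = 9000 minus IP/ICMP headers)
  vmkping -d -s 8972 192.168.10.1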