Difference between virtual machine storage and SAN storage

When it comes to the performance delta between a VM-based server and a bare-metal one, the bare-metal solution should clearly be faster, since it has fewer abstraction layers and therefore less penalty on I/O operations. That holds for the case where a simple VHD is used. However, there is an option to improve VM performance with block-level iSCSI storage: a VM that uses an iSCSI LUN directly gets faster I/O, since operations are performed much as they would be against the hardware directly.
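
As a rough illustration of what "block level" means here: a Linux guest can log in to the iSCSI target itself with open-iscsi, so the LUN shows up inside the guest as a raw block device rather than a file-backed virtual disk. The portal address and target IQN below are made up; treat this as a minimal sketch, not a tested setup.

```python
import subprocess

# Hypothetical values -- replace with your SAN's portal address and target IQN.
PORTAL = "192.168.10.50:3260"
TARGET_IQN = "iqn.2008-08.com.starwindsoftware:sw-vsan-target1"

# Discover the targets the portal exposes (standard open-iscsi command).
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
               check=True)

# Log in to the target; the LUN then appears as a raw block device (e.g. /dev/sdX)
# inside the guest, so I/O bypasses the hypervisor's virtual-disk file layer.
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"],
               check=True)
```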

For that, you can take a look at StarWind Virtual SAN, HPE VSA, and UnityVSA, all of which provide iSCSI storage. I can recommend StarWind since it runs at the speed of DAS storage, and it utilizes that DAS performance even more efficiently with DRAM and SSD caching enabled.


Usually a shared storage array would be used for the management features, not necessarily the performance: high availability, shared storage, clones and snapshots, centralized management.

For database applications in particular, it may be valuable to make use of array-level clones, via raw devices, for backup and test environments.

I don't think it makes a large difference with raw devices versus abstractions like VMware datastores. You would have to run some performance tests in your configuration to see, however.
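
For example, a quick-and-dirty way to compare synced small-write latency on the two paths (fio is the proper tool, but even a short script shows whether the gap is measurable in your setup). The mount points below are hypothetical stand-ins for a VMDK-backed filesystem and a raw-device-backed one.

```python
import os
import statistics
import time


def write_latency_ms(path, block_size=4096, iterations=200):
    """Write small blocks with fsync and return the average latency in milliseconds."""
    buf = os.urandom(block_size)
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        for _ in range(iterations):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)                      # force the write down to stable storage
            samples.append((time.perf_counter() - start) * 1000)
    finally:
        os.close(fd)
        os.unlink(path)
    return statistics.mean(samples)


# Hypothetical mount points: one filesystem on a VMDK, one on a raw/iSCSI device.
for label, path in [("vmdk-backed", "/mnt/vmdk/latency.tmp"),
                    ("raw-device", "/mnt/raw/latency.tmp")]:
    print(f"{label}: {write_latency_ms(path):.2f} ms avg per 4 KiB synced write")
```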

And regarding host-local storage versus a shared array, either can perform quite well, particularly with some of the faster solid-state storage.


First, my background. For the past 10 years, I have managed an environment running over 700 production servers, 75 of which are SQL Servers. I've gone through iterations of bare metal, VM with direct SAN storage, and VM with VMDKs that live on SAN. You mentioned VMware, so I'm going to stick with that tech stack for this reply.

When comparing like hardware (SSD, HDD, etc.), bare metal is always faster. However, when virtualizing, I've found no measurable difference between a VM that interacts directly with the SAN for storage and one that uses VMDKs living on the same SAN. The added layer of virtualization doesn't increase latency enough to notice in most cases. However, everyone's use case is different. It could be that VMDKs are faster. For example, are you looking at iSCSI to the SAN for direct-attached storage over a 1Gb Ethernet interface, while your compute layer connects to the SAN over 8Gb or 16Gb Fibre Channel? If so, VMDK would be faster.
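
To put rough numbers on that last scenario, the theoretical line-rate ceilings alone (ignoring protocol and encoding overhead) show why the fabric can matter more than the extra VMDK layer:

```python
# Rough line-rate ceilings, ignoring protocol/encoding overhead (IP/iSCSI vs. FC framing).
links_gbit = {"1 Gb Ethernet (in-guest iSCSI)": 1,
              "8 Gb Fibre Channel (host HBA)": 8,
              "16 Gb Fibre Channel (host HBA)": 16}

for name, gbit in links_gbit.items():
    mb_per_s = gbit * 1000 / 8          # gigabits/s -> megabytes/s
    print(f"{name}: ~{mb_per_s:.0f} MB/s ceiling")

# 1 GbE tops out around 125 MB/s, so VMDKs riding an 8/16 Gb FC fabric have far
# more headroom despite the extra virtualization layer.
```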

With that said, there are many other advantages to going VMDK, vMotion and Storage DRS being two. One potential drawback is that, depending on the size of your VMDK, you cannot increase its size while the VM is running; I believe VMware stops letting you hot-add space at 2 TB. If you're talking about smaller VMDKs, this doesn't apply. With that, my recommendation is to virtualize the whole stack.
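
If you do go the VMDK route, growing a disk that is still under the hot-extend limit can be scripted. Here is a minimal pyVmomi sketch; the vCenter address, credentials, VM name, and target size are all made up, and error/task handling is omitted.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details -- replace with your vCenter and credentials.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (hypothetical VM "sql01").
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")

# Grab the first virtual disk and grow it to 500 GB (stay under the hot-extend
# ceiling mentioned above if the VM is powered on).
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.capacityInKB = 500 * 1024 * 1024

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)])
vm.ReconfigVM_Task(spec=spec)   # returns a Task; monitor it as needed
Disconnect(si)
```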

Also, if I were to recommend a SAN technology for databases, I have found Nimble (http://www.nimblestorage.com) to be unmatched in the industry, particularly when coupled with VMware. Blazing fast, affordable, stable. All around, a solid solution.