What network file sharing protocol has the best performance and reliability?

Solution 1:

I vote for NFS.

NFSv4.1 added the Parallel NFS (pNFS) capability, which makes parallel data access possible. I am wondering what kind of clients are using the storage; if they are only Unix-like, then I would go for NFS based on the performance figures.
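
As a minimal sketch, mounting an NFSv4.1 export on a Linux client looks something like this (the server name and paths are placeholders, and pNFS layouts also require server-side support):

    # Mount an NFSv4.1 export; vers=4.1 lets the client use pNFS if the server offers it
    mount -t nfs -o vers=4.1,hard,noatime nfs-server:/export/data /mnt/data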

Solution 2:

The short answer is use NFS. According to this shootout and my own experience, it's faster.

But you've got more options! You should consider a cluster FS like GFS, which is a filesystem multiple computers can access at once. Basically, you share a block device via iSCSI and format it as a GFS filesystem; all clients (initiators, in iSCSI parlance) can read and write to it. Red Hat has a whitepaper on it. You can also use Oracle's cluster FS, OCFS2, to manage the same thing.
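
A rough sketch of that setup with open-iscsi and GFS2 might look like the following (the target IP, cluster name, journal count, and device name are all assumptions, and the cluster locking stack must already be running on the nodes):

    # Discover and log in to the iSCSI target (initiator side)
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node --login

    # Format the shared block device with GFS2, one journal per node
    mkfs.gfs2 -p lock_dlm -t mycluster:webdata -j 2 /dev/sdb

    # Mount it on every node in the cluster
    mount -t gfs2 /dev/sdb /mnt/webdata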

The Red Hat paper does a good job listing the pros and cons of a cluster FS vs. NFS. Basically, if you want a lot of room to scale, GFS is probably worth the effort. Also, the GFS example uses a Fibre Channel SAN, but that could just as easily be RAID, DAS, or an iSCSI SAN.

Lastly, make sure to look into jumbo frames, and if data integrity is critical, enable CRC32 checksumming when running iSCSI with jumbo frames.
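
As a sketch, jumbo frames and iSCSI CRC32C digests are enabled roughly like this with iproute2 and open-iscsi (the interface name is an assumption):

    # Raise the MTU to 9000 for jumbo frames on the storage interface
    ip link set dev eth1 mtu 9000

    # In /etc/iscsi/iscsid.conf, turn on CRC32C checksums for headers and data
    node.conn[0].iscsi.HeaderDigest = CRC32C
    node.conn[0].iscsi.DataDigest = CRC32C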


Solution 3:

We have a two-server load-balancing web cluster. We have tried the following methods for syncing content between the servers:

  • Local drives on each server synced with RSYNC every 10 minutes
  • A central CIFS (SAMBA) share to both servers
  • A central NFS share to both servers
  • A shared SAN drive running OCFS2, mounted on both servers

The RSYNC solution was the simplest, but it took 10 minutes for changes to show up, and RSYNC put so much load on the servers that we had to throttle it with a custom script to pause it every second. We were also limited to writing only to the source drive.
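
For comparison, rsync's own options can do most of that throttling without a wrapper script; a sketch with placeholder paths and an arbitrary bandwidth cap:

    # One-way sync from the source server, capped at roughly 5 MB/s to limit load
    rsync -a --delete --bwlimit=5000 /var/www/ web2:/var/www/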

The fastest-performing shared drive was the OCFS2 clustered drive, right up until it went insane and crashed the cluster. We have not been able to maintain stability with OCFS2. As soon as more than one server accesses the same files, load climbs through the roof and the servers start rebooting. This may be a training failure on our part.

The next best was NFS. It has been extremely stable and fault tolerant. This is our current setup.

SMB (CIFS) had some locking problems. In particular, changes to files on the SMB server were not being seen by the web servers. SMB also tended to hang when failing over the SMB server.

Our conclusion was that OCFS2 has the most potential but requires a LOT of analysis before using it in production. If you want something straightforward and reliable, I would recommend an NFS server cluster with Heartbeat for failover.
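
As a sketch of the NFS side of that recommendation (the export path and subnet are placeholders, and the Heartbeat resource configuration is left out):

    # /etc/exports on the NFS server: share the web root with the cluster subnet
    /srv/www  192.168.0.0/24(rw,sync,no_subtree_check)

    # Re-export without restarting the NFS server
    exportfs -ra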


Solution 4:

I suggest you take a look at POHMELFS - it was created by the Russian programmer Evgeniy Polyakov and it is really, really fast.


Solution 5:

In terms of reliability and security, probably CIFS (aka Samba), but NFS "seems" much more lightweight, and with careful configuration it's possible not to completely expose your valuable data to every other machine on the network ;-)

No insult to the FUSE stuff, but it still seems...fresh, if you know what I mean. I don't know if I trust it yet. That could just be me being an old fogey, but old fogeyism is sometimes warranted when it comes to valuable enterprise data.

If you want to permanently mount one share on multiple machines, and you can play along with some of the weirdness (mostly UID/GID issues), then use NFS. I use it, and have for many years.
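
For what it's worth, a permanent NFS mount is just an /etc/fstab line along these lines (the server name and paths are placeholders):

    # /etc/fstab: mount the share at boot, once the network is up
    fileserver:/export/home  /mnt/home  nfs  defaults,_netdev  0  0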