Options to efficiently synchronize 1 million files with remote servers?

Solution 1:

Since instant updates are also acceptable, you could use lsyncd.
It watches directories (inotify) and will rsync changes to slaves.
At startup it will do a full rsync, so that will take some time, but after that only changes are transmitted.
Recursive watching of directories is possible, and if a slave server is down the sync will be retried until it comes back up.

If this is all in a single directory (or a static list of directories) you could also use incron.
The drawback there is that it does not allow recursive watching of folders and you need to implement the sync functionality yourself.
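
If you go the incron route, the piece you have to write yourself is just the command incron runs for each event. Here is a minimal sketch in Python, assuming hypothetical slave host names and destination path, and assuming the watcher passes the changed path as the script's only argument:

    # sync_one.py - minimal per-file sync handler that incron (or any other
    # watcher) could invoke with the changed path as its only argument.
    # Host names and the destination root are placeholders.
    import subprocess
    import sys

    SLAVES = ["server%02d.example.com" % i for i in range(1, 11)]
    DEST = "/data/images/"          # assumed destination root on every slave

    def push(path):
        for host in SLAVES:
            # -R recreates the source path below DEST on the remote side
            subprocess.run(["rsync", "-aR", path, "%s:%s" % (host, DEST)])

    if __name__ == "__main__":
        push(sys.argv[1])

The matching incrontab entry would watch the directory for close-write events and call this script via incron's path wildcards; see the incrontab(5) man page for the exact mask and wildcard syntax.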

Solution 2:

Consider using a distributed filesystem, such as GlusterFS. Being designed with replication and parallelism in mind, GlusterFS may scale up to 10 servers much more smoothly than ad-hoc solutions involving inotify and rsync.

For this particular use-case, one could build a 10-server GlusterFS volume of 10 replicas (i.e. 1 replica/brick per server), so that each replica would be an exact mirror of every other replica in the volume. GlusterFS would automatically propagate filesystem updates to all replicas.
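
As a rough illustration of that setup (wrapped in Python only to keep the sketches in this thread in one language; the host names, brick paths and volume name are made up, and all peers are assumed to be probed already):

    # Create and start a 10-way replicated GlusterFS volume through the
    # gluster CLI: one brick per server, so every brick holds a full copy.
    import subprocess

    SERVERS = ["server%02d.example.com" % i for i in range(1, 11)]
    BRICKS = ["%s:/data/brick1/images" % s for s in SERVERS]

    subprocess.run(["gluster", "volume", "create", "images",
                    "replica", str(len(SERVERS))] + BRICKS, check=True)
    subprocess.run(["gluster", "volume", "start", "images"], check=True)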

Clients in each location would contact their local server, so read access to files would be fast. The key question is whether write latency could be kept acceptably low. The only way to answer that is to try it.


Solution 3:

I doubt rsync would work for this in the normal way, because scanning a million files and comparing them to the remote system ten times over would take too long. I would try to implement a system with something like inotify that keeps a list of modified files and pushes them to the remote servers (if these changes don't get logged in another way anyway). You can then use this list to quickly identify the files that need to be transferred, maybe even with rsync (or better, 10 parallel instances of it).

Edit: With a little bit of work, you could even use this inotify/log watch approach to copy the files over as soon as the modification happens.
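
A sketch of that idea, assuming the Python watchdog package for the inotify side and rsync's --files-from option for the transfer (host names, paths and the flush interval are placeholders, and deletions would still need separate handling):

    # Collect the paths that change, then periodically push just those files
    # to every remote server with "rsync --files-from", one rsync per server
    # in parallel.
    import os
    import subprocess
    import tempfile
    import threading
    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    SOURCE = "/data/images"                # assumed local root
    SLAVES = ["server%02d.example.com" % i for i in range(1, 11)]
    changed = set()                        # paths modified since the last flush
    lock = threading.Lock()

    class Collector(FileSystemEventHandler):
        """Record every file that changes below SOURCE."""
        def on_any_event(self, event):
            if not event.is_directory:
                with lock:
                    changed.add(os.path.relpath(event.src_path, SOURCE))

    def flush():
        """Push the accumulated file list to all slaves, one rsync per slave."""
        with lock:
            if not changed:
                return
            batch = sorted(changed)
            changed.clear()
        with tempfile.NamedTemporaryFile("w", delete=False) as f:
            f.write("\n".join(batch))
            listfile = f.name
        procs = [subprocess.Popen(["rsync", "-a", "--files-from=" + listfile,
                                   SOURCE, "%s:/data/images/" % host])
                 for host in SLAVES]
        for p in procs:
            p.wait()
        os.unlink(listfile)

    if __name__ == "__main__":
        observer = Observer()
        observer.schedule(Collector(), SOURCE, recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(5)              # flush every few seconds; tune as needed
                flush()
        except KeyboardInterrupt:
            observer.stop()
        observer.join()

Shortening the sleep interval gets this close to the "copy as soon as the modification happens" variant mentioned above.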


Solution 4:

Some more alternatives:

  • Whenever you add or delete a file on the primary server, insert a job into RabbitMQ or Gearman that asynchronously adds or deletes the same file on all remote servers (see the sketch after this list).
  • Store the files in a database and use replication to keep the remote servers in sync.
  • If you have ZFS you can use ZFS replication.
  • Some SANs have file replication. I have no idea if this can be used over the Internet.
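
For the queue-based option, a minimal sketch with RabbitMQ and the pika client could look like this; a fanout exchange lets every remote server consume its own copy of each job, and the exchange name, host and message layout are assumptions:

    # Publish a "sync this file" job to a fanout exchange; a small consumer
    # on each remote server binds its own queue to the exchange and applies
    # the add/delete locally.
    import json
    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbit.example.com"))
    channel = connection.channel()
    channel.exchange_declare(exchange="file-sync", exchange_type="fanout")

    def publish(action, path):
        """action is 'add' or 'delete'; path is relative to the image root."""
        channel.basic_publish(
            exchange="file-sync",
            routing_key="",
            body=json.dumps({"action": action, "path": path}),
            properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
        )

    publish("add", "albums/cat.jpg")
    connection.close()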

Solution 5:

This seems like a textbook use case for MongoDB and maybe GridFS. Since the files are relatively small, MongoDB alone should be enough, although it may be convenient to use the GridFS API.

MongoDB is a NoSQL database and GridFS is a file storage layer built on top of it. MongoDB has a lot of built-in options for replication and sharding, so it should scale very well in your use case.

In your case you would probably start with a replica set consisting of the master in your primary datacenter (maybe a second one there, in case you want to fail over within the same location) and your ten "slaves" distributed around the world. Then do load tests to check whether the write performance is sufficient, and check the replication times to your nodes. If you need more performance, you could turn the setup into a sharded one (mostly to distribute the write load across more servers). MongoDB was designed to scale huge setups on "cheap" hardware, so you can throw in a batch of inexpensive servers to improve performance.
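
To give an idea of the application side, here is a small sketch with pymongo and its gridfs module; the connection string, database name and write concern are examples rather than recommendations:

    # Store and fetch an image through GridFS. w="majority" makes each write
    # wait until a majority of the replica set has acknowledged it; reading
    # from the nearest member would additionally need a read preference such
    # as "nearest". Connection string and database name are placeholders.
    import gridfs
    from pymongo import MongoClient

    client = MongoClient("mongodb://primary.example.com:27017/?replicaSet=images",
                         w="majority")
    fs = gridfs.GridFS(client.imagestore)

    # store a file
    with open("cat.jpg", "rb") as f:
        file_id = fs.put(f, filename="albums/cat.jpg")

    # read it back later (by id, or by filename with fs.get_last_version)
    data = fs.get(file_id).read()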