WordPress on IIS replication with robocopy

Having four front-end servers that share the same files at the same time, each able to write, without using some kind of DFS or a third-party program dedicated to directory synchronization, would be a nightmare.

With Azure you can look into three things.

  1. Shared storage. There may be some cost associated with getting your own dedicated storage, and I am not sure of the configuration, but Azure does offer this. It would ensure all your files are available to every server as soon as they are written.

  2. Azure DFS. DFS is a Windows-based directory synchronization tool that works rather well. I'm also not sure about the cost, but the configuration might be a little easier. DFS does work asynchronously, so there is a little delay, but not much.

  3. (I'm going to explain how this would be done and then never talk about it again, because it is a horrible idea and will fail.) Create a script that first compares the data on all four servers and then copies the differential data. You would need to share out each directory to the one server running the script, set up permissions so that server can read and write, and then troubleshoot, troubleshoot, troubleshoot.
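For option 2, the broad strokes of a DFS Replication setup can be sketched with the DFSR PowerShell cmdlets. This is only a sketch, assuming Windows Server with the DFS Replication role available; the group name, folder name, server names, and paths are all placeholders:

```powershell
# Install the DFS Replication feature (run on each server).
Install-WindowsFeature FS-DFS-Replication -IncludeManagementTools

# Create a replication group and a replicated folder (names are placeholders).
New-DfsReplicationGroup -GroupName "WpFiles"
New-DfsReplicatedFolder -GroupName "WpFiles" -FolderName "wwwroot"

# Add the four front ends as members and connect a pair of them.
Add-DfsrMember -GroupName "WpFiles" -ComputerName "ServerA","ServerB","ServerC","ServerD"
Add-DfsrConnection -GroupName "WpFiles" -SourceComputerName "ServerA" -DestinationComputerName "ServerB"

# Point each member at its local content path; ServerA seeds the initial copy.
Set-DfsrMembership -GroupName "WpFiles" -FolderName "wwwroot" -ComputerName "ServerA" `
    -ContentPath "C:\inetpub\wwwroot" -PrimaryMember $true -Force
```

You would repeat the connection and membership commands for the remaining servers.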

Either of the first two options will do the job; if your job depends on this working, I would recommend you stay away from option 3.

That being said, if you are not trying to spend any money, follow the steps below.

  1. Look at a program called FreeFileSync. There are some really good features in the free version; I believe there is a paid version too, but I'm not sure what enhancements you get. I have used it in a lot of my dev environments when trying to achieve something similar to what you are looking to do and was too lazy to set up DFS.

  2. Make only one server writable. This can easily be done by redirecting write actions (e.g. creating a post) to ServerA on each server, either with a URL rewrite rule in your web.config or, since WordPress is PHP, with:

    header('Location: http://myhost.com/mypage.php');

Each approach will take a little bit of coding and some PHP and IIS knowledge.
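For the web.config route, a rewrite rule along these lines would send all admin traffic to the author server. This is a sketch that assumes the IIS URL Rewrite module is installed; the host name is a placeholder:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Redirect anything under wp-admin to the author server (ServerA). -->
        <rule name="WritesToAuthorServer" stopProcessing="true">
          <match url="^wp-admin/.*" />
          <action type="Redirect" url="http://servera.example.com/{R:0}" redirectType="Temporary" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```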

  3. The really fun part: with ServerA being the author server (the only writable server), how do we direct read traffic to ServerB, ServerC, and ServerD without a load balancer?

Short answer: you can't. Well, that's not exactly true. I had a customer once who was adamant about not using a load balancer; through a series of PowerShell scripts he was able to move connections from one server to another based on the number of worker processes on each box, or something like that. Either way, it is very hard to do and not worth the time and energy.

See if you can configure Network Load Balancing (NLB) on the servers. It will require an additional IP, but it's only one DNS change, and read traffic can be distributed across the three servers.
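Creating an NLB cluster can be done from PowerShell with the NetworkLoadBalancingClusters cmdlets. A minimal sketch, assuming the NLB feature is already installed; the interface name, cluster name, and IP are placeholders:

```powershell
# On the first read server, create the cluster on the additional IP.
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "wp-read" -ClusterPrimaryIP 10.0.0.100

# Then join the remaining read servers to it.
Add-NlbClusterNode -NewNodeName "ServerC" -NewNodeInterface "Ethernet"
Add-NlbClusterNode -NewNodeName "ServerD" -NewNodeInterface "Ethernet"
```

Point a DNS record at the cluster IP and reads are spread across the members.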

Good luck!


Thanks for all the suggestions, people.

Our solution was a peer-to-peer synchronization approach using a tool called Resilio.

Resilio allowed us to configure a number of computers (in this case, the IIS front ends) as a peer-to-peer synchronization cluster. A folder is selected on each computer in the cluster to be used for the synchronization process.

The Resilio service (a Windows service running in the background) monitors these folders for changes; if a change is made to any of the specified folders on one of the front ends, Resilio pushes that change to the other servers.

I hope this can help others facing a similar problem in the future.


I don't think scheduled tasks and Robocopy are a great approach. Because of the 5-minute window, there will be times when a resource is requested but the server selected by the load balancer won't have it yet. For largely static sites this will happen much less often than for busy, frequently changed sites. A higher frequency, or a different sync technology like BitTorrent Sync (now called Resilio Sync), would improve this quite a bit but not eliminate the problem.
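For reference, the kind of scheduled Robocopy job being discussed usually looks something like this; the paths, share names, log location, and the 5-minute schedule are placeholders:

```bat
:: Mirror the web root from this server to another front end.
:: /MIR mirrors (including deletions), /FFT tolerates 2-second timestamp drift,
:: /R and /W limit retries so a locked file doesn't stall the whole run.
robocopy C:\inetpub\wwwroot \\ServerB\wwwroot /MIR /FFT /R:2 /W:5 /LOG+:C:\logs\robocopy.log

:: Registered as a scheduled task running every 5 minutes:
schtasks /Create /SC MINUTE /MO 5 /TN "SyncWwwroot" /TR "C:\scripts\sync.cmd" /RU SYSTEM
```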

Putting your wp-content folder, or maybe just wp-content/uploads, onto a shared drive would be a better solution. Another way to look at this would be to have one of the servers host that folder and have the others mount it as a share. With disk caching, the load on that server shouldn't be much higher than on the other servers.
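One way to sketch the host-and-share variant, assuming ServerA hosts the folder (the share name, path, and permissions are placeholders):

```bat
:: On ServerA: share the uploads folder.
net share uploads=C:\inetpub\wwwroot\wp-content\uploads /GRANT:Everyone,CHANGE

:: On each other server: replace the local folder with a symlink to the share.
rmdir C:\inetpub\wwwroot\wp-content\uploads
mklink /D C:\inetpub\wwwroot\wp-content\uploads \\ServerA\uploads
```

In production you would grant a specific service account rather than Everyone, and the IIS application pool identity on each server needs access to the share.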

Update

Have a look at this article for ideas about page caching, and this one for CDNs. It's about Nginx, so you'll need to work out the equivalent for IIS, but the theory behind it is valid for any web server.