Massive peer-to-peer replication topology?

For a mere 52 client sites, replication will do just fine. Treat each local copy as the 'master' site and have the application work only against the local copy, always. Use replication to aggregate the data into the central repository for consolidated reporting. Index maintenance operations normally do not generate replication traffic. Manage schema changes through application deployment migrations, i.e. have upgrade scripts for each released version.
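A minimal sketch of what a versioned upgrade script might look like, assuming a hypothetical `SchemaVersion` tracking table and a hypothetical `dbo.Orders` table (your versioning scheme and object names will differ):

```sql
-- Hypothetical version table tracking which migrations have been applied.
IF OBJECT_ID(N'dbo.SchemaVersion') IS NULL
    CREATE TABLE dbo.SchemaVersion (
        Version   int          NOT NULL PRIMARY KEY,
        AppliedAt datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
    );
GO

-- Upgrade script shipped with application release 42: runs at most once per site.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 42)
BEGIN
    BEGIN TRANSACTION;

    -- Backward-compatible change: a new nullable column leaves
    -- older application versions and replication articles unaffected.
    ALTER TABLE dbo.Orders ADD Notes nvarchar(500) NULL;

    INSERT INTO dbo.SchemaVersion (Version) VALUES (42);
    COMMIT;
END
GO
```

Because each script checks the version table before running, the same deployment package can be applied safely to a site that is one release behind or five.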

As the number of sites increases, managing replication becomes harder and harder. Licensing also pushes toward deploying Express at the periphery. And a SQL Server upgrade is an absolute pain when replication is involved, as the order of upgrading the components (publisher, distributor, subscriber) is critical but difficult to coordinate across many sites. Deployments that I know of with 1500+ sites use customized data movement based on Service Broker. See Using Service Broker instead of Replication for an example. I've even seen designs that use Service Broker to push new bits (application changes) out to the periphery, which in turn deploy, when activated, migrations and schema changes, making the entire deployment operated from the center.
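To give a flavor of the Service Broker approach, here is a minimal sketch of a site-to-center conversation; the message type, contract, service names, and payload are all hypothetical, and cross-instance routing and security are omitted:

```sql
-- Run on both the site and the central database (names are hypothetical).
CREATE MESSAGE TYPE [//Sales/SiteData] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Sales/Upload] ([//Sales/SiteData] SENT BY INITIATOR);

-- Central side: queue and service that receive the uploads.
CREATE QUEUE CentralUploadQueue;
CREATE SERVICE [//Sales/CentralUpload] ON QUEUE CentralUploadQueue ([//Sales/Upload]);

-- Site side: queue, service, and a send.
CREATE QUEUE SiteUploadQueue;
CREATE SERVICE [//Sales/SiteUpload] ON QUEUE SiteUploadQueue ([//Sales/Upload]);
GO

-- Cross-instance routing (CREATE ROUTE, endpoints, dialog security)
-- is omitted from this sketch.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Sales/SiteUpload]
    TO SERVICE '//Sales/CentralUpload'
    ON CONTRACT [//Sales/Upload]
    WITH ENCRYPTION = OFF;

-- The payload here is just sample XML; a real design would batch changed rows.
SEND ON CONVERSATION @h
    MESSAGE TYPE [//Sales/SiteData]
    (N'<order id="1" site="52" total="19.99"/>');
```

Because delivery is queued and asynchronous, sites can go offline and catch up later, which is exactly what makes this pattern attractive at 1500+ sites.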


Why is there one database? It seems like it would make more sense to have each location keep a copy of the database schema locally for its own read/write operations and its own slice of the data, and replicate only the reporting portion of the data centrally. Is there any reason site A would have to make its non-reporting data available to site B, or vice versa? The reporting data could be distributed back to the central location using a variety of techniques, including roll-your-own (I have some experience there if you want further info). Or, if the reporting data is most of the data size, you could just use log shipping or copy_only backups, keeping a copy of the whole database, with central reporting using views or other techniques against the multiple databases once they're restored.
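As a rough sketch of the "views against multiple databases" idea, assuming each restored site copy lands on the central server as its own database (the `Site001`, `Site002`, ... names and the `Orders` table are hypothetical):

```sql
-- Central server: one database per restored site copy.
-- A partitioned-style view unions the same table across all site databases.
CREATE VIEW dbo.AllSitesOrders
AS
SELECT 1 AS SiteId, o.* FROM Site001.dbo.Orders AS o
UNION ALL
SELECT 2, o.* FROM Site002.dbo.Orders AS o
UNION ALL
SELECT 3, o.* FROM Site003.dbo.Orders AS o;
-- ...one branch per site; the view itself can be regenerated by a script
-- whenever a site is added or removed.
GO
```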

Schema changes are easy. I dealt with this quite a bit at my old job, where we had ~500 databases with an identical schema. The key to deploying changes is to make sure they are backward compatible. If you can build a script that doesn't break one database, you can write a loop that deploys those changes to n databases / servers / environments. We used Red Gate to build our deployment scripts based on comparisons between beta and production, and then a home-grown sp_msforeachdb on each instance (because you can't trust the built-in one - see here and here), with a good mix of source control as well.
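A minimal sketch of such a loop, assuming the target databases share a naming pattern (the `Site%` prefix and the deployed statement are hypothetical):

```sql
-- Run the same backward-compatible change against every matching database.
DECLARE @db sysname, @sql nvarchar(300);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE name LIKE N'Site%'            -- hypothetical naming convention
      AND state_desc = N'ONLINE';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Calling DB.sys.sp_executesql runs the statement in that database's context.
    SET @sql = QUOTENAME(@db) + N'.sys.sp_executesql';
    EXEC @sql N'ALTER TABLE dbo.Orders ADD LastModified datetime2(0) NULL;';
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
```

Wrap the inner EXEC in TRY/CATCH with logging in a real deployment, so one failed database doesn't silently stop the loop.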

In this case I'm not sure why index maintenance would cause network overhead. Index maintenance should be done only on the source and not "replayed" downstream if that can be avoided.
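For completeness, a minimal sketch of the kind of maintenance meant here, run only on the source copy (the table and index names are hypothetical):

```sql
-- Run locally on the source. With transactional replication, only logged DML
-- on published articles is delivered to subscribers; the index maintenance
-- itself is not replicated.
ALTER INDEX ALL ON dbo.Orders REBUILD;                -- heavier operation
ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;  -- lighter, always online
```

Note that log shipping is the exception: it replays the whole transaction log, so rebuilds on the source do inflate what gets shipped.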