Linux Bulk/Remote Administration

It depends on what exactly you need and what you are looking for, but in general there exist multiple solutions for "configuration management", such as:

  1. puppet
  2. chef
  3. cfengine
  4. ansible
  5. salt

etc. I personally would recommend puppet as it has a big community and a lot of externally provided modules. This allows you to configure and manage systems automatically. If you combine this with your own repositories and automated updates via e.g. unattended-upgrades, you can keep the systems up to date automatically.
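As a sketch of what the unattended-upgrades part can look like on Debian/Ubuntu: the two files below would normally live in /etc/apt/apt.conf.d/ (the filenames and the security-only origin pattern are common defaults, not requirements). The example writes them to a temp directory so it is side-effect free.

```shell
# Sketch: a minimal unattended-upgrades policy (Debian/Ubuntu).
# Assumption: in production these files go to /etc/apt/apt.conf.d/;
# here we write to a temp dir so nothing on the machine is touched.
conf_dir=$(mktemp -d)

cat > "$conf_dir/50unattended-upgrades" <<'EOF'
// Only pull packages from the security origin automatically.
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
};
// Leave reboots to the admin; never reboot unattended here.
Unattended-Upgrade::Automatic-Reboot "false";
EOF

cat > "$conf_dir/20auto-upgrades" <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF

ls "$conf_dir"
```

On a real system you would install the `unattended-upgrades` package and let the nightly apt cron/systemd timer pick these files up.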

Another solution is to provide your own packages, e.g. a company-base package, which automatically depends on the necessary software and can configure your system automatically.
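On Debian-based systems the easy way to build such a metapackage is equivs. A hedged sketch of the control file (package name, version and dependency list are examples, not a standard):

```shell
# Sketch: a "company-base" metapackage control file for equivs.
# Assumptions: the package name and Depends list are placeholders; on a
# real system you would run `equivs-build company-base.ctl` (from the
# equivs package) and put the resulting .deb into your own repository.
cat > company-base.ctl <<'EOF'
Section: admin
Priority: optional
Standards-Version: 3.9.2

Package: company-base
Version: 1.0
Depends: openssh-server, vim, unattended-upgrades
Description: Company base system
 Pulls in the software every company server must have.
EOF

grep '^Depends:' company-base.ctl
```

Installing company-base on a fresh machine then drags in everything listed in Depends via apt.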

You should also look into automated deployments (bare metal and virtualized). If you combine this with configuration management or your own repository, you can easily automate and reinstall your systems. If you want to get started with automated installation, have a look at The Foreman, which supports libvirt as well as bare-metal installations and has integrated puppet support. If you want to do it yourself, you can look into kickstart (Red Hat et al.) or "preseeding" (Debian) to automatically configure your system. For Debian you can also use something like debootstrap or a wrapper named grml-debootstrap, which supports virtualized images.
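To give a feel for preseeding, here is a minimal fragment. The hostname, mirror and partitioning recipe are placeholder values; a real file would be served over HTTP and handed to the Debian installer via `preseed/url=...` on the kernel command line.

```shell
# Sketch: a minimal Debian preseed fragment for automated installs.
# Assumption: all values below are example choices, not requirements.
cat > preseed.cfg <<'EOF'
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/get_hostname string unassigned-hostname
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
# Wipe the disk with a single-partition recipe, no questions asked.
d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select atomic
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
EOF

wc -l < preseed.cfg
```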

To help provide the VirtualBox images for your developers, have a look at vagrant: it allows you to automate the creation of virtualized systems with VirtualBox, supporting chef, puppet and shell scripts to customize your virtual environment.
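A minimal Vagrantfile wiring a VM to the same puppet code used on the servers might look like this (the box name and manifest path are example choices; with Vagrant installed you would just run `vagrant up` in this directory):

```shell
# Sketch: a Vagrantfile using the puppet provisioner.
# Assumptions: box name and manifest layout are placeholders.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  # Base box to clone for every developer VM.
  config.vm.box = "debian/bookworm64"
  # Provision the VM with the same puppet code used on the servers.
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
  end
end
EOF

grep -c 'config.vm' Vagrantfile
```

That way the developer VMs cannot drift from what configuration management enforces in production.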

If you want to use the solution of your existing provider, you should ask them how they manage your systems, but it will probably be some kind of configuration management. It may be possible to run their agent on your systems if you can access the configuration server.

For Google keywords, look into devops, configuration management, IT automation and server orchestration.

In short: automate as much as possible and don't even think about doing things manually.


Ulrich already gave the answer regarding software deployment and automated server setup.

The principles behind this are:

  • Define what your servers should look like: the common software installed by default, the partitioning scheme and the filesystem layout
  • Production, staging, test and development servers should not differ in these basic standards (otherwise you will run into problems later on, as you did)
  • Use proper change management to document ALL changes you make (including tiny one-line changes in any configuration)
  • Always make your change first in test, then in development, then in staging and last in production

You asked for a handy tool to manage masses of servers - my personal favorite is cluster-ssh (cssh): type once and run the change on many servers simultaneously.

If you discover a problem and have a fix for it that removes the problem:

  1. Apply the fix to Test/Dev/Staging/Prod (see above) and verify it really works
  2. Apply the fix to your virtual templates so future VM clones will not have that bug
  3. Apply the fix to your physical installation process (kickstart/autoyast/whatever)
  4. Apply the fix to ALL servers
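Step 4 is where a scripted rollout with a written record pays off. A hedged sketch of the control flow, where `hosts.txt`, the report file and the `run` wrapper are placeholders; in production `run` would be something like `ssh "$host" bash -s < fix.sh`, and here it is stubbed with echo so the logic can be followed safely:

```shell
# Sketch: push a documented fix to every server and record the result.
# Assumption: `run` is a stub standing in for the real ssh invocation.
run() { echo "would apply fix on $1"; }

printf '%s\n' web01 web02 db01 > hosts.txt

while read -r host; do
    if run "$host"; then
        echo "$host OK" >> fix-report.txt
    else
        echo "$host FAILED" >> fix-report.txt
    fi
done < hosts.txt

# The report is what the second team reviews afterwards.
cat fix-report.txt
```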

If you are facing masses of servers to fix, this is a process that has to be well documented, and at the end a different team should check that the fix has been applied everywhere.

We employ Mantis (open source, PHP) for that purpose.


I manage about 30 products and a few hundred servers in multiple countries. I'm the software configuration manager, so I do not have root access (by design), don't touch the databases or their servers (again, by design) and have to jump through a lot of hoops because of corporate security. But I do manage the configurations in test, staging and production, including database links and changes. I have a number of scripts that go out to servers using combinations of ssh, python and shell scripts.

The primary things to think about are:

  1. What kinds of interactions are you going to have with your servers? Just file uploads? Running command-line programs? Running remote X clients?
  2. What level of security is needed to access these servers? Firewalls, secure networks, a VPN? Is ssh sufficient, and from a central secure location?
  3. How much can be automated on each server? Can you install a program on each server and run it, or do you need to stream the program through something like ssh to run it remotely? Can you script it with expect or just a command-line invocation?
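On point 3, "streaming the program through ssh" just means feeding a local script into a remote shell, so nothing has to be installed on the target. The host name below is a placeholder and the last line demonstrates the same `bash -s` mechanism locally so it can be tried without a remote machine:

```shell
# Sketch: run a local script on a remote host without installing it there.
# Assumption: healthcheck.sh and the host name are illustrative only.
cat > healthcheck.sh <<'EOF'
echo "disk: $(df -P / | awk 'NR==2 {print $5}')"
EOF

# On a real host you would stream it through ssh:
#   ssh admin@server01 'bash -s' < healthcheck.sh
# Locally, the same streaming idea:
bash -s < healthcheck.sh
```

The same pattern works for expect-driven or purely command-line automation: whatever you can pipe into a shell, you can run on any host you can ssh to.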

VirtualBox provides a lot of command-line tools (VBoxManage) that you could drive over plain ssh or through systems like puppet, as Ulrich mentions.