Distribute ssh public keys among hosts

Solution 1:

I've come up with a solution that works for me. I generate the public/private key pair on the machine Ansible is run from, and on the first connection I put the keys in place.
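
How the keys get there on that first run could look roughly like this (a minimal sketch, assuming the same per-host keys/<hostname>/ layout and the postgres user used in the task below; the .ssh directory task and the file modes are just common conventions, not part of my actual playbook):

# Illustrative: put a pre-generated key pair in place on each database host
- name: ensure .ssh directory exists for postgres
  become: yes
  file:
    path: "~postgres/.ssh"
    state: directory
    owner: postgres
    mode: "0700"

- name: install pre-generated ssh key pair for postgres
  become: yes
  copy:
    src: "../../../keys/{{ inventory_hostname }}/{{ item.file }}"
    dest: "~postgres/.ssh/{{ item.file }}"
    owner: postgres
    mode: "{{ item.mode }}"
  with_items:
    - { file: 'id_rsa', mode: '0600' }
    - { file: 'id_rsa.pub', mode: '0644' }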

Then I add the keys from all the slaves to the master with the following:

# Tasks for PostgreSQL master
- name: add slave public key
  become: yes
  authorized_key:
    user: postgres
    state: present
    key: "{{ lookup('file', '../../../keys/' + item + '/id_rsa.pub') }}"
  with_items: "{{ groups['databases_slave'] }}"

The whole playbook can be found at github.com/soupdiver/ansible-cluster.

Solution 2:

I believe the following solution should work in your case. I've been using it for a similar scenario with a central backup server and multiple backup clients.

I have a role (let's say "db_replication_master") associated with the server receiving the connections:

    - role: db_replication_master
      db_slaves: ['someserver', 'someotherserver']
      db_slave_user: 'someuser' # in case you have different users
      db_master_user: 'someotheruser'
      extra_pubkeys: ['files/id_rsa.pub'] # other keys that need access to master
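
For context, this role block would sit under a play targeting the master host, roughly like so (a sketch; the `db_master` host pattern is a placeholder, and pulling db_slaves from an inventory group instead of a literal list is just one option):

    # site.yml (illustrative)
    - hosts: db_master
      roles:
        - role: db_replication_master
          db_slaves: "{{ groups['db_slaves'] }}"   # or a literal list as above
          db_slave_user: 'someuser'
          db_master_user: 'someotheruser'
          extra_pubkeys: ['files/id_rsa.pub']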

Then we create the actual tasks in the db_replication_master role:

    - name: create remote accounts ssh keys
      user:
        name: "{{ db_slave_user }}"
        generate_ssh_key: yes
      delegate_to: "{{ item }}"
      with_items: "{{ db_slaves }}"

    - name: fetch pubkeys from remote users
      fetch:
        dest: "tmp/db_replication_role/{{ item }}.pub"
        src: "~{{db_slave_user}}/.ssh/id_rsa.pub"
        flat: yes
      delegate_to: "{{ item }}"
      with_items: "{{ db_slaves }}"
      register: remote_pubkeys
      changed_when: false # we remove them in "remove temp local pubkey copies" below

    - name: add pubkeys to master server
      authorized_key:
        user: "{{ db_master_user }}"
        key: "{{ lookup('file', item) }}"
      with_flattened:
        - "{{ extra_pubkeys }}"
        - "{{ remote_pubkeys.results | default([]) | map(attribute='dest') | list }}"

    - name: remove temp local pubkey copies
      file:
        path: "tmp/db_replication_role"
        state: absent
      delegate_to: localhost
      changed_when: false

So we're basically:

  • dynamically creating ssh keys on those slaves that don't have them yet
  • then using delegate_to to run the fetch module on the slaves and pull their ssh pubkeys back to the host running Ansible, registering the result of that operation in a variable so we can access the actual list of fetched files
  • after that, pushing the fetched ssh pubkeys (plus any extra pubkeys provided) to the master node with the authorized_key module (using a couple of Jinja2 filters to dig the file paths out of the variable registered above)
  • finally, removing the pubkey files cached locally on the host running Ansible

The limitation of having the same user on all hosts can probably be worked around, but from what I gather from your question, that's probably not an issue for you (it's slightly more relevant for my backup scenario). You could of course also make the key type (rsa, dsa, ecdsa, etc.) configurable.
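
For example, a minimal sketch of that last idea, assuming a role variable named db_slave_key_type (the name is made up here) that defaults to rsa:

    # defaults/main.yml (illustrative)
    db_slave_key_type: rsa

    # adjusted key-generation task
    - name: create remote accounts ssh keys
      user:
        name: "{{ db_slave_user }}"
        generate_ssh_key: yes
        ssh_key_type: "{{ db_slave_key_type }}"
      delegate_to: "{{ item }}"
      with_items: "{{ db_slaves }}"

Depending on the key file name the user module ends up generating, the fetch task's src would likely also need to point at the matching public-key filename (e.g. id_ecdsa.pub) instead of the hard-coded id_rsa.pub.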

Update: oops, I'd originally written using terminology specific to my problem, not yours! Should make more sense now.