Add Keypair to existing EC2 instance

You can't apply a keypair to a running instance. You can only use the new keypair to launch a new instance.

For recovery, if it's an EBS-boot AMI, you can stop the instance, take a snapshot of its root volume, and create a new volume from that snapshot. You can then attach that volume back to start the old instance, create a new image, or recover your data.

Data on ephemeral (instance-store) storage will be lost, though.
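
With the same ec2-api-tools used later in this answer, that recovery path looks roughly like this (a rough sketch; the instance, volume, and snapshot IDs and the availability zone are all placeholders):

ec2-stop-instances i-XXXXXXXX                 # stop the broken instance
ec2-create-snapshot vol-XXXXXXXX              # snapshot its root EBS volume
ec2-create-volume --snapshot snap-XXXXXXXX --availability-zone us-east-1a   # new volume from the snapshot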


Due to the popularity of this question and answer, I wanted to capture the information in the link that Rodney posted in his comment.

Credit goes to Eric Hammond for this information.

Fixing Files on the Root EBS Volume of an EC2 Instance

You can examine and edit files on the root EBS volume of an EC2 instance even if you are in what you might consider a disastrous situation, like:

  • You lost your ssh key or forgot your password
  • You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it
  • Your long running instance is hung for some reason, cannot be contacted, and fails to boot properly
  • You need to recover files off of the instance but cannot get to it

On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.

A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.

The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.

In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:

Setup

Identify the original instance (A) and volume that contains the broken root EBS volume with the files you want to view and edit.

instance_a=i-XXXXXXXX

volume=$(ec2-describe-instances $instance_a |
  egrep '^BLOCKDEVICE./dev/sda1' | cut -f3)
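
If you use the newer AWS CLI instead of the legacy ec2-api-tools, an equivalent lookup might look like this (this assumes the root device really is /dev/sda1; some AMIs use /dev/xvda):

volume=$(aws ec2 describe-instances --instance-ids $instance_a \
  --query 'Reservations[0].Instances[0].BlockDeviceMappings[?DeviceName==`/dev/sda1`].Ebs.VolumeId' \
  --output text)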

Identify the second EC2 instance (B) that you will use to fix the files on the original EBS volume. This instance must be running in the same availability zone as instance A so that it can have the EBS volume attached to it. If you don’t have an instance already running, start a temporary one.

instance_b=i-YYYYYYYY
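
If you are not sure whether instance B is in the right zone, a quick sanity check (not part of the original steps) is to compare the zone fields reported for the volume and for instance B:

ec2-describe-volumes $volume          # the VOLUME line shows the volume's availability zone
ec2-describe-instances $instance_b    # the INSTANCE line shows instance B's availability zone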

Stop the broken instance A (waiting for it to come to a complete stop), detach the root EBS volume from the instance (waiting for it to be detached), then attach the volume to instance B on an unused device.

ec2-stop-instances $instance_a
ec2-detach-volume $volume
ec2-attach-volume --instance $instance_b --device /dev/sdj $volume
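
If you want to script the "waiting" parts rather than watching the console, a simple polling loop between the commands works; a rough sketch:

# run after ec2-stop-instances: wait until instance A reports "stopped"
while ! ec2-describe-instances $instance_a | grep -q stopped; do sleep 5; done
# run after ec2-detach-volume: wait until the volume reports "available"
while ! ec2-describe-volumes $volume | grep -q available; do sleep 5; done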

ssh to instance B and mount the volume so that you can access its file system.

ssh ...instance b...

sudo mkdir -m 000 /vol-a
sudo mount /dev/sdj /vol-a
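
Note: on newer kernels the device attached as /dev/sdj may appear on instance B as /dev/xvdj instead. If the mount fails, check which name exists and mount that one:

ls /dev/sdj /dev/xvdj 2>/dev/null   # see which device name the kernel actually created
sudo mount /dev/xvdj /vol-a         # use whichever name exists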

Fix It

At this point your entire root file system from instance A is available for viewing and editing under /vol-a on instance B. For example, you may want to:

  • Put the correct ssh keys in /vol-a/home/ubuntu/.ssh/authorized_keys
  • Edit and fix /vol-a/etc/sudoers
  • Look for error messages in /vol-a/var/log/syslog
  • Copy important files out of /vol-a/…
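
For instance, the first item, restoring SSH access for the ubuntu user, might look like this (a sketch; new-key.pub is a placeholder for the public half of whichever key pair you want to use):

# still on instance B, with the broken root volume mounted at /vol-a
cat new-key.pub | sudo tee -a /vol-a/home/ubuntu/.ssh/authorized_keys
# keep ownership consistent with what is already on the volume (see the note on uids below)
sudo chown --reference=/vol-a/home/ubuntu /vol-a/home/ubuntu/.ssh/authorized_keys
sudo chmod 600 /vol-a/home/ubuntu/.ssh/authorized_keys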

Note: The uids on the two instances may not be identical, so take care if you are creating, editing, or copying files that belong to non-root users. For example, your mysql user on instance A may have the same UID as your postfix user on instance B which could cause problems if you chown files with one name and then move the volume back to A.
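
A quick way to see whether a name maps to the same uid on both systems (purely illustrative, using the mysql example above):

grep '^mysql:' /vol-a/etc/passwd    # uid of mysql according to the broken volume
id mysql                            # uid of mysql according to instance B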

Wrap Up

After you are done and you are happy with the files under /vol-a, unmount the file system (still on instance B):

sudo umount /vol-a
sudo rmdir /vol-a

Now, back on your system with ec2-api-tools, continue moving the EBS volume back to its home on the original instance A and start the instance again:

ec2-detach-volume $volume
ec2-attach-volume --instance $instance_a --device /dev/sda1 $volume
ec2-start-instances $instance_a

Hopefully, you fixed the problem, instance A comes up just fine, and you can accomplish what you originally set out to do. If not, you may need to continue repeating these steps until you have it working.

Note: If you had an Elastic IP address assigned to instance A when you stopped it, you’ll need to reassociate it after starting it up again.
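
With the ec2-api-tools that reassociation is a one-liner (the address below is a placeholder; for a VPC Elastic IP you would use its allocation id instead of the raw address):

ec2-associate-address -i $instance_a 203.0.113.10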

Remember! If your instance B was temporarily started just for this process, don’t forget to terminate it now.


Though you can't add a key pair to a running EC2 instance directly, you can create a Linux user on it and create a new key pair for that user, then use it like you would the original user's key pair.

In your case, you can ask the instance owner (who created it) to do the following. Thus, the instance owner doesn't have to share his own keys with you, but you would still be able to ssh into these instances. These steps were originally posted by Utkarsh Sengar (aka. @zengr) at http://utkarshsengar.com/2011/01/manage-multiple-accounts-on-1-amazon-ec2-instance/. I've made only a few small changes.

  1. Step 1: log in as the default “ubuntu” user:

    $ ssh -i my_orig_key.pem ubuntu@11.111.111.111
    
  2. Step 2: create a new user, we will call our new user “john”:

    [ubuntu@ip-11-111-111-111 ~]$ sudo adduser john
    

    Set password for “john” by:

    [ubuntu@ip-11-111-111-111 ~]$ sudo su -
    [root@ip-11-111-111-111 ubuntu]# passwd john
    

    Add “john” to sudoer’s list by:

    [root@ip-11-111-111-111 ubuntu]# visudo
    

    .. and add the following to the end of the file:

    john   ALL = (ALL)    ALL
    

    Alright! We have our new user created; now you need to generate the key file that will be needed to log in, like my_orig_key.pem in Step 1.

    Now, exit and go back to ubuntu, out of root.

    [root@ip-11-111-111-111 ubuntu]# exit
    [ubuntu@ip-11-111-111-111 ~]$
    
  3. Step 3: create the public and private keys:

    [ubuntu@ip-11-111-111-111 ~]$ su john
    

    Enter the password you created for “john” in Step 2. Then create a key pair. Remember that a non-empty passphrase for the key pair must be at least five characters.

    [john@ip-11-111-111-111 ubuntu]$ cd /home/john/
    [john@ip-11-111-111-111 ~]$ ssh-keygen -b 1024 -f john -t dsa
    [john@ip-11-111-111-111 ~]$ mkdir .ssh
    [john@ip-11-111-111-111 ~]$ chmod 700 .ssh
    [john@ip-11-111-111-111 ~]$ cat john.pub > .ssh/authorized_keys
    [john@ip-11-111-111-111 ~]$ chmod 600 .ssh/authorized_keys
    [john@ip-11-111-111-111 ~]$ sudo chown john:ubuntu .ssh
    

    In the above step, john is the user we created and ubuntu is the default user group.

    [john@ip-11-111-111-111 ~]$ sudo chown john:ubuntu .ssh/authorized_keys
    
  4. Step 4: now you just need to download the key called “john”. I use scp to download/upload files from EC2; here is how you can do it.

    You will still need to copy the file as the ubuntu user, since you only have the key for that user name. So you will need to move the key to the ubuntu user's home directory and chmod it to 777 (anything that lets the ubuntu user read it will do).

    [john@ip-11-111-111-111 ~]$ sudo cp john /home/ubuntu/
    [john@ip-11-111-111-111 ~]$ sudo chmod 777 /home/ubuntu/john
    

    Now go to your local machine's terminal, where you have the my_orig_key.pem file, and do this:

    $ cd ~/.ssh
    $ scp -i my_orig_key.pem ubuntu@11.111.111.111:/home/ubuntu/john john
    

    The above command will copy the key “john” to the present working directory on your local machine. Once you have copied the key to your local machine, you should delete “/home/ubuntu/john”, since it’s a private key.

    Now, on your local machine, chmod john to 600.

    $ chmod 600 john
    
  5. Step 5: time to test your key:

    $ ssh -i john john@11.111.111.111
    

So, in this manner, you can set up multiple users to use one EC2 instance!
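
One caveat about Step 3: current OpenSSH releases no longer accept 1024-bit DSA keys by default, so if the login in Step 5 is refused, generate an RSA or ed25519 key instead and leave the rest of the steps unchanged:

ssh-keygen -t rsa -b 2048 -f john   # drop-in replacement for the ssh-keygen line in Step 3
ssh-keygen -t ed25519 -f john       # or this, on reasonably recent OpenSSH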