Amazon EC2 - No SSH After Reboot, Connection Refused

Solution 1:

From the AWS Developer Forum post on this topic:

Try stopping the broken instance, detaching the EBS volume, and attaching it as a secondary volume to another instance. Once you've mounted the broken volume somewhere on the other instance, check the /etc/sshd_config file (near the bottom). I had a few RHEL instances where Yum scrogged the sshd_config inserting duplicate lines at the bottom that caused sshd to fail on startup because of syntax errors.

Once you've fixed it, just unmount the volume, detach, reattach to your other instance and fire it back up again.

Let's break this down, with links to the AWS documentation:

  1. Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped.
  2. Start a new instance in the same region, with the same OS as the broken instance, then attach the original EBS root volume as a secondary volume to your new instance. The commands in step 4 below assume you mount the volume to a folder called "data" (a command-line sketch of these attach/mount/fix steps follows this list).
  3. Once you've mounted the broken volume somewhere on the other instance,
  4. check the broken volume's "/etc/ssh/sshd_config" file for the duplicate entries by issuing these commands (paths assume the volume is mounted at "/data"):
    • cd /data/etc/ssh
    • sudo nano sshd_config
    • press ctrl-v repeatedly to page down to the bottom of the file
    • press ctrl-k on each of the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no" to cut them
    • press ctrl-x, then Y, to save and exit the edited file
  5. @Telegard points out (in his comment) that this only fixes the symptom. We can fix the cause by commenting out (or deleting) the three related lines in the broken volume's "/etc/rc.local" file, which re-add them at every boot. So:
    • cd /data/etc
    • sudo nano rc.local
    • look for the lines mentioning "PermitRootLogin..." and "UseDNS" and delete or comment them out
    • press ctrl-x, then Y, to save and exit the edited file
  6. Once you've fixed it, just unmount the volume,
  7. detach it by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume you just fixed,
  8. reattach it to the original (broken) instance as its root volume, and
  9. fire it back up again.
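
If you prefer to do the attach/mount/fix cycle from the command line, here is a minimal sketch using the AWS CLI and sed in place of the console and nano. It's illustrative only: the volume ID, instance IDs, device names, and the "/data" mount point are placeholder assumptions you must replace with your own values, and the exact duplicate lines can vary, so inspect the file with tail before and after editing.

    # Minimal sketch, assuming the AWS CLI is configured. The IDs, device
    # names, and /data mount point below are placeholders; substitute your own.

    # Attach the broken instance's root volume to the helper instance
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/xvdf

    # On the helper instance: mount the volume at /data
    # (the device may show up as /dev/xvdf, /dev/xvdf1, or /dev/nvme1n1p1)
    sudo mkdir -p /data
    sudo mount /dev/xvdf1 /data

    # Inspect the end of the file for the duplicated lines
    sudo tail -n 20 /data/etc/ssh/sshd_config

    # Back up, then delete the duplicated entries (mirrors the nano edit above)
    sudo cp /data/etc/ssh/sshd_config /data/etc/ssh/sshd_config.bak
    sudo sed -i '/^PermitRootLogin without-password$/d; /^UseDNS no$/d' \
        /data/etc/ssh/sshd_config

    # Fix the cause as well: remove the lines rc.local keeps re-adding
    sudo sed -i '/PermitRootLogin without-password/d; /UseDNS no/d' /data/etc/rc.local

    # Unmount, detach, reattach to the broken instance as its root device
    # (use the root device name shown for that instance, often /dev/sda1 or
    # /dev/xvda), then start it again
    sudo umount /data
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sda1
    aws ec2 start-instances --instance-ids i-0bbbbbbbbbbbbbbbb

The sed approach keeps the fix scriptable; the interactive nano steps above are equivalent if you'd rather see the file while you edit it.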

Solution 2:

Had a similar behavior today on my EC2 instance and tracked it down to this: when I run "sudo reboot now", the machine hangs and I have to restart it manually from the AWS Management Console; when I run "sudo reboot", it reboots just fine. Apparently "now" is not a valid option for reboot, as pointed out here: https://askubuntu.com/questions/397502/reboot-a-server-from-command-line
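
For reference, here's a minimal sketch of the commands in question (the exact behavior of the "now" argument can differ between init systems, so treat the first line as the reported failure case rather than a general rule):

    # Reported to hang on the affected instance: "now" is not a valid
    # option for reboot, per the linked Ask Ubuntu answer
    sudo reboot now

    # Either of these reboots the machine immediately
    sudo reboot
    sudo shutdown -r now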

thoughts?