What causes SSH problems after rebooting a 14.04 server?

Quick answer:

SSH is not the problem. The command you used to reboot is: don't run reboot now; run reboot or shutdown -r now to reboot your system.

The command syntax (since 13.04) has been:

reboot [OPTION]...  [REBOOTCOMMAND]

The REBOOTCOMMAND argument did not exist before. On 12.04 your now was simply ignored, but now it is actually used... and it breaks everything.

Long answer, with my test results and explanation:

I had a similar problem with some servers: running 14.04, AND on a VPS (hosted at the French provider OVH, running OpenVZ), AND when issuing reboot now from inside the server itself.

Like you, I issued the command reboot now from the console (logged in via SSH). A few seconds after I pressed RETURN, my session was disconnected. Like you, I was never able to reconnect to the server via SSH after issuing this command.

So I opened the KVM console provided by OVH (which, for this kind of virtual server, emulates direct keyboard-and-screen access to a physical machine).

I was able to connect to my machine and saw that it was entering Single User Mode, waiting for me to press CTRL+D to continue or to enter the root password for maintenance mode. I pressed the key combination to let the boot process continue and was then able to SSH into my system again. What a surprise to see, after running uptime, that the uptime was not 2 or 3 minutes but many days: reboot now executed inside an Ubuntu 14.04 VPS does not actually reboot the machine, it merely drops it into Single User Mode!
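If you are in a similar situation, a quick sanity check from the console tells you whether the machine actually rebooted (a minimal sketch; on some stripped-down systems who -r may print nothing):

```shell
# How long has the system been up? After a real reboot this should
# show only a few minutes, not several days.
uptime

# Current runlevel: "S" or "1" means single-user mode,
# "2" through "5" is normal multi-user operation.
who -r
```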

From this, I learned never to request a reboot from within my VPS, but to trigger it from the management interface provided by the hosting company instead.

So there is no problem with your SSH installation; the problem is typing reboot now. In fact, I tested it afterwards as well: if you had typed reboot (just the word, no argument), it would have done what you intended: reboot the server.

According to the man page, using reboot with an argument calls the shutdown command with the given arguments. And indeed, if I execute shutdown now, I get the same behaviour: the system is not rebooted, it goes into single-user mode.
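To make the trap concrete, here is a toy sketch (my own illustration, not the actual sysvinit source) of how reboot on 13.04+ forwards any argument to shutdown, so that now becomes shutdown's TIME argument with no -r flag:

```shell
# Toy mimic of the 13.04+ behaviour: any argument to "reboot" is
# handed to "shutdown" unchanged. These functions only echo what
# the real tools would end up doing.
fake_shutdown() {
    case "$1" in
        -r) echo "rebooting at ${2:-now}" ;;
        -h) echo "halting at ${2:-now}" ;;
        *)  echo "going to single-user mode at ${1:-now}" ;;
    esac
}

fake_reboot() {
    if [ $# -eq 0 ]; then
        echo "rebooting now"       # plain "reboot" really reboots
    else
        fake_shutdown "$@"         # "reboot now" -> "shutdown now"
    fi
}

fake_reboot            # -> rebooting now
fake_reboot now        # -> going to single-user mode at now
fake_shutdown -r now   # -> rebooting at now
```

The point of the sketch: without -r or -h, shutdown interprets its first argument as a time, not as a reboot request.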

Remark: this appears to be the intended behaviour, as the message printed on the screen after executing this command says something like:

The system will be brought to the maintenance mode

Maintenance mode and single-user mode are the same thing: a runlevel with nothing more than a shell, no network, no network processes, ...

This may be confusing, but note that the correct usage of shutdown is, for instance, shutdown -h now to halt the system now or shutdown -r now to reboot it now. I wasn't aware that shutdown now would only bring the system into single-user mode; I usually run init S to achieve that.


Another potential cause is ufw losing the SSH port rule configuration. This has happened to me on at least one or two occasions: after applying updates and rebooting, the firewall configuration blocked me from accessing the server. Using my hosting provider's VPS console facility allowed me to get onto the machine and diagnose the problem. The example below shows the problem (i.e. no entry for port 22):

user@host:~$ sudo ufw status verbose
[sudo] password for user:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                                  Action      From
--                                  ------      ----
80,443/tcp (Nginx Full)             ALLOW IN    Anywhere
25/tcp                              ALLOW IN    Anywhere
143                                 ALLOW IN    Anywhere
110                                 ALLOW IN    Anywhere
993/tcp (Dovecot Secure IMAP)       ALLOW IN    Anywhere
995/tcp (Dovecot Secure POP3)       ALLOW IN    Anywhere
25/tcp (Postfix)                    ALLOW IN    Anywhere
465/tcp (Postfix SMTPS)             ALLOW IN    Anywhere
80,443/tcp (Nginx Full (v6))        ALLOW IN    Anywhere (v6)
25/tcp (v6)                         ALLOW IN    Anywhere (v6)
143 (v6)                            ALLOW IN    Anywhere (v6)
110 (v6)                            ALLOW IN    Anywhere (v6)
993/tcp (Dovecot Secure IMAP (v6))  ALLOW IN    Anywhere (v6)
995/tcp (Dovecot Secure POP3 (v6))  ALLOW IN    Anywhere (v6)
25/tcp (Postfix (v6))               ALLOW IN    Anywhere (v6)
465/tcp (Postfix SMTPS (v6))        ALLOW IN    Anywhere (v6)

Re-enabling the port as follows does the trick:

user@host:~$ sudo ufw allow ssh
Rule added
Rule added (v6)

I may be late, and it may be obvious, but what worked for me was to check the configuration file /etc/ssh/sshd_config. Starting the daemon with /etc/init.d/ssh start (or any other variant) reported that the service was running even though it was not. But when I launched the executable by its absolute path (in my case /usr/sbin/sshd), I saw that a stray "0B" appended at the end of the configuration file was causing an error at startup; removing it solved the problem.
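To catch this kind of problem without hunting by hand, sshd has a config-test mode: sudo /usr/sbin/sshd -t parses the configuration and prints any error without starting the daemon. The snippet below is a hypothetical illustration of the stray-token situation using a scratch file, not the real /etc/ssh/sshd_config:

```shell
# Simulate an sshd_config with a stray "0B" token at the end.
cfg=$(mktemp)
printf 'Port 22\nPermitRootLogin no\n0B\n' > "$cfg"

# Locate the offending line (on the real file, sshd -t would
# report it as an unrecognised keyword).
grep -n '^0B$' "$cfg"

# Delete the junk line and verify only valid directives remain.
sed -i '/^0B$/d' "$cfg"
wc -l < "$cfg"
rm -f "$cfg"
```

After cleaning the real file, re-run sshd -t until it exits silently, then restart the service.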