Is starting an AWS instance with only ssh to port 22 significantly insecure?

The answer depends on your risk appetite. Restricting access to the SSH port to only known IP addresses reduces the attack surface significantly. Whatever issue might arise (private key leaks, 0-day in SSH, etc.), it can only be exploited by an attacker coming from those specific IP addresses. Otherwise the attacker can access the port from anywhere, which is especially bad in case of an unpatched SSH vulnerability with an exploit available in the wild.
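
For example, with the AWS CLI you can restrict the security group so port 22 only accepts a known address; the group ID and CIDR below are placeholders, not real values:

# Allow SSH (22/tcp) only from one known address (placeholder values)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.10/32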

It is up to you to decide how important the system and its data are to you. If they are not that critical, the convenience of an SSH port open to the world might be acceptable. Otherwise, I would recommend limiting access, just in case. Severe 0-days in SSH do not pop up on a daily basis, but you never know when the next one will.


The ssh key would be distributed to a small set of people.

No, don't do that. Never share private keys. Have your folks generate key pairs on their own and collect their public keys. Take reasonable measures to ensure the pubkeys actually come from the right people.
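
A rough sketch of that workflow (the names and file paths are placeholders): each person generates their own pair and sends you only the public half, which you append to authorized_keys on the server.

# On each user's machine: generate a key pair, keep the private key there
ssh-keygen -t ed25519 -C "alice@example.com"

# On the server: append the received *public* key for the login account
cat alice_id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys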

Or, if you don't mind the hassle, you can set up a unified authentication scheme instead, for example an SSH CA, so that you can sign certificates for their keys. The public keys and the certificates can safely be distributed (a certificate is useless without its private key).
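
A minimal SSH CA sketch using OpenSSH's ssh-keygen certificate support (file names, the identity/principal "alice", and the validity period are placeholders):

# Create the CA key pair (keep ca_key well protected)
ssh-keygen -t ed25519 -f ca_key

# Sign a user's public key, producing alice-key-cert.pub (valid 52 weeks)
ssh-keygen -s ca_key -I alice -n alice -V +52w alice-key.pub

# On the server, trust the CA by adding to /etc/ssh/sshd_config:
#   TrustedUserCAKeys /etc/ssh/ca_key.pub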

LDAP is even better, but I wouldn't bother with it for small-scale servers. It's just too complex to set up and maintain.


Opening an SSH port to the internet isn't insecure per se; it depends on how authentication is handled. SSH scanning happens every minute on the internet. Try leaving the port open for just a day and check /var/log/auth.log for invalid usernames.
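
Something like this will show you the noise (the log path and message format vary by distro; RHEL-family systems log to /var/log/secure instead):

# Count attempts against non-existent accounts
grep -c "Invalid user" /var/log/auth.log
# See which usernames are being tried (field position depends on log format)
grep "Invalid user" /var/log/auth.log | awk '{print $8}' | sort | uniq -c | sort -rn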

I would say that as long as you're using public key authentication and keeping the private part secure, no one can brute-force their way into your server in a practical amount of time, given that common SSH implementations like OpenSSH don't have 0-days popping up frequently. Sharing a private key is neither secure nor convenient: the key may be leaked in transit, possibly without you ever noticing, and that is exactly what makes it dangerous.
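
As a sketch, the usual way to make that stick is to disable password logins entirely in /etc/ssh/sshd_config and reload sshd (directive names vary a little between OpenSSH versions):

# /etc/ssh/sshd_config - keys only, no passwords
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password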


Answer: No, it's not trivially insecure, but it's still not ideal.

I manage multiple AWS instances, and while most of them have Security Groups limiting SSH inbound access, there is a business need for one of them to listen on port 22 for all connections.

As such this host gets hit by thousands of script-kid (skiddy) connections every day. This is indicated at login by MOTD messages like

Last login: Fri Jun 19 23:17:36 UTC 2020 on pts/2
Last failed login: Sat Jun 27 01:00:44 UTC 2020 from 120.70.103.239 on ssh:notty
There were 21655 failed login attempts since the last successful login.
host1234 ~ # date
Sat Jun 27 01:12:18 UTC 2020

So that's roughly 3,000 a day, or over a hundred an hour (21,655 attempts over about a week). Certainly most of them will simply be automated probes, but what happens if a zero-day vulnerability is found and exploited?
By limiting your exposure you reduce the risk.

Solutions include one/some/all:

  • Use AWS security groups to only permit connections from specific IPs on the internet
  • Use a VPN solution and require that SSH be done over the VPN. The VPN can listen to all sources, have certs and 2FA, and generally add more layers. OpenVPN works well, or there are multiple AWS offerings to do the same task.
  • Move SSH to another port - it's not really added security, but it does cut down on the number of SSH connection attempts and therefore the noise. Anyone worth their salt will scan all ports anyway, not just the default.
  • If you HAVE to listen for SSH promiscuously, explore a solution like fail2ban, which adds sources to /etc/hosts.deny if they fail more than X times in Y minutes and can remove them again after a day or so (see the sketch after this list).
  • Explore IPv6 - like changing the listening port, it increases the time a scan takes, because skiddies have far more address space to search. v6 scanning still happens, though.
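
Here is a minimal fail2ban sketch for the promiscuous-listener case; the values are illustrative, and note that fail2ban's default ban action uses the firewall rather than hosts.deny (the hostsdeny ban action exists if you prefer that behaviour):

# /etc/fail2ban/jail.local - illustrative values only
[sshd]
enabled   = true
port      = ssh
maxretry  = 5
findtime  = 10m
bantime   = 1d
# banaction = hostsdeny   # append offenders to /etc/hosts.deny instead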

For me, the devices SSHing in are hardware, so they have a valid user certificate and they always auth successfully. We wrote a script that scans /var/log/secure, looks for "user not found" or similar, and immediately adds those sources to the hosts.deny file permanently (sketched below).
We've considered extending this to block whole subnets based on lookups, but that hasn't been needed yet.
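
The actual script isn't shown here, but the idea is roughly this (log path, message text and hosts.deny syntax may differ on your system):

# Extract source IPs of unknown-user attempts and deny them permanently
grep -E "Invalid user|user unknown" /var/log/secure \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort -u \
  | while read ip; do
      grep -q "ALL: $ip" /etc/hosts.deny || echo "ALL: $ip" >> /etc/hosts.deny
    done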

We currently block:

host1235 ~ # grep -ci all /etc/hosts.*
/etc/hosts.allow:79
/etc/hosts.deny:24292

I'm not going to share a list of bad source IPs, because some locations consider IP addresses to be Personally Identifiable Information (PII).

Note that our office IPs are in hosts.allow, which trumps the hosts.deny file, so if someone fails a login from an office it won't lock out human users.
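
For illustration, hosts.allow entries look roughly like this (the addresses are placeholders standing in for our office ranges):

# /etc/hosts.allow is checked before /etc/hosts.deny, so these always win
sshd: 203.0.113.
sshd: 198.51.100.7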

Do ask for clarifications - I know I've handwaved a lot of details.

Tags: ssh, aws