Is running "apt-get upgrade" every so often enough to keep a Web-server secure?

By assuming that the app you're hosting is completely secure, you've defined away a lot of the problems that normally get you in trouble. From a practical perspective, you absolutely have to consider those.

But presumably since you're aware of them, you have some protective measures in place. Let's talk about the rest, then.

As a start, you probably shouldn't run an update "every so often". Most distros operate security announcement mailing lists, and as soon as a vulnerability is announced there, it's rather public (well, it often is before that, but in your situation you can't really monitor all the security lists in the world). These are low-traffic lists, so you should really subscribe to your distro's and upgrade when you get notifications from it.

Often, a casually-maintained server can be brute-forced or dictionary attacked over a long period of time, since the maintainer isn't really looking for the signs. It's a good idea then to apply the usual counter-measures - no ssh password authentication, fail2ban on ssh and apache - and ideally to set up monitoring alerts when suspicious activity occurs. If that's out of your maintenance (time) budget, make a habit of logging in regularly to check those things manually.
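As a sketch, a minimal fail2ban jail for SSH might look like this (the retry and ban values are illustrative; tune them to taste):

```ini
# /etc/fail2ban/jail.local -- illustrative values
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

A similar jail can watch your Apache/nginx logs for repeated 4xx probing.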

While not traditionally thought of as a part of security, you want to make sure you can bring up a new server quickly. This means server configuration scripts (tools like Ansible, Chef, etc. are useful in system administration anyways) and an automatic backup system that you've tested. If your server's been breached, you've got to assume it's compromised forever and just wipe it, and that sucks if you haven't been taking regular backups of your data.


No. This is not enough to keep you secure.

It'll probably keep you secure for some time, but security is complex and fast-moving, so your approach really isn't good enough for long-term security. If everybody made the same assumptions you're making in your question, the internet would be one big botnet by now.

So no, let's not limit this question to packages. Let's look at server security holistically so anybody reading this gets an idea of how many moving pieces there really are.

  • APT (eg Ubuntu's repos) only covers a portion of your software stack. If you're using (eg) WordPress or another popular PHP library and that isn't repo-controlled, you need to update that too. The bigger frameworks have mechanisms to automate this, but make sure you're taking backups and monitoring service status, because updates don't always go well.

  • You wrote it all yourself, so you think you're safe from the script kiddies? There are automated SQL injection and XSS exploit bots running around, poking at every query string and form they find.

    This is actually one of the places where a good framework helps protect against inadequate programmers who don't appreciate the nuances of these attacks. Having a competent programmer audit the code also helps allay fears here.

  • Does PHP (or Python, or whatever you're running) really need to be able to write everywhere? Harden your configuration and you'll mitigate many attacks. Ideally the only places a webapp is able to write are a database, and places where scripting will never be executed (eg an nginx rule that only allows serving static files).

    The PHP defaults (at least how people use them) allow PHP to read and write PHP anywhere in the webroot. That has serious implications if your website is exploited.
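    As a sketch, an nginx rule along these lines refuses to execute PHP from a writable uploads directory (the path is a hypothetical example):

```nginx
# Serve uploads as static files only; never hand them to the PHP handler.
location /uploads/ {
    location ~ \.php$ { return 403; }
}
```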

    Note: if you do block off write access, things like WordPress won't be able to automagically update themselves. Look to tools like wp-cli and get them to run on a scheduled basis.
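    A sketch of that with wp-cli on a schedule (the path, user and timing are assumptions):

```
# /etc/cron.d/wordpress-updates -- nightly updates as the site user
0 3 * * * www-data cd /var/www/site && wp core update && wp plugin update --all
```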

  • And your update schedule is actively harmful. What on earth is "every so often"? Critical remote security bugs have a short half-life, but there's already a delay between 0-day and patch availability, and some exploits are also reverse-engineered from patches (to catch the slow-pokes).

    If you're only applying updates once a month, there's a very strong possibility you'll be running exploitable software in the wild. TL;DR: Use automatic updates.
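    On Debian/Ubuntu, the unattended-upgrades package handles this; a minimal configuration looks something like:

```
# /etc/apt/apt.conf.d/20auto-upgrades
# (install first: apt-get install unattended-upgrades)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```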

  • Versions of distributions don't last forever. If you were sensible and picked a LTS version of Ubuntu, you've got 5 years from initial release. Two more LTS versions will come out within that time and that gives you options.

    If you were on a "NEWER IS BETTER" rampage and went with 16.10 when you set your server up, you've got 9 months. Yeah. Then you have to upgrade through 17.04, 17.10 before being able to relax on 18.04 LTS.

    If your version of Ubuntu lapses, you can dist-upgrade all day long, but you're not getting any security updates.

  • And the LAMP stack itself isn't the only attack vector to a standard web server.

    • You need to harden your SSH configuration: only use SSH keys, disable passwords, shunt the port around, disable root logins, monitor brute-force attempts and block them with fail2ban.
    • Firewall off any other services with ufw (et alii).
    • Never expose the database (unless you need to, and then lock down the incoming IP in the firewall).
    • Don't leave random PHP scripts installed or you will forget them and they will get hacked.
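    The SSH points above translate into sshd_config roughly like this (the port number is an arbitrary example):

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
Port 2222
```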
  • There's no monitoring in your description. You're blind. If something does get on there and starts pumping out spam, infecting your webpages, etc, how can you tell something bad happened? Process monitoring. Scheduled file comparison against git (make sure the server only has read-only access to the repo).
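    One cheap way to do the file comparison, assuming the webroot is deployed as a git clone (the function name and layout here are mine, not a standard tool):

```shell
#!/bin/sh
# Report any file in the webroot that differs from what git expects.
# Return status: 0 = clean, 2 = unexpected changes found.
check_webroot() {
    changes=$(git -C "$1" status --porcelain) || return 1
    if [ -n "$changes" ]; then
        printf 'ALERT: webroot differs from git:\n%s\n' "$changes"
        return 2
    fi
    echo "OK: webroot matches git"
}
```

    Run it from cron and mail yourself the output; any injected or modified file shows up immediately.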

  • Consider the security (physical and remote) of your ISP. Are the dime-a-dozen "hosts" (aka CPanel pirates) —sqwanching out $2/month unlimited hosting plans— investing the same resources in security as a dedicated server facility? Ask around and investigate the history of breaches.

    Note: A publicised breach isn't necessarily a bad thing. Tiny hosts tend not to have any record and when things are broken into, there aren't the public "post-mortems" that many reputable hosts and services perform.

  • And then there's you. The security of the computer you code all this stuff on is almost as important as the server. If you use the same passwords, you're a liability. Secure your SSH keys with a physical FIDO U2F key.

I've been doing devops for ~15 years and it is something you can learn on the job, but it really only takes one breach —one teenager, one bot— to ruin an entire server and cause weeks of work disinfecting everything.

Just being conscious about what's running and what is exposed helps you make better decisions about what you're doing. I just hope this helps somebody start the process of auditing their server.

But if you —the everyman average web app programmer— are unwilling to dig into this sort of stuff, should you even be running a server? That's a serious question. I'm not going to tell you you absolutely shouldn't, but what happens when you ignore all this, your server is hacked, your client loses money, and personal customer information (eg billing data) is exposed and you're sued? Are you insured for that level of loss and liability exposure?

But yeah, this is why managed services cost so much more than dumb servers.


On the virtue of backups...

A full system backup is possibly the worst thing you could keep around —for security— because you'll be tempted to use it if you get hacked. Its only place is recovering from a hardware failure.

The problem with using them after a hack is that you reset to an even earlier point in time. The flaws in your stack are even more apparent now, and even more exploits exist for the hole that got you. If you put that server back online, you could be hacked instantly. You could firewall off incoming traffic and do a package upgrade, and that might help, but at this point you still don't know what got you, or when it got you. You're basing all your assumptions off a symptom you saw (ad injection on your pages, spam being bounced in your mailq). The hack could have been months before that.

They're obviously better than nothing, and fine in the case of a disk dying, but again, they're rubbish for security.

Good backups are recipes

You want something —just a plain-language document, or something technical like an Ansible/Puppet/Chef routine— that can guide somebody through restoring the entire site to a brand-new server. Things to consider:

  • A list of packages to install
  • A list of configuration changes to make
  • How to restore the website source from version control.
  • How to restore the database dump*, and any other static files you might not version-control.
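Sketched as pseudocode, a recipe covers those four things in order (every name below is a placeholder for your own stack):

```
# Pseudocode restore recipe -- all names are placeholders
install packages:      nginx, php-fpm, mariadb-server
apply config changes:  copy versioned configs into /etc, reload services
restore source:        git clone <your-repo> /var/www/site
restore database:      load the latest *checked* dump, then any static files
```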

The more verbose you can be here, the better because this also serves as a personal backup. My clients know that if I die, they have a tested plan to restore their sites onto hardware they control directly.

A good scripted restore should take no more than 5 minutes. So even the time-delta between a scripted restore and restoring a disk image is minimal.

* Note: database dumps must be checked too. Make sure that there aren't any new admin users in your system, or random script blocks. This is as important as checking the source files or you'll just be hacked again.


Chances are good that you'll keep the server mostly secure if you run updates often (i.e. at least daily, instead of only "every so often").

But critical bugs happen from time to time, like Shellshock or ImageTragick, and an insecure server configuration can also make attacks possible. This means that you should take more actions than just running regular updates, like:

  • reduce the attack surface by running a minimal system, i.e. don't install any unnecessary software
  • reduce the attack surface by restricting any services accessible from outside, e.g. don't allow password-based SSH login (only key-based), don't run unneeded services etc
  • make sure you understand the impact of critical updates
  • expect the system to get attacked and try to reduce the impact, for example by running services which are accessible from outside inside some chroot, jail or container
  • log important events like failed logins, understand the logs and actually analyze the logs
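For instance, a systemd drop-in can confine a service without a full container (the directives are real systemd sandboxing options; the service name and path are assumptions):

```
# /etc/systemd/system/myapp.service.d/harden.conf (fragment)
[Service]
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
ReadWritePaths=/var/lib/myapp
```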

Still, the most common initial attack vector is probably an insecure web application like WordPress or another CMS. But your assumption was that the web application is fully secure, so hopefully it really is.