How do I know if my Linux server has been hacked?

Solution 1:

  1. Keep a pristine copy of critical system files (such as ls, ps, netstat, md5sum) somewhere off the system, along with their md5sum checksums, and compare them to the live versions regularly (a minimal checksum sketch follows this list). Rootkits commonly modify these files. Use these copies if you suspect the originals have been compromised.
  2. aide or tripwire will tell you about any files that have been modified - assuming their databases have not been tampered with.
  3. Configure syslog to send your logfiles to a remote log server where they can't be tampered with by an intruder (a minimal forwarding example also follows this list). Watch these remote logfiles for suspicious activity.
  4. Read your logs regularly - use logwatch or logcheck to summarize the critical information.
  5. Know your servers. Know what kinds of activities and logs are normal.
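
As a minimal sketch of point 1, assuming the baseline lives on read-only or offline media (the paths and filenames here are only examples):

    # Build a baseline of checksums while the system is known-good
    md5sum /bin/ls /bin/ps /bin/netstat /usr/bin/md5sum > /mnt/usb/baseline.md5

    # Later, verify the live binaries against the stored baseline;
    # any file reported as FAILED has changed since the baseline was taken
    md5sum -c /mnt/usb/baseline.md5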
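
For point 3, a minimal rsyslog forwarding rule, assuming rsyslog and a reachable log host (loghost.example.com is a placeholder), looks something like this:

    # /etc/rsyslog.conf (or a drop-in file under /etc/rsyslog.d/)
    # Forward everything to the remote log server over TCP (@@) or UDP (@)
    *.* @@loghost.example.com:514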

Solution 2:

You don't.

I know, I know - but it's the paranoid, sad truth, really ;) There are plenty of hints, of course, but if the system was targeted specifically it might be impossible to tell. It's good to understand that nothing is ever completely secure, but we still need to work towards being more secure, so I will point you at all the other answers instead ;)

If your system was compromised, none of your system tools can be trusted to reveal the truth.


Solution 3:

Tripwire is a commonly used tool - it notifies you when system files have changed, although obviously you need to have it installed beforehand. Otherwise, the usual signs are things like new user accounts you don't know about, weird processes and files you don't recognize, or increased bandwidth usage for no apparent reason.
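
A rough outline of a Tripwire run, purely as a sketch (exact commands and key handling vary by distribution and Tripwire variant):

    # Initialize the baseline database after installing and configuring Tripwire
    tripwire --init

    # Run periodically (e.g. from cron) and review the report of changed files
    tripwire --check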

Other monitoring systems such as Zabbix can be configured to alert you when files such as /etc/passwd are changed.
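
As a sketch of the Zabbix approach: the agent's built-in vfs.file.cksum item can watch the file, paired with a trigger that fires when the returned checksum changes (trigger expression syntax differs between Zabbix versions, so treat the commented line below as illustration only, with myhost as a placeholder):

    # Agent item: checksum of /etc/passwd, polled on an interval
    vfs.file.cksum[/etc/passwd]

    # Trigger idea (older expression syntax): fire when the checksum changes
    # {myhost:vfs.file.cksum[/etc/passwd].diff()}=1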


Solution 4:

Some things that have tipped me off in the past:

  • High load on a system that should be idle
  • Weird segfaults, e.g. from standard utilities like ls (this can happen with broken rootkits)
  • Hidden directories in / or /var/ (most script kiddies are too stupid or lazy to cover their tracks)
  • netstat shows open ports that shouldn't be there
  • Daemons in the process list for services you normally run a different flavour of (e.g. bind, when you always use djbdns) - see the quick checks after this list
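
A few quick manual checks that correspond to those signs (flags and paths vary a bit between distributions; run them as root to get complete output):

    uptime                                   # unexpected load on a box that should be idle
    ps auxww                                 # unfamiliar daemons in the process list
    netstat -tulpn                           # listening ports and the processes behind them
    ss -tulpn                                # modern replacement for netstat
    find / /var -maxdepth 2 -type d -name '.*' 2>/dev/null   # hidden directories (expect some legitimate hits like /root/.ssh)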

Additionally, I've found that there's one reliable sign that a box is compromised: if you have a bad feeling about the diligence (with updates, etc.) of the admin from whom you inherited a system, keep a close eye on it!


Solution 5:

There's a method of checking for hacked servers via kill -

Essentially, when you run "kill -0 $PID" you are asking the kernel to check on process identifier $PID without actually delivering a signal. If the process is running, the kill command will exit successfully. (FWIW, since signal 0 performs no action, nothing will happen to the process.) If the process isn't running, the kill command will fail with a non-zero exit status. Note that as a non-root user you can also get a "permission denied" failure for processes you don't own, so run these checks as root.
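
For example (1234 is just a placeholder PID):

    kill -0 1234        # check whether PID 1234 exists, without delivering a signal
    echo $?             # 0 if the process exists (and we may signal it), non-zero otherwise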

When your server is hacked and a rootkit is installed, one of the first things it does is tell the kernel to hide the affected processes from the process tables, etc. It can do all sorts of things in kernel space to muck around with processes. This means that:

a) This isn't an exhaustive check, since well-coded/intelligent rootkits will make the kernel reply that the process doesn't exist even to kill, rendering this check useless.

b) Either way, when a hacked server has a "bad" process running, its PID usually won't show up under /proc.

So, if you're still with me, the method is to kill -0 every possible PID on the system (anything from 1 to /proc/sys/kernel/pid_max) and see whether there are processes that are running but not reported in /proc.

If some processes do come up as running but are not reported in /proc, you probably have a problem any way you look at it.
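
A minimal sketch of that loop (run it as root, since kill -0 on another user's process otherwise fails with "permission denied"):

    #!/bin/bash
    # Walk every possible PID and flag processes that respond to kill -0
    # but have no corresponding /proc entry.
    max=$(cat /proc/sys/kernel/pid_max)
    for pid in $(seq 1 "$max"); do
        if kill -0 "$pid" 2>/dev/null && [ ! -d "/proc/$pid" ]; then
            echo "PID $pid answers signals but is hidden from /proc"
        fi
    done

Expect the occasional false positive from a short-lived process that exits between the two checks; repeated hits on the same PID are the interesting ones.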

Here's a more complete bash script that implements all of this - https://gist.github.com/1032229 . Save it to a file and execute it; if you find a process that comes up unreported in /proc, you have a lead to start digging into.

HTH.