Not enough disk space on '/' in AWS instance

The answer is twofold.

Workaround: use /dev/xvdb (/mnt) for temporary data

This is the so-called ephemeral storage of your Amazon EC2 instance, and its characteristics are vastly different from those of the persistent Amazon EBS storage in use elsewhere. In particular, this ephemeral storage will be lost on stop/start cycles and can generally go away at any time, so you definitely don't want to put anything of lasting value there; only put temporary data there that you can afford to lose or rebuild easily, like a swap file or strictly temporary data used during computations. Of course you might store huge indexes there, for example, but you must be prepared to rebuild these after the storage has been cleared for whatever reason (instance reboot, hardware failure, ...).
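As a minimal illustration of that rule, here is a Python sketch that directs strictly temporary data to the ephemeral volume; the /mnt mount point and the fallback logic are assumptions for illustration, not part of your setup:

```python
import os
import tempfile

# Assumption: the ephemeral (instance store) volume is mounted at /mnt,
# as is common for /dev/xvdb on EBS-backed instances.
EPHEMERAL_MOUNT = "/mnt"

# Fall back to the regular temp dir if the ephemeral mount is absent.
scratch_dir = EPHEMERAL_MOUNT if os.path.ismount(EPHEMERAL_MOUNT) else tempfile.gettempdir()

# Anything written here must be safe to lose on a stop/start cycle.
with tempfile.NamedTemporaryFile(dir=scratch_dir, prefix="scratch-") as f:
    f.write(b"intermediate results that can be rebuilt if lost")
```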

Solution: resize /dev/xvda1 (/) to gain desired storage

This is the so-called Root Device Storage of your Amazon EBS-backed EC2 instance, which builds on Amazon EBS for flexibility and durability in particular; i.e., data put there is reasonably safe and survives instance failures. You can increase flexibility and durability even further by taking regular snapshots of your EBS volume, which are stored on Amazon S3, featuring the well-known 99.999999999% durability.
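Taking such a snapshot is a one-liner with an AWS SDK; a minimal sketch using boto3 follows (the SDK choice and the volume ID are assumptions for illustration):

```python
import boto3  # AWS SDK for Python; not part of the original setup

ec2 = boto3.client("ec2")

# Hypothetical volume ID -- substitute the ID of your root EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="regular backup of the root volume",
)
print("Snapshot started:", snapshot["SnapshotId"])
```

Run on a schedule (e.g. via cron), this gives you regular restore points without stopping the instance; note that a snapshot of an in-use volume only captures data already flushed to disk.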

This snapshot feature enables you to solve your problem in turn, insofar as it lets you replace your current 8GB EBS root volume (/dev/xvda1) with one more or less as large as you desire. The process is outlined in Eric Hammond's excellent article Resizing the Root Disk on a Running EBS Boot EC2 Instance:

As long as you are ok with a little down time on the EC2 instance (few minutes), it is possible to change out the root EBS volume with a larger copy, without needing to start a new instance.

If you properly prepare the steps he describes (I highly recommend testing them with a throwaway EC2 instance first to get acquainted with the procedure, or even automating them via a tailored script), you should indeed be able to finish the process with only a few minutes of downtime.

Most of the outlined steps can be performed via the AWS Management Console as well, which avoids dealing with the Amazon EC2 API Tools; this boils down to the following (a sketch of the same sequence via the API follows the list):

  • stop (not terminate!) the EC2 instance
  • detach the EBS volume from the stopped instance
  • create a snapshot of the detached EBS volume
  • create a new (larger) EBS volume from the created snapshot
  • attach the new EBS volume to the EC2 instance (Important: if this is the root device, be sure to name it exactly like the instance's registered root device, e.g. /dev/sda1 or /dev/xvda1; otherwise it will be attached as a plain block device rather than the root device, and you will not be able to start the instance because no root device is listed for it)
  • start the EC2 instance again
  • SSH into the running instance and confirm everything is in order via df -ah
    • in case your system hasn't automatically resized the file system, you'll need to do this manually (e.g. via resize2fs for ext3/ext4) as explained in Eric's article
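For reference, here is a hedged boto3 sketch of that same sequence via the API; the instance ID, volume ID, device name, target size, and Availability Zone are placeholders, not values from your setup:

```python
import boto3

# Hypothetical identifiers -- substitute your own values.
INSTANCE_ID = "i-0123456789abcdef0"
OLD_VOLUME_ID = "vol-0123456789abcdef0"
DEVICE = "/dev/sda1"   # must match the instance's registered root device
NEW_SIZE_GB = 20
AZ = "us-east-1a"      # must match the instance's Availability Zone

ec2 = boto3.client("ec2")

# 1. Stop (not terminate!) the instance.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# 2. Detach the root EBS volume from the stopped instance.
ec2.detach_volume(VolumeId=OLD_VOLUME_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[OLD_VOLUME_ID])

# 3. Create a snapshot of the detached volume.
snap = ec2.create_snapshot(VolumeId=OLD_VOLUME_ID, Description="pre-resize snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 4. Create a new (larger) volume from the snapshot, in the same AZ.
vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"], Size=NEW_SIZE_GB, AvailabilityZone=AZ
)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# 5. Attach the new volume under the original root device name.
ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=INSTANCE_ID, Device=DEVICE)

# 6. Start the instance again.
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```

Once the instance is running again, df -ah should show the larger device; if the file system itself wasn't grown automatically, resize it manually as per Eric's article.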

Good luck!


Alternative

Given the versatility and ease of use of these EBS volumes, an additional option would be to attach more EBS volumes to your instance and move clearly separable areas of concern onto those.

For example, we are using a couple of pretty heavyweight Java applications, each consuming 1-2GB of storage per version; to ease version upgrades and to be able to move these apps to different instances at my discretion, I've placed each of them on a dedicated EBS volume, which I mount on an instance and soft-link to the desired location, e.g. usually /var/lib/<app>/<version> and /usr/local/<app>/<version> (a sketch of this mount-and-link step follows below).
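Here is a minimal Python sketch of that mount-and-link step; the device name, mount point, and link target are hypothetical placeholders, not the actual paths in use:

```python
import os
import subprocess

# Hypothetical layout -- substitute your own device and paths.
DEVICE = "/dev/xvdf"              # the attached per-app EBS volume
MOUNT_POINT = "/ebs/myapp-1.2"    # one mount point per app volume
LINK = "/usr/local/myapp/1.2"     # where the application expects to live

# Mount the volume at its dedicated mount point.
os.makedirs(MOUNT_POINT, exist_ok=True)
subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)

# Soft-link the canonical location to the volume's contents.
os.makedirs(os.path.dirname(LINK), exist_ok=True)
if not os.path.lexists(LINK):
    os.symlink(MOUNT_POINT, LINK)
```

Moving an app to another instance then boils down to unmounting the volume, detaching it, and repeating these steps elsewhere.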

With this method, we are currently running EC2 instances with the root device storage still at its default size of 8GB (just like yours), but sometimes up to 8 EBS volumes with varying sizes (1-15GB) attached as well.

You need to be aware of potential network performance issues though, insofar as all these EBS volumes perform their I/O over the very same network: spreading load across volumes might even yield performance gains, but it can also saturate your network connection in extreme cases - so, as usual, this depends on the use case and workload at hand.