So I noticed that my blog was down earlier today. My blog is a self-hosted Ghost instance running on an AWS EC2 server. After SSHing into the server, I noticed that bash completion and even editing a file in vim produced the following error:

Cannot create temp file here for document. No space left on device

My search led me to this Stack Overflow post:

Check disk space using df -h

df -h displays disk usage per filesystem, with the -h flag printing sizes in a human-readable format. I ended up seeing something like this:

Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           396M   41M  355M  11% /run
/dev/xvda1      7.8G  7.4G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           396M     0  396M   0% /run/user/1000

The root filesystem (/dev/xvda1) was at 100%, so I really was out of disk space. Here's how I resolved the problem.

List files using the most disk space with du

You can list the files and directories taking up the most space with the following command (-x stays on one filesystem, -h prints human-readable sizes, and sort -h sorts those sizes numerically):

sudo du -x -h / | sort -h | tail -40

For me, I ended up seeing a bunch of directories with "linux-aws-headers" in their names. These are leftover kernel headers from automatic updates.

266M  /usr/src/
115M  /usr/src/linux-aws-headers-4.15.0-1035
115M  /usr/src/linux-aws-headers-4.15.0-1032
19M   /usr/src/linux-headers-4.15.0-1035-aws
19M   /usr/src/linux-headers-4.15.0-1032-aws
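The du | sort -h | tail pipeline is generic, and you can convince yourself of how it behaves on a throwaway directory (no sudo needed; the paths and sizes below are made up purely for illustration):

```shell
# Sketch: create a scratch directory with one ~5M file and one ~1M file,
# then run the same pipeline from above against it.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/big" "$tmpdir/small"
head -c 5242880 /dev/zero > "$tmpdir/big/blob"    # ~5M file
head -c 1048576 /dev/zero > "$tmpdir/small/blob"  # ~1M file

# Largest entries come out last; the top-level directory (the cumulative
# total) should be the final line.
du -x -h "$tmpdir" | sort -h | tail -3

rm -rf "$tmpdir"
```

On a real system, replacing "$tmpdir" with / and adding sudo gives exactly the command used above; tail -40 just keeps more of the largest entries.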

Safely remove linux-aws-headers

Rather than deleting the directories by hand, I removed them safely through the package manager. apt-get -f install fixes any broken or half-installed dependencies first, and apt-get autoremove then removes packages that are no longer needed, including the old kernel headers:

sudo apt-get -f install
sudo apt-get autoremove

This cleared up the bulk of the space on my EC2 instance.

Clean up docker containers and images

In addition to this, I also cleared out unused Docker containers and images. Check out the docker image ls -a and docker container ls -a commands to see which images and containers you're no longer using. For me, I was able to reclaim about 1GB of space just by pruning these unused items.
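If Docker is installed on your instance, the cleanup can be sketched roughly like this (docker system prune prompts for confirmation before deleting anything):

```shell
# List every image and container, including stopped and dangling ones,
# to see what is actually worth keeping.
docker image ls -a
docker container ls -a

# Remove stopped containers, dangling images, unused networks, and
# build cache in one pass (asks for confirmation first).
docker system prune
```

Adding the -a flag to docker system prune also removes images not referenced by any container, which reclaims more space but forces re-pulls later; review the ls output before going that far.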

Hope that was helpful. Cheers!