Yes, this can happen, as I just found out – the file system (ext4 in my case) uses structures called inodes, with every file on the partition assigned to one. Each file system is created with a fixed number of inodes, set when the file system is made.
For example, my 50 GB root partition has 3.2 million inodes, meaning it can hold at most 3.2 million files. Sounds like a lot, but due to one particular program (GreyHole, subject of a future post), I ended up with ~3 million files in my /var/spool folder. Once my inode count hit the maximum, no new file could be created, and programs reported ‘Out of disk space’. But df showed me I had plenty of disk space. I figured out it was inodes I had run out of, rather than bytes, by using:

df -i
I knew to look for something that was creating a large number of small files, and once I found the folder with ~3 million files, I just had to delete them. Unfortunately, you can’t just do rm * when there are that many files – the shell expands the glob into an argument list too long for a single command – so I used this solution from Stack Overflow:
find . -maxdepth 1 -type f -print0 | xargs -0 rm
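An alternative that avoids spawning rm entirely is to let find do the unlinking itself – a sketch assuming your find supports the -delete action (GNU and BSD find both do):

```shell
# Delete regular files in the current directory only (no recursion).
# -delete unlinks each match as find encounters it, so no argument
# list is ever built and ARG_MAX is never an issue.
find . -maxdepth 1 -type f -delete
```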
And all was well 🙂
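For future reference, a quick way to find which directory is hoarding inodes is to count entries under each candidate and sort – a rough sketch, assuming the culprit is somewhere under /var (adjust the path for your system):

```shell
# Count entries (recursively) under each immediate subdirectory of /var
# and list the worst offenders first. /var is just an example path.
for d in /var/*/; do
  printf '%10d %s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```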
Side note: if your root partition fills up completely, you may be wondering how to actually perform the cleanup, since your system probably can’t boot. I’m running Fedora and found that I needed to press ‘e’ repeatedly during the boot sequence, which brought me to the GRUB bootloader. Press Escape to cancel any changes, then select your kernel and press ‘e’ again to edit it. Add a space and the word ‘single’ (no quotes) to the end of the line and press Enter, then press ‘b’ to boot that kernel into single-user mode, which is runlevel 1 (file systems mounted, but no network). This drops you at a command prompt with access to inspect and modify your file system, to free up either space or inodes – whichever you need.
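For reference, the edited kernel line ends up looking roughly like this (GRUB legacy; the version and root device here are placeholders, not from my actual system):

```
kernel /vmlinuz-<version> ro root=/dev/sda1 single
```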