Linux file system is full, but can’t find any large files? – When df and du don’t agree

df and du often disagree because they measure different things: df reports used space by reading the filesystem metadata, while du (and ncdu) report usage by walking the directory tree. Walking the whole tree is slower, but it gives a much better picture of where the data actually is. I recently came across a situation where SNMP was reporting a disk as nearly full, and sure enough df -h showed that the root filesystem was nearly full:

root@test-t3-01:~# df -h
Filesystem                          Size  Used Avail Use% Mounted on
udev                                 16G     0   16G   0% /dev
tmpfs                               3.2G   17M  3.2G   1% /run
/dev/mapper/ubuntu1404lts--vg-root  8.5G  7.5G  587M  93% /
tmpfs                                16G  472K   16G   1% /dev/shm
tmpfs                               5.0M     0  5.0M   0% /run/lock
tmpfs                                16G     0   16G   0% /sys/fs/cgroup
/dev/sda1                           236M   87M  137M  39% /boot

du, however, shows a different picture:

root@test-t3-01:~# du -Lsh /
5.4G /
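One caveat worth noting (not something that changes the outcome here): du descends into other mounted filesystems by default, so the figure above also includes /boot and the tmpfs mounts. In this case those account for only around 100M, so they do not explain the gap, but if anything large is mounted under / it is worth repeating the measurement with -x (--one-file-system) so that du only counts the root filesystem:

root@test-t3-01:~# du -xsh /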

So df thinks 7.5G is in use while du can only account for 5.4G. Where is the missing 2.1G?

Initially I thought this could be due to hidden files or areas that du could not read, but it turned out to be something much simpler: a file had been deleted while a process still had it open and was writing to it. A deleted (unlinked) file no longer appears anywhere in the directory tree, so utilities like du cannot see it, but the space is not actually released until every process holding the file open has closed it. Running lsof +L1 lists open files with a link count of less than one, i.e. files that have been deleted but are still held open.

For example:

root@test-t3-01:~# lsof +L1
COMMAND  PID USER   FD TYPE DEVICE   SIZE/OFF NLINK   NODE NAME
dockerd  902 root  13r  REG  252,0 1691426449     0 266537 /var/lib/docker/containers/d3569390cd7fed1eadba67627-json.log (deleted)
dockerd  902 root  14r  REG  252,0 1691426449     0 266537 /var/lib/docker/containers/d3569390cd7fed1eadba678627-json.log (deleted)
dockerd  902 root  17w  REG  252,0 1691426449     0 266537 /var/lib/docker/containers/d3569390cd7fed1eadba678627-json.log (deleted)
mysqld   924 mysql  4u  REG  252,0          0     0 130242 /tmp/ib9FrYkL (deleted)
mysqld   924 mysql  5u  REG  252,0          0     0 132358 /tmp/ibsW1bdg (deleted)
mysqld   924 mysql  6u  REG  252,0          0     0 132359 /tmp/ibPi2p5K (deleted)
mysqld   924 mysql  7u  REG  252,0          0     0 132360 /tmp/ibuTFORK (deleted)
mysqld   924 mysql 11u  REG  252,0          0     0 132361 /tmp/ibH3DXVf (deleted)
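To get a rough idea of how much space these deleted-but-open files are holding, you can sum their sizes. This is only a sketch: it assumes the column layout shown above (DEVICE, SIZE/OFF and NODE in fields 6, 7 and 9), that SIZE/OFF is a size rather than an offset for these regular files, and it counts duplicate descriptors on the same file (like the three dockerd entries) only once:

root@test-t3-01:~# lsof +L1 | awk '/deleted/ && !seen[$6,$9]++ {sum += $7} END {printf "%.2f GiB held open by deleted files\n", sum/1024/1024/1024}'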

The solution then becomes obvious: restart the process or service that is still writing to these files (or, failing that, the server itself), and the space will be released once the last open file handle is closed.
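In the example above the mysqld temporary files are empty, so it is the deleted dockerd container log that is holding most of the space, and restarting the Docker service releases it (the exact service name depends on how Docker was installed):

root@test-t3-01:~# service docker restart

If restarting is not an option right away, the deleted file is still reachable through /proc, so as a stop-gap you can truncate it in place using the PID and the writing file descriptor from the lsof output above (here 902 and 17w). This is only sensible for something like an append-only log whose old contents you can afford to lose:

root@test-t3-01:~# truncate -s 0 /proc/902/fd/17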