# sudo btrfs fi df /mnt/disk3
Data, single: total=12.70TiB, used=12.27TiB
System, DUP: total=8.00MiB, used=1.34MiB
Metadata, DUP: total=15.00GiB, used=14.50GiB
GlobalReserve, single: total=512.00MiB, used=608.00KiB
# mkdir /mnt/disk3/tst
mkdir: cannot create directory ‘tst’: No space left on device
I suspect this is a BTRFS balancing issue, but even BTRFS’s own utility indicates there’s still SOME space left. That should certainly be enough to create a directory.
Any ideas?
In general, BTRFS’s default options for creating new volumes don’t seem to work well for disks that I intend to fill completely immediately after formatting. Are there better options for this use case? I just use
mkfs.btrfs /dev/sdd1
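(For what it’s worth, mkfs.btrfs does accept profile overrides at creation time; this is only a sketch reusing /dev/sdd1 from above with a made-up label, and trading the default DUP metadata for single saves some space at the cost of redundancy. It doesn’t by itself prevent the situation described here.)
# mkfs.btrfs -L disk3 -d single -m single /dev/sdd1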
When you create a filesystem, there is a parameter for the reserved blocks percentage (“block percent free”). It is typically 5%, meaning 5% of your partition size can only be written to by the “root” user.
You can decrease this value or just free some space. You could also try creating the files or folders as root.
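(If this were an ext filesystem, which a later reply points out is where this reserve actually lives, it could be checked and lowered roughly like this; a sketch only, assuming /dev/sdd1 were formatted as ext4:)
# tune2fs -l /dev/sdd1 | grep -i reserved
# tune2fs -m 1 /dev/sdd1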
Is there any reason this 5% number still holds true? Back in the days of 40 MB hard drives it made sense to make sure the system didn’t totally run out while root was fixing the low disk situation … but these days even 1% is still several gigabytes of space, not likely to run out that quickly.
Fragmentation, probably, but it seems arbitrary.
Are you sure that’s the case with btrfs? I know ext has that feature. My understanding is that btrfs just has a global reserve that can be used for any data in a low-space situation.
# sudo btrfs fi usage /mnt/disk3
Overall:
    Device size:                  12.73TiB
    Device allocated:             12.73TiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         12.29TiB
    Free (estimated):            449.43GiB      (min: 449.43GiB)
    Free (statfs, df):           449.43GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:12.70TiB, Used:12.26TiB (96.55%)
   /dev/sdd1      12.70TiB

Metadata,DUP: Size:15.00GiB, Used:14.49GiB (96.58%)
   /dev/sdd1      30.00GiB

System,DUP: Size:8.00MiB, Used:1.34MiB (16.80%)
   /dev/sdd1      16.00MiB

Unallocated:
   /dev/sdd1       1.00MiB
You learn something new every day. Cool info!
For me the answer is always “snapshots”, and usually because of Docker.
If you run a Docker image store on a BTRFS drive, Docker creates snapshots at various times. It never cleans them up; it has no commands that clean them up, and it means that if you delete a file it doesn’t free any space, because the snapshots keep the file alive (the paths involved are sketched below).
As a rule of thumb, you should keep your disk usage at around 60% or under.
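With the btrfs storage driver, Docker keeps those per-layer subvolumes under /var/lib/docker/btrfs/subvolumes, so you can at least see how much space is hiding there; a rough sketch, assuming the default Docker data root:
# btrfs subvolume list /var/lib/docker
# btrfs filesystem du -s /var/lib/docker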
My guess is that you have snapshots or other similar hidden data taking up space. List out your snapshots and subvolumes.
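Something along these lines would show whether snapshots or extra subvolumes exist on the original filesystem, and deleting an unneeded one frees its exclusive space; a sketch against the mount point from the original post, with a placeholder path in the delete command:
# btrfs subvolume list /mnt/disk3
# btrfs subvolume list -s /mnt/disk3
# btrfs subvolume delete /mnt/disk3/some-old-snapshot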
Also, ‘df -i’. Probably not the case here, but…
btrfs dynamically allocates inodes.
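Which is why ‘df -i’ isn’t very telling here: btrfs typically reports zero inode counts through statfs, since inodes are created on demand. Roughly what you would see (illustrative output only):
# df -i /mnt/disk3
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdd1           0     0     0     - /mnt/disk3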
It would be nice if there were some automatic solution, but after running into this issue I always run a couple of different btrfs balance passes after deleting larger files, for good measure. It took a while to figure out why Linux said there wasn’t any space left when df reported several GB available on the root partition.
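Concretely, something like the following (the usage thresholds are just examples): a filtered balance rewrites only mostly-empty data or metadata chunks and hands the reclaimed space back to the unallocated pool, so metadata can grow again.
# btrfs balance start -dusage=5 /mnt/disk3
# btrfs balance start -dusage=20 /mnt/disk3
# btrfs balance start -musage=10 /mnt/disk3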
I am surprised there isn’t an automatic mechanism to handle this, especially if it is such a frequent issue.