John P. Linderman <jpl.jpl@gmail.com> wrote:
I have several 12 TB disks scattered about my house.
5% of 12 TB is 600 GB.
At one point in history, ext2 performance was reported to suffer badly
when less than 5% of the space in an active filesystem was free. My
naive belief, probably informed by older and wiser heads around Sun,
was that once a filesystem was more than 95% full, ext2 spent a lot of
time seeking around its free lists to find single allocatable blocks,
and there were no built-in "defragmentation" programs that could
easily fix that.
Is that still a performance constraint in ext4, which has had a few
decades to work out those edge-case performance issues?
And if it's fixed, then to handle the "root needs lebensraum" argument,
perhaps the default reserved percentage should be set to 1% (or 0.1%,
which I think the tools do not currently support)?
Or perhaps it should be min(1%, 10GB) reserved for root, so the
calculation works better on both large and small filesystems?
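
If it helps make that concrete, here's a quick sketch of the
min(1%, 10GB) rule in Python. It reads the geometry of an
already-mounted filesystem with statvfs and reports the reserve it
would pick as a block count, because tune2fs also takes -r, an
absolute reserved-block count, which already allows finer granularity
than whole percentages. The mount point, the cap, and my use of GiB
rather than GB are just illustrative choices, not a proposal for the
mke2fs source:

    #!/usr/bin/env python3
    # Sketch: compute a min(1%, 10 GiB) root reserve for a mounted
    # filesystem and report it as a block count, the unit tune2fs -r
    # takes.  Mount point and cap are illustrative.
    import os
    import sys

    GIB = 1 << 30

    def reserved_blocks(mount_point, cap_bytes=10 * GIB):
        """Blocks to reserve: min(1% of the filesystem, cap_bytes)."""
        st = os.statvfs(mount_point)
        one_percent = st.f_blocks // 100        # 1% of all blocks
        cap_blocks = cap_bytes // st.f_frsize   # the cap, in blocks
        return min(one_percent, cap_blocks), st.f_frsize

    if __name__ == "__main__":
        mnt = sys.argv[1] if len(sys.argv) > 1 else "/"
        blocks, bsize = reserved_blocks(mnt)
        print(f"{mnt}: reserve {blocks} blocks of {bsize} bytes "
              f"(~{blocks * bsize / GIB:.2f} GiB)")
        # apply with: tune2fs -r <blocks> <device backing mnt>

On a 12 TB filesystem, 1% is 120 GB, so the cap wins and the reserve
drops from today's 600 GB to 10 GB.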
And on filesystems that aren't expected to hold root logfiles and the
like, should the right default now be -m 0?
How many zettabytes of storage could we make available to users in 2030
by improving this default from the 1980s now?
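
For anyone who wants to put a rough number on that for their own
machines, statvfs makes the measurement easy: f_bfree counts all free
blocks, f_bavail only the free blocks unprivileged users may use, so
the difference is the space the reserve is withholding right now. A
Linux-specific sketch (stale mounts and mount points with odd
characters are left as an exercise):

    #!/usr/bin/env python3
    # Sketch: sum the free space currently withheld by root reserves
    # across this machine's ext* filesystems.
    import os

    GIB = 1 << 30
    total = 0
    with open("/proc/mounts") as mounts:        # Linux-specific
        for line in mounts:
            dev, mnt, fstype = line.split()[:3]
            if fstype not in ("ext2", "ext3", "ext4"):
                continue
            st = os.statvfs(mnt)
            withheld = (st.f_bfree - st.f_bavail) * st.f_frsize
            total += withheld
            print(f"{mnt:20s} {withheld / GIB:8.1f} GiB withheld")
    print(f"{'total':20s} {total / GIB:8.1f} GiB")

Multiply the per-machine total by the installed base of filesystems
shipped with the 5% default and the zettabyte question doesn't seem
far-fetched.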
John