Date:      27 Jan 2002 16:36:26 -0800
From:      swear@blarg.net (Gary W. Swearingen)
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        freebsd-chat@FreeBSD.ORG
Subject:   Re: Bad disk partitioning policies (was: "Re: FreeBSD Intaller (was    "Re: ... RedHat ...")")
Message-ID:  <ffbsffnwfp.sff@localhost.localdomain>
In-Reply-To: <3C53ED01.61407A02@mindspring.com>
References:  <20020123124025.A60889@HAL9000.wox.org> <3C4F5BEE.294FDCF5@mindspring.com> <20020123223104.SM01952@there> <p0510122eb875d9456cf4@[10.0.1.3]> <15440.35155.637495.417404@guru.mired.org> <p0510123fb876493753e0@[10.0.1.3]> <15440.53202.747536.126815@guru.mired.org> <p05101242b876db6cd5d7@[10.0.1.3]> <15441.17382.77737.291074@guru.mired.org> <p05101245b8771d04e19b@[10.0.1.3]> <20020125212742.C75216@over-yonder.net> <p05101203b8788a930767@[10.0.1.14]> <gc1ygc7sfi.ygc@localhost.localdomain> <3C534C4A.35673769@mindspring.com> <0s3d0s5dos.d0s@localhost.localdomain> <3C53ED01.61407A02@mindspring.com>

Terry Lambert <tlambert2@mindspring.com> writes:

> "Gary W. Swearingen" wrote:
> 
> > Trust me.  It's not easy to understand from this thread so far, and I
> > don't expect it to be; I can go to the FFS treatise for understanding.
> > I feel bad even seeing you spend your time trying to explain reasons.
> 
> Nonsense.  If I can't explain reasons, then they are
> unsupportable (by me, at least ;^)).

But you probably can't, without rewriting much of the FFS treatise.
People are willing to trust experts when they say a certain behavior is
a result of the chosen algorithm, as long as there's a hint that the
expert has considered the issue (as you have more than hinted).  I
thank you for trying to explain the reasons, but this just isn't the
forum for it.  I don't want to seem ungrateful, but I think you should
know that much of your explanation is the sort of thing that is often
referred to by the very term you used in a physical context, "hand
waving" (maybe with flakes of "snow job" thrown in).  It's better than
nothing, but it's probably not worth the effort.  Please don't take
offense; I'm trying, as I did (less bluntly, and unsuccessfully) in my
last message, to convince you not to waste your time on incomplete
explanations of hard-to-explain reasons, especially when the only
question is how the system behaves, not why it does so.  (Thank you
for having enough of the former in the last message.)

> Relative to the size of your disk, people complain about
> very large disks for even a very small free reserve
> percentage, mostly because they grew up in an era when
> "that was a lot of space!".

It's not just that.  It's a hunch that defragmentation considerations
should have as much to do with the size of files as they do with the
amount of unused FS space.  If the former stays the same, it seems
reasonable that the free space/reserve/whatever should also stay the
same for similar defrag performance, regardless of FS size.  OK, the
hunch is wrong.

> The reality is that the algorithm needs a certain percentage
> of the space to work correctly, and if you take that away,
> then it doesn't work correctly.

People reading about -m (or not even that) need a statement at least
as blunt as that, to keep many of them from guessing that the talk of
percentages is just another obsolete rule of thumb.
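
For what it's worth, the knob itself is easy enough to poke at; a
sketch of the sort of thing I mean (the device name is made up, and
the flags are the standard newfs(8)/tunefs(8)/dumpfs(8) ones, if I
remember them right):

  newfs -m 8 /dev/ad0s1e              # set minfree to 8% at creation time
  dumpfs /dev/ad0s1e | grep minfree   # see what an existing FS is using
  tunefs -m 5 /dev/ad0s1e             # change it later (FS unmounted, or
                                      # mounted read-only)

But without a blunt statement that the algorithm needs that space,
none of this tells a reader how low is too low.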

> Really, it'd probably be a good idea to find a reasonable
> way to make swap take up disk space until you ran out, on

Interesting.

> This issue has been discussed many times before.  It's
> in the literature, and it's in the FreeBSD list archives
> dozens of times, at least.  8-).

And if it were discussed near the -m option, or an SA-level article
were referred to there, we wouldn't be doing it again.

> To address your suggestions: this would imply that you
> could get non-worst-case performance on a full disk near a
> very small free reserve selected administratively.

OK, so it will take a few more lines to explain it better.

> The real answer is that the more data on the disk above
> the optimal free reserve for the algorithm used for block
> selection, the worse the performance will be, and "worst
> case" is defined as "the last write before hitting the
> free reserve limit".  So disk performance degrades
> steadily, the fuller it gets over the optimal free reserve
> (which is ~15%, much higher than the free reserve kept on
> most disks).

So it should say that performance degrades increasingly, from
negligible at 85% full to about 3 times slower near 100% full (plus
increased permanent fragmentation of files); that this is a result of
the algorithms used; and that it is independent of FS size.  And it
would need to be complicated a bit to mention the effects of the 5%
switch and the -o option.
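
(For the record, the -o knob is tunefs(8)'s space/time optimization
preference; something like this, with a made-up device name:

  tunefs -o space /dev/ad0s1e   # favor less fragmentation over write speed
  tunefs -o time  /dev/ad0s1e   # favor write speed while free space lasts

and, as I understand it, the 5% figure is where the default flips from
"time" to "space".)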

If I understand this correctly (a bad assumption), the performance at
95% full is the same regardless of whether I reserve 10% or 1%.  Since
I don't care if the "end" of the FS is slow, the only reason I see for
picking a large -m is to avoid permanently fragmented files.  Wrong?

Again, as it is, the documentation implies that performance with a
small -m is always bad regardless of FS space remaining.
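
(Related aside: df(1) already hints at the split between "full for me"
and "full for the algorithm"; its Capacity column is figured against
the space available to ordinary users, so it reads over 100% once
root's writes dip into the reserve.  Something like

  df -h /usr    # Capacity over 100% means you're eating the -m reserve

with whatever mount point you're worried about.)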

> BTWBTW: If you screw up an important file this way, you
> can fix it by backing it up, deleting it, and restoring
> it, once the disk has dropped down to the optimal free
> reserve.  This is known as "the poor man's defragger".

mv file file.bak; cp -p file.bak file; rm file.bak   ## like this?
## (the cp lays the data back down in freshly-allocated blocks)
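
Or, being a bit more paranoid about the window where the file only
exists as the backup (my variant, not Terry's; the cmp is just
belt-and-suspenders):

cp -p file file.defrag && cmp -s file file.defrag && mv file.defrag file

The copy gets laid down in freshly-allocated blocks, and the mv is just
a rename, so the new allocation is what "file" ends up pointing at.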

Thanks again.  I've saved your ID in my PR-to-do list and if I ever get
the easier ones done and write one for -m, I'll CC it to you.




