Date:      Wed, 8 Dec 2010 17:58:09 +0100 (CET)
From:      Oliver Fromme <olli@lurza.secnetix.de>
To:        freebsd-fs@FreeBSD.ORG, pjd@FreeBSD.ORG
Subject:   Re: TRIM support for UFS?
Message-ID:  <201012081658.oB8Gw9w3010495@lurza.secnetix.de>
In-Reply-To: <20101208152136.GG1692@garage.freebsd.pl>

Pawel Jakub Dawidek wrote:
 > On Tue, Dec 07, 2010 at 04:31:14PM +0100, Oliver Fromme wrote:
 > > I've bought an OCZ Vertex2 E (120 GB SSD) and installed
 > > FreeBSD i386 stable/8 on it, using UFS (UFS2, to be exact).
 > > I've made sure that the partitions are aligned properly,
 > > and used newfs with 4k fragsize and 32k blocksize.
 > > It works very well so far.

(I should also mention that I mounted all filesystems from
the SSD with the "noatime" option, to reduce writes during
normal operation.)
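(For what it's worth, the relevant commands look roughly like this --
device name is just a placeholder, and the partition itself starts on
a 1 MiB boundary so the 4k alignment works out:

  newfs -U -b 32768 -f 4096 /dev/ada0p2

and in /etc/fstab:

  /dev/ada0p2   /   ufs   rw,noatime   1   1
)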

 > > So, my question is, are there plans to add TRIM support
 > > to UFS?  Is anyone working on it?  Or is it already there
 > > and I just overlooked it?
 > 
 > I hacked up this patch mostly for Kris and md(4) memory-backed UFS, so
 > that on file removal space can be returned to the system.

I see.

 > I think you should ask Kirk what to do about that, but I'm afraid my
 > patch can break SU - what if we TRIM, but then panic and on fsck decide
 > to actually use the block?

Oh, you're right.  That could be a problem.

Maybe it would be better to write a separate tool that
issues TRIM commands for areas of the file system that
have been unused for a while.
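Such a tool could presumably use the DIOCGDELETE ioctl on the raw
device, which is the same mechanism "newfs -E" uses to erase a
partition.  A rough, untested sketch (the device path and byte range
are just command-line arguments here; figuring out which ranges the
file system actually leaves unused is the hard part and is left out):

/*
 * trimrange.c -- rough sketch only: issue a single TRIM (BIO_DELETE)
 * request for one byte range of a disk device via DIOCGDELETE.
 *
 * usage: trimrange <device> <offset> <length>
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/disk.h>

#include <err.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        off_t arg[2];
        int fd;

        if (argc != 4)
                errx(1, "usage: %s device offset length", argv[0]);

        fd = open(argv[1], O_RDWR);
        if (fd < 0)
                err(1, "open(%s)", argv[1]);

        arg[0] = strtoll(argv[2], NULL, 0);     /* offset in bytes */
        arg[1] = strtoll(argv[3], NULL, 0);     /* length in bytes */

        if (ioctl(fd, DIOCGDELETE, arg) < 0)
                err(1, "DIOCGDELETE");

        close(fd);
        return (0);
}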

I also remember that mav@ wrote that the TRIM command is
very slow.  So, it's probably not feasible to execute it
each time some blocks are freed, because it would make the
file system much slower and nullify all advantages of the
SSD.

Just found his comment from r201139:
"I have no idea whether it is normal, but for some reason it takes 200ms 
to handle any TRIM command on this drive, that was making delete extremely 
slow. But TRIM command is able to accept long list of LBAs and the length of 
that list seems doesn't affect it's execution time. Implemented request 
clusting algorithm allowed me to rise delete rate up to reasonable numbers, 
when many parallel DELETE requests running."

 > BTW. Have you actually observed any performance degradation without
 > TRIM?

Not yet.  My SSD is still very new.  It carries only the
base system (/home is on a normal 1TB disk), so not many
writes have happened so far.  But as soon as I start doing more
write access (buildworld + installworld, updating ports
and so on), I expect that performance will degrade over
time.

I've also heard from several people on various mailing lists
that the performance of their SSD drives got worse after
some time.

That performance degradation is caused by so-called "static
wear leveling".  In order to distribute wear evenly across
all blocks, the drive has to move the contents of blocks that
are never (or only rarely) written into other blocks, so that
the original blocks can be overwritten.  If a block is known
to be unused (which is the case when the drive is new, or
after a TRIM command), there are no contents to move, so the
write operation is much faster.  I think all modern SSD
drives use static wear leveling.

Without TRIM support in the file system, a work-around is
to "newfs -E" the file system when the performance gets
too bad.  This requires a backup-restore cycle, of course,
so it's somewhat annoying.
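For example (device name made up; dump/restore is of course only one
way to do the backup/restore cycle):

  dump -0Laf /backup/root.dump /dev/ada0p2
  newfs -E -U -b 32768 -f 4096 /dev/ada0p2
  mount /dev/ada0p2 /mnt
  cd /mnt && restore -rf /backup/root.dump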

Another work-around is to leave some space unused, for
example by not using the last 20% of the SSD for any file
system.  Since those 20% are never written to, the SSD's
firmware knows they are unused and can use them for wear
leveling.  This postpones the performance degradation
somewhat, but it won't avoid it completely in the long
run.  And wasting some space is not a very satisfying
solution either.
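On a 120 GB drive, something like this would leave roughly 20% at the
end untouched (device name and numbers are just for illustration):

  gpart create -s gpt ada0
  gpart add -b 2048 -s 187500000 -t freebsd-ufs ada0   # ~96 GB in 512-byte sectors
  # the remaining ~24 GB is never partitioned or written to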

 > I have similar SSDs and from what I tested it somehow can handle
 > wear leveling internally. You can TRIM the entire disk using this simple
 > program below, newfs it and test it.

It does basically the same as "newfs -E", right?

 > Then fill it with random data, newfs it again, test it and compare
 > results.

Filling it just once will probably not have much of an
effect.  In fact, wear leveling will probably not kick
in if you just fill the whole disk, because all blocks
are used equally anyway.

The performance degradation will only start to occur
after a while (weeks or months) when some blocks are
written much more often than others.  In this situation,
(static) wear leveling kicks in and starts moving data
around in order to re-use the rarely written blocks.

Best regards
   Oliver

-- 
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606,  Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758,  Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart

FreeBSD-Dienstleistungen, -Produkte und mehr:  http://www.secnetix.de/bsd

"C is quirky, flawed, and an enormous success."
        -- Dennis M. Ritchie.


