Date:      Tue, 22 Jan 2013 18:36:41 +1100
From:      Peter Jeremy <peter@rulingia.com>
To:        Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl>
Cc:        freebsd-fs <freebsd-fs@freebsd.org>, FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: ZFS regimen: scrub, scrub, scrub and scrub again.
Message-ID:  <20130122073641.GH30633@server.rulingia.com>
In-Reply-To: <alpine.BSF.2.00.1301211201570.9447@wojtek.tensor.gdynia.pl>
References:  <CACpH0Mf6sNb8JOsTzC+WSfQRB62+Zn7VtzEnihEKmEV2aO2p+w@mail.gmail.com> <alpine.BSF.2.00.1301211201570.9447@wojtek.tensor.gdynia.pl>

On 2013-Jan-21 12:12:45 +0100, Wojciech Puchar <wojtek@wojtek.tensor.gdynia.pl> wrote:
>That's why I use properly tuned UFS, gmirror, and prefer not to use
>gstripe, but have multiple filesystems.

When I started using ZFS, I didn't fully trust it so I had a gmirrored
UFS root (including a full src tree).  Over time, I found that gmirror
plus UFS was giving me more problems than ZFS.  In particular, I was
seeing behaviour that suggested the mirrors were out of sync, even
though gmirror insisted they were in sync.  Unfortunately, there
is no way to get gmirror to verify the mirroring or to get UFS to
check correctness of data or metadata (fsck can only check metadata
consistency).  I've since moved to a ZFS root.
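
For what it's worth, the difference is easy to demonstrate from the
command line: "gmirror status" only reports whether the components are
attached, whereas ZFS can re-verify every used block on demand.  A
rough sketch, assuming a mirror named gm0 and a pool named zroot (the
names are examples, not my actual setup):

  # gmirror status gm0      # component state only; no data comparison
  # zpool scrub zroot       # re-read and checksum every used block
  # zpool status -v zroot   # per-device READ/WRITE/CKSUM error counts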

>Which is marketing, not truth. If you want bullet-proof recoverability,
>UFS beats everything I've ever seen.

I've seen the opposite.  One big difference is that ZFS is designed to
ensure it returns the data that was written to it whereas UFS just
returns the bytes it finds where it thinks it wrote your data.  One
side effect of this is that ZFS is far fussier about hardware quality
- since it checksums everything, it is likely to pick up glitches that
UFS doesn't notice.
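
And when it does pick one up, ZFS tells you instead of silently
returning bad data: the CKSUM column in "zpool status" counts blocks
whose checksums failed, and on a redundant vdev the bad copy is
rewritten from a good one automatically.  A sketch, with a
hypothetical pool name:

  # zpool status -v tank    # READ/WRITE/CKSUM counters per device,
                            # plus any files with unrecoverable errors
  # zpool clear tank        # reset the counters once the cause is fixed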

>If you want FAST crash recovery, use softupdates+journal, available in
>FreeBSD 9.

I'll admit that I haven't used SU+J, but one downside is that it
prevents the use of snapshots, which in turn prevents the (safe) use of
dump(8) (the official tool for UFS backups) on live filesystems.
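
To make the trade-off concrete: dump(8)'s -L flag copes with a mounted
filesystem by taking a UFS snapshot and dumping that, and it is exactly
that snapshot step which SU+J forbids.  A sketch (the paths are made
up):

  # dump -0Lauf /backup/usr.dump /usr   # -L: dump via a snapshot
  # tunefs -j disable /dev/ada0p2       # on an unmounted fs: give up
                                        # the journal, regain snapshots

So with SU+J you choose fast crash recovery over safe live backups.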

>> of fuss.  Even if you dislodge a drive ... so that it's missing the last
>> 'n' transactions, ZFS seems to figure this out (which I thought was extra
>> kudos).
>
>Yes, this is marketing. Practice is somewhat different, as you
>discovered yourself.

Most of the time this works as designed.  It's possible there are bugs
in the implementation.
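
For anyone curious, that recovery is visible in "zpool status": ZFS
notices the reattached disk is several transaction groups behind and
resilvers just the ranges it missed, rather than the whole device.
A sketch, with example pool and disk names:

  # zpool status tank        # the stale disk shows as DEGRADED
  # zpool online tank da2    # reattach it; only the missed txgs resilver
  # zpool status tank        # resilver progress appears on the "scan:" line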

>While RAID-Z is already a king of bad performance,

I don't believe RAID-Z is any worse than RAID5.  Do you have any actual
measurements to back up your claim?
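
If someone does want to measure it, random reads are where RAID-Z
should differ most from RAID5 in theory: since every RAID-Z block
spans all the data disks in the vdev, a small random read keeps the
whole vdev busy.  A sketch of one way to compare, using fio from
ports (all parameters are just plausible starting points):

  # fio --name=randread --directory=/tank/test --rw=randread \
        --bs=8k --size=2g --numjobs=4 --runtime=60 --time_based \
        --group_reporting

Run the same job against the UFS-on-RAID5 array and compare IOPS.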

> I assume you mean two POOLS, not 2 RAID-Z sets. If you mixed 2
>different RAID-Z pools you would spread load unevenly and make
>performance even worse.

There's no real reason why you couldn't have 2 different vdevs in the
same pool.
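
For example, a single pool striped across two raidz vdevs (the disk
names are illustrative):

  # zpool create tank raidz da0 da1 da2 raidz da3 da4 da5

or grow an existing pool with "zpool add tank raidz da6 da7 da8".  ZFS
spreads new writes across the vdevs itself, so there's nothing to
balance by hand.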

>> A full scrub of my drives weighs in at 36 hours or so.
>
>which is funny, as ZFS is marketed as doing this efficiently (like
>checking only used space).

It _does_ check only used space, but it does so in logical order rather
than physical order.  For a fragmented pool, this means random accesses.
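
You can watch this happen: "zpool status" reports the scrub rate and
an estimated completion time on the "scan:" line.  On 9.x there are
also sysctl tunables that trade scrub speed against foreground I/O; a
sketch, though the exact knob names vary between versions:

  # sysctl vfs.zfs.scrub_delay=0      # don't throttle scrub I/O
  # zpool status tank | grep scan:    # check the current scrub rate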

>Even better - use UFS.

Then you'll never know that your data has been corrupted.

>For both bullet-proof recoverability and performance.

Use ZFS.

--
Peter Jeremy
