Date:      Sun, 2 Nov 2008 16:09:24 +0100
From:      Peter Schuller <peter.schuller@infidyne.com>
To:        Andrew Snow <andrew@modulus.org>
Cc:        freebsd-fs@freebsd.org, Jeremy Chadwick <koitsu@FreeBSD.org>
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <20081102150924.GB59552@hyperion.scode.org>
In-Reply-To: <490A8D23.6030309@modulus.org>
References:  <490A782F.9060406@dannysplace.net> <20081031033208.GA21220@icarus.home.lan> <490A849C.7030009@dannysplace.net> <20081031043412.GA22289@icarus.home.lan> <490A8D23.6030309@modulus.org>

> It's probably worth playing with vfs.zfs.cache_flush_disable when using
> the hardware RAID.
>
> By default, ZFS will flush the entire hardware cache just to make sure
> the ZFS Intent Log (ZIL) has been written.
>
> This isn't so bad on a group of hard disks with small caches, but bad if
> you have 256 MB of controller write cache.
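
As an aside, a minimal sketch of poking at that knob from C via
sysctlbyname(3); whether it is settable at runtime or only as a
/boot/loader.conf tunable depends on the ZFS version in use, so treat the
write path below as an assumption:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
        /* Knob name as mentioned above. */
        const char *knob = "vfs.zfs.cache_flush_disable";
        int val, newval;
        size_t len = sizeof(val);

        /* Read the current value. */
        if (sysctlbyname(knob, &val, &len, NULL, 0) != 0) {
            perror("sysctlbyname (read)");
            return (1);
        }
        printf("%s = %d\n", knob, val);

        if (argc > 1) {
            /* Requires root; may fail if the knob is boot-time only. */
            newval = atoi(argv[1]);
            if (sysctlbyname(knob, NULL, NULL, &newval,
                sizeof(newval)) != 0) {
                perror("sysctlbyname (write)");
                return (1);
            }
            printf("%s set to %d\n", knob, newval);
        }
        return (0);
    }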

Flushing the cache to the constituent drives also has a direct impact on
latency, even without any dirty data (beyond what you just wrote) in
the cache. If you're doing anything that issues frequent fsync() calls,
you probably don't want to wait for actual persistence to disk when you
have a battery-backed cache.
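
As a rough illustration (not something from this thread), a minimal
fsync() timing loop; the file name and write size are arbitrary, and the
absolute numbers will of course depend on whether the flush stops at the
controller cache or goes all the way to the platters:

    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        const int iterations = 1000;
        char buf[512];
        struct timeval t0, t1;
        double elapsed;
        int fd, i;

        memset(buf, 0xab, sizeof(buf));
        /* Hypothetical scratch file; any file on the pool under test will do. */
        fd = open("fsync-test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return (1);
        }

        gettimeofday(&t0, NULL);
        for (i = 0; i < iterations; i++) {
            /* Each iteration is one small write followed by a sync. */
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf) ||
                fsync(fd) != 0) {
                perror("write/fsync");
                return (1);
            }
        }
        gettimeofday(&t1, NULL);

        elapsed = (t1.tv_sec - t0.tv_sec) +
            (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%d write+fsync pairs in %.3f s (%.3f ms each)\n",
            iterations, elapsed, elapsed * 1000.0 / iterations);
        close(fd);
        return (0);
    }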

In any case, why would the actual RAID controller cache be flushed,
unless someone explicitly configured it that way? I would expect a
regular BIO_FLUSH (that's all ZFS is issuing, right?) to be satisfied by
the data being contained in the controller cache, under the assumption
that the cache is battery backed and that the storage volume/controller
has not been explicitly configured not to rely on the battery for
persistence.

Please correct me if I'm wrong, but if synchronous writing to your
RAID device results in actually waiting for the underlying disks to
commit the data to the platters, that sounds like a driver/controller
policy problem rather than anything that should be fixed by
tweaking ZFS.

Or is it the case that ZFS does both a "regular" request to commit
data (which I thought was the purpose of BIO_FLUSH, even though
"FLUSH" sounds more specific) and, separately, a "flush any actual
caches no matter what" type of request that ends up bypassing
controller policy (because it is needed on stupid SATA drives or
the like)?

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller@infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey@scode.org
E-Mail: peter.schuller@infidyne.com Web: http://www.scode.org

