Date: Wed, 25 Mar 2009 10:56:13 +0100
From: Alexander Leidinger <Alexander@Leidinger.net>
To: Mark Powell <M.S.Powell@salford.ac.uk>
Cc: kevin <kevinxlinuz@163.com>, FreeBSD Current <freebsd-current@freebsd.org>, Daniel Eriksson <daniel@toomuchdata.com>
Subject: Re: Apparently spurious ZFS CRC errors (was Re: ZFS data error without reasons)
Message-ID: <20090325105613.55624rkkgf2xkr6s@webmail.leidinger.net>
In-Reply-To: <20090320152737.D641@rust.salford.ac.uk>
References: <49BD117B.2080706@163.com> <4F9C9299A10AE74E89EA580D14AA10A635E68A@royal64.emp.zapto.org> <49BE4EC1.90207@163.com> <20090320102824.W75873@rust.salford.ac.uk> <20090320152737.D641@rust.salford.ac.uk>
Quoting Mark Powell <M.S.Powell@salford.ac.uk> (from Fri, 20 Mar 2009
15:30:11 +0000 (GMT)):

> On Fri, 20 Mar 2009, Mark Powell wrote:
>
>> As this same hardware worked well with 7 for a long time, and can
>> work perfectly with 8 for several days until the errors strike,
>> this seems like some curious 8 problem?
>
> Hmmm. Perhaps I'm not being fair on 8. I just had a look at my
> loader.conf for 7 and I can see that I used to run with every
> zfs*disable turned on. I've just rebooted 8 with:
>
> vfs.zfs.cache_flush_disable: 1
> vfs.zfs.mdcomp_disable: 1
> vfs.zfs.prefetch_disable: 1
> vfs.zfs.zil_disable: 1
> hw.ata.wc: 1
>
> The same filesystem which produced errors on every scrub now reports
> no errors. I now need to find which option fixed it.
> I suspect hw.ata.wc. Is this still a known issue?

I would expect that it is the combination of cache_flush_disable and
zil_disable together with the write cache enabled. If you re-enable the
ZIL and the cache flushes, the write cache should not cause the problems
you see. You may want to have a look at
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
for a more detailed description of what those options do (and why you
should not disable those features on normal disks). I also suggest not
disabling the meta-data compression, as it seems to affect only a small
amount of the meta-data rather than all of it.

If you want to get more out of ZFS, vfs.zfs.vdev.max_pending may help
if you are using SATA (as I read the ZFS tuning guide, a high value
makes sense when you have command queueing, which we have with SCSI
drives, but not yet with SATA drives and probably not at all with PATA
drives).

Bye,
Alexander.

-- 
QOTD:	"I am not sure what this is, but an 'F' would only dignify it."

http://www.Leidinger.net    Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137
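
For reference, the advice above amounts to something like the following
/boot/loader.conf fragment. This is only a sketch: the max_pending value
and the choice to leave prefetch disabled are illustrative assumptions,
not tested recommendations.

    # keep the ATA drive write cache enabled
    hw.ata.wc="1"
    # re-enable the ZIL and cache flushes so the write cache stays safe
    vfs.zfs.zil_disable="0"
    vfs.zfs.cache_flush_disable="0"
    # keep meta-data compression enabled as well
    vfs.zfs.mdcomp_disable="0"
    # prefetch may stay disabled if it caused trouble on this machine
    vfs.zfs.prefetch_disable="1"
    # optional: per-vdev queue depth for drives with command queueing
    # (the value here is only an example)
    vfs.zfs.vdev.max_pending="35"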