Date:      Sat, 28 Jun 2008 01:17:15 -0300
From:      JoaoBR <joao@matik.com.br>
To:        freebsd-stable@freebsd.org
Cc:        Torfinn Ingolfsen <torfinn.ingolfsen@broadpark.no>, Jeremy Chadwick <koitsu@freebsd.org>, freebsd-bugs@freebsd.org, Greg Byshenk <freebsd@byshenk.net>
Subject:   Re: possible zfs bug? lost all pools
Message-ID:  <200806280117.16057.joao@matik.com.br>
In-Reply-To: <20080518153911.GA22300@eos.sc1.parodius.com>
References:  <200805180956.18211.joao@matik.com.br> <200805181220.33599.joao@matik.com.br> <20080518153911.GA22300@eos.sc1.parodius.com>

On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:

...


>>> and if necessary /etc/rc.d/zfs should start hostid or at least set
>>> REQUIRE different and warn

...

>>
>> I've been in the same boat you are, and I was told the same thing.  I've
>> documented the situation on my Wiki, and the necessary workarounds.
>>
>> http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issue

> so I changed the rcorder as you can see in the attached files

> http://suporte.matik.com.br/jm/zfs.rcfiles.tar.gz
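
The idea in those files is simply to make /etc/rc.d/zfs wait for the
hostid script; roughly as sketched below - the exact header is an
illustration, not a quote from the tarball or from the stock file:

  # rcorder header of /etc/rc.d/zfs (illustrative sketch)
  # PROVIDE: zfs
  # REQUIRE: hostid mountcritlocal   <- "hostid" added so a hostid is set before pool import

  # verify the resulting boot order:
  rcorder /etc/rc.d/* | grep -E '/(hostid|zfs)$'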


I'm coming back to this because I am more convinced by ZFS every day, and
I would like to express my gratitude not only to those who made ZFS but
also, and especially, to the people who brought it to FreeBSD - and: thank
you guys for making it public, this is really a step forward!

My ZFS-related rc file changes (above) made my problems go away, and I
would like to share some other experience here.

As explained on Jeremy's page, I had similar problems with ZFS, but it
seems I could get around them by setting the vm.kmem_size* tunables
(depending on the machine's load) to either 500, 1000 or 1500k ... but the
main problem on FreeBSD seems to be the ZFS recordsize: on UFS-like
partitions I set it to 64k and I never got panics any more, even with
several zpools (which is said to be dangerous). cache_dirs for Squid or
MySQL partitions might need lower values to reach their new and
impressive peaks.
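
These are boot-time tunables, so they go into /boot/loader.conf; a minimal
sketch only - the sizes below are examples, to be picked per machine and
load:

  # /boot/loader.conf -- example values only, tune per machine and load
  vm.kmem_size="1024M"
  vm.kmem_size_max="1024M"
  # the ARC is often capped alongside the kmem tunables
  vfs.zfs.arc_max="512M"

After a reboot, sysctl vm.kmem_size shows what the kernel actually ended
up with.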

This even seems to solve panics when copying large files from NFS or UFS
to or from ZFS ...

So it seems that FreeBSD does not like recordsize > 64k ...
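
Concretely, the property is set per filesystem, and it only affects files
written after the change, so it should be set before copying data in; the
pool and dataset names below are just examples:

  zfs set recordsize=64K tank/data
  zfs get recordsize tank/data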

I now have a mail server that has been running for almost two months with
N ZFS volumes (one per user) to simulate quotas (roughly 1000 users), with
success, completely stable, and performance is outstanding under all loads.
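
The per-user setup is nothing fancy, roughly like the sketch below; names
and sizes are examples only:

  # one filesystem per user, capped with a dataset quota
  zfs create tank/mail
  zfs create -o quota=200M tank/mail/joao
  zfs list -r tank/mail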

The web server (Apache/PHP/MySQL) gave major stability problems at first,
but distributing things across zpools with different recordsizes depending
on the workload, and never >64k, solved my problems and I am apparently
panic-free now.
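
The same recordsize idea, split per workload; the exact values below are
only starting points, the real rule is simply never above 64k, and lower
for small-block workloads like MySQL or a Squid cache_dir:

  zfs create -o recordsize=64K tank/www
  zfs create -o recordsize=16K tank/mysql
  zfs create -o recordsize=32K tank/squid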

I run almost SCSI-only; only my test machines are SATA. The lowest
configuration is an X2 with 4G, the rest are X4s or Opterons with 8G or
more, and I am extremely satisfied and happy with ZFS.

My backups are running twice as fast as on UFS, mirroring compared to
gmirror is incredibly fast, and the ZFS snapshot feature deserves an
Oscar! ... and zfs send|receive another.
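
For anyone who has not tried it yet, a minimal sketch of the
snapshot/replication part, with made-up pool, dataset and host names:

  # snapshot a filesystem and replicate it to another pool or host
  zfs snapshot tank/mail@backup-20080628
  zfs send tank/mail@backup-20080628 | ssh backuphost zfs receive backup/mail

For subsequent runs, zfs send -i sends only the delta between two
snapshots.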

So thank you to all who had a hand in ZFS! (Sometimes I press reset on my
home server just to see how fast it comes back up) .. just kidding, but
the truth is: thanks again! ZFS is thE fs.


-- 

João






