Date: Thu, 5 Jun 2008 08:27:28 +0200
From: Pawel Jakub Dawidek <pjd@FreeBSD.org>
To: Tz-Huan Huang <tzhuan@csie.org>
Cc: Dag-Erling Smørgrav <des@des.no>, freebsd-hackers@freebsd.org
Subject: Re: Is there any way to increase the KVM?
Message-ID: <20080605062728.GA4278@garage.freebsd.pl>
In-Reply-To: <6a7033710806041053g4a5c2fdftd7202b708bff363c@mail.gmail.com>
References: <6a7033710805302252v43a7b240x66ca3f5e3dd5fda4@mail.gmail.com>
 <20080603135308.GC3434@garage.freebsd.pl>
 <6a7033710806032317g4dbe8845h26a1196016b9c440@mail.gmail.com>
 <86zlq140x0.fsf@ds4.des.no>
 <6a7033710806041053g4a5c2fdftd7202b708bff363c@mail.gmail.com>
On Thu, Jun 05, 2008 at 01:53:37AM +0800, Tz-Huan Huang wrote:
> On Thu, Jun 5, 2008 at 12:31 AM, Dag-Erling Smørgrav <des@des.no> wrote:
> > "Tz-Huan Huang" <tzhuan@csie.org> writes:
> >> The vfs.zfs.arc_max was set to 512M originally, the machine survived for
> >> 4 days and panicked this morning. Now the vfs.zfs.arc_max is set to 64M
> >> by Oliver's suggestion, let's see how long it will survive. :-)
> >
> > des@ds4 ~% uname -a
> > FreeBSD ds4.des.no 8.0-CURRENT FreeBSD 8.0-CURRENT #27: Sat Feb 23 01:24:32 CET 2008 des@ds4.des.no:/usr/obj/usr/src/sys/ds4  amd64
> > des@ds4 ~% sysctl -h vm.kmem_size_min vm.kmem_size_max vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max
> > vm.kmem_size_min: 1,073,741,824
> > vm.kmem_size_max: 1,073,741,824
> > vm.kmem_size: 1,073,741,824
> > vfs.zfs.arc_min: 67,108,864
> > vfs.zfs.arc_max: 536,870,912
> > des@ds4 ~% zpool list
> > NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> > raid  1.45T   435G  1.03T  29%  ONLINE  -
> > des@ds4 ~% zfs list | wc -l
> >      210
> >
> > Haven't had a single panic in over six months.
>
> Thanks for your information. The major difference is that we
> run 7-STABLE and our ZFS pool is much bigger.

I don't think the panics are related to pool size; they are more related to the load and the characteristics of your workload.
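[Editor's note: the values being compared in this thread are boot-time loader tunables, set in /boot/loader.conf and read back with sysctl(8). A minimal sketch of the settings under discussion, using the example values from this thread (illustrative only, not a tuning recommendation):

```conf
# /boot/loader.conf -- example values from this thread, not a recommendation
vm.kmem_size="1610612736"       # kernel virtual memory map size (1.5 GB)
vm.kmem_size_max="1610612736"   # upper bound on auto-sized kmem map
vfs.zfs.arc_min="16777216"      # ZFS ARC floor (16 MB)
vfs.zfs.arc_max="67108864"      # ZFS ARC ceiling (64 MB)
```

These take effect at boot; a smaller arc_max bounds how much kernel memory the ARC can consume within the kmem map.]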
> root@cml2$ uname -a
> FreeBSD cml2.csie.ntu.edu.tw 7.0-STABLE FreeBSD 7.0-STABLE #40: Sat
> May 31 10:29:16 CST 2008
> root@cml2.csie.ntu.edu.tw:/usr/local/obj/usr/local/src/sys/CML2  amd64
> root@cml2$ sysctl -h vm.kmem_size_min vm.kmem_size_max vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max
> vm.kmem_size_min: 0
> vm.kmem_size_max: 1,610,612,736
> vm.kmem_size: 1,610,612,736
> vfs.zfs.arc_min: 16,777,216
> vfs.zfs.arc_max: 67,108,864
> root@cml2$ zpool list
> NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> sun   11.3T  9.03T  2.30T  79%  ONLINE  -
> root@cml2$ zfs list | wc -l
>      295

If we're comparing who has bigger... :)

	beast:root:~# zpool list
	NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
	tank   732G   604G   128G  82%  ONLINE  -

but:

	beast:root:~# zfs list | wc -l
	    1932

No panics.

PS. I'm quite sure the ZFS version I have in perforce will fix most, if not all, of the 'kmem_map too small' panics. It's not yet committed, but I do want to MFC it into RELENG_7.

-- 
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!