From owner-freebsd-fs@FreeBSD.ORG Mon Aug 31 12:09:31 2009
Date: Mon, 31 Aug 2009 16:11:11 +0400
From: "Mikhail (Plus Plus)" <m@plus-plus.su>
To: Pawel Jakub Dawidek
Cc: freebsd-fs@freebsd.org
Subject: Re: need help with ZFS
Message-ID: <4A9BBDDF.4030005@plus-plus.su>
In-Reply-To: <20090829160037.GA1848@garage.freebsd.pl>

Pawel Jakub Dawidek wrote:
> I'm running your test on pretty low-end h/w (i386, 1GB of RAM, two cores)
> and cannot reproduce the problem for few hours now.
> The only tuning I did was to set vm.kmem_size to 1GB. You still need to
> do this very tuning even on amd64.

Thanks for your response. I have just opened the server case, and one
possible reason for the panics could be faulty hardware: right now I can
see one SATA controller that is not seated properly in its slot. This may
have happened in transit from the colo DC. I'm going to fix these small
hardware issues and then re-run the tests.

Below is a list of the settings you requested:

> # sysctl vm.kmem_size
vm.kmem_size: 2753769472

> # sysctl vm.kmem_size_max
vm.kmem_size_max: 329853485875

> # sysctl vfs.zfs
vfs.zfs.arc_meta_limit: 430276480
vfs.zfs.arc_meta_used: 1534208
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 215138240
vfs.zfs.arc_max: 1721105920
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 0
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0

> # zpool status
  pool: mp3pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mp3pool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad24    ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad18    ONLINE       0     0     0
            ad20    ONLINE       0     0     0
            ad22    ONLINE       0     0     0
            ad10    ONLINE       0     0     0
        spares
          ad26      AVAIL

errors: No known data errors

> # zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mp3pool  5.44T  3.54T  1.90T    65%  ONLINE  -

> # zfs get all
NAME     PROPERTY         VALUE                  SOURCE
mp3pool  type             filesystem             -
mp3pool  creation         Thu Feb 12 23:02 2009  -
mp3pool  used             2.94T                  -
mp3pool  available        1.51T                  -
mp3pool  referenced       2.94T                  -
mp3pool  compressratio    1.00x                  -
mp3pool  mounted          yes                    -
mp3pool  quota            none                   default
mp3pool  reservation      none                   default
mp3pool  recordsize       128K                   default
mp3pool  mountpoint       /mp3pool               default
mp3pool  sharenfs         off                    default
mp3pool  checksum         on                     default
mp3pool  compression      off                    default
mp3pool  atime            on                     default
mp3pool  devices          on                     default
mp3pool  exec             on                     default
mp3pool  setuid           on                     default
mp3pool  readonly         off                    default
mp3pool  jailed           off                    default
mp3pool  snapdir          hidden                 default
mp3pool  aclmode          groupmask              default
mp3pool  aclinherit       restricted             default
mp3pool  canmount         on                     default
mp3pool  shareiscsi       off                    default
mp3pool  xattr            off                    temporary
mp3pool  copies           1                      default
mp3pool  version          3                      -
mp3pool  utf8only         off                    -
mp3pool  normalization    none                   -
mp3pool  casesensitivity  sensitive              -
mp3pool  vscan            off                    default
mp3pool  nbmand           off                    default
mp3pool  sharesmb         off                    default
mp3pool  refquota         none                   default
mp3pool  refreservation   none                   default
mp3pool  primarycache     all                    default
mp3pool  secondarycache   all                    default

> And place /var/run/dmesg.boot somewhere?

http://91.206.231.132/~miha/zfs.dmesg.boot

Thanks,
Mikhail.
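P.S. Since the box has to go down anyway to reseat the controller, a sketch
of the two follow-up steps discussed in this thread. The loader values are
an assumption on my part (Pawel used 1GB on his 1GB i386 box; the right
figure depends on this machine's RAM), and the scrub is worth running
because "scrub: none requested" above means the pool has never been
verified end to end, so any damage from the loose SATA controller would
still be undetected:

```shell
# Sketch only -- tunable values are assumptions, adjust for this machine.

# 1. Pin the kernel memory ZFS may use, in /boot/loader.conf
#    (needed even on amd64 with this ZFS version, per Pawel):
#       vm.kmem_size="1536M"
#       vm.kmem_size_max="1536M"

# 2. After reseating the controller and rebooting, make ZFS read back
#    and checksum every allocated block on the pool:
zpool scrub mp3pool
zpool status mp3pool   # scrub progress; watch the READ/WRITE/CKSUM columns
```

A scrub on a ~3.5TB raidz1 will take a while, but it is the quickest way
to tell whether the panics left any on-disk damage behind.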