From owner-freebsd-fs@FreeBSD.ORG Sun May 5 01:00:34 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6B962BE0; Sun, 5 May 2013 01:00:34 +0000 (UTC) (envelope-from jase@FreeBSD.org) Received: from svr06-mx.btshosting.co.uk (mx-2.btshosting.co.uk [IPv6:2a01:4f8:121:2403:2::]) by mx1.freebsd.org (Postfix) with ESMTP id 2C5698E5; Sun, 5 May 2013 01:00:31 +0000 (UTC) Received: from [192.168.1.65] (unknown [2.222.62.37]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by svr06-mx.btshosting.co.uk (Postfix) with ESMTPSA id F02E36F649; Sun, 5 May 2013 01:00:21 +0000 (UTC) Message-ID: <5185AF20.5010308@FreeBSD.org> Date: Sun, 05 May 2013 02:00:16 +0100 From: Jase Thew Organization: The FreeBSD Project User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: Baptiste Daroussin Subject: Re: Marking some FS as jailable References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> <20130214132715.GG44004@ithaqua.etoilebsd.net> <511CF77A.2080005@FreeBSD.org> <20130214145600.GI44004@ithaqua.etoilebsd.net> <511CFBAC.3000803@FreeBSD.org> <20130214150857.GK44004@ithaqua.etoilebsd.net> In-Reply-To: <20130214150857.GK44004@ithaqua.etoilebsd.net> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: jail@FreeBSD.org, fs@FreeBSD.org, Jamie Gritton X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 05 May 2013 01:00:34 -0000 On 14/02/2013 15:08, Baptiste Daroussin wrote: > On Thu, Feb 14, 2013 at 07:58:52AM -0700, Jamie Gritton wrote: >> On 02/14/13 07:56, Baptiste Daroussin wrote: >>> On Thu, Feb 14, 2013 at 07:40:58AM -0700, Jamie Gritton wrote: >>>> On 02/14/13 06:27, Baptiste Daroussin wrote: >>>>> On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: >>>>>> On 02/12/13 12:40, Baptiste Daroussin wrote: >>>>>>> >>>>>>> I would like to mark some filesystem as jailable, here is the one I need: >>>>>>> linprocfs, tmpfs and fdescfs, I was planning to do it with adding a >>>>>>> allow.mount.${fs} for each one. >>>>>>> >>>>>>> Anyone has an objection? >>>>>> >>>>>> Would it make sense for linprocfs to use the existing allow.mount.procfs >>>>>> flag? >>>>> >>>>> Here is a patch that uses allow.mount.procfs for linsysfs and linprocfs. >>>>> >>>>> It also addd a new allow.mount.tmpfs to allow tmpfs. >>>>> >>>>> It seems to work here, can anyone confirm this is the right way to do it? >>>>> >>>>> I'll commit in 2 parts: first lin*fs, second tmpfs related things >>>>> >>>>> http://people.freebsd.org/~bapt/jail-fs.diff >>>> >>>> There are some problems. The usage on the mount side of things looks >>>> correct, but it needs more on the jail side. I'm including a patch just >>>> of that part, with a correction in jail.h and further changes in kern_jail.c >>> >>> Thank you the patch has been updated with your fixes. >> >> One more bit (literally): PR_ALLOW_ALL in sys/jail.h needs updating. >> >> - Jamie > > Fixed thanks > > Bapt > Hi, Is this functionality likely to make its way into HEAD and if so, do you have any idea as to the timescale? Regards, Jase. 
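As a concrete illustration of the proposal being discussed, here is a minimal sketch of what the per-filesystem mount permissions would look like from the host side, assuming the patch lands with the parameter names mentioned in this thread (the existing allow.mount.procfs knob also covering linprocfs/linsysfs, plus a new allow.mount.tmpfs); the jail name, path and hostname are placeholders, not taken from the thread:

    # hypothetical example: create a jail allowed to mount the jail-safe filesystems
    jail -c name=testjail path=/jails/testjail host.hostname=testjail.example.org \
        persist allow.mount enforce_statfs=1 \
        allow.mount.procfs allow.mount.tmpfs
    # inside the jail, these mounts should then succeed instead of failing with EPERM
    jexec testjail mount -t tmpfs tmpfs /tmp
    jexec testjail mount -t linprocfs linprocfs /compat/linux/proc

As with the existing mount knobs, the per-filesystem flags would only take effect when allow.mount is also set and enforce_statfs is lower than 2.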
-- Jase Thew jase@FreeBSD.org FreeBSD Ports Committer From owner-freebsd-fs@FreeBSD.ORG Sun May 5 08:50:49 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id E09DBCF0; Sun, 5 May 2013 08:50:49 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mail-wi0-x234.google.com (mail-wi0-x234.google.com [IPv6:2a00:1450:400c:c05::234]) by mx1.freebsd.org (Postfix) with ESMTP id F069F693; Sun, 5 May 2013 08:50:45 +0000 (UTC) Received: by mail-wi0-f180.google.com with SMTP id h11so1735573wiv.13 for ; Sun, 05 May 2013 01:50:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:sender:date:from:to:cc:subject:message-id:references :mime-version:content-type:content-disposition:in-reply-to :user-agent; bh=GBbOVdO2zwjuuuNgJ0bypOYXr8eZJlQZTt2xT5hmmdQ=; b=Y94BCG2DQyxYsfddsq9QHYQesYG2HiypyzGFB05ECAaaM0Wt0SlHD11AEEZJrsi3UG g0MAGx78t3WeMfH0u0yN2n/iGW6F6Z4r6IqTHu1lAjFH2MifJPT5qjBiMtyusdUp9jMg 1oxEso4XBLgtScM5+xC0NUcEWBbgfwgmQCSP7ud35FqXjbuQ8qk/ZfNrUIND9SY5dcWf er59w11DdsvY3LDGifoBF4n5lVOyfw2jI8dNIgcSNwRzt8rVSMEH3OOIGlgqq7kWBNY7 +FkfSJpQRHwwD2t7WtT5AB/4wNcGP/Ol9XffGlzlMgi8GjcwlZhVjnjd+Kzte0BoWlkS UkVA== X-Received: by 10.180.206.204 with SMTP id lq12mr4212269wic.30.1367743845147; Sun, 05 May 2013 01:50:45 -0700 (PDT) Received: from ithaqua.etoilebsd.net (ithaqua.etoilebsd.net. [37.59.37.188]) by mx.google.com with ESMTPSA id fa6sm7617223wic.9.2013.05.05.01.50.43 for (version=TLSv1 cipher=RC4-SHA bits=128/128); Sun, 05 May 2013 01:50:44 -0700 (PDT) Sender: Baptiste Daroussin Date: Sun, 5 May 2013 10:50:42 +0200 From: Baptiste Daroussin To: Jase Thew Subject: Re: Marking some FS as jailable Message-ID: <20130505085042.GA12114@ithaqua.etoilebsd.net> References: <20130212194047.GE12760@ithaqua.etoilebsd.net> <511B1F55.3080500@FreeBSD.org> <20130214132715.GG44004@ithaqua.etoilebsd.net> <511CF77A.2080005@FreeBSD.org> <20130214145600.GI44004@ithaqua.etoilebsd.net> <511CFBAC.3000803@FreeBSD.org> <20130214150857.GK44004@ithaqua.etoilebsd.net> <5185AF20.5010308@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="M9u+pkcMrQJw6us1" Content-Disposition: inline In-Reply-To: <5185AF20.5010308@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: jail@FreeBSD.org, fs@FreeBSD.org, Jamie Gritton X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 05 May 2013 08:50:50 -0000 --M9u+pkcMrQJw6us1 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sun, May 05, 2013 at 02:00:16AM +0100, Jase Thew wrote: > On 14/02/2013 15:08, Baptiste Daroussin wrote: > > On Thu, Feb 14, 2013 at 07:58:52AM -0700, Jamie Gritton wrote: > >> On 02/14/13 07:56, Baptiste Daroussin wrote: > >>> On Thu, Feb 14, 2013 at 07:40:58AM -0700, Jamie Gritton wrote: > >>>> On 02/14/13 06:27, Baptiste Daroussin wrote: > >>>>> On Tue, Feb 12, 2013 at 10:06:29PM -0700, Jamie Gritton wrote: > >>>>>> On 02/12/13 12:40, Baptiste Daroussin wrote: > >>>>>>> > >>>>>>> I would like to mark some filesystem as jailable, here is the one= I need: > >>>>>>> linprocfs, tmpfs and fdescfs, I was planning to do it with adding= a > >>>>>>> allow.mount.${fs} for each one. 
> >>>>>>> > >>>>>>> Anyone has an objection? > >>>>>> > >>>>>> Would it make sense for linprocfs to use the existing allow.mount.procfs > >>>>>> flag? > >>>>> > >>>>> Here is a patch that uses allow.mount.procfs for linsysfs and linprocfs. > >>>>> > >>>>> It also addd a new allow.mount.tmpfs to allow tmpfs. > >>>>> > >>>>> It seems to work here, can anyone confirm this is the right way to do it? > >>>>> > >>>>> I'll commit in 2 parts: first lin*fs, second tmpfs related things > >>>>> > >>>>> http://people.freebsd.org/~bapt/jail-fs.diff > >>>> > >>>> There are some problems. The usage on the mount side of things looks > >>>> correct, but it needs more on the jail side. I'm including a patch just > >>>> of that part, with a correction in jail.h and further changes in kern_jail.c > >>> > >>> Thank you the patch has been updated with your fixes. > >> > >> One more bit (literally): PR_ALLOW_ALL in sys/jail.h needs updating. > >> > >> - Jamie > > > > Fixed thanks > > > > Bapt > > > > Hi, > > Is this functionality likely to make its way into HEAD and if so, do you > have any idea as to the timescale? > > Regards, > I would love to, but I'm still waiting for a security review that no one has done yet ;( --M9u+pkcMrQJw6us1 Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlGGHWIACgkQ8kTtMUmk6Ez1ZACeJ5Uwa0vIA4iVc2u9SOWWzDN0 d4sAnA82Ma/SF2OK+OXJQZO6XzxdL7tZ =0ajI -----END PGP SIGNATURE----- --M9u+pkcMrQJw6us1-- From owner-freebsd-fs@FreeBSD.ORG Sun May 5 18:21:30 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1659440C; Sun, 5 May 2013 18:21:30 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id E40CF88E; Sun, 5 May 2013 18:21:29 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r45ILTsX024842; Sun, 5 May 2013 18:21:29 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r45ILTgp024841; Sun, 5 May 2013 18:21:29 GMT (envelope-from linimon) Date: Sun, 5 May 2013 18:21:29 GMT Message-Id: <201305051821.r45ILTgp024841@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/178349: [zfs] zfs scrub on deduped data could be much less seeky X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 05 May 2013 18:21:30 -0000 Old Synopsis: zfs scrub on deduped data could be much less seeky New Synopsis: [zfs] zfs scrub on deduped data could be much less seeky Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sun May 5 18:21:16 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=178349 From owner-freebsd-fs@FreeBSD.ORG Mon May 6 11:06:44 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E8610991 for ; Mon, 6 May 2013 11:06:44 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id CAF4A9FA for ; Mon, 6 May 2013 11:06:44 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r46B6ii6023768 for ; Mon, 6 May 2013 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r46B6iLb023766 for freebsd-fs@FreeBSD.org; Mon, 6 May 2013 11:06:44 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 6 May 2013 11:06:44 GMT Message-Id: <201305061106.r46B6iLb023766@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 May 2013 11:06:45 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/174060 fs [ext2fs] Ext2FS system crashes (buffer overflow?) 
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. 
o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o bin/161807 fs [patch] add option for explicitly specifying metadata o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o 
kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot 
normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 310 problems total. From owner-freebsd-fs@FreeBSD.ORG Wed May 8 21:32:06 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D00A27C7; Wed, 8 May 2013 21:32:06 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id AA37AE41; Wed, 8 May 2013 21:32:06 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r48LW64u092921; Wed, 8 May 2013 21:32:06 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r48LW6KC092920; Wed, 8 May 2013 21:32:06 GMT (envelope-from linimon) Date: Wed, 8 May 2013 21:32:06 GMT Message-Id: <201305082132.r48LW6KC092920@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/178412: [smbfs] Coredump when smbfs mounted X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 21:32:06 -0000 Old Synopsis: Coredump when smbfs mounted New Synopsis: [smbfs] Coredump when smbfs mounted Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed May 8 21:31:55 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=178412 From owner-freebsd-fs@FreeBSD.ORG Wed May 8 21:34:18 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 00574915; Wed, 8 May 2013 21:34:17 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id CE46DE77; Wed, 8 May 2013 21:34:17 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r48LYHbU093138; Wed, 8 May 2013 21:34:17 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r48LYHJL093137; Wed, 8 May 2013 21:34:17 GMT (envelope-from linimon) Date: Wed, 8 May 2013 21:34:17 GMT Message-Id: <201305082134.r48LYHJL093137@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/178388: [zfs] [patch] allow up to 8MB recordsize X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 21:34:18 -0000 Synopsis: [zfs] [patch] allow up to 8MB recordsize Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed May 8 21:34:09 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=178388 From owner-freebsd-fs@FreeBSD.ORG Wed May 8 21:35:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 5A7B1A0A for ; Wed, 8 May 2013 21:35:48 +0000 (UTC) (envelope-from brendan.gregg@joyent.com) Received: from mail-pb0-x22b.google.com (mail-pb0-x22b.google.com [IPv6:2607:f8b0:400e:c01::22b]) by mx1.freebsd.org (Postfix) with ESMTP id 35559E97 for ; Wed, 8 May 2013 21:35:48 +0000 (UTC) Received: by mail-pb0-f43.google.com with SMTP id md12so1516665pbc.30 for ; Wed, 08 May 2013 14:35:47 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:date:message-id:subject:from:to :content-type:x-gm-message-state; bh=EurvJxSw6ZsvaGSlrt5MT9gk9p75PRF+jbt0lysWEVU=; b=fmeNPqtrjOYKJ1qE0EE4gYRnlxqzesC4xXd3UkTtzU8CKH5CRv/QsAl702G4wIfdlW seBFvx0nLDdeevLG6qUQTpyWckGfvXCi4REtGI4VmG57EvTnayfuaCgo7OWCqJdQ9eHK g/tm79YqUcvinhB8lO63CWNH4a6NwikI27RjM/DXjN9m3QK9bw0VLHrpxoWn9sNer0m2 fKN8Pi49suOJ9/9TaR7i2jNhHYYW3MNOplZwyZYysyInP5wcDZiAcZKJwyvZkNyNt01c +RoyEqKzZ/0GMUft77Zq/LqIqd52JMxTr+RRTYof+j/MCGmNvsCL+ABIRC6WyaNiVH13 ZO2A== MIME-Version: 1.0 X-Received: by 10.68.114.100 with SMTP id jf4mr9459321pbb.144.1368048947182; Wed, 08 May 2013 14:35:47 -0700 (PDT) Received: by 10.68.66.168 with HTTP; Wed, 8 May 2013 14:35:46 -0700 (PDT) Date: Wed, 8 May 2013 14:35:46 -0700 Message-ID: Subject: Re: Strange slowdown when cache devices enabled in ZFS From: Brendan Gregg To: freebsd-fs@freebsd.org X-Gm-Message-State: ALoCoQmKfF7MThAINKT7DUeuvPf81g6IujG7dUMktQASSkHq8RHrhgRudvb0TbTln/mYXHuATgt2 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org 
X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 21:35:48 -0000 Freddie Cash wrote (Mon Apr 29 16:01:55 UTC 2013): | | The following settings in /etc/sysctl.conf prevent the "stalls" completely, | even when the L2ARC devices are 100% full and all RAM is wired into the | ARC. Been running without issues for 5 days now: | | vfs.zfs.l2arc_norw=0 # Default is 1 | vfs.zfs.l2arc_feed_again=0 # Default is 1 | vfs.zfs.l2arc_noprefetch=0 # Default is 0 | vfs.zfs.l2arc_feed_min_ms=1000 # Default is 200 | vfs.zfs.l2arc_write_boost=320000000 # Default is 8 MBps | vfs.zfs.l2arc_write_max=160000000 # Default is 8 MBps | | With these settings, I'm also able to expand the ARC to use the full 128 GB | of RAM in the biggest box, and to use both L2ARC devices (60 GB in total). | And, can set primarycache and secondarycache to all (the default) instead | of just metadata. |[...] The thread earlier described a 100% CPU-bound l2arc_feed_thread, which could be caused by these settings: vfs.zfs.l2arc_write_boost=320000000 # Default is 8 MBps vfs.zfs.l2arc_write_max=160000000 # Default is 8 MBps If I'm reading that correctly, it's increasing the write max and boost to be 160 Mbytes and 320 Mbytes. To satisfy these, the L2ARC must scan memory from the tail of the ARC lists, lists which may be composed of tiny buffers (eg, 8k). Increasing that scan 20 fold could saturate a CPU. And, if it doesn't find many bytes to write out, then it will rescan the same buffers on the next interval, wasting CPU cycles. I understand the intent was probably to warm up the L2ARC faster. There is no easy way to do this: you are bounded by the throughput of random reads from the pool disks. Random read workloads usually have a 4 - 16 Kbyte record size. The l2arc feed thread can't eat uncached data faster than the random reads can be read from disk. Therefore, at 8 Kbytes, you need at least 1,000 random read disk IOPS to achieve a rate of 8 Mbytes from the ARC list tails, which, for rotational disks performing roughly 100 random IOPS (use a different rate if you like), means about a dozen disks - depending on the ZFS RAID config. All to feed at 8 Mbytes/sec. This is why 8 Mbytes/sec (plus the boost) is the default. To feed at 160 Mbytes/sec, with an 8 Kbyte recsize, you'll need at least 20,000 random read disk IOPS. How many spindles does that take? A lot. Do you have a lot? I wanted to point this out because the warm up problem isn't the l2arc_feed_thread (that it scans, how far it scans, whether it rescans, etc) – it's the input to the system. ... I just noticed that the https://wiki.freebsd.org/ZFSTuningGuide writes: " vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost The former value sets the runtime max that data will be loaded into L2ARC. The latter can be used to accelerate the loading of a freshly booted system. For a device capable of 400MB/sec reasonable values might be 200MB and 380MB respectively. Note that the same caveats apply about these sysctls and pool imports as the previous one. Setting these values properly is the difference between an L2ARC subsystem that can take days to heat up versus one that heats up in minutes. " This advice seems a little unwise: you could tune the feed rates that high – if you have enough spindles to feed it – but I think for most people this will waste CPU cycles failing to find buffers to cache. 
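Brendan's arithmetic can be restated as a quick /bin/sh calculation; the record size and per-disk IOPS figures below are just the assumptions stated above (8 KByte records, roughly 100 random IOPS per rotational disk), not measurements from any particular pool:

    # IOPS and spindles needed to sustain a given L2ARC feed rate, assuming every
    # byte fed to the L2ARC must first arrive via random reads from the pool disks
    feed_bytes=160000000        # the raised vfs.zfs.l2arc_write_max discussed above
    recsize=8192                # 8 KByte records, typical of a random-read workload
    iops_per_disk=100           # rough figure for one rotational disk
    iops_needed=$(( feed_bytes / recsize ))
    echo "random-read IOPS needed: ${iops_needed}"                        # ~19500
    echo "spindles needed (minimum): $(( iops_needed / iops_per_disk ))"  # ~195

Running the same sum with the default 8 MByte/sec feed rate gives roughly 1,000 IOPS, i.e. the "about a dozen disks" figure above.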
Can the author please double check? Brendan --=20 Brendan Gregg, Joyent http://dtrace.org/blogs/brendan From owner-freebsd-fs@FreeBSD.ORG Wed May 8 21:45:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id BF6A9C3B for ; Wed, 8 May 2013 21:45:55 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-qe0-f49.google.com (mail-qe0-f49.google.com [209.85.128.49]) by mx1.freebsd.org (Postfix) with ESMTP id 8636DEF8 for ; Wed, 8 May 2013 21:45:55 +0000 (UTC) Received: by mail-qe0-f49.google.com with SMTP id 7so1446097qeb.22 for ; Wed, 08 May 2013 14:45:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=vFys/IiVV1h3WuDxSWL4Yh+KWOxQE80Aid5LSGg6wcg=; b=jt/n3FmAeM3vTE2+q9ueBL5ThzbgIlFuQBM3G7TRM3k/ZmQclLWIbcFve+VhO1YC02 sgLu3boYCcbQOuZU3KyWahjFLDJ9QyVELyYkcKKyH1VT1Qhv/LYYodE5iIFU5Fc/g6i+ qm5ZiNH2vJd52x7v+TbOVchpVmPMjrHCor7tRniGPo12jrp26bXsxo2z6lS0bLKltage 9PpJv/z9A8zkCX+uHgnZ3t4vi2U8/MsyHPSU61fn1Z32QNk5aYZ2bPQ9cYaBOaAYIPlO 27CRowaoVkzD0jun9z1tAEpvtSlJdoyex9qIdY4Ldzc73pDTgjdWajP2L/xd5WAaJMDi NiIA== MIME-Version: 1.0 X-Received: by 10.49.35.132 with SMTP id h4mr7329327qej.29.1368049549245; Wed, 08 May 2013 14:45:49 -0700 (PDT) Received: by 10.49.1.44 with HTTP; Wed, 8 May 2013 14:45:49 -0700 (PDT) In-Reply-To: References: Date: Wed, 8 May 2013 14:45:49 -0700 Message-ID: Subject: Re: Strange slowdown when cache devices enabled in ZFS From: Freddie Cash To: Brendan Gregg Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 21:45:55 -0000 On Wed, May 8, 2013 at 2:35 PM, Brendan Gregg wrote: > Freddie Cash wrote (Mon Apr 29 16:01:55 UTC 2013): > | > | The following settings in /etc/sysctl.conf prevent the "stalls" > completely, > | even when the L2ARC devices are 100% full and all RAM is wired into the > | ARC. Been running without issues for 5 days now: > | > | vfs.zfs.l2arc_norw=0 # Default is 1 > | vfs.zfs.l2arc_feed_again=0 # Default is 1 > | vfs.zfs.l2arc_noprefetch=0 # Default is 0 > | vfs.zfs.l2arc_feed_min_ms=1000 # Default is 200 > | vfs.zfs.l2arc_write_boost=320000000 # Default is 8 MBps > | vfs.zfs.l2arc_write_max=160000000 # Default is 8 MBps > | > | With these settings, I'm also able to expand the ARC to use the full 128 > GB > | of RAM in the biggest box, and to use both L2ARC devices (60 GB in > total). > | And, can set primarycache and secondarycache to all (the default) instead > | of just metadata. > |[...] > > The thread earlier described a 100% CPU-bound l2arc_feed_thread, which > could be caused by these settings: > > vfs.zfs.l2arc_write_boost=320000000 # Default is 8 MBps > vfs.zfs.l2arc_write_max=160000000 # Default is 8 MBps > > If I'm reading that correctly, it's increasing the write max and boost to > be 160 Mbytes and 320 Mbytes. To satisfy these, the L2ARC must scan memory > from the tail of the ARC lists, lists which may be composed of tiny buffers > (eg, 8k). Increasing that scan 20 fold could saturate a CPU. 
And, if it > doesn't find many bytes to write out, then it will rescan the same buffers > on the next interval, wasting CPU cycles. > > I understand the intent was probably to warm up the L2ARC faster. There is > no easy way to do this: you are bounded by the throughput of random reads > from the pool disks. > > Random read workloads usually have a 4 - 16 Kbyte record size. The l2arc > feed thread can't eat uncached data faster than the random reads can be > read from disk. Therefore, at 8 Kbytes, you need at least 1,000 random read > disk IOPS to achieve a rate of 8 Mbytes from the ARC list tails, which, for > rotational disks performing roughly 100 random IOPS (use a different rate > if you like), means about a dozen disks - depending on the ZFS RAID config. > All to feed at 8 Mbytes/sec. This is why 8 Mbytes/sec (plus the boost) is > the default. > > To feed at 160 Mbytes/sec, with an 8 Kbyte recsize, you'll need at least > 20,000 random read disk IOPS. How many spindles does that take? A lot. Do > you have a lot? > > 45x 2 TB SATA harddrives, configured in raidz2 vdevs of 6 disks each for a total of 7 vdevs (with a few spare disks). With 2x SSD for log+OS and 2x SSD for cache. With plans to expand that out with another 45-disk JBOD next summer-ish (2014) With the settings above, I get 120 MBps of writes to the L2ARC until each SSD is over 90% full (boot), then it settles around 5-10 MBps while receiving snapshots from the other 3 servers. I guess I could change the settings to make the _boost 100-odd MBps and leave the _max at the default. I'll play with the l2arc_write_* settings to see if that makes a difference with l2arc_norw enabled. -- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Wed May 8 21:46:52 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id AA497CB7 for ; Wed, 8 May 2013 21:46:52 +0000 (UTC) (envelope-from brendan.gregg@joyent.com) Received: from mail-pb0-x230.google.com (mail-pb0-x230.google.com [IPv6:2607:f8b0:400e:c01::230]) by mx1.freebsd.org (Postfix) with ESMTP id 879B9F03 for ; Wed, 8 May 2013 21:46:52 +0000 (UTC) Received: by mail-pb0-f48.google.com with SMTP id ma3so1507593pbc.21 for ; Wed, 08 May 2013 14:46:52 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:content-type:x-gm-message-state; bh=+an+RgKszW21Zk90dx/dKd+txnldgiNG2Xmsm/vY1KI=; b=HktRmDB4VaawhaQ94rHfxdYCFEZjUYI8fPqh3mYcVAT6tKoYbWhjY84yVC87ZGadTH mx+q4e1aYrMnd7sL68qfXzWqAQXW+wP91OS95BNU+IuEMmgfkoaqU3Zq2vQeu2ceLl5D 40zr7NETpuRYvNXIV2veQAh7dTy323bIe/2VbajAHNvDZwjkj/3T0814+0hRQI0sFBDl i+skw/Vf7i39i7OS0QwJeA9vdFWtlSuLU03vxefoaxWL/liFrVhLc5LAIanQGw8H9o8g 7VNN4HyHoxDgpIJrVjawAASkfsu8aO3KCoeJrfoLZuOSURt3PRnvixDbBxs0SmE4LFN3 6PiA== MIME-Version: 1.0 X-Received: by 10.68.106.196 with SMTP id gw4mr9517904pbb.126.1368049612258; Wed, 08 May 2013 14:46:52 -0700 (PDT) Received: by 10.68.66.168 with HTTP; Wed, 8 May 2013 14:46:52 -0700 (PDT) In-Reply-To: References: Date: Wed, 8 May 2013 14:46:52 -0700 Message-ID: Subject: Re: Strange slowdown when cache devices enabled in ZFS From: Brendan Gregg To: freebsd-fs@freebsd.org X-Gm-Message-State: ALoCoQkXUWUODwUUrwuT3V+jwUEFQ1k/CDNwu7vERxAZq8ZLFTZ2qEsf3ejaNsupyVNWElQgknPu Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 
2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 21:46:52 -0000 On Wed, May 8, 2013 at 2:35 PM, Brendan Gregg wro= te: > Freddie Cash wrote (Mon Apr 29 16:01:55 UTC 2013): > | > | The following settings in /etc/sysctl.conf prevent the "stalls" > completely, > | even when the L2ARC devices are 100% full and all RAM is wired into the > | ARC. Been running without issues for 5 days now: > | > | vfs.zfs.l2arc_norw=3D0 # Default is 1 > | vfs.zfs.l2arc_feed_again=3D0 # Default is 1 > | vfs.zfs.l2arc_noprefetch=3D0 # Default is 0 > | vfs.zfs.l2arc_feed_min_ms=3D1000 # Default is 200 > | vfs.zfs.l2arc_write_boost=3D320000000 # Default is 8 MBps > | vfs.zfs.l2arc_write_max=3D160000000 # Default is 8 MBps > | > | With these settings, I'm also able to expand the ARC to use the full 12= 8 > GB > | of RAM in the biggest box, and to use both L2ARC devices (60 GB in > total). > | And, can set primarycache and secondarycache to all (the default) inste= ad > | of just metadata. > |[...] > > The thread earlier described a 100% CPU-bound l2arc_feed_thread, which > could be caused by these settings: > > vfs.zfs.l2arc_write_boost=3D320000000 # Default is 8 MBps > vfs.zfs.l2arc_write_max=3D160000000 # Default is 8 MBps > > If I'm reading that correctly, it's increasing the write max and boost to > be 160 Mbytes and 320 Mbytes. To satisfy these, the L2ARC must scan memor= y > from the tail of the ARC lists, lists which may be composed of tiny buffe= rs > (eg, 8k). Increasing that scan 20 fold could saturate a CPU. And, if it > doesn't find many bytes to write out, then it will rescan the same buffer= s > on the next interval, wasting CPU cycles. > > I understand the intent was probably to warm up the L2ARC faster. There i= s > no easy way to do this: you are bounded by the throughput of random reads > from the pool disks. > > Random read workloads usually have a 4 - 16 Kbyte record size. The l2arc > feed thread can't eat uncached data faster than the random reads can be > read from disk. Therefore, at 8 Kbytes, you need at least 1,000 random re= ad > disk IOPS to achieve a rate of 8 Mbytes from the ARC list tails, which, f= or > rotational disks performing roughly 100 random IOPS (use a different rate > if you like), means about a dozen disks - depending on the ZFS RAID confi= g. > All to feed at 8 Mbytes/sec. This is why 8 Mbytes/sec (plus the boost) is > the default. > > To feed at 160 Mbytes/sec, with an 8 Kbyte recsize, you'll need at least > 20,000 random read disk IOPS. How many spindles does that take? A lot. Do > you have a lot? > > I wanted to point this out because the warm up problem isn't the > l2arc_feed_thread (that it scans, how far it scans, whether it rescans, > etc) =E2=80=93 it's the input to the system. > > ... > > I just noticed that the https://wiki.freebsd.org/ZFSTuningGuide writes: > > " > vfs.zfs.l2arc_write_max > > vfs.zfs.l2arc_write_boost > > The former value sets the runtime max that data will be loaded into L2ARC= . > The latter can be used to accelerate the loading of a freshly booted > system. For a device capable of 400MB/sec reasonable values might be 200M= B > and 380MB respectively. Note that the same caveats apply about these > sysctls and pool imports as the previous one. 
Setting these values proper= ly > is the difference between an L2ARC subsystem that can take days to heat u= p > versus one that heats up in minutes. > " > > This advise seems a little unwise: you could tune the feed rates that hig= h > =E2=80=93 if you have enough spindles to feed it =E2=80=93 but I think fo= r most people this > will waste CPU cycles failing to find buffers to cache. Can the author > please double check? > Sorry - just noticed that vfs.zfs.l2arc_noprefetch=3D0 was also set, and, t= he guide recommends that. What I described was for the default of 1, where only random reads feed the L2ARC. Streaming workloads can feed it much quicker, so, you can increase the feed rate if either you have a lot of spindles, or, are caching streaming workloads =E2=80=93 both providing the throughput desired. Back when the L2ARC was developed, the SSD max throughput (around 200 Mbytes/sec) could not compete with the pool disks (say, 12 x 180 Mbytes/sec), so it didn't make sense to cache sequential workloads in the L2ARC. It's another subtlety that the ZFSTuningGuide might want to explain: your pool disks might already be very good at streaming workloads =E2=80=93= better than the L2ARC =E2=80=93 and so you want to leave sequential workloads to t= hem. Brendan --=20 Brendan Gregg, Joyent http://dtrace.org/blogs/brendan From owner-freebsd-fs@FreeBSD.ORG Wed May 8 22:02:30 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 8F936FD9 for ; Wed, 8 May 2013 22:02:30 +0000 (UTC) (envelope-from brendan.gregg@joyent.com) Received: from mail-pb0-x232.google.com (mail-pb0-x232.google.com [IPv6:2607:f8b0:400e:c01::232]) by mx1.freebsd.org (Postfix) with ESMTP id 6D0F0F8C for ; Wed, 8 May 2013 22:02:30 +0000 (UTC) Received: by mail-pb0-f50.google.com with SMTP id um15so1522014pbc.9 for ; Wed, 08 May 2013 15:02:30 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type:x-gm-message-state; bh=rrIBAA//Uaq3SBYTb0M7NL1s8GRcP8DVQ4wpLf4Dslg=; b=RYpG413+S7WcZ2GpMUZVR1ajp6idrqO7RlNpz1SdfXo/9BL5FbHedAvLBs26scyEc1 MkxufggQN2ECvz0sByn1PnraMJn2S2TWnOL3QxMhf8vS7ADPR+fGP+bX2Rn6sUhg+yXO hIX14gkZ4uRLCMbobACeA24n9udBUi9pH5nMJYMRBgupCS9CCnoo9mULt1zfv0Ta9wze 2+XfFv2uXtqtjkzyc3THr2hImdEvgqNhUNAfpuQvwmbnXJ7xCUQBziqlC5LEHzlLyIKu 236oLmhRYaQ5ZNC3+ucZBoHhYbbmzERdnyWMO3EHbgpZ6xRfO0scdB4j+0c6Gygrm3C9 SnDw== MIME-Version: 1.0 X-Received: by 10.68.11.164 with SMTP id r4mr9781197pbb.15.1368050550195; Wed, 08 May 2013 15:02:30 -0700 (PDT) Received: by 10.68.66.168 with HTTP; Wed, 8 May 2013 15:02:30 -0700 (PDT) In-Reply-To: References: Date: Wed, 8 May 2013 15:02:30 -0700 Message-ID: Subject: Re: Strange slowdown when cache devices enabled in ZFS From: Brendan Gregg To: Freddie Cash X-Gm-Message-State: ALoCoQn6V9Boqk82Cudza6DzM0ygN1XRKd0GI6JkH5l5//bjp6mII8gwwgStKlvUIbyMI9G98VoR Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 22:02:30 -0000 On Wed, May 8, 2013 at 2:45 PM, Freddie Cash wrote: > On Wed, May 8, 2013 at 2:35 PM, Brendan Gregg wrote: > >> Freddie Cash wrote (Mon Apr 29 16:01:55 UTC 2013): 
>> | >> | The following settings in /etc/sysctl.conf prevent the "stalls" >> completely, >> [...] >> To feed at 160 Mbytes/sec, with an 8 Kbyte recsize, you'll need at least >> 20,000 random read disk IOPS. How many spindles does that take? A lot. Do >> you have a lot? >> >> > 45x 2 TB SATA harddrives, configured in raidz2 vdevs of 6 disks each for a > total of 7 vdevs (with a few spare disks). With 2x SSD for log+OS and 2x > SSD for cache. > What's the max random read rate? I'd expect (7 vdevs, modern disks) it to be something like 1,000. What is your recsize? (or if it is tiny files, then average size?). On the other hand, if it's caching streaming workloads, then do those 2 SSDs outperform 45 spindles? If you are getting 120 Mbytes/sec warmup, then I'm guessing it's either a 128 Kbyte recsize random reads, or sequential. Brendan > With plans to expand that out with another 45-disk JBOD next summer-ish > (2014) > > With the settings above, I get 120 MBps of writes to the L2ARC until each > SSD is over 90% full (boot), then it settles around 5-10 MBps while > receiving snapshots from the other 3 servers. > > I guess I could change the settings to make the _boost 100-odd MBps and > leave the _max at the default. I'll play with the l2arc_write_* settings > to see if that makes a difference with l2arc_norw enabled. > > -- > Freddie Cash > fjwcash@gmail.com > -- Brendan Gregg, Joyent http://dtrace.org/blogs/brendan From owner-freebsd-fs@FreeBSD.ORG Wed May 8 22:20:03 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 891FB411 for ; Wed, 8 May 2013 22:20:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 626C173 for ; Wed, 8 May 2013 22:20:03 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r48MK3am001463 for ; Wed, 8 May 2013 22:20:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r48MK3jJ001462; Wed, 8 May 2013 22:20:03 GMT (envelope-from gnats) Date: Wed, 8 May 2013 22:20:03 GMT Message-Id: <201305082220.r48MK3jJ001462@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: "Steven Hartland" Subject: Re: kern/178388: [zfs] [patch] allow up to 8MB recordsize X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Steven Hartland List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 22:20:03 -0000 The following reply was made to PR kern/178388; it has been noted by GNATS. From: "Steven Hartland" To: , Cc: Subject: Re: kern/178388: [zfs] [patch] allow up to 8MB recordsize Date: Wed, 8 May 2013 23:12:04 +0100 Seems interesting but it's really something that needs to be reviewed and submitted upstream (illumos). 
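Circling back to the L2ARC thread above: Brendan's "something like 1,000" guess for the pool's random-read rate presumably follows the usual rule of thumb that a raidz/raidz2 vdev delivers roughly one spindle's worth of random-read IOPS, so the pool described (7 raidz2 vdevs of 6 disks each) is bounded by its vdev count rather than its disk count. A sketch of that arithmetic, with the per-disk figure being an assumption rather than a measurement:

    # ballpark random-read capability of the 7 x raidz2 pool described above
    vdevs=7
    iops_per_disk=150           # optimistic 7200rpm SATA figure with some queueing
    echo "pool random-read IOPS (ballpark): $(( vdevs * iops_per_disk ))"  # ~1050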
From owner-freebsd-fs@FreeBSD.ORG Wed May 8 22:22:53 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 48A19595 for ; Wed, 8 May 2013 22:22:53 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-qe0-f41.google.com (mail-qe0-f41.google.com [209.85.128.41]) by mx1.freebsd.org (Postfix) with ESMTP id 0D27EA3 for ; Wed, 8 May 2013 22:22:52 +0000 (UTC) Received: by mail-qe0-f41.google.com with SMTP id b10so1487419qen.0 for ; Wed, 08 May 2013 15:22:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=WEDv8olqYmGWUkj197hC8zU+r070z8POHN2lRNSCFrg=; b=jwGaIKt4wzR6JkBnRvia5beU8sZ7GtSbl4ewLRfyprZgVh1s3GOcUQZmnBnbSc9TTI gUYiJx6WzrBPfqGfrR4u/Bvv1OShy+qGJWbsLAU7mJDodVaSN9uOVFzjlI7hfVWf1hil fzlkUyWBLTSukj43s6ySEwkfvlYBRopUwcl8YXmnMohMEWufcknU36EtyY1NCFyaRa0O ceOb+rZClp/RR8xed4CbKV2EX8+clQv239LbS6CWB1BGejDJJjshJIa8EDwE7OIFVllG GJj9PoMRcf90up7SVLW70bEda9EMMtdZDbYc+E2MLtdk9opXcavHgAxAEiZZxvShC7iT epxQ== MIME-Version: 1.0 X-Received: by 10.224.4.202 with SMTP id 10mr6463125qas.70.1368051766489; Wed, 08 May 2013 15:22:46 -0700 (PDT) Received: by 10.49.1.44 with HTTP; Wed, 8 May 2013 15:22:46 -0700 (PDT) In-Reply-To: References: Date: Wed, 8 May 2013 15:22:46 -0700 Message-ID: Subject: Re: Strange slowdown when cache devices enabled in ZFS From: Freddie Cash To: Brendan Gregg Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2013 22:22:53 -0000 On Wed, May 8, 2013 at 3:02 PM, Brendan Gregg wrote: > On Wed, May 8, 2013 at 2:45 PM, Freddie Cash wrote: > >> On Wed, May 8, 2013 at 2:35 PM, Brendan Gregg wrote: >> >>> Freddie Cash wrote (Mon Apr 29 16:01:55 UTC 2013): >>> | >>> | The following settings in /etc/sysctl.conf prevent the "stalls" >>> completely, >>> [...] >>> >>> To feed at 160 Mbytes/sec, with an 8 Kbyte recsize, you'll need at least >>> 20,000 random read disk IOPS. How many spindles does that take? A lot. Do >>> you have a lot? >>> >>> >> 45x 2 TB SATA harddrives, configured in raidz2 vdevs of 6 disks each for >> a total of 7 vdevs (with a few spare disks). With 2x SSD for log+OS and 2x >> SSD for cache. >> > > What's the max random read rate? I'd expect (7 vdevs, modern disks) it to > be something like 1,000. What is your recsize? (or if it is tiny files, > then average size?). > > On the other hand, if it's caching streaming workloads, then do those 2 > SSDs outperform 45 spindles? > > If you are getting 120 Mbytes/sec warmup, then I'm guessing it's either a > 128 Kbyte recsize random reads, or sequential. > > There's 128 GB of RAM in the box, arc_max set to 124 GB, arc_meta_max set to 120 GB. And 16 CPU cores (2x 8-core CPU at 2.0 GHz). Recordsize property for the pool is left at default (128 KB). LZJB compression is enabled. Dedupe is enabled. "zpool list" shows 76 TB total storage space in the pool, with 29 TB available (61% cap). "zfs list" shows just over 18 TB of actual usable space left in the pool. 
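As a rough cross-check of those numbers (assuming the layout described above, and ignoring metadata overhead): 7 raidz2 vdevs of 6 x 2 TB is 84 TB of raw space, i.e. about 76 TiB, which matches what "zpool list" reports, since "zpool list" counts raw capacity including parity. "zfs list" reports space after parity, so 29 TB of raw free space in 6-disk raidz2 vdevs works out to roughly 29 x 4/6 = ~19 TB usable, consistent with the ~18 TB shown by "zfs list".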
"zdb -DD" shows the following: DDT-sha256-zap-duplicate: 110879014 entries, size 557 on disk, 170 in core DDT-sha256-zap-unique: 259870524 entries, size 571 on disk, 181 in core DDT histogram (aggregated over all DDTs): bucket allocated referenced ______ ______________________________ ______________________________ refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE ------ ------ ----- ----- ----- ------ ----- ----- ----- 1 248M 27.2T 18.6T 19.3T 248M 27.2T 18.6T 19.3T 2 80.0M 9.07T 7.56T 7.72T 175M 19.8T 16.5T 16.9T 4 16.0M 1.80T 1.40T 1.44T 77.2M 8.67T 6.72T 6.91T 8 4.51M 498G 345G 358G 47.6M 5.13T 3.51T 3.65T 16 2.53M 293G 137G 146G 53.8M 6.09T 2.84T 3.05T 32 1.55M 119G 63.8G 71.4G 72.6M 5.07T 2.77T 3.13T 64 762K 78.7G 45.6G 49.0G 71.5M 7.45T 4.25T 4.57T 128 264K 26.3G 18.3G 19.3G 44.8M 4.49T 3.25T 3.41T 256 57.5K 4.21G 2.28G 2.58G 18.2M 1.30T 704G 805G 512 9.25K 436M 216M 277M 6.38M 299G 144G 186G 1K 2.96K 116M 56.8M 76.5M 4.10M 166G 81.4G 109G 2K 1.15K 56.9M 27.1M 34.7M 3.26M 163G 76.0G 97.6G 4K 618 16.6M 3.10M 7.65M 3.27M 85.0G 17.0G 41.5G 8K 169 7.36M 3.11M 4.25M 1.89M 81.4G 33.2G 46.4G 16K 156 3.54M 948K 2.07M 3.42M 79.9G 20.2G 45.8G 32K 317 2.11M 763K 3.05M 13.8M 91.7G 32.1G 135G 64K 15 712K 32K 160K 1.26M 53.2G 2.44G 13.0G 128K 10 13.5K 8.50K 79.9K 1.60M 2.18G 1.37G 12.8G 256K 3 1.50K 1.50K 24.0K 926K 463M 463M 7.23G Total 354M 39.0T 28.2T 29.1T 848M 86.2T 59.5T 62.3T dedup = 2.14, compress = 1.45, copies = 1.05, dedup * compress / copies = 2.96 Not sure which zdb command to use to show the average block sizes in use, though. This is the off-site replication storage server for our backups systems, aggregating data from the three main backups servers (schools, non-schools, groupware). Each of those backups servers does an rsync of a remote Linux or FreeBSD server (65, 73, 1 resp) overnight, and then does a "zfs send" to push the data to this off-site server. The issue I noticed was during the zfs recv from the other 3 boxes. Would run fine without L2ARC devices, saturating the gigabit link between them. Would run fine with L2ARC devices enabled ... until the L2ARC usage neared 100%, then the l2arc_feed_thread would hit 100% CPU usage, and there would be 0 I/O to the pool. If I limited ARC to 64 GB, it would take longer to reach the "l2arc_feed_thread @ 100%; no I/O" issue. Turning l2arc_norw off, everything works. I've been running with the sysctl.conf settings shown before without any issues for over a week now. Full 124 GB ARC, 2x 64GB cache devices, L2ARC sitting at near 100% usage, and l2arc_feed_thread never goes above 50% CPU, usually around 20%. 
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Thu May 9 12:31:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1783D872 for ; Thu, 9 May 2013 12:31:50 +0000 (UTC) (envelope-from outbackdingo@gmail.com) Received: from mail-ob0-x235.google.com (mail-ob0-x235.google.com [IPv6:2607:f8b0:4003:c01::235]) by mx1.freebsd.org (Postfix) with ESMTP id DED043C6 for ; Thu, 9 May 2013 12:31:49 +0000 (UTC) Received: by mail-ob0-f181.google.com with SMTP id ta14so2788837obb.40 for ; Thu, 09 May 2013 05:31:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:date:message-id:subject:from:to :content-type; bh=fJHI0H1ICtEs5t7soLHlTi7DAzXHyTzAhQQKoKk3oks=; b=bDvvTDFVST3TI/GjvAdgEhhAoJZEbwawvKhndi6+gknSDUPp0/CyLAqz3ibJmE4M/s 3LNL2j+HIeF5FZKsy4ieIk1DPoZUNzZvKLMpQTsFOEWk9xxNFaCR6MV4pGUoFlkyavXD 0cltM7ZYxessd0b68SST9j2jaqMJKdW3PoX0Pv/lINsYghdDIX2kHGf/rSENZFDkzGsO kBnOeghnBGZTKtyVBqyo872GwPEDT6/VX1hJeDdbf7wcNBeBpG2lGQ+6iYM1l5RlHyYO ARdi0tUL7kULKuDffdg1+9MohF9IB6UOhut5DOLqx6jv29CoL/d/jY8D0Uhl/ulTigea dy9Q== MIME-Version: 1.0 X-Received: by 10.60.121.2 with SMTP id lg2mr2689684oeb.89.1368102709567; Thu, 09 May 2013 05:31:49 -0700 (PDT) Received: by 10.76.96.49 with HTTP; Thu, 9 May 2013 05:31:49 -0700 (PDT) Date: Thu, 9 May 2013 08:31:49 -0400 Message-ID: Subject: Corrupted zpool import -f FAILS state FAULTED From: Outback Dingo To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 May 2013 12:31:50 -0000 ok zfsgurus, FreeBSD 9.1-STABLE box zpool import -f reports pool status Faulted, one of more devices contains corrupted data, however its showing the guid as faulted in the poll, and not the actual disk device /dev/daX, the pool is a single vdev 24 disk raidz3. Essentially the hardward platform is a dual node system, with 8 enclosures connected to 24 SAS drives via 4 LSI cards. I am not currently using geom_multipath, but the box is zoned so that each node can see 50% of the drives, in case of Failure, carp kicks in and migrates "zpool import -af" the pools onto the other node. it seems as though somehow the pool is now seeing guid and not devices, not sure if they have switched devices ids due to a reboot. 
From owner-freebsd-fs@FreeBSD.ORG Thu May 9 18:24:57 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 3468E7D7 for ; Thu, 9 May 2013 18:24:57 +0000 (UTC) (envelope-from scrappy@hub.org) Received: from hub.org (hub.org [200.46.208.146]) by mx1.freebsd.org (Postfix) with ESMTP id 0008EA30 for ; Thu, 9 May 2013 18:24:56 +0000 (UTC) Received: from maia.hub.org (unknown [200.46.151.188]) by hub.org (Postfix) with ESMTP id 8214B10229EB; Thu, 9 May 2013 15:24:55 -0300 (ADT) Received: from hub.org ([200.46.208.146]) by maia.hub.org (mx1.hub.org [200.46.151.188]) (amavisd-maia, port 10024) with ESMTP id 85924-10; Thu, 9 May 2013 18:24:55 +0000 (UTC) Received: from [10.5.250.150] (remote.ilcs.sd63.bc.ca [142.31.148.2]) by hub.org (Postfix) with ESMTPA id 75B6010229EA; Thu, 9 May 2013 15:24:54 -0300 (ADT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Subject: Re: NFS Performance issue against NetApp From: "Marc G. Fournier" In-Reply-To: Date: Thu, 9 May 2013 11:24:53 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <030E4A04-D597-49BD-8979-27C3EFB6D276@hub.org> References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> <20130502221857.GJ32659@physics.umn.edu> <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> To: Mark Felder X-Mailer: Apple Mail (2.1503) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 May 2013 18:24:57 -0000 FYI … I just installed Solaris 11 onto the same hardware and ran the same test … so far, I'm seeing: Linux @ ~30s Solaris @ ~44s OpenBSD @ ~200s FreeBSD @ ~240s I've even tried FreeBSD 8.3 just to see if maybe it's a 'newish' issue … same as 9.x … I could see Linux 'cutting corners', but Oracle/Solaris too … ? On 2013-05-03, at 04:50 , Mark Felder wrote: > On Thu, 02 May 2013 18:43:17 -0500, Marc G. Fournier wrote: > >> Hadn't thought to do so with Linux, but … >> Linux ……. 20732ms, 20117ms, 20935ms, 20130ms, 20560ms >> FreeBSD .. 28996ms, 24794ms, 24702ms, 23311ms, 24153ms > > Please make sure both platforms are using similar atime settings. I think most distros use ext4 with diratime by default. I'd just do noatime on both platforms to be safe.
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri May 10 03:36:34 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id BBB9270E; Fri, 10 May 2013 03:36:34 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 95E996D8; Fri, 10 May 2013 03:36:34 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id r4A3aYpP015357; Fri, 10 May 2013 03:36:34 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.7/8.14.7/Submit) id r4A3aXPD015356; Fri, 10 May 2013 03:36:33 GMT (envelope-from linimon) Date: Fri, 10 May 2013 03:36:33 GMT Message-Id: <201305100336.r4A3aXPD015356@freefall.freebsd.org> To: jkeller@bbiinternational.com, linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/178467: [request] Optimized Checksum Code for ZFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 May 2013 03:36:34 -0000 Old Synopsis: Optimized Checksum Code for ZFS New Synopsis: [request] Optimized Checksum Code for ZFS State-Changed-From-To: open->suspended State-Changed-By: linimon State-Changed-When: Fri May 10 03:35:38 UTC 2013 State-Changed-Why: assign, and note that someone will need to provide a patch. 
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Fri May 10 03:35:38 UTC 2013 Responsible-Changed-Why: http://www.freebsd.org/cgi/query-pr.cgi?pr=178467 From owner-freebsd-fs@FreeBSD.ORG Fri May 10 06:16:17 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id F3BBE125; Fri, 10 May 2013 06:16:16 +0000 (UTC) (envelope-from delphij@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id D0A25F9B; Fri, 10 May 2013 06:16:16 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id r4A6GG0o046241; Fri, 10 May 2013 06:16:16 GMT (envelope-from delphij@freefall.freebsd.org) Received: (from delphij@localhost) by freefall.freebsd.org (8.14.7/8.14.7/Submit) id r4A6GG1U046240; Fri, 10 May 2013 06:16:16 GMT (envelope-from delphij) Date: Fri, 10 May 2013 06:16:16 GMT Message-Id: <201305100616.r4A6GG1U046240@freefall.freebsd.org> To: delphij@FreeBSD.org, freebsd-fs@FreeBSD.org, zfs-devel@FreeBSD.org From: delphij@FreeBSD.org Subject: Re: kern/178467: [request] Optimized Checksum Code for ZFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 May 2013 06:16:17 -0000 Synopsis: [request] Optimized Checksum Code for ZFS Responsible-Changed-From-To: freebsd-fs->zfs-devel Responsible-Changed-By: delphij Responsible-Changed-When: Fri May 10 06:16:03 UTC 2013 Responsible-Changed-Why: Assign to zfs-devel@ http://www.freebsd.org/cgi/query-pr.cgi?pr=178467 From owner-freebsd-fs@FreeBSD.ORG Fri May 10 13:45:45 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id F0946679 for ; Fri, 10 May 2013 13:45:45 +0000 (UTC) (envelope-from c.kworr@gmail.com) Received: from mail-la0-x231.google.com (mail-la0-x231.google.com [IPv6:2a00:1450:4010:c03::231]) by mx1.freebsd.org (Postfix) with ESMTP id 79E82A76 for ; Fri, 10 May 2013 13:45:45 +0000 (UTC) Received: by mail-la0-f49.google.com with SMTP id ee20so731049lab.8 for ; Fri, 10 May 2013 06:45:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:message-id:date:from:user-agent:mime-version:to:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=ZKFNc6yNos5PKmZ7QpMgeTL/8ciXrBxP4iQfZBczE7s=; b=golrbXJx5Ul3za4QKI/A2YFmdW+Y9BS86wmOn1OhGXe9BcqMBBRjQaG9dO7Kcz6Ptj SlupMENZ0qgCne/li/v7d9HpwUF5qqCyQ1MLR2q1udsl17jdTOkJEG/X10Pnia99m3OP NElvZhnJqS+TRo0iV6pvOaYxpo/03CjLPVuMxY0VutDpaT1tKWs9FHpbzXKIaLVWwAW+ NCV0AMTtZkHfoEH9OM69ZY5EhNopduOrqcyB3s3l+Qdr7g0NZtJxDVeGolsbkxVQsXmP n0/FfRybRnfRe1cdD1qWsfbuKENKe3J1gBPdKhH8ekmmiPwmisPq3F4mlM0pu21Y8kRj oRbA== X-Received: by 10.152.1.232 with SMTP id 8mr7767652lap.33.1368193544487; Fri, 10 May 2013 06:45:44 -0700 (PDT) Received: from [192.168.1.129] (mau.donbass.com. 
[92.242.127.250]) by mx.google.com with ESMTPSA id c15sm902727lbj.17.2013.05.10.06.45.43 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 10 May 2013 06:45:43 -0700 (PDT) Message-ID: <518CFA05.6090706@gmail.com> Date: Fri, 10 May 2013 16:45:41 +0300 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:20.0) Gecko/20100101 Firefox/20.0 SeaMonkey/2.17 MIME-Version: 1.0 To: Outback Dingo , freebsd-fs@freebsd.org Subject: Re: Corrupted zpool import -f FAILS state FAULTED References: In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 May 2013 13:45:46 -0000 09.05.2013 15:31, Outback Dingo: > ok zfsgurus, FreeBSD 9.1-STABLE box zpool import -f reports pool status > Faulted, one of more devices contains corrupted data, however its showing > the guid as faulted in the poll, and not the actual disk device /dev/daX, > the pool is a single vdev 24 disk raidz3. Essentially the hardward platform > is a dual node system, with 8 enclosures connected to 24 SAS drives via 4 > LSI cards. I am not currently using geom_multipath, but the box is zoned so > that each node can see 50% of the drives, > in case of Failure, carp kicks in and migrates "zpool import -af" the pools > onto the other node. it seems as though somehow the pool is now seeing guid > and not devices, not sure if they have switched devices ids due to a reboot. Am not a zfs guru, but I'll try to help. Any console log snippets are welcome. What does "showing the guid as faulted in the pool" looks like. What are the guids for all partitions? Do they interlap for different nodes? ZFS recognizes devices by tasting they vdev labels and not by their logical location and naming. It can safely report any vdev location - but it requires the same set vdevs to bring pool online. -- Sphinx of black quartz, judge my vow. 
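As a concrete illustration of the label "tasting" described above (the device name below is only an example), the label can be dumped straight off a disk and its guids compared with what "zpool import" prints:

# zdb -l /dev/da3 | egrep 'name|guid|path'

Each label records the pool name, the pool guid and the guid of that particular vdev, plus the device path at the time the label was written, so this should show whether a given da device is one of the vdevs the pool is complaining about.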
From owner-freebsd-fs@FreeBSD.ORG Fri May 10 14:07:37 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 7FC0A26F for ; Fri, 10 May 2013 14:07:37 +0000 (UTC) (envelope-from outbackdingo@gmail.com) Received: from mail-ob0-x232.google.com (mail-ob0-x232.google.com [IPv6:2607:f8b0:4003:c01::232]) by mx1.freebsd.org (Postfix) with ESMTP id 4EC8AB8B for ; Fri, 10 May 2013 14:07:37 +0000 (UTC) Received: by mail-ob0-f178.google.com with SMTP id v19so477216obq.23 for ; Fri, 10 May 2013 07:07:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=2ex/hC6BCSEtCOqKcqsySJcpN6CT15rtqR0G8A2G434=; b=d3kvOEMg08Kc+NjofY3AZBM6OF7pHopb9jGOxS/oLYwuYW6Qs8JlBTF0a9vS1EboND xO2BCsjk9CO+qQ/ioE2tiGQRCoTtOw3eZksqBXfuCjEkzI8vw1Res1eIr32K8ih1vZRR tG80iATz+0LxxCj6QfeHshUkyY+E2LDIOJ1JYWDP/E8wJvNY4OBU4j4aJn3hA37m1ICc CuVk0qbwPPdVeK3mMGfeywi6e1yXHnTGq2slTER3Ul70jVigxFIMGBU/EWLJ4NcE+SIx Q9cBazQ02Q1+5kuSoyt5JnOs1kyjX0/UdZ8V6OaIEnjgnSWAygv+MddZYxW4aGhZViOg quQg== MIME-Version: 1.0 X-Received: by 10.60.92.201 with SMTP id co9mr6912546oeb.113.1368194856899; Fri, 10 May 2013 07:07:36 -0700 (PDT) Received: by 10.76.96.49 with HTTP; Fri, 10 May 2013 07:07:36 -0700 (PDT) In-Reply-To: <518CFA05.6090706@gmail.com> References: <518CFA05.6090706@gmail.com> Date: Fri, 10 May 2013 10:07:36 -0400 Message-ID: Subject: Re: Corrupted zpool import -f FAILS state FAULTED From: Outback Dingo To: Volodymyr Kostyrko Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 May 2013 14:07:37 -0000 On Fri, May 10, 2013 at 9:45 AM, Volodymyr Kostyrko wrote: > 09.05.2013 15:31, Outback Dingo: > > ok zfsgurus, FreeBSD 9.1-STABLE box zpool import -f reports pool status >> Faulted, one of more devices contains corrupted data, however its showing >> the guid as faulted in the poll, and not the actual disk device /dev/daX, >> the pool is a single vdev 24 disk raidz3. Essentially the hardward >> platform >> is a dual node system, with 8 enclosures connected to 24 SAS drives via 4 >> LSI cards. I am not currently using geom_multipath, but the box is zoned >> so >> that each node can see 50% of the drives, >> in case of Failure, carp kicks in and migrates "zpool import -af" the >> pools >> onto the other node. it seems as though somehow the pool is now seeing >> guid >> and not devices, not sure if they have switched devices ids due to a >> reboot. >> > > Am not a zfs guru, but I'll try to help. > > Any console log snippets are welcome. What does "showing the guid as > faulted in the pool" looks like. > > What are the guids for all partitions? Do they interlap for different > nodes? > > ZFS recognizes devices by tasting they vdev labels and not by their > logical location and naming. It can safely report any vdev location - but > it requires the same set vdevs to bring pool online. zdb shows valid data on the drives, no drives have been removed from the box whats confusing is why its using guids and not devices daX is what puzzles me camcontrol devlist and dmesg clearly show the devices are there. 
The SAS bus is shared, so both nodes, with 2 LSI controllers each, see all drives. We were utilizing a failover script: if nodeA dies, carp would kick the script to import the pool to nodeB. Both nodes are in the same chassis and see all the enclosures and all the drives > > -- > Sphinx of black quartz, judge my vow. > From owner-freebsd-fs@FreeBSD.ORG Sat May 11 00:33:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 6B107267 for ; Sat, 11 May 2013 00:33:55 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 02A5796 for ; Sat, 11 May 2013 00:33:54 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEAC+QjVGDaFvO/2dsb2JhbABSgz6DPLxkgRF0gh8BAQEDAQEBASAEJyALBRQCGBEZAgQlAQkmBggHBAEcBIdlBgyQcJshkSCNdn4ZGweCQoETA490hHaCQoEmkA+DKyAygQQ1 X-IronPort-AV: E=Sophos;i="4.87,651,1363147200"; d="scan'208";a="27618857" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu.net.uoguelph.ca with ESMTP; 10 May 2013 20:32:46 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 51CC7B3F48; Fri, 10 May 2013 20:32:46 -0400 (EDT) Date: Fri, 10 May 2013 20:32:46 -0400 (EDT) From: Rick Macklem To: "Marc G. Fournier" Message-ID: <968416157.282645.1368232366317.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <030E4A04-D597-49BD-8979-27C3EFB6D276@hub.org> Subject: Re: NFS Performance issue against NetApp MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_282644_1724368082.1368232366315" X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - FF3.0 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 00:33:55 -0000 ------=_Part_282644_1724368082.1368232366315 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Marc G. Fournier wrote: > FYI … I just installed Solaris 11 onto the same hardware and ran the > same test … so far, I'm seeing: > > Linux @ ~30s > Solaris @ ~44s > > OpenBSD @ ~200s > FreeBSD @ ~240s > > I've even tried FreeBSD 8.3 just to see if maybe it's a 'newish' issue > … same as 9.x … I could see Linux 'cutting corners', but > Oracle/Solaris too … ? > The three client implementations (BSD, Linux, Solaris) were developed independently and, as such, will all implement somewhat different caching algorithms (the RFCs specify what goes on the wire, but say little w.r.t. client side caching). I have attached a patch that might be useful for determining if the client side buffer cache consistency algorithm in FreeBSD is causing the slow startup of jboss. Do not run this patch on a production system, since it pretty well disables all buffer cache coherency (ie. if another client modifies a file, the patched client won't notice and will continue to cache stale file data). If the patch does speed up startup of jboss significantly, you can use the sysctl: vfs.nfs.noconsist to check for which coherency check is involved by decreasing the value for the sysctl by 1 and then trying a startup again.
(When vfs.nfs.noconsist=3D0, normal cache coherency will be applied.) I have no idea if buffer cache coherency is a factor, but trying the attached patch might determine if it is. Note that you have never posted updated "nfsstat -c" values. (Remember that what you posted indicated 88 RPCs, which seemed bogus.) Finding out if FreeBSD does a lot more of certain RPCs that Linux/Solaris might help isolate what is going on. rick > On 2013-05-03, at 04:50 , Mark Felder wrote: >=20 > > On Thu, 02 May 2013 18:43:17 -0500, Marc G. Fournier > > wrote: > > > >> Hadn't thought to do so with Linux, but =E2=80=A6 > >> Linux =E2=80=A6=E2=80=A6. 20732ms, 20117ms, 20935ms, 20130ms, 20560ms > >> FreeBSD .. 28996ms, 24794ms, 24702ms, 23311ms, 24153ms > > > > Please make sure both platforms are using similar atime settings. I > > think most distros use ext4 with diratime by default. I'd just do > > noatime on both platforms to be safe. > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to > > "freebsd-fs-unsubscribe@freebsd.org" >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" ------=_Part_282644_1724368082.1368232366315 Content-Type: text/x-patch; name=trynoconsist.patch Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=trynoconsist.patch LS0tIGZzL25mc2NsaWVudC9uZnNfY2x2bm9wcy5jLnNhdgkyMDEzLTA1LTEwIDE4OjMxOjAxLjAw MDAwMDAwMCAtMDQwMAorKysgZnMvbmZzY2xpZW50L25mc19jbHZub3BzLmMJMjAxMy0wNS0xMCAx OTozODo0NS4wMDAwMDAwMDAgLTA0MDAKQEAgLTI0Niw2ICsyNDYsMTAgQEAgaW50IG5mc19rZWVw X2RpcnR5X29uX2Vycm9yOwogU1lTQ1RMX0lOVChfdmZzX25mcywgT0lEX0FVVE8sIG5mc19rZWVw X2RpcnR5X29uX2Vycm9yLCBDVExGTEFHX1JXLAogICAgICZuZnNfa2VlcF9kaXJ0eV9vbl9lcnJv ciwgMCwgIlJldHJ5IHBhZ2VvdXQgaWYgZXJyb3IgcmV0dXJuZWQiKTsKIAoraW50IG5mc2NsX25v Y29uc2lzdCA9IDM7CitTWVNDVExfSU5UKF92ZnNfbmZzLCBPSURfQVVUTywgbm9jb25zaXN0LCBD VExGTEFHX1JXLAorICAgICZuZnNjbF9ub2NvbnNpc3QsIDAsICJUcnkgZGlzYWJsaW5nIGNhY2hl IGNvbnNpc3RlbmN5Iik7CisKIC8qCiAgKiBUaGlzIHN5c2N0bCBhbGxvd3Mgb3RoZXIgcHJvY2Vz c2VzIHRvIG1tYXAgYSBmaWxlIHRoYXQgaGFzIGJlZW4gb3BlbmVkCiAgKiBPX0RJUkVDVCBieSBh IHByb2Nlc3MuICBJbiBnZW5lcmFsLCBoYXZpbmcgcHJvY2Vzc2VzIG1tYXAgdGhlIGZpbGUgd2hp bGUKQEAgLTUzOCw2ICs1NDIsNyBAQCBuZnNfb3BlbihzdHJ1Y3Qgdm9wX29wZW5fYXJncyAqYXAp CiAJICovCiAJbXR4X2xvY2soJm5wLT5uX210eCk7CiAJaWYgKG5wLT5uX2ZsYWcgJiBOTU9ESUZJ RUQpIHsKKwkgICBpZiAobmZzY2xfbm9jb25zaXN0IDwgMikgewogCQltdHhfdW5sb2NrKCZucC0+ bl9tdHgpOwogCQllcnJvciA9IG5jbF92aW52YWxidWYodnAsIFZfU0FWRSwgYXAtPmFfdGQsIDEp OwogCQlpZiAoZXJyb3IgPT0gRUlOVFIgfHwgZXJyb3IgPT0gRUlPKSB7CkBAIC01NjEsNiArNTY2 LDcgQEAgbmZzX29wZW4oc3RydWN0IHZvcF9vcGVuX2FyZ3MgKmFwKQogCQlucC0+bl9tdGltZSA9 IHZhdHRyLnZhX210aW1lOwogCQlpZiAoTkZTX0lTVjQodnApKQogCQkJbnAtPm5fY2hhbmdlID0g dmF0dHIudmFfZmlsZXJldjsKKwkgICAgfQogCX0gZWxzZSB7CiAJCW10eF91bmxvY2soJm5wLT5u X210eCk7CiAJCWVycm9yID0gVk9QX0dFVEFUVFIodnAsICZ2YXR0ciwgYXAtPmFfY3JlZCk7CkBA IC01NzAsOCArNTc2LDkgQEAgbmZzX29wZW4oc3RydWN0IHZvcF9vcGVuX2FyZ3MgKmFwKQogCQkJ cmV0dXJuIChlcnJvcik7CiAJCX0KIAkJbXR4X2xvY2soJm5wLT5uX210eCk7Ci0JCWlmICgoTkZT X0lTVjQodnApICYmIG5wLT5uX2NoYW5nZSAhPSB2YXR0ci52YV9maWxlcmV2KSB8fAotCQkgICAg TkZTX1RJTUVTUEVDX0NPTVBBUkUoJm5wLT5uX210aW1lLCAmdmF0dHIudmFfbXRpbWUpKSB7CisJ CWlmICgoKE5GU19JU1Y0KHZwKSAmJiBucC0+bl9jaGFuZ2UgIT0gdmF0dHIudmFfZmlsZXJldikg 
fHwKKwkJICAgIE5GU19USU1FU1BFQ19DT01QQVJFKCZucC0+bl9tdGltZSwgJnZhdHRyLnZhX210 aW1lKSkgJiYKKwkJICAgIG5mc2NsX25vY29uc2lzdCA8IDEpIHsKIAkJCWlmICh2cC0+dl90eXBl ID09IFZESVIpCiAJCQkJbnAtPm5fZGlyZW9mb2Zmc2V0ID0gMDsKIAkJCW10eF91bmxvY2soJm5w LT5uX210eCk7Ci0tLSBmcy9uZnNjbGllbnQvbmZzX2NsYmlvLmMuc2F2CTIwMTMtMDUtMTAgMTg6 MzQ6MjQuMDAwMDAwMDAwIC0wNDAwCisrKyBmcy9uZnNjbGllbnQvbmZzX2NsYmlvLmMJMjAxMy0w NS0xMCAxOTozODo1Ny4wMDAwMDAwMDAgLTA0MDAKQEAgLTY5LDYgKzY5LDcgQEAgZXh0ZXJuIGVu dW0gbmZzaW9kX3N0YXRlIG5jbF9pb2R3YW50W05GUwogZXh0ZXJuIHN0cnVjdCBuZnNtb3VudCAq bmNsX2lvZG1vdW50W05GU19NQVhBU1lOQ0RBRU1PTl07CiBleHRlcm4gaW50IG5ld25mc19kaXJl Y3Rpb19lbmFibGU7CiBleHRlcm4gaW50IG5mc19rZWVwX2RpcnR5X29uX2Vycm9yOworZXh0ZXJu IGludCBuZnNjbF9ub2NvbnNpc3Q7CiAKIGludCBuY2xfcGJ1Zl9mcmVlY250ID0gLTE7CS8qIHN0 YXJ0IG91dCB1bmxpbWl0ZWQgKi8KIApAQCAtNDAyLDcgKzQwMyw4IEBAIG5mc19iaW9yZWFkX2No ZWNrX2NvbnMoc3RydWN0IHZub2RlICp2cCwKIAkJCXJldHVybiAoZXJyb3IpOwogCQltdHhfbG9j aygmbnAtPm5fbXR4KTsKIAkJaWYgKChucC0+bl9mbGFnICYgTlNJWkVDSEFOR0VEKQotCQkgICAg fHwgKE5GU19USU1FU1BFQ19DT01QQVJFKCZucC0+bl9tdGltZSwgJnZhdHRyLnZhX210aW1lKSkp IHsKKwkJICAgIHx8IChORlNfVElNRVNQRUNfQ09NUEFSRSgmbnAtPm5fbXRpbWUsICZ2YXR0ci52 YV9tdGltZSkgJiYKKwkJICAgIG5mc2NsX25vY29uc2lzdCA8IDMpKSB7CiAJCQltdHhfdW5sb2Nr KCZucC0+bl9tdHgpOwogCQkJaWYgKHZwLT52X3R5cGUgPT0gVkRJUikKIAkJCQluY2xfaW52YWxk aXIodnApOwo= ------=_Part_282644_1724368082.1368232366315-- From owner-freebsd-fs@FreeBSD.ORG Sat May 11 08:44:44 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 8117E662 for ; Sat, 11 May 2013 08:44:44 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id 1A40AEB2 for ; Sat, 11 May 2013 08:44:43 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1Ub5Q1-0002Lp-6y for freebsd-fs@freebsd.org; Sat, 11 May 2013 10:44:41 +0200 Received: from dhcp-077-251-158-153.chello.nl ([77.251.158.153] helo=ronaldradial) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1Ub5Pz-0007D2-Vm for freebsd-fs@freebsd.org; Sat, 11 May 2013 10:44:40 +0200 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: Corrupted zpool import -f FAILS state FAULTED References: <518CFA05.6090706@gmail.com> Date: Sat, 11 May 2013 10:44:39 +0200 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.15 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: 0.8 X-Spam-Status: No, score=0.8 required=5.0 tests=BAYES_50 autolearn=disabled version=3.3.1 X-Scan-Signature: 4cc6a862e0a753e674eb374334b394fd X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 08:44:44 -0000 On Fri, 10 May 2013 16:07:36 +0200, Outback Dingo wrote: > On Fri, May 10, 2013 at 9:45 AM, Volodymyr Kostyrko > wrote: > >> 09.05.2013 15:31, Outback Dingo: >> >> ok zfsgurus, FreeBSD 9.1-STABLE box zpool import -f reports pool status >>> Faulted, one of more devices contains corrupted data, however its >>> showing >>> the guid as faulted in the poll, and 
not the actual disk device >>> /dev/daX, >>> the pool is a single vdev 24 disk raidz3. Essentially the hardward >>> platform >>> is a dual node system, with 8 enclosures connected to 24 SAS drives >>> via 4 >>> LSI cards. I am not currently using geom_multipath, but the box is >>> zoned >>> so >>> that each node can see 50% of the drives, >>> in case of Failure, carp kicks in and migrates "zpool import -af" the >>> pools >>> onto the other node. it seems as though somehow the pool is now seeing >>> guid >>> and not devices, not sure if they have switched devices ids due to a >>> reboot. >>> >> >> Am not a zfs guru, but I'll try to help. >> >> Any console log snippets are welcome. What does "showing the guid as >> faulted in the pool" looks like. >> >> What are the guids for all partitions? Do they interlap for different >> nodes? >> >> ZFS recognizes devices by tasting they vdev labels and not by their >> logical location and naming. It can safely report any vdev location - >> but >> it requires the same set vdevs to bring pool online. > > > zdb shows valid data on the drives, no drives have been removed from the > box > whats confusing is why its using guids and not devices daX is what > puzzles > me > camcontrol devlist and dmesg clearly show the devices are there. The SAS > bus is shared > so both nodes with 2 LSI controllers each see all drives. We were > utilizing > a failover script > if nodeA dies, carp would kick the script to import the pool to nodeB, > both > nodes are in the > same chassis and see all the enclosures and all the drives Are the machines configured the same? As in _exactly_ the same. Glabel modules, hint files, sysctls, etc. Ronald. From owner-freebsd-fs@FreeBSD.ORG Sat May 11 11:16:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 5F2541D5 for ; Sat, 11 May 2013 11:16:46 +0000 (UTC) (envelope-from outbackdingo@gmail.com) Received: from mail-oa0-f49.google.com (mail-oa0-f49.google.com [209.85.219.49]) by mx1.freebsd.org (Postfix) with ESMTP id 2DCA42C0 for ; Sat, 11 May 2013 11:16:45 +0000 (UTC) Received: by mail-oa0-f49.google.com with SMTP id k14so4273442oag.8 for ; Sat, 11 May 2013 04:16:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=RNFjfjST/3NrUpYso+v3AuQoehwYbwH1js9viOwHTRA=; b=DJWTzILoiFIIiVEX/+xxxCK36X5gWkKGPKxcgHsWz7J1LkZCSRjmLMEiOhZIaILFQz nvXZ+RuSfdA6T212EefmNI6vX55oLDem7d9qDef4Q+M67EbmlQ3pA/LJmjHYmpNnLn8p Y957IuWVHLXlijnc2AZhCialUpxsCHWwJ4GyPN6X1MjgO1o9fBTaT4QsvkZV7R9JEAm1 XRM3DW0cFonANhHf92+v+oWDm+fDeqtFHxH1ra8xoH27dpAGR3VyImNq+JEIBuZViNG1 4PsyQiYP0LFOvnUXc2tDHd6bCZx3+qPOfY9pLlSMZl6i3/O7KGMyLhBqe50mR3lp2Po3 tNKg== MIME-Version: 1.0 X-Received: by 10.182.226.162 with SMTP id rt2mr9058153obc.9.1368271005292; Sat, 11 May 2013 04:16:45 -0700 (PDT) Received: by 10.76.96.49 with HTTP; Sat, 11 May 2013 04:16:45 -0700 (PDT) In-Reply-To: References: <518CFA05.6090706@gmail.com> Date: Sat, 11 May 2013 07:16:45 -0400 Message-ID: Subject: Re: Corrupted zpool import -f FAILS state FAULTED From: Outback Dingo To: Ronald Klop Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 11:16:46 -0000 On Sat, May 11, 2013 at 4:44 AM, Ronald Klop wrote: > On Fri, 10 May 2013 16:07:36 +0200, Outback Dingo > wrote: > > On Fri, May 10, 2013 at 9:45 AM, Volodymyr Kostyrko > >wrote: >> >> 09.05.2013 15:31, Outback Dingo: >>> >>> ok zfsgurus, FreeBSD 9.1-STABLE box zpool import -f reports pool status >>> >>>> Faulted, one of more devices contains corrupted data, however its >>>> showing >>>> the guid as faulted in the poll, and not the actual disk device >>>> /dev/daX, >>>> the pool is a single vdev 24 disk raidz3. Essentially the hardward >>>> platform >>>> is a dual node system, with 8 enclosures connected to 24 SAS drives via >>>> 4 >>>> LSI cards. I am not currently using geom_multipath, but the box is zoned >>>> so >>>> that each node can see 50% of the drives, >>>> in case of Failure, carp kicks in and migrates "zpool import -af" the >>>> pools >>>> onto the other node. it seems as though somehow the pool is now seeing >>>> guid >>>> and not devices, not sure if they have switched devices ids due to a >>>> reboot. >>>> >>>> >>> Am not a zfs guru, but I'll try to help. >>> >>> Any console log snippets are welcome. What does "showing the guid as >>> faulted in the pool" looks like. >>> >>> What are the guids for all partitions? Do they interlap for different >>> nodes? >>> >>> ZFS recognizes devices by tasting they vdev labels and not by their >>> logical location and naming. It can safely report any vdev location - but >>> it requires the same set vdevs to bring pool online. >>> >> >> >> zdb shows valid data on the drives, no drives have been removed from the >> box >> whats confusing is why its using guids and not devices daX is what puzzles >> me >> camcontrol devlist and dmesg clearly show the devices are there. The SAS >> bus is shared >> so both nodes with 2 LSI controllers each see all drives. We were >> utilizing >> a failover script >> if nodeA dies, carp would kick the script to import the pool to nodeB, >> both >> nodes are in the >> same chassis and see all the enclosures and all the drives >> > > Are the machines configured the same? As in _exactly_ the same. Glabel > modules, hint files, sysctls, etc. > > yes, both nodes are identical, from sysctl.conf to loader.conf, ive also noticed that playing around with enclosure zoning on the system i can now see which strikes me as quite odd..... now im wondering if i have a controller flaking out. right now according to the zoning, gmultipath should see 24+ LUNS however it sees nothing. zpool import -f pool: backup id: 8548776274175948174 state: UNAVAIL status: The pool was last accessed by another system. action: The pool cannot be imported due to damaged devices or data. see: http://illumos.org/msg/ZFS-8000-EY config: backup UNAVAIL insufficient replicas raidz3-0 UNAVAIL insufficient replicas da32 ONLINE da30 ONLINE da29 ONLINE da3 ONLINE da4 ONLINE da5 ONLINE da6 ONLINE da7 ONLINE da8 ONLINE label/big4 ONLINE 18084052867377310822 UNAVAIL cannot open 2641768775090614171 UNAVAIL cannot open 8083525846528480855 UNAVAIL cannot open 8200855950201180014 UNAVAIL cannot open da37 ONLINE da11 ONLINE 4678398398699137944 UNAVAIL cannot open 18315550984013241979 UNAVAIL cannot open da22 ONLINE da23 ONLINE label/backup ONLINE da25 ONLINE da26 ONLINE da27 ONLINE > Ronald. 
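A possible next step (untested sketch; adjust the device glob to whatever this node exposes) is to dump every label the node can see and look for one of the guids reported as UNAVAIL above, e.g.:

# for d in /dev/da*; do echo "== $d"; zdb -l $d 2>/dev/null | grep 18084052867377310822; done

If no device carries that guid, those vdevs are most likely among the drives zoned to the other node (or otherwise not visible right now); if a device does carry it, the label is intact and the problem is more likely device numbering/visibility at import time.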
> > ______________________________**_________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/**mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@**freebsd.org > " > From owner-freebsd-fs@FreeBSD.ORG Sat May 11 12:59:21 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 10A1B916; Sat, 11 May 2013 12:59:21 +0000 (UTC) (envelope-from universite@ukr.net) Received: from ffe11.ukr.net (ffe11.ukr.net [195.214.192.31]) by mx1.freebsd.org (Postfix) with ESMTP id C099A7AD; Sat, 11 May 2013 12:59:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Date:Message-Id:From:To:Subject:Cc:Content-Type:Content-Transfer-Encoding:MIME-Version; bh=LqIt7Bnu7jQQJ+95i90nyRevO2EWFqOmEpyBIjA/aHs=; b=TxrBPSEkcapC34gUQpk9zVxPh5Bs59Mic/o8nrojpyGdAz/dUQnlkDOChh3SUPWgTZyeWdaRoV52Ml1agqBJvQHhf12OqTOVh/JohIpc4+ry95RxnbP/UJ1FPyeF7ilznkjNEj6OQTKufb6W0oB3e4ukvkCQxcJyoNRi6tYlPQU=; Received: from mail by ffe11.ukr.net with local ID 1Ub9OK-000BsS-Uk ; Sat, 11 May 2013 15:59:12 +0300 MIME-Version: 1.0 Content-Disposition: inline Content-Transfer-Encoding: binary Content-Type: text/plain; charset="utf-8" Subject: Tell me how to increase the virtual disk with ZFS? To: freebsd-fs@freebsd.org From: "Vladislav Prodan" X-Mailer: freemail.ukr.net 4.0 Message-Id: <43529.1368277152.10278121996412321792@ffe11.ukr.net> Date: Sat, 11 May 2013 15:59:12 +0300 Cc: freebsd-current@freebsd.org, freebsd-questions@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 12:59:21 -0000 I have a Debian server virtual ok with Proxmox. In one of the virtual machines is FreeBSD 9.1 ZFS with one disk to 100G. Free space is not enough, how to extend the virtual disk without losing data? Add another virtual disk and do a RAID0 - not an option. It is not clear how to distribute the data from the old virtual disk to the new virtual disk. The manual of the Proxmox http://pve.proxmox.com/wiki/Resizing_disks FreeBSD is not mentioned :( You may have to do a Native ZFS for Linux on Proxmox and it will be easier to resize the virtual disk for the virtual machines? -- Vladislav V. 
Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sat May 11 13:23:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 00E03FF for ; Sat, 11 May 2013 13:23:06 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: from mail-ve0-x22f.google.com (mail-ve0-x22f.google.com [IPv6:2607:f8b0:400c:c01::22f]) by mx1.freebsd.org (Postfix) with ESMTP id B5258854 for ; Sat, 11 May 2013 13:23:06 +0000 (UTC) Received: by mail-ve0-f175.google.com with SMTP id cz11so2018016veb.34 for ; Sat, 11 May 2013 06:23:06 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:subject:mime-version:content-type:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=i9ouDoo6+89kKzCGOLN68qAWiedLLI5PI4sC/0gtPmA=; b=gjs7SaK2n5Kdd8qQOFOEqTuqdx+SNk7INeFXtJ46YV7rhN/ZZWvYyOWrpVCncJxZah 4g2fVsGPIdHiPHD2HUDyf+wn84EwmvGg7pl5FZLwnyv0a0U/KFgYQWp8ZUZGTbqmkSjR MyX0VdbU+jx8lsSxHy7z5BU38P+ZQ2FyFiFv/sUakM09BpjmPYku5uM9h+9n3Y8n4QqI 026HkFAZ5PIOGgho+2YKGR8759QXt1DbaGQxyV/l4smjF/omWsF0IvPACZq/NafBb9+E gQHc3Km0YiTCJ6maQzsqGEGnlinBFwFZxvlyCJ7olYl3iGoM0PcFUnOIfyZdmnYAnm6u AbmQ== X-Received: by 10.52.88.239 with SMTP id bj15mr11867276vdb.68.1368278586247; Sat, 11 May 2013 06:23:06 -0700 (PDT) Received: from [192.168.2.99] ([96.236.21.119]) by mx.google.com with ESMTPSA id y20sm5870496vds.7.2013.05.11.06.23.04 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 11 May 2013 06:23:05 -0700 (PDT) Subject: Re: Tell me how to increase the virtual disk with ZFS? Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Content-Type: text/plain; charset=us-ascii From: Paul Kraus In-Reply-To: <43529.1368277152.10278121996412321792@ffe11.ukr.net> Date: Sat, 11 May 2013 09:23:04 -0400 Content-Transfer-Encoding: quoted-printable Message-Id: <5C2B4C2B-BAF2-49C1-8554-319EB5FE6C3B@kraus-haus.org> References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> To: "Vladislav Prodan" X-Mailer: Apple Mail (2.1503) X-Gm-Message-State: ALoCoQmJi6+6THSGDhBwr7cPOaFQf/d0nBeP+XjgO3ItM/IwfgUBz7HlHASN5Mg9EWn77Ugam/zQ Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org, freebsd-questions@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 13:23:07 -0000 On May 11, 2013, at 8:59 AM, "Vladislav Prodan" wrote: > Add another virtual disk and do a RAID0 - not an option. It is not clear how to distribute the data from the old virtual disk to the new virtual disk. When you add an additional "disk" to a zpool (to create a STRIPE), the ZFS code automatically stripes new writes across all top level vdevs (disks in this case). You will see a performance penalty until the data distribution evens out. One way to force that (if you do NOT have snapshots) is to just copy everything. The new copy will be striped across all top level vdevs. The other option would be to add an additional disk that is as large as you want to the VM, attach it to the zpool as a mirror.
The mirror vdev = will only be as large as the original device, but once the mirror = completes resilvering, you can remove the old device and grow the = remaining device to full size (it may do that anyway based on the = setting of the auto expand property of the zpool. The default under 9.1 = is NOT to autoexpand: root@FreeBSD2:/root # zpool get autoexpand rootpool NAME PROPERTY VALUE SOURCE rootpool autoexpand off default root@FreeBSD2:/root #=20 -- Paul Kraus Deputy Technical Director, LoneStarCon 3 Sound Coordinator, Schenectady Light Opera Company From owner-freebsd-fs@FreeBSD.ORG Sat May 11 14:03:09 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 77AEEE65; Sat, 11 May 2013 14:03:09 +0000 (UTC) (envelope-from yerenkow@gmail.com) Received: from mail-pb0-x232.google.com (mail-pb0-x232.google.com [IPv6:2607:f8b0:400e:c01::232]) by mx1.freebsd.org (Postfix) with ESMTP id 4AB209AC; Sat, 11 May 2013 14:03:09 +0000 (UTC) Received: by mail-pb0-f50.google.com with SMTP id um15so3361068pbc.23 for ; Sat, 11 May 2013 07:03:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=CzmMD0v77RgEWHVz4fPBdRd0hbgOydreIlC2UjTph8c=; b=a46u5q09wRIbx+lxAKlgj89/LVFltH/Szt4MUHgaMeZfcAwb4aTNOLIefHSX2lFguN vLaotxseFqMmeZMRo1GiL7ZeFLkmovfwc9kdTCHsjwnN1Zw0jWhFxG8IyftRKRXRr1Pr SIZNaJoMm5tmODelm++W7055zZK9JxiVYqnZht9qn6ACb+TgmjkkdaqIBYiywo+IwVfy Mem7QISck23Pe7krOKG3j3NsJ9M2tKf3ZzZovu7Ys5MNiIXfurRlMGFPYkUCFelym7Es 3Or9HzWiQqYjv5V2jqb5c3FZ/WmGnsnWFrAmmbbFgYp37SAXba30a55aSDciIFua4ttS 9KjA== MIME-Version: 1.0 X-Received: by 10.68.13.168 with SMTP id i8mr21786301pbc.86.1368280989050; Sat, 11 May 2013 07:03:09 -0700 (PDT) Received: by 10.68.93.130 with HTTP; Sat, 11 May 2013 07:03:08 -0700 (PDT) In-Reply-To: <43529.1368277152.10278121996412321792@ffe11.ukr.net> References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> Date: Sat, 11 May 2013 17:03:08 +0300 Message-ID: Subject: Re: Tell me how to increase the virtual disk with ZFS? From: Alexander Yerenkow To: Vladislav Prodan Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org, freebsd-current , freebsd-questions@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 14:03:09 -0000 There's no mature (or flexible, or "can do what I want" ) way to increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}. Best and quickest way - to have twice spare space, copy data, create new sufficient disk and copy back. 2013/5/11 Vladislav Prodan > > I have a Debian server virtual ok with Proxmox. > In one of the virtual machines is FreeBSD 9.1 ZFS with one disk to 100G. > Free space is not enough, how to extend the virtual disk without losing > data? > > Add another virtual disk and do a RAID0 - not an option. It is not clear > how to distribute the data from the old virtual disk to the new virtual > disk. > > The manual of the Proxmox http://pve.proxmox.com/wiki/Resizing_disksFreeBSD is not mentioned :( > > You may have to do a Native ZFS for Linux on Proxmox and it will be easier > to resize the virtual disk for the virtual machines? > > -- > Vladislav V. 
Prodan > System & Network Administrator > http://support.od.ua > +380 67 4584408, +380 99 4060508 > VVP88-RIPE > _______________________________________________ > freebsd-current@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-current > To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org" > -- Regards, Alexander Yerenkow From owner-freebsd-fs@FreeBSD.ORG Sat May 11 14:07:47 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 260B322B for ; Sat, 11 May 2013 14:07:47 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: from mail-vb0-x234.google.com (mail-vb0-x234.google.com [IPv6:2607:f8b0:400c:c02::234]) by mx1.freebsd.org (Postfix) with ESMTP id DAB7F9FD for ; Sat, 11 May 2013 14:07:46 +0000 (UTC) Received: by mail-vb0-f52.google.com with SMTP id p13so4211176vbe.11 for ; Sat, 11 May 2013 07:07:46 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:subject:mime-version:content-type:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=jWJoCfpjyy90DADB071iy6HQ1DHjt6xJOqLvmI1ScPo=; b=Ny0di/8XnU8veqCFcxrcIX+l/RB94dq/x5WSRA0RGstoISoMKER/0s6eWNvxhLsfWx NUvKht9J5rqpOWSgZg/1rk7HmQOFMKVVL/EiPPBLY2+jCtAGoAmCjVDgLuLK6U7v+XOD Tv5WPO/nsl+Haa95u0cgQjLPKRyH9qJLjUFtlEUZEix0MqKE5Xe4DaGrFI/DgFsHi3o8 lZoWCum3+noJUghUa4EUDWhlyiskZ03VqVVG4Srwl0FV91DsJ9UhF4dOTSU55ApsMvpA /Sb/uITw8qG1sqN2sUcxeq5motTYO5Sl0aBWrhk/78QExXO3EbN09GOOSYCYnk0Iojvb d4Cg== X-Received: by 10.220.100.138 with SMTP id y10mr13926103vcn.51.1368281266414; Sat, 11 May 2013 07:07:46 -0700 (PDT) Received: from [192.168.2.99] ([96.236.21.119]) by mx.google.com with ESMTPSA id 6sm5952831vei.0.2013.05.11.07.07.44 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 11 May 2013 07:07:45 -0700 (PDT) Subject: Re: Tell me how to increase the virtual disk with ZFS? Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Content-Type: text/plain; charset=us-ascii From: Paul Kraus In-Reply-To: Date: Sat, 11 May 2013 10:07:44 -0400 Content-Transfer-Encoding: quoted-printable Message-Id: References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> To: Alexander Yerenkow X-Mailer: Apple Mail (2.1503) X-Gm-Message-State: ALoCoQmnjhYbhpp2Su0uBFCTndF0dy0sEtE0t3Bj1DKVdlHUjFdvthDGy73ugL56qjRs+655kxVR Cc: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org, freebsd-current X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 14:07:47 -0000 On May 11, 2013, at 10:03 AM, Alexander Yerenkow = wrote: > There's no mature (or flexible, or "can do what I want" ) way to > increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}. > Best and quickest way - to have twice spare space, copy data, create = new > sufficient disk and copy back. Is this a statement or a question ? If a statement, then it is factually = FALSE. If it is supposed to be a question, it does not ask anything. 
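For the archive, the kind of online grow that is possible looks roughly like this on a ZFS pool (a sketch only; the pool and device names here are made up, and growing an existing partition in place is shown in full by Adam Nowacki later in this thread):

-- allow the pool to grow automatically when a larger device appears (off by default on 9.1)
# zpool set autoexpand=on tank
-- attach the larger disk as a mirror of the old one and wait for the resilver to finish
# zpool attach tank da1 da2
# zpool status tank
-- drop the old, smaller disk, then force expansion if autoexpand was off during the attach
# zpool detach tank da1
# zpool online -e tank da2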
-- Paul Kraus Deputy Technical Director, LoneStarCon 3 Sound Coordinator, Schenectady Light Opera Company From owner-freebsd-fs@FreeBSD.ORG Sat May 11 14:26:33 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id AB8D6627; Sat, 11 May 2013 14:26:33 +0000 (UTC) (envelope-from universite@ukr.net) Received: from ffe12.ukr.net (ffe12.ukr.net [195.214.192.40]) by mx1.freebsd.org (Postfix) with ESMTP id 2D387A7A; Sat, 11 May 2013 14:26:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Date:Message-Id:From:To:References:In-Reply-To:Subject:Content-Type:Content-Transfer-Encoding:MIME-Version; bh=QKlP0WCPNjs9unF0crnSa5BZeMHyevktw3zsKY5IK2s=; b=bvKBpjDtaytur6ZZ0nsoATqwx9FMXvLKKNVkLTPCtz0JSDeYNP9hdqX8PFSMWuuF0u+yRLZdOx+fsYtV1zLclFgsAdjDI4hsztCMYD3dzWnZgH9GNiEjQh4c78ze5ncZ+bJ/S4M1ULvBCvMyI1odnGaRX/3DakynOy8tAyhfBMw=; Received: from mail by ffe12.ukr.net with local ID 1UbAUS-0009Ds-KP ; Sat, 11 May 2013 17:09:36 +0300 MIME-Version: 1.0 Content-Disposition: inline Content-Transfer-Encoding: binary Content-Type: text/plain; charset="utf-8" Subject: Re[2]: Tell me how to increase the virtual disk with ZFS? In-Reply-To: <5C2B4C2B-BAF2-49C1-8554-319EB5FE6C3B@kraus-haus.org> References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> <5C2B4C2B-BAF2-49C1-8554-319EB5FE6C3B@kraus-haus.org> To: freebsd-fs@freebsd.org, freebsd-current@freebsd.org, freebsd-questions@freebsd.org From: "Vladislav Prodan" X-Mailer: freemail.ukr.net 4.0 Message-Id: <22011.1368281376.17500373698194767872@ffe12.ukr.net> Date: Sat, 11 May 2013 17:09:36 +0300 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 14:26:33 -0000 > On May 11, 2013, at 8:59 AM, "Vladislav Prodan" wrote: > > > Add another virtual disk and do a RAID0 - not an option. It is not clear how to distribute the data from the old virtual disk to the new virtual disk. > The other option would be to add an additional disk that is as large as you want to the VM, attach it to the zpool as a mirror. The mirror vdev will only be as large as the original device, but once the mirror completes resilvering, you can remove the old device and grow the remaining device to full size (it may do that anyway based on the setting of the auto expand property of the zpool. The default under 9.1 is NOT to autoexpand: > > root@FreeBSD2:/root # zpool get autoexpand rootpool > NAME PROPERTY VALUE SOURCE > rootpool autoexpand off default > root@FreeBSD2:/root # Thanks. I did not realize that there was such an interesting and useful option :) # zpool get autoexpand tank NAME PROPERTY VALUE SOURCE tank autoexpand off default -- Vladislav V. 
Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sat May 11 14:31:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 13641909 for ; Sat, 11 May 2013 14:31:07 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: from mail-vc0-f175.google.com (mail-vc0-f175.google.com [209.85.220.175]) by mx1.freebsd.org (Postfix) with ESMTP id C8956ABA for ; Sat, 11 May 2013 14:31:06 +0000 (UTC) Received: by mail-vc0-f175.google.com with SMTP id lf10so4360081vcb.6 for ; Sat, 11 May 2013 07:31:06 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:subject:mime-version:content-type:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=ctDHWtkd0B8edyz+SD89lrza1TEIiaO7KkrBEbiesJU=; b=ctHfJplPoHTx8nY+dSeyNgAgnFoh46VpSEvJnLMMid8gXf65l0HWsEwnrs6RXCKJKc WXomRuxBiZwOJPiqhNmF5Y/l/Xp73FmFXRWsiyUbDkqwdy9fPZPGyXVnWRXPyYRsRrs9 BzxdztrN/BlpEQwUqSf8TOA/IpW6IC2enOBMe5HxCD0dPcOnCOkghxp2c2xOuNe65y0J wHmn3zvs6kc75Y1BSK2ZE3pG6CeusJIqQ5FNVArl/GV+Sp2UlXM/l397Qdsiwe8Pn2xY 4I0H3XZbnKIWRJy0hCMRe3852Lm+T1TdxG4hCl6+LfNzvENKzIyW/YL4sy2O5HPyCei9 0DHw== X-Received: by 10.221.9.9 with SMTP id ou9mr13985203vcb.15.1368282665933; Sat, 11 May 2013 07:31:05 -0700 (PDT) Received: from [192.168.2.99] ([96.236.21.119]) by mx.google.com with ESMTPSA id u20sm6028378vdt.10.2013.05.11.07.31.05 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 11 May 2013 07:31:05 -0700 (PDT) Subject: Re: Tell me how to increase the virtual disk with ZFS? Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Content-Type: text/plain; charset=us-ascii From: Paul Kraus In-Reply-To: <22011.1368281376.17500373698194767872@ffe12.ukr.net> Date: Sat, 11 May 2013 10:31:04 -0400 Content-Transfer-Encoding: 7bit Message-Id: References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> <5C2B4C2B-BAF2-49C1-8554-319EB5FE6C3B@kraus-haus.org> <22011.1368281376.17500373698194767872@ffe12.ukr.net> To: "Vladislav Prodan" X-Mailer: Apple Mail (2.1503) X-Gm-Message-State: ALoCoQlEgV+1Eu166VfwKifdR2wnzbn1sxdjBKbbMxDbFuTtzg71mSEMYJDg99elLaaoHFTKku/W Cc: freebsd-fs@freebsd.org, "freebsd-questions@freebsd.org List" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 14:31:07 -0000 On May 11, 2013, at 10:09 AM, "Vladislav Prodan" wrote: > > Thanks. 
> I did not realize that there was such an interesting and useful option :) > > # zpool get autoexpand tank > NAME PROPERTY VALUE SOURCE > tank autoexpand off default The man pages for zpool and zfs are full of such useful information :-) -- Paul Kraus Deputy Technical Director, LoneStarCon 3 Sound Coordinator, Schenectady Light Opera Company From owner-freebsd-fs@FreeBSD.ORG Sat May 11 14:55:24 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 5C2B04B7 for ; Sat, 11 May 2013 14:55:24 +0000 (UTC) (envelope-from nowakpl@platinum.linux.pl) Received: from platinum.linux.pl (platinum.edu.pl [81.161.192.4]) by mx1.freebsd.org (Postfix) with ESMTP id 20973B62 for ; Sat, 11 May 2013 14:55:23 +0000 (UTC) Received: by platinum.linux.pl (Postfix, from userid 87) id C632147E1A; Sat, 11 May 2013 16:45:28 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on platinum.linux.pl X-Spam-Level: X-Spam-Status: No, score=-1.3 required=3.0 tests=ALL_TRUSTED,AWL autolearn=disabled version=3.3.2 Received: from [10.255.0.2] (unknown [83.151.38.73]) by platinum.linux.pl (Postfix) with ESMTPA id 8315647DE6 for ; Sat, 11 May 2013 16:45:28 +0200 (CEST) Message-ID: <518E5973.9090003@platinum.linux.pl> Date: Sat, 11 May 2013 16:45:07 +0200 From: Adam Nowacki User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Tell me how to increase the virtual disk with ZFS? References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 14:55:24 -0000 # zfs list md1p1 NAME USED AVAIL REFER MOUNTPOINT md1p1 126K 1.96G 31K /md1p1 -- on-line resize partition to occupy added disk space # sysctl kern.geom.debugflags=16 kern.geom.debugflags: 0 -> 16 # gpart recover md1 md1 recovered # gpart resize -i 1 md1 md1p1 resized # sysctl kern.geom.debugflags=0 kern.geom.debugflags: 16 -> 0 -- tell zfs about it # zpool online -e md1p1 md1p1 -- done # zfs list md1p1 NAME USED AVAIL REFER MOUNTPOINT md1p1 136K 9.84G 31K /md1p1 On 2013-05-11 16:03, Alexander Yerenkow wrote: > There's no mature (or flexible, or "can do what I want" ) way to > increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}. > Best and quickest way - to have twice spare space, copy data, create new > sufficient disk and copy back. > > > > 2013/5/11 Vladislav Prodan > >> >> I have a Debian server virtual ok with Proxmox. >> In one of the virtual machines is FreeBSD 9.1 ZFS with one disk to 100G. >> Free space is not enough, how to extend the virtual disk without losing >> data? >> >> Add another virtual disk and do a RAID0 - not an option. It is not clear >> how to distribute the data from the old virtual disk to the new virtual >> disk. >> >> The manual of the Proxmox http://pve.proxmox.com/wiki/Resizing_disksFreeBSD is not mentioned :( >> >> You may have to do a Native ZFS for Linux on Proxmox and it will be easier >> to resize the virtual disk for the virtual machines? >> >> -- >> Vladislav V. 
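A side note on the transcript above: the gpart steps are only needed because the pool lives on a partition (md1p1). If a pool had been built directly on the whole disk, expanding after the virtual disk grows should reduce to something like the following (pool and device names hypothetical):

# zpool set autoexpand=on tank
# zpool online -e tank da1
# zpool list tank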
Prodan >> System & Network Administrator >> http://support.od.ua >> +380 67 4584408, +380 99 4060508 >> VVP88-RIPE >> _______________________________________________ >> freebsd-current@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-current >> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org" >> > > > From owner-freebsd-fs@FreeBSD.ORG Sat May 11 15:11:55 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D00789A1 for ; Sat, 11 May 2013 15:11:55 +0000 (UTC) (envelope-from universite@ukr.net) Received: from ffe6.ukr.net (ffe6.ukr.net [195.214.192.56]) by mx1.freebsd.org (Postfix) with ESMTP id 84168C2E for ; Sat, 11 May 2013 15:11:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Date:Message-Id:From:To:References:In-Reply-To:Subject:Cc:Content-Type:Content-Transfer-Encoding:MIME-Version; bh=eZq1IRSNSW6eelP0ynT+E2Zg692//sXAZ3tRGaHlkH4=; b=Sbdtk6vUoe5+l+ykZYKqleFpu7wUBVe8Ap5C3LVGRwsMYKoJpXa8gl08IuLnvSmpFKBMqkdERcAfsVhpMtOo3BA/BWkUL8X5yyaw6cg/u/iKTc4K5kEQ07TUdbDMvXLZoUXDxStybp0TaRq5ACZhN5HwneEhDUBnXm9Hc9T0Bdw=; Received: from mail by ffe6.ukr.net with local ID 1UbBSc-000Kye-Ow ; Sat, 11 May 2013 18:11:46 +0300 MIME-Version: 1.0 Content-Disposition: inline Content-Transfer-Encoding: binary Content-Type: text/plain; charset="utf-8" Subject: Re[2]: Tell me how to increase the virtual disk with ZFS? In-Reply-To: <518E5973.9090003@platinum.linux.pl> References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> <518E5973.9090003@platinum.linux.pl> To: "Adam Nowacki" From: "Vladislav Prodan" X-Mailer: freemail.ukr.net 4.0 Message-Id: <80049.1368285106.9876638397852811264@ffe6.ukr.net> Date: Sat, 11 May 2013 18:11:46 +0300 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 15:11:55 -0000 > # zfs list md1p1 > NAME USED AVAIL REFER MOUNTPOINT > md1p1 126K 1.96G 31K /md1p1 > > -- on-line resize partition to occupy added disk space > # sysctl kern.geom.debugflags=16 > kern.geom.debugflags: 0 -> 16 > # gpart recover md1 > md1 recovered > # gpart resize -i 1 md1 > md1p1 resized > # sysctl kern.geom.debugflags=0 > kern.geom.debugflags: 16 -> 0 > > -- tell zfs about it > # zpool online -e md1p1 md1p1 > > -- done > # zfs list md1p1 > NAME USED AVAIL REFER MOUNTPOINT > md1p1 136K 9.84G 31K /md1p1 > Can repeat resize the flag [-s size], so it was clearly indicate the amount of increase in partition? Can you putting a pair small files in /md1p1? And compare md5 and sha256 these files before and after the resize md1? Thank you in advance. -- Vladislav V. 
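An untested sketch of that experiment, reusing Adam's md1 naming (the file name and the 15G figure are made up). Note that gpart resize -s takes the new total size of the partition, not the amount to add:

-- create some test data and record checksums before the resize
# dd if=/dev/random of=/md1p1/test1 bs=1m count=4
# md5 /md1p1/test1
# sha256 /md1p1/test1

-- grow the partition to an explicit new size (here 15G total, not +15G)
# sysctl kern.geom.debugflags=16
# gpart resize -i 1 -s 15G md1
# sysctl kern.geom.debugflags=0
# zpool online -e md1p1 md1p1

-- the checksums should be unchanged afterwards
# md5 /md1p1/test1
# sha256 /md1p1/test1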
Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sat May 11 15:13:49 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id B405AA23; Sat, 11 May 2013 15:13:49 +0000 (UTC) (envelope-from yerenkow@gmail.com) Received: from mail-pa0-f44.google.com (mail-pa0-f44.google.com [209.85.220.44]) by mx1.freebsd.org (Postfix) with ESMTP id 86932C3A; Sat, 11 May 2013 15:13:49 +0000 (UTC) Received: by mail-pa0-f44.google.com with SMTP id jh10so3595218pab.31 for ; Sat, 11 May 2013 08:13:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=MlEyYrhB8zbI6sX+b2QGypysWskSrvVGsyRoNawytp8=; b=hvvFKbbTZPR9U9SJEUynyB3DPZct5iAE8b80ehMebO0GlDqO0FW2mKz7Fx6W4qOjmW pJ/7DB3ug4Tv7CQfCCTuCmX0+z1QoWGYCMjjgPQUUQt+01b3EivvuWu/mLEYWMPHpreG RRBvUr0WCvkxBO0fKth7r8q+4z6aJdzoiiK/kOPRL2/5tagRyCW7nuR2hUMjwiyr9eDN kWjeVkRtwt43VBseXihZPmEJhBCvj/FaMp48LiA1nzlU+mH2/XVFRTTgdvXhgOicZlji lc7ruqEEeqGIvAkWjvPfXfS3nz7oJAcdFbcxAmlFiTueWLxLoZ+calH1btOxIgSWE+OW XbTA== MIME-Version: 1.0 X-Received: by 10.68.13.168 with SMTP id i8mr22009725pbc.86.1368285223380; Sat, 11 May 2013 08:13:43 -0700 (PDT) Received: by 10.68.93.130 with HTTP; Sat, 11 May 2013 08:13:43 -0700 (PDT) In-Reply-To: References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> Date: Sat, 11 May 2013 18:13:43 +0300 Message-ID: Subject: Re: Tell me how to increase the virtual disk with ZFS? From: Alexander Yerenkow To: Paul Kraus Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org, freebsd-current X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 15:13:49 -0000 2013/5/11 Paul Kraus > On May 11, 2013, at 10:03 AM, Alexander Yerenkow > wrote: > > > There's no mature (or flexible, or "can do what I want" ) way to > > increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}. > > Best and quickest way - to have twice spare space, copy data, create new > > sufficient disk and copy back. > > Is this a statement or a question ? If a statement, then it is factually > FALSE. If it is supposed to be a question, it does not ask anything. > It was a statement, and luckily I was partially wrong, as Vladislav did made what he wanted to. However, last time I checked there were no such easy ways to decrease zpools or increase/decrease UFS partitions. Or grow mirrored ZFS as easily as single zpool. Or (killer one) remove added by mistake vdev from zpool ;) Of course I'm not talking about real hw, rather virtual one. If you happen to point me somewhere to have such task solved I'd be much appreciated. 
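For the mirrored case raised here, a rough sketch of the usual one-disk-at-a-time replacement, assuming a pool named tank mirrored across da1 and da2, with larger disks da3 and da4 available (all names hypothetical):

# zpool set autoexpand=on tank

-- replace one side of the mirror and wait for resilvering to finish
# zpool replace tank da1 da3
# zpool status tank

-- then the other side; capacity grows once both members are the larger size
# zpool replace tank da2 da4
# zpool status tank
# zpool list tank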
> -- > Paul Kraus > Deputy Technical Director, LoneStarCon 3 > Sound Coordinator, Schenectady Light Opera Company > > -- Regards, Alexander Yerenkow From owner-freebsd-fs@FreeBSD.ORG Sat May 11 15:39:16 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E66FE4CA for ; Sat, 11 May 2013 15:39:16 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: from mail-ve0-x230.google.com (mail-ve0-x230.google.com [IPv6:2607:f8b0:400c:c01::230]) by mx1.freebsd.org (Postfix) with ESMTP id A58D0D7F for ; Sat, 11 May 2013 15:39:15 +0000 (UTC) Received: by mail-ve0-f176.google.com with SMTP id db10so3079434veb.35 for ; Sat, 11 May 2013 08:39:15 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:subject:mime-version:content-type:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=1Bedo8SEuxGa0v0+61ZRuRYi4+kITu/PGlyyW//N+s0=; b=CVsfEm+fyqpxSLRZixTc1QOYRWfrHstvKDkrwGa1pFYMQBN6pgGU2Hsl3+3/fp9ag+ 8m21kdjhoVv8BLtpzscArTUKBz7V2Nt0AU2J2JAYp7zD7o5ObVxTMoQ0r09cGBDEHovb nYf7TPp7PvOx1iBSyhpF+LG08RT5q33fscyknbLLxzW0dH76RMylTPOj4JU4HHe28rn2 STgCEvUE7gK9eD7urRvsp4OV6S7JQ6DrI80UgP9tyKt4YO/OuCRNWF9NGiTGrkqtpBQ8 N80WCKC3mrIMcX1sm2FEhwNxMkvIZRrThF9LK5KycVJWcoLNfCpEQd5iwSvcqL8H0DN3 UdVw== X-Received: by 10.58.243.102 with SMTP id wx6mr4483633vec.26.1368286755426; Sat, 11 May 2013 08:39:15 -0700 (PDT) Received: from [192.168.2.99] ([96.236.21.119]) by mx.google.com with ESMTPSA id 13sm6266694vdg.4.2013.05.11.08.39.13 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Sat, 11 May 2013 08:39:14 -0700 (PDT) Subject: Re: Tell me how to increase the virtual disk with ZFS? Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Content-Type: text/plain; charset=iso-8859-1 From: Paul Kraus In-Reply-To: Date: Sat, 11 May 2013 11:39:12 -0400 Content-Transfer-Encoding: quoted-printable Message-Id: References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> To: Alexander Yerenkow X-Mailer: Apple Mail (2.1503) X-Gm-Message-State: ALoCoQngNuSpNP9mAUxcfFQeQ566JDh0FnpIqKXJpYklfFWpdu45cJHdEAPfYCGSs7rp0Be97Bs3 Cc: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org, freebsd-current X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 15:39:17 -0000 On May 11, 2013, at 11:13 AM, Alexander Yerenkow = wrote: 2013/5/11 Paul Kraus On May 11, 2013, at 10:03 AM, Alexander Yerenkow = wrote: >=20 > > There's no mature (or flexible, or "can do what I want" ) way to > > increase/decrease disk sizes in FreeBSD for now {ZFS,UFS}. > > Best and quickest way - to have twice spare space, copy data, create = new > > sufficient disk and copy back. >=20 > Is this a statement or a question ? If a statement, then it is = factually FALSE. If it is supposed to be a question, it does not ask = anything. >=20 > It was a statement, and luckily I was partially wrong, as Vladislav = did made what he wanted to. > However, last time I checked there were no such easy ways to decrease = zpools Correct, there is currently no way to decrease the size of a zpool. That = would require a defragmentation utility, which is on the roadmap as part = of the bp_rewrite code enhancement (and has been for many, many years = :-) > or increase/decrease UFS partitions. 
> Or grow mirrored ZFS as easily as single zpool. This one I do not understand. I have grown mirrored zpools many times. Let's say you have a 2-way mirror of 1 TB drives. You can do one of two things to grow the zpool: 1) add another pair of drives (of any size) as another top level vdev mirror device (you *can* use a different type of top level vdev, raidZ, simple, etc, but that is not recommended for both redundancy and performance predictability reasons). 2) swap out one of the 1 TB drives for a 2 TB (zpool replace), you can even offline one of the halves of the mirror to do this (but remember that you are vulnerable to a failure of the remaining drive during the resilver period), let the zpool resilver, then swap out the other 1 TB drive for a 2 TB. If the autoexpand property is set, then once the resilver finishes you have doubled your net capacity. > Or (killer one) remove added by mistake vdev from zpool ;) Don't make that mistake. Seriously. If you are managing storage you need to be double checking every single command you issue if you care about your data integrity. You could easily make the same complaint about issuing an 'rm -rf' in the wrong directory (I know people who have done that). If you are using snapshots you may be safe, if not your data is probably gone. On the other hand, depending on where in the tree you added the vdev, you may be able to remove it. If it is a top level vdev, then you have just changed the configuration of the zpool. While very much unsupported, you just might be able, using zdb and rolling back to a TXG before you added the device, to remove the vdev. A good place to ask that question and have the discussion would be the ZFS discuss list at illumos (the list discussion is not limited to illumos, but covers all aspects of ZFS on all platforms). Archives here: http://www.listbox.com/member/archive/182191/sort/time_rev/ > Of course I'm not talking about real hw, rather virtual one. It doesn't matter to ZFS; whether a drive is physical, a partition, or a virtual disk, you perform the same operations. > If you happen to point me somewhere to have such task solved I'd be much appreciated. See above :-) Some of your issues I addressed above, others are not there (and may never be). -- Paul Kraus Deputy Technical Director, LoneStarCon 3 Sound Coordinator, Schenectady Light Opera Company From owner-freebsd-fs@FreeBSD.ORG Sat May 11 16:20:22 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id CC3F4CCA; Sat, 11 May 2013 16:20:22 +0000 (UTC) (envelope-from jmg@h2.funkthat.com) Received: from h2.funkthat.com (gate2.funkthat.com [208.87.223.18]) by mx1.freebsd.org (Postfix) with ESMTP id AE194E94; Sat, 11 May 2013 16:20:22 +0000 (UTC) Received: from h2.funkthat.com (localhost [127.0.0.1]) by h2.funkthat.com (8.14.3/8.14.3) with ESMTP id r4BGKFuv035124 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 11 May 2013 09:20:15 -0700 (PDT) (envelope-from jmg@h2.funkthat.com) Received: (from jmg@localhost) by h2.funkthat.com (8.14.3/8.14.3/Submit) id r4BGKEI8035123; Sat, 11 May 2013 09:20:14 -0700 (PDT) (envelope-from jmg) Date: Sat, 11 May 2013 09:20:14 -0700 From: John-Mark Gurney To: Alexander Yerenkow Subject: Re: Tell me how to increase the virtual disk with ZFS?
Message-ID: <20130511162014.GM1491@funkthat.com> Mail-Followup-To: Alexander Yerenkow , Paul Kraus , freebsd-fs@freebsd.org, Vladislav Prodan , freebsd-questions@freebsd.org, freebsd-current References: <43529.1368277152.10278121996412321792@ffe11.ukr.net> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.3i X-Operating-System: FreeBSD 7.2-RELEASE i386 X-PGP-Fingerprint: 54BA 873B 6515 3F10 9E88 9322 9CB1 8F74 6D3F A396 X-Files: The truth is out there X-URL: http://resnet.uoregon.edu/~gurney_j/ X-Resume: http://resnet.uoregon.edu/~gurney_j/resume.html X-to-the-FBI-CIA-and-NSA: HI! HOW YA DOIN? can i haz chizburger? X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (h2.funkthat.com [127.0.0.1]); Sat, 11 May 2013 09:20:15 -0700 (PDT) Cc: freebsd-fs@freebsd.org, Paul Kraus , freebsd-current , freebsd-questions@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 16:20:22 -0000 Alexander Yerenkow wrote this message on Sat, May 11, 2013 at 18:13 +0300: > zpools or increase/decrease UFS partitions. growfs(8) NAME growfs -- grow size of an existing ufs file system HISTORY The growfs utility first appeared in FreeBSD 4.4. -- John-Mark Gurney Voice: +1 415 225 5579 "All that I will do, has been done, All that I have, has not." From owner-freebsd-fs@FreeBSD.ORG Sat May 11 21:09:03 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D9C36C4C; Sat, 11 May 2013 21:09:03 +0000 (UTC) (envelope-from mckusick@mckusick.com) Received: from chez.mckusick.com (chez.mckusick.com [IPv6:2001:5a8:4:7e72:4a5b:39ff:fe12:452]) by mx1.freebsd.org (Postfix) with ESMTP id B62A2905; Sat, 11 May 2013 21:09:03 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id r4BL8w5S027367; Sat, 11 May 2013 14:08:59 -0700 (PDT) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201305112108.r4BL8w5S027367@chez.mckusick.com> To: Marcel Moolenaar Subject: Re: svn commit: r250411 - in head/sys: conf kern sys In-reply-to: <6CBEB766-087B-41F4-B549-2D60F4FD2701@xcllnt.net> Date: Sat, 11 May 2013 14:08:58 -0700 From: Kirk McKusick X-Spam-Status: No, score=0.0 required=5.0 tests=MISSING_MID, UNPARSEABLE_RELAY autolearn=failed version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on chez.mckusick.com Cc: attilio@FreeBSD.org, Marcel Moolenaar , freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 21:09:03 -0000 As a filesystem developer, I am very dependent on being able to get vnode lock order reversal LORs. While I have long wanted the ability to individually identify the benign ones (such as vnode + dirhash which your fix will not eliminate since dirhash is not a vnode), that facility exists only for mutexes at the moment. This latest flurry of emails on your change may hasten the day when Attilio's extension gets added to make the identity for other types of locks available. In the meantime, I do understand your need to silence the offending benign messages. 
But I would like to suggest that instead of making it a compile time option, that you change it to be conditional using a sysctl such as debug.witness.novnodes (you would need to add the witness debug subclass as I don't think that it currently exists). You can set the default to be enabled (e.g., suppressed vnode warnings). This way if I get a filesystem deadlock bug reported to me, I can ask the user to reenable vnode witness warnings without having to have them build a whole new kernel. Kirk McKusick From owner-freebsd-fs@FreeBSD.ORG Sat May 11 21:31:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 7870C5DC for ; Sat, 11 May 2013 21:31:50 +0000 (UTC) (envelope-from mckusick@mckusick.com) Received: from chez.mckusick.com (chez.mckusick.com [IPv6:2001:5a8:4:7e72:4a5b:39ff:fe12:452]) by mx1.freebsd.org (Postfix) with ESMTP id 2EBC69B6 for ; Sat, 11 May 2013 21:31:50 +0000 (UTC) Received: from chez.mckusick.com (localhost [127.0.0.1]) by chez.mckusick.com (8.14.3/8.14.3) with ESMTP id r4BLVljp032538; Sat, 11 May 2013 14:31:47 -0700 (PDT) (envelope-from mckusick@chez.mckusick.com) Message-Id: <201305112131.r4BLVljp032538@chez.mckusick.com> To: Alexander Yerenkow Subject: Re: Tell me how to increase the virtual disk with ZFS? In-reply-to: Date: Sat, 11 May 2013 14:31:47 -0700 From: Kirk McKusick X-Spam-Status: No, score=0.0 required=5.0 tests=MISSING_MID, UNPARSEABLE_RELAY autolearn=failed version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on chez.mckusick.com Cc: freebsd-fs@freebsd.org, Paul Kraus X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 21:31:50 -0000 As of FreeBSD 9.1, the growfs(8) utility has the ability to increase the size of live filesystems. Kirk McKusick From owner-freebsd-fs@FreeBSD.ORG Sat May 11 23:58:08 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 7A275A40 for ; Sat, 11 May 2013 23:58:08 +0000 (UTC) (envelope-from mjacob@freebsd.org) Received: from virtual.feral.com (virtual.feral.com [216.224.170.83]) by mx1.freebsd.org (Postfix) with ESMTP id 2D8C4FAE for ; Sat, 11 May 2013 23:58:08 +0000 (UTC) Received: from [192.168.136.3] (76-14-48-84.sf-cable.astound.net [76.14.48.84] (may be forged)) by virtual.feral.com (8.14.4/8.14.4) with ESMTP id r4BNv0XR017846 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Sat, 11 May 2013 16:57:01 -0700 Message-ID: <518EDACC.2070609@freebsd.org> Date: Sat, 11 May 2013 16:57:00 -0700 From: Matthew Jacob Organization: FreeBSD User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Tell me how to increase the virtual disk with ZFS? 
References: <201305112131.r4BLVljp032538@chez.mckusick.com> In-Reply-To: <201305112131.r4BLVljp032538@chez.mckusick.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (virtual.feral.com [216.224.170.83]); Sat, 11 May 2013 16:57:01 -0700 (PDT) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: mjacob@freebsd.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 May 2013 23:58:08 -0000 On 5/11/2013 2:31 PM, Kirk McKusick wrote: > As of FreeBSD 9.1, the growfs(8) utility has the ability to increase > the size of live filesystems. > but only for ufs, no?
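On the UFS side, a minimal sketch of growing a live UFS filesystem once the underlying virtual disk has been enlarged, assuming the filesystem sits on ada0p2 (device names are illustrative); per Kirk's note above, growfs(8) can do this on a mounted filesystem as of 9.1:

-- extend the partition into the newly added space
# sysctl kern.geom.debugflags=16
# gpart recover ada0
# gpart resize -i 2 ada0
# sysctl kern.geom.debugflags=0

-- grow the filesystem in place and check the new size
# growfs /dev/ada0p2
# df -h /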