From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 00:03:44 2011
From: "Ronald Klop"
To: "Dr Josef Karthauser", "Alexander Leidinger"
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 00:40:04 +0100
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On Sat, 26 Mar 2011 22:54:30 +0100, Alexander Leidinger wrote:

> On Sat, 26 Mar 2011 20:59:39 +0000 Dr Josef Karthauser wrote:
>
>> Help!
>>
>> Foolishly I let my ZFS system run out of disk space. I've removed the
>> errant logs, but the space has not been returned. Not sure why. There
>> are no snapshots, and I've even desperately rebooted the machine, but
>> the space is still lost.
>>
>> # zfs list void/store
>> NAME         USED  AVAIL  REFER  MOUNTPOINT
>> void/store  57.2G   2.3G  57.2G  /store
>> # du -hs /store
>> 34G    /store
>>
>> Any idea on where the 23G has gone, or how I persuade the zpool to
>> return it? Why is the filesystem referencing storage that isn't being
>> used?
>
> I suggest a
> zfs list -r -t all void/store
> to make really sure we/you see what we want to see.

Or something like 'zfs list -t all'. Maybe there are more volumes on void.

Ronald.
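A minimal sketch of the listings being suggested in this exchange, assuming
the pool and dataset names from the thread ("void", "void/store"):

    # Show the dataset and everything beneath it, snapshots included.
    zfs list -r -t all void/store

    # Show every dataset and snapshot in every imported pool.
    zfs list -t all

    # Check whether plain "zfs list" is configured to hide snapshots.
    zpool get listsnapshots void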
From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 07:13:02 2011
From: Dr Josef Karthauser
To: freebsd-fs@freebsd.org
Cc: Alexander Leidinger
Date: Sun, 27 Mar 2011 08:13:23 +0100
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On 26 Mar 2011, at 22:41, Dr Josef Karthauser wrote:
> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
>>> Any idea on where the 23G has gone, or how I persuade the zpool to
>>> return it? Why is the filesystem referencing storage that isn't being
>>> used?
>>
>> I suggest a
>> zfs list -r -t all void/store
>> to make really sure we/you see what we want to see.
[snip]
> Definitely no snapshots:
>
[snip]
> This is the problematic filesystem:
>
> void/j/legacy-alpha  56.6G  3.41G  56.6G  /j/legacy-alpha
>
> No chance that an application is holding any data - I rebooted and came
> up in single user mode to try and get this resolved, but no cookie.

Could this be a problem with zpool version 15, which might be resolved
with version 28?

Joe
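A hedged sketch of checking the versions in question here (the pool name
"void" is from the thread):

    zpool get version void   # pool version; this system reports 15
    zfs get version void     # ZPL (filesystem) version
    zpool upgrade -v         # pool versions this kernel supports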
From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 07:13:17 2011
From: Dr Josef Karthauser
To: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 08:13:44 +0100
Message-Id: <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
>> Any idea on where the 23G has gone, or how I persuade the zpool to
>> return it? Why is the filesystem referencing storage that isn't being
>> used?
>
> I suggest a
> zfs list -r -t all void/store
> to make really sure we/you see what we want to see.
>
> Can it be that an application has the 23G still open?
>
>> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
>
> The default setting of showing snapshots or not changed somewhere. As
> long as you didn't configure the pool to show snapshots (zpool get
> listsnapshots <pool>), they are not shown by default.

Definitely no snapshots:

infinity# zfs list -tall
NAME                          USED  AVAIL  REFER  MOUNTPOINT
void                         99.1G  24.8G  2.60G  legacy
void/home                    33.5K  24.8G  33.5K  /home
void/j                       87.5G  24.8G    54K  /j
void/j/buttsby                136M  9.87G  2.40M  /j/buttsby
void/j/buttsby/home          34.5K  9.87G  34.5K  /j/buttsby/home
void/j/buttsby/local          130M  9.87G   130M  /j/buttsby/local
void/j/buttsby/tmp            159K  9.87G   159K  /j/buttsby/tmp
void/j/buttsby/var           3.97M  9.87G   104K  /j/buttsby/var
void/j/buttsby/var/db        2.40M  9.87G  1.55M  /j/buttsby/var/db
void/j/buttsby/var/db/pkg     866K  9.87G   866K  /j/buttsby/var/db/pkg
void/j/buttsby/var/empty       21K  9.87G    21K  /j/buttsby/var/empty
void/j/buttsby/var/log        838K  9.87G   838K  /j/buttsby/var/log
void/j/buttsby/var/mail       592K  9.87G   592K  /j/buttsby/var/mail
void/j/buttsby/var/run       30.5K  9.87G  30.5K  /j/buttsby/var/run
void/j/buttsby/var/tmp         23K  9.87G    23K  /j/buttsby/var/tmp
void/j/legacy-alpha          56.6G  3.41G  56.6G  /j/legacy-alpha
void/j/legacy-brightstorm    29.2G  10.8G  29.2G  /j/legacy-brightstorm
void/j/legacy-obleo          1.29G  1.71G  1.29G  /j/legacy-obleo
void/j/mesh                   310M  3.70G  2.40M  /j/mesh
void/j/mesh/home               21K  3.70G    21K  /j/mesh/home
void/j/mesh/local             305M  3.70G   305M  /j/mesh/local
void/j/mesh/tmp                26K  3.70G    26K  /j/mesh/tmp
void/j/mesh/var              2.91M  3.70G   104K  /j/mesh/var
void/j/mesh/var/db           2.63M  3.70G  1.56M  /j/mesh/var/db
void/j/mesh/var/db/pkg       1.07M  3.70G  1.07M  /j/mesh/var/db/pkg
void/j/mesh/var/empty          21K  3.70G    21K  /j/mesh/var/empty
void/j/mesh/var/log            85K  3.70G    85K  /j/mesh/var/log
void/j/mesh/var/mail           24K  3.70G    24K  /j/mesh/var/mail
void/j/mesh/var/run          28.5K  3.70G  28.5K  /j/mesh/var/run
void/j/mesh/var/tmp            23K  3.70G    23K  /j/mesh/var/tmp
void/local                    282M  1.72G   282M  /local
void/mysql                     22K    78K    22K  /mysql
void/tmp                       55K  2.00G    55K  /tmp
void/usr                     1.81G  2.19G   275M  /usr
void/usr/obj                  976M  2.19G   976M  /usr/obj
void/usr/ports                289M  2.19G   234M  /usr/ports
void/usr/ports/distfiles     54.8M  2.19G  54.8M  /usr/ports/distfiles
void/usr/ports/packages        21K  2.19G    21K  /usr/ports/packages
void/usr/src                  311M  2.19G   311M  /usr/src
void/var                     6.86G  3.14G   130K  /var
void/var/crash               22.5K  3.14G  22.5K  /var/crash
void/var/db                  6.86G  3.14G  58.3M  /var/db
void/var/db/mysql            6.80G  3.14G  4.79G  /var/db/mysql
void/var/db/mysql/innodbdata 2.01G  3.14G  2.01G  /var/db/mysql/innodbdata
void/var/db/pkg              2.00M  3.14G  2.00M  /var/db/pkg
void/var/empty                 21K  3.14G    21K  /var/empty
void/var/log                  642K  3.14G   642K  /var/log
void/var/mail                 712K  3.14G   712K  /var/mail
void/var/run                 49.5K  3.14G  49.5K  /var/run
void/var/tmp                   27K  3.14G    27K  /var/tmp

This is the problematic filesystem:

void/j/legacy-alpha          56.6G  3.41G  56.6G  /j/legacy-alpha

No chance that an application is holding any data - I rebooted and came up
in single user mode to try and get this resolved, but no cookie.

Joe

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 07:58:16 2011
From: Jeremy Chadwick
To: Dr Josef Karthauser
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 00:58:14 -0700
Message-ID: <20110327075814.GA71131@icarus.home.lan>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On Sun, Mar 27, 2011 at 08:13:44AM +0100, Dr Josef Karthauser wrote:
> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
> >> Any idea on where the 23G has gone, or how I persuade the zpool to
> >> return it? Why is the filesystem referencing storage that isn't being
> >> used?
> >
> > I suggest a
> > zfs list -r -t all void/store
> > to make really sure we/you see what we want to see.
> >
> > Can it be that an application has the 23G still open?
> >
> >> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
> >
> > The default setting of showing snapshots or not changed somewhere. As
> > long as you didn't configure the pool to show snapshots (zpool get
> > listsnapshots <pool>), they are not shown by default.
>
> Definitely no snapshots:
>
> infinity# zfs list -tall
> [full listing snipped - identical to the listing earlier in the thread]
>
> This is the problematic filesystem:
>
> void/j/legacy-alpha  56.6G  3.41G  56.6G  /j/legacy-alpha
>
> No chance that an application is holding any data - I rebooted and came
> up in single user mode to try and get this resolved, but no cookie.

Are these filesystems using compression? Have any quota or reservation
filesystem settings been set?

"zfs get all" might help, but it'll be a lot of data. We don't mind.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP 4BD6C0CB  |
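A hedged sketch of a narrower query than "zfs get all", pulling only the
properties being asked about (dataset names as in the thread):

    # Compression and space-reservation settings, pool-wide.
    zfs get -r compression,quota,reservation,refquota,refreservation void

    # Or every property of just the problem dataset.
    zfs get all void/j/legacy-alpha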
From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 08:13:10 2011
From: Dr Josef Karthauser
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 09:13:32 +0100
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On 27 Mar 2011, at 08:58, Jeremy Chadwick wrote:
> On Sun, Mar 27, 2011 at 08:13:44AM +0100, Dr Josef Karthauser wrote:
>> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
>>>> Any idea on where the 23G has gone, or how I persuade the zpool to
>>>> return it? Why is the filesystem referencing storage that isn't being
>>>> used?
>>>
>>> I suggest a
>>> zfs list -r -t all void/store
>>> to make really sure we/you see what we want to see.
>>>
>>> Can it be that an application has the 23G still open?
>>>
>>>> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
>>>
>>> The default setting of showing snapshots or not changed somewhere. As
>>> long as you didn't configure the pool to show snapshots (zpool get
>>> listsnapshots <pool>), they are not shown by default.
>>
>> Definitely no snapshots:
>>
>> infinity# zfs list -tall
>> [full listing snipped - identical to the listing earlier in the thread]
>>
>> This is the problematic filesystem:
>>
>> void/j/legacy-alpha  56.6G  3.41G  56.6G  /j/legacy-alpha
>>
>> No chance that an application is holding any data - I rebooted and came
>> up in single user mode to try and get this resolved, but no cookie.
>
> Are these filesystems using compression? Have any quota or reservation
> filesystem settings been set?
>
> "zfs get all" might help, but it'll be a lot of data. We don't mind.

Ok, here you are. ( http://www.josef-k.net/misc/zfsall.txt.bz2 )

I suspect that the problem is the same as reported here:
http://web.archiveorange.com/archive/v/Lmwutp4HZLFDEkQ1UlX5
namely that there was a bug with the handling of sparse files on zfs. The
file in question that caused the problem is a bayes database from
SpamAssassin.

Joe
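A hedged sketch of one way to hunt for the kind of file being suspected
here - comparing each file's apparent size against the blocks it actually
occupies; the path is the thread's problem dataset, the 2x threshold is
arbitrary, and the awk field handling assumes no spaces in file names:

    # stat -f "%z %b %N": size in bytes, allocated 512-byte blocks, name.
    find /j/legacy-alpha -type f -exec stat -f "%z %b %N" {} + |
        awk '$1 > 0 && $2 * 512 > 2 * $1 { print $2 * 512 - $1, $3 }' |
        sort -rn | head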
From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 08:43:57 2011
From: Jeremy Chadwick
To: Dr Josef Karthauser
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 01:43:55 -0700
Message-ID: <20110327084355.GA71864@icarus.home.lan>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On Sun, Mar 27, 2011 at 09:13:32AM +0100, Dr Josef Karthauser wrote:
> On 27 Mar 2011, at 08:58, Jeremy Chadwick wrote:
> > On Sun, Mar 27, 2011 at 08:13:44AM +0100, Dr Josef Karthauser wrote:
> >> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
> >>>> Any idea on where the 23G has gone, or how I persuade the zpool to
> >>>> return it? Why is the filesystem referencing storage that isn't being
> >>>> used?
> >>>
> >>> I suggest a
> >>> zfs list -r -t all void/store
> >>> to make really sure we/you see what we want to see.
> >>>
> >>> Can it be that an application has the 23G still open?
> >>>
> >>>> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
> >>>
> >>> The default setting of showing snapshots or not changed somewhere. As
> >>> long as you didn't configure the pool to show snapshots (zpool get
> >>> listsnapshots <pool>), they are not shown by default.
> >>
> >> Definitely no snapshots:
> >>
> >> infinity# zfs list -tall
> >> [full listing snipped - identical to the listing earlier in the thread]
> >>
> >> This is the problematic filesystem:
> >>
> >> void/j/legacy-alpha  56.6G  3.41G  56.6G  /j/legacy-alpha
> >>
> >> No chance that an application is holding any data - I rebooted and came
> >> up in single user mode to try and get this resolved, but no cookie.
> >
> > Are these filesystems using compression? Have any quota or reservation
> > filesystem settings been set?
> >
> > "zfs get all" might help, but it'll be a lot of data. We don't mind.
>
> Ok, here you are. ( http://www.josef-k.net/misc/zfsall.txt.bz2 )
>
> I suspect that the problem is the same as reported here:
> http://web.archiveorange.com/archive/v/Lmwutp4HZLFDEkQ1UlX5
> namely that there was a bug with the handling of sparse files on zfs. The
> file in question that caused the problem is a bayes database from
> SpamAssassin.
That was going to be my next question, actually (yep really :-) ).

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP 4BD6C0CB  |

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 09:30:40 2011
From: Dr Josef Karthauser
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 10:31:05 +0100
Message-Id: <094E71D9-B28B-46DB-8EA9-B11F17D5A32A@unitedlane.com>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On 27 Mar 2011, at 09:43, Jeremy Chadwick wrote:
>>>> This is the problematic filesystem:
>>>>
>>>> void/j/legacy-alpha  56.6G  3.41G  56.6G  /j/legacy-alpha
>>>>
>>>> No chance that an application is holding any data - I rebooted and came
>>>> up in single user mode to try and get this resolved, but no cookie.
>>>
>>> Are these filesystems using compression? Have any quota or reservation
>>> filesystem settings been set?
>>>
>>> "zfs get all" might help, but it'll be a lot of data. We don't mind.
>>
>> Ok, here you are. ( http://www.josef-k.net/misc/zfsall.txt.bz2 )
>>
>> I suspect that the problem is the same as reported here:
>> http://web.archiveorange.com/archive/v/Lmwutp4HZLFDEkQ1UlX5
>> namely that there was a bug with the handling of sparse files on zfs. The
>> file in question that caused the problem is a bayes database from
>> SpamAssassin.
>
> That was going to be my next question, actually (yep really :-) ).

So, I guess my next question is, would I be mad to apply the zpool version
28 patch to 8.2 and run with that? Or are sparse files so broken on zfs
that I ought to find some ufs to run the bayesdb on?
Joe

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 09:41:23 2011
From: Jeremy Chadwick
To: Dr Josef Karthauser
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 02:41:21 -0700
Message-ID: <20110327094121.GA72701@icarus.home.lan>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On Sun, Mar 27, 2011 at 10:31:05AM +0100, Dr Josef Karthauser wrote:
> On 27 Mar 2011, at 09:43, Jeremy Chadwick wrote:
> [earlier quoting snipped]
> > That was going to be my next question, actually (yep really :-) ).
>
> So, I guess my next question is, would I be mad to apply the zpool
> version 28 patch to 8.2 and run with that? Or are sparse files so broken
> on zfs that I ought to find some ufs to run the bayesdb on?

There have been a lot of problem reports (as far as the patch applying
fine but then things breaking badly) from what I've seen regarding the ZFS
v28 patch on RELENG_8. I will also point out that the administrator of
cvsup9.freebsd.org just tried moving to that patch on RELENG_8 and broke
the server badly.
I have the mails, but they're off-list/private and I don't feel comfortable
just dumping those here.

My advice is that if you care about stability, don't run the v28 patch,
period.

I'm curious about something -- we use RELENG_8 systems with a mirror zpool
(kinda funny how I did it too, since the system only has 2 disks) for
/home. Our SpamAssassin configuration obviously writes to
$user/.spamassassin/bayes_* files. Yet, we do not see this sparse file
problem that others are reporting.

$ df -k /home
Filesystem  1024-blocks      Used     Avail Capacity  Mounted on
data/home     239144704 107238740 131905963    45%    /home
$ zfs list data/home
NAME        USED  AVAIL  REFER  MOUNTPOINT
data/home   102G   126G   102G  /home

$ zpool status data
  pool: data
 state: ONLINE
 scrub: resilver completed after 0h9m with 0 errors on Wed Oct 20 03:08:22 2010
config:

        NAME         STATE     READ WRITE CKSUM
        data         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            ada1     ONLINE       0     0     0
            ada0s1g  ONLINE       0     0     0  26.0G resilvered

$ grep bayes /usr/local/etc/mail/spamassassin/local.cf
use_bayes 1
bayes_auto_learn 1
bayes_ignore_header X-Bogosity
bayes_ignore_header X-Spam-Flag
bayes_ignore_header X-Spam-Status

$ ls -l .spamassassin/
total 4085
-rw-------  1 jdc  users   102192 Mar 27 02:30 bayes_journal
-rw-------  1 jdc  users   360448 Mar 27 02:30 bayes_seen
-rw-------  1 jdc  users  4947968 Mar 27 02:30 bayes_toks
-rw-------  1 jdc  users     8719 Mar 20 04:11 user_prefs

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP 4BD6C0CB  |

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 10:00:38 2011
From: Dr Josef Karthauser
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 11:01:04 +0100
Message-Id: <980F394D-36FC-42F2-9F3F-A3C44A385600@unitedlane.com>
Subject: Re: ZFS Problem - full disk, can't recover space :(.
On 27 Mar 2011, at 10:41, Jeremy Chadwick wrote:
> I'm curious about something -- we use RELENG_8 systems with a mirror
> zpool (kinda funny how I did it too, since the system only has 2 disks)
> for /home. Our SpamAssassin configuration obviously writes to
> $user/.spamassassin/bayes_* files. Yet, we do not see this sparse file
> problem that others are reporting.
>
> [df/zpool/SpamAssassin details snipped]

No idea what caused it, but whenever I ran the bayes expiry it created a
new file that just blew up and filled all the available space. I've got
around the issue temporarily. I used 'swapoff' to recover a 4Gb swap
partition, created a UFS and mounted that in the jail in question. After
rsyncing the bayes database to that disk I was able to run an expire with
no trouble at all, so it wasn't that the bayes was corrupt or anything.
I've now copied it back and it runs fine. I expect that the problem will
reoccur at some inconvenient point in the future.

I'd really like my disk space back though please! I suspect that I'm going
to have to wait for 28 to have that happen though :(.

Joe
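A hedged sketch of the workaround being described; the swap device, mount
point, and bayes path are hypothetical stand-ins, not taken from the
thread:

    swapoff /dev/ad0s1b                  # hypothetical swap partition
    newfs -U /dev/ad0s1b                 # scratch UFS with soft updates
    mkdir -p /mnt/bayes-scratch
    mount /dev/ad0s1b /mnt/bayes-scratch
    rsync -a /j/legacy-alpha/home/user/.spamassassin/ /mnt/bayes-scratch/
    # run the expiry on UFS (e.g. sa-learn --force-expire), rsync it back,
    # then umount, swapon, and carry on as before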
From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 10:08:37 2011
From: Alexander Leidinger
To: Dr Josef Karthauser
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 12:08:31 +0200
Message-ID: <20110327120831.00003f9a@unknown>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On Sun, 27 Mar 2011 08:13:23 +0100 Dr Josef Karthauser wrote:
> On 26 Mar 2011, at 22:41, Dr Josef Karthauser wrote:
> > On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
> >>> Any idea on where the 23G has gone, or how I persuade the zpool to
> >>> return it? Why is the filesystem referencing storage that isn't
> >>> being used?
> >>
> >> I suggest a
> >> zfs list -r -t all void/store
> >> to make really sure we/you see what we want to see.
> [snip]
> > Definitely no snapshots:
> >
> [snip]
> > This is the problematic filesystem:
> >
> > void/j/legacy-alpha  56.6G  3.41G  56.6G  /j/legacy-alpha
> >
> > No chance that an application is holding any data - I rebooted and
> > came up in single user mode to try and get this resolved, but no
> > cookie.
>
> Could this be a problem with zpool version 15, which might be
> resolved with version 28?

As a first try you could export and reimport the pool. If that doesn't
work: there is a known problem with the current version in 8.x where space
is not freed (AFAIK in the ZIL). Just booting with a more recent version
(9-current) and importing and exporting again should fix this issue (with
9-current there is also "zpool import -F <pool>", which is supposed to go
back to a working state if a normal import is not possible; I suggest
searching the net for what exactly it does in case you need to use it).

If you can not try this, you could try to destroy this one FS on the pool
and recreate it.
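A hedged sketch of that sequence, using the pool name from the thread;
exporting a pool that carries the running system would have to be done
from other boot media:

    zpool export void
    zpool import void

    # From a 9-CURRENT environment, if a normal import no longer works:
    zpool import -F void   # attempts recovery by discarding the last
                           # few transactions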
Although, if you do not have (a place to make a) copy of this data, I do
not know what else to try.

Bye,
Alexander.

-- 
http://www.Leidinger.net    Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137

From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 10:41:15 2011
From: Jeremy Chadwick
To: Dr Josef Karthauser
Cc: freebsd-fs@freebsd.org
Date: Sun, 27 Mar 2011 03:41:12 -0700
Message-ID: <20110327104112.GA74250@icarus.home.lan>
Subject: Re: ZFS Problem - full disk, can't recover space :(.

On Sun, Mar 27, 2011 at 11:01:04AM +0100, Dr Josef Karthauser wrote:
> On 27 Mar 2011, at 10:41, Jeremy Chadwick wrote:
> > I'm curious about something -- we use RELENG_8 systems with a mirror
> > zpool (kinda funny how I did it too, since the system only has 2 disks)
> > for /home. Our SpamAssassin configuration obviously writes to
> > $user/.spamassassin/bayes_* files. Yet, we do not see this sparse file
> > problem that others are reporting.
> >
> > [df/zpool/SpamAssassin details snipped]
>
> No idea what caused it, but whenever I ran the bayes expiry it created a
> new file that just blew up and filled all the available space. I've got
> around the issue temporarily. I used 'swapoff' to recover a 4Gb swap
> partition, created a UFS and mounted that in the jail in question. After
> rsyncing the bayes database to that disk I was able to run an expire
> with no trouble at all, so it wasn't that the bayes was corrupt or
> anything. I've now copied it back and it runs fine. I expect that the
> problem will reoccur at some inconvenient point in the future.

Not to say you're wrong -- there are lots of people who experience this
problem it seems -- but I can't reproduce it.

$ cd .spamassassin/
$ ls -l bayes_*
-rw-------  1 jdc  users    42888 Mar 27 03:13 bayes_journal
-rw-------  1 jdc  users   360448 Mar 27 03:13 bayes_seen
-rw-------  1 jdc  users  4947968 Mar 27 03:13 bayes_toks
$ rm bayes_*
$ mail jdc
> Subject: testing bayes
> sfddsfdfs
> i like snakes
> .
> EOT
$ ls -l bayes_*
-rw-------  1 jdc  users  131072 Mar 27 03:38 bayes_seen
-rw-------  1 jdc  users  131072 Mar 27 03:38 bayes_toks

The system in question:

amd64 FreeBSD 8.1-STABLE #0: Wed Oct 20 00:54:42 PDT 2010

This system is currently running ZFS pool version 15.
All pools are formatted using this version.

This system is currently running ZFS filesystem version 4.
All filesystems are formatted with the current version.

Dunno what to say other than that. :-(

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP 4BD6C0CB  |
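The two version reports above match what the upgrade commands print when
run without arguments; a hedged sketch:

    zpool upgrade   # reports the running zpool version (and older pools)
    zfs upgrade     # reports the running ZFS filesystem (ZPL) version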
From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 12:16:22 2011
From: Mikolaj Golub
To: Freddie Cash
Cc: FreeBSD Filesystems, FreeBSD Stable, FreeBSD-Current, Pawel Jakub Dawidek
Date: Sun, 27 Mar 2011 15:16:15 +0300
Message-ID: <86zkogep2o.fsf@kopusha.home.net>
Subject: Re: Any success stories for HAST + ZFS?

On Sat, 26 Mar 2011 10:52:08 -0700 Freddie Cash wrote:

FC> hastd backtrace is here:
FC> http://www.sd73.bc.ca/downloads/crash/hast-backtrace.png

It is not a hastd crash, but a kernel crash triggered by the hastd
process. I am not sure I got the same crash as you, but apparently a race
is possible in g_gate on device creation.
I got the following crash starting many hast providers simultaneously:

fault virtual address   = 0x0
#8  0xc0c11adc in calltrap () at /usr/src/sys/i386/i386/exception.s:168
#9  0xc086ac6b in g_gate_ioctl (dev=0xc6a24300, cmd=3374345472,
    addr=0xc9fec000 "\002", flags=3, td=0xc7ff0b80)
    at /usr/src/sys/geom/gate/g_gate.c:410
#10 0xc0853c5b in devfs_ioctl_f (fp=0xc9b9e310, com=3374345472,
    data=0xc9fec000, cred=0xc8c9c200, td=0xc7ff0b80)
    at /usr/src/sys/fs/devfs/devfs_vnops.c:678
#11 0xc09210cd in kern_ioctl (td=0xc7ff0b80, fd=3, com=3374345472,
    data=0xc9fec000 "\002") at file.h:262
#12 0xc0921254 in ioctl (td=0xc7ff0b80, uap=0xf5edbcec)
    at /usr/src/sys/kern/sys_generic.c:679
#13 0xc0916616 in syscallenter (td=0xc7ff0b80, sa=0xf5edbce4)
    at /usr/src/sys/kern/subr_trap.c:315
#14 0xc0c2b9ff in syscall (frame=0xf5edbd28)
    at /usr/src/sys/i386/i386/trap.c:1086
#15 0xc0c11b71 in Xint0x80_syscall ()
    at /usr/src/sys/i386/i386/exception.s:266

Or just creating many ggate devices simultaneously:

for i in `jot 100`; do ./ggiocreate $i& done

ggiocreate.c is attached.

In my case the kernel crashes in g_gate_create() when checking for name
collisions in strcmp():

	/* Check for name collision. */
	for (unit = 0; unit < g_gate_maxunits; unit++) {
		if (g_gate_units[unit] == NULL)
			continue;
		if (strcmp(name, g_gate_units[unit]->sc_provider->name) != 0)
			continue;
		mtx_unlock(&g_gate_units_lock);
		mtx_destroy(&sc->sc_queue_mtx);
		free(sc, M_GATE);
		return (EEXIST);
	}

I think the issue is the following. When preparing sc we take
g_gate_units_lock, check for name collisions, fill in the sc fields except
sc->sc_provider, and register sc in g_gate_units[unit]. sc_provider is
filled in later, after g_gate_units_lock has been released. So the
following scenario is possible:

1) Thread A registers sc in g_gate_units[unit] with
   g_gate_units[unit]->sc_provider still NULL and releases
   g_gate_units_lock.

2) Thread B traverses g_gate_units[] when checking for name collisions
   and crashes accessing g_gate_units[unit]->sc_provider->name.

The attached patch fixes the issue in my case.
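A hedged sketch of driving the reproducer attached below (it assumes the
geom_gate kernel module; /dev/ggctl is the control device that
G_GATE_CTL_NAME names):

    kldload geom_gate                               # provides /dev/ggctl
    cc -o ggiocreate ggiocreate.c
    for i in `jot 100`; do ./ggiocreate $i & done   # race 100 creations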
-- 
Mikolaj Golub

[attachment: ggiocreate.c]

#include <sys/cdefs.h>
#include <sys/types.h>

#include <geom/gate/g_gate.h>

#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <strings.h>

int
main(int argc, char *argv[])
{
	struct g_gate_ctl_create ggiocreate;
	struct g_gate_ctl_cancel ggiocancel;
	struct g_gate_ctl_destroy ggiod;
	int fd, unit;

	if (argc < 2)
		errx(1, "usage: ggiocreate name");

	fd = open("/dev/" G_GATE_CTL_NAME, O_RDWR);
	if (fd < 0)
		err(1, "Unable to open /dev/" G_GATE_CTL_NAME);
	bzero(&ggiocreate, sizeof(ggiocreate));
	ggiocreate.gctl_version = G_GATE_VERSION;
	ggiocreate.gctl_mediasize = 1024 * 10240;
	ggiocreate.gctl_sectorsize = 512;
	ggiocreate.gctl_flags = 0;
	ggiocreate.gctl_maxcount = G_GATE_MAX_QUEUE_SIZE;
	ggiocreate.gctl_timeout = 0;
	ggiocreate.gctl_unit = G_GATE_NAME_GIVEN;
	snprintf(ggiocreate.gctl_name, sizeof(ggiocreate.gctl_name),
	    "testhast/%s", argv[1]);
	if (ioctl(fd, G_GATE_CMD_CREATE, &ggiocreate) != 0)
		err(1, "Unable to create testhast/%s device", argv[1]);
	else
		unit = ggiocreate.gctl_unit;

	sleep(10);

	bzero(&ggiod, sizeof(ggiod));
	ggiod.gctl_version = G_GATE_VERSION;
	ggiod.gctl_unit = unit;
	ggiod.gctl_force = 1;
	if (ioctl(fd, G_GATE_CMD_DESTROY, &ggiod) < 0)
		err(1, "Unable to destroy testhast/%s device", argv[1]);

	return 0;
}

[attachment: g_gate.patch]

Index: sys/geom/gate/g_gate.c
===================================================================
--- sys/geom/gate/g_gate.c	(revision 220050)
+++ sys/geom/gate/g_gate.c	(working copy)
@@ -407,13 +407,14 @@ g_gate_create(struct g_gate_ctl_create *ggio)
 	for (unit = 0; unit < g_gate_maxunits; unit++) {
 		if (g_gate_units[unit] == NULL)
 			continue;
-		if (strcmp(name, g_gate_units[unit]->sc_provider->name) != 0)
+		if (strcmp(name, g_gate_units[unit]->sc_name) != 0)
 			continue;
 		mtx_unlock(&g_gate_units_lock);
 		mtx_destroy(&sc->sc_queue_mtx);
 		free(sc, M_GATE);
 		return (EEXIST);
 	}
+	sc->sc_name = name;
 	g_gate_units[sc->sc_unit] = sc;
 	g_gate_nunits++;
 	mtx_unlock(&g_gate_units_lock);
@@ -432,6 +433,9 @@ g_gate_create(struct g_gate_ctl_create *ggio)
 	sc->sc_provider = pp;
 	g_error_provider(pp, 0);
 	g_topology_unlock();
+	mtx_lock(&g_gate_units_lock);
+	sc->sc_name = sc->sc_provider->name;
+	mtx_unlock(&g_gate_units_lock);
 
 	if (sc->sc_timeout > 0) {
 		callout_reset(&sc->sc_callout, sc->sc_timeout * hz,
Index: sys/geom/gate/g_gate.h
===================================================================
--- sys/geom/gate/g_gate.h	(revision 220050)
+++ sys/geom/gate/g_gate.h	(working copy)
@@ -76,6 +76,7 @@
  * 'P:' means 'Protected by'.
*/ struct g_gate_softc { + char *sc_name; /* P: (read-only) */ int sc_unit; /* P: (read-only) */ int sc_ref; /* P: g_gate_list_mtx */ struct g_provider *sc_provider; /* P: (read-only) */ @@ -96,7 +97,6 @@ struct g_gate_softc { LIST_ENTRY(g_gate_softc) sc_next; /* P: g_gate_list_mtx */ char sc_info[G_GATE_INFOSIZE]; /* P: (read-only) */ }; -#define sc_name sc_provider->geom->name #define G_GATE_DEBUG(lvl, ...) do { \ if (g_gate_debug >= (lvl)) { \ --=-=-=-- From owner-freebsd-fs@FreeBSD.ORG Sun Mar 27 20:02:13 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E36881065690; Sun, 27 Mar 2011 20:02:13 +0000 (UTC) (envelope-from to.my.trociny@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id C9D528FC0A; Sun, 27 Mar 2011 20:02:12 +0000 (UTC) Received: by mail-fx0-f54.google.com with SMTP id 11so2819599fxm.13 for ; Sun, 27 Mar 2011 13:02:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:from:to:cc:subject:references:x-comment-to :sender:date:in-reply-to:message-id:user-agent:mime-version :content-type; bh=t+iur5Sb6vp25CFtvZaYK55NauMgmwJR8RldTxQM3jM=; b=pGuFI9k1UFrRuHLiSCWudbRn8NT0obqIFVB7C+0cUTXgtNjOdzTGmZOskac7ip61P9 qg1m1g8ar1g0RTlcCCTeChPFitX6SPi5f3uo8MaSPvappIoO3OIF3mqQboHWv7+03BzS wvA+OUf58XjsCecOJxTPw1CvgemCzBDJ4J8pw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:to:cc:subject:references:x-comment-to:sender:date:in-reply-to :message-id:user-agent:mime-version:content-type; b=sn6p8agLrS2s+ItIbNNgCSxsBPOBNtOxnh65qhxvdXJDKommyIM/UwnbZbGtP0NBIP ushvL+Pyds6RcCNFYKH/z621H3bUJ7huS2b64xiIBnGWz0sx3ZVZDf7gnb1CLY/s+j/o 3BwdHRtY8r1PM3DvuxqO/8pvhQWj4avnqdpSk= Received: by 10.223.15.92 with SMTP id j28mr3565462faa.56.1301256132473; Sun, 27 Mar 2011 13:02:12 -0700 (PDT) Received: from localhost ([95.69.172.154]) by mx.google.com with ESMTPS id c24sm1198140fak.7.2011.03.27.13.02.10 (version=TLSv1/SSLv3 cipher=OTHER); Sun, 27 Mar 2011 13:02:11 -0700 (PDT) From: Mikolaj Golub To: Mikolaj Golub References: <20110325075541.GA1742@garage.freebsd.pl> <86zkogep2o.fsf@kopusha.home.net> X-Comment-To: Mikolaj Golub Sender: Mikolaj Golub Date: Sun, 27 Mar 2011 23:02:09 +0300 In-Reply-To: <86zkogep2o.fsf@kopusha.home.net> (Mikolaj Golub's message of "Sun, 27 Mar 2011 15:16:15 +0300") Message-ID: <86k4fke3i6.fsf@kopusha.home.net> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.2 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: FreeBSD Filesystems , Pawel Jakub Dawidek , FreeBSD-Current , FreeBSD Stable Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 27 Mar 2011 20:02:14 -0000 On Sun, 27 Mar 2011 15:16:15 +0300 Mikolaj Golub wrote to Freddie Cash: MG> The attached patch fixes the issue in my case. The patch is committed to current. 
-- Mikolaj Golub From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 07:20:10 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CBCAC106564A; Mon, 28 Mar 2011 07:20:10 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (unknown [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 6C2038FC0C; Mon, 28 Mar 2011 07:20:10 +0000 (UTC) Received: from lion.home.serebryakov.spb.ru (89.112.15.178.pppoe.eltel.net [89.112.15.178]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPA id B21FD4AC2D; Mon, 28 Mar 2011 11:20:09 +0400 (MSD) Date: Mon, 28 Mar 2011 11:20:07 +0400 From: Lev Serebryakov Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <895726715.20110328112007@serebryakov.spb.ru> To: freebsd-stable@freebsd.org, freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=windows-1251 Content-Transfer-Encoding: quoted-printable Cc: Subject: Backup tool for ZFS with all "classic dump(8)" features -- what should I use? (or is there any way to make dump -L work well on large FFS2+SU?) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: lev@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 07:20:10 -0000 Hello, Freebsd-stable.

Now I'm backing up my HOME filesystem with dump(8). It works perfectly for the 80GiB FS and has many features: a snapshot for consistency, levels, the "nodump" flag (my users use it a lot!), the ability to extract a single removed file from a backup without restoring the full FS, a simple script wrap-up for the level schedule, etc.

On the new server I have a huge HOME (500GiB). And even though it is filled with only 25GiB of data, creating a snapshot takes about 10 minutes, freezes all I/O, and sometimes FAILS (!!!).

I'm thinking of transferring the HOME filesystem to ZFS. But I cannot find appropriate tools for backing it up. Here are some requirements:

(1) One-file (one-stream) backup. Not a directory mirror. I need to store it on an FTP server and upload it with a single command.

(2) Levels & incremental backups. Now I have a "Monthly (0) - Weekly (1,2,3) - daily (4,5,6,7,8,9)" scheme. I could accept other schemes, as long as they don't store a full backup every day and don't need a full backup more often than weekly.

(3) Minimum of local metadata. Storing previous backups locally to calculate the next one is not an appropriate solution. "zfs send" needs previous snapshots for an incremental backup, for example.

(4) Working from a snapshot (I think this is trivial in the case of ZFS).

(5) Backup exclusions should be controlled by the users (not the super-user) themselves, like the "nodump" flag in the case of FFS/dump(8). "zfs send" cannot provide this. I have very responsible users, so a full backup now takes only up to 10GiB when the whole HOME FS is about 25GiB, which is a big help when the backup is sent over the Internet to another host.

(6) Storing of ALL FS-specific information -- ACLs, etc.

(7) Free :)

Is there something like this for ZFS?
"zfs send" looks promising, EXCEPT item (5) and, maybe, (3) :( gnu tar looks like everything but (6) :( --=20 // Black Lion AKA Lev Serebryakov From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 08:57:35 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4ED50106564A for ; Mon, 28 Mar 2011 08:57:35 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id F30EC8FC0A for ; Mon, 28 Mar 2011 08:57:34 +0000 (UTC) Received: from outgoing.leidinger.net (p5B1548A0.dip.t-dialin.net [91.21.72.160]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id EDCD2844015; Mon, 28 Mar 2011 10:57:30 +0200 (CEST) Received: from webmail.leidinger.net (webmail.Leidinger.net [IPv6:fd73:10c7:2053:1::2:102]) by outgoing.leidinger.net (Postfix) with ESMTP id 0D2CB1792; Mon, 28 Mar 2011 10:57:28 +0200 (CEST) Received: (from www@localhost) by webmail.leidinger.net (8.14.4/8.13.8/Submit) id p2S8vQ2l080985; Mon, 28 Mar 2011 10:57:26 +0200 (CEST) (envelope-from Alexander@Leidinger.net) Received: from pslux.ec.europa.eu (pslux.ec.europa.eu [158.169.9.14]) by webmail.leidinger.net (Horde Framework) with HTTP; Mon, 28 Mar 2011 10:57:26 +0200 Message-ID: <20110328105726.1928377ryc8ppkis@webmail.leidinger.net> Date: Mon, 28 Mar 2011 10:57:26 +0200 From: Alexander Leidinger To: Dr Josef Karthauser References: <9CF23177-92D6-40C5-8C68-B7E2F88236E6@unitedlane.com> <20110326225430.00006a76@unknown> <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com> <20110327075814.GA71131@icarus.home.lan> <20110327084355.GA71864@icarus.home.lan> <094E71D9-B28B-46DB-8EA9-B11F17D5A32A@unitedlane.com> <20110327094121.GA72701@icarus.home.lan> <980F394D-36FC-42F2-9F3F-A3C44A385600@unitedlane.com> In-Reply-To: <980F394D-36FC-42F2-9F3F-A3C44A385600@unitedlane.com> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Dynamic Internet Messaging Program (DIMP) H3 (1.1.6) X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: EDCD2844015.AFE12 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=0, required 6, autolearn=disabled) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1301907451.57901@m9N7X0awlIDiDukY5x47cg X-EBL-Spam-Status: No Cc: freebsd-fs@freebsd.org Subject: Re: ZFS Problem - full disk, can't recover space :(. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 08:57:35 -0000 Quoting Dr Josef Karthauser (from Sun, 27 Mar 2011 11:01:04 +0100): > I'd really like my disk space back though please! I suspect that I'm > going to have to wait for 28 to have that happen though :(. As an intermediate action you could export the pool, boot a 9-current live-image, import the pool there and export it again. I do not know if you need to do a scrub or not to recover the free space or not. AFAIK you do not need to update to v28, the new code should take care about the issue without an update. This will not prevent loosing space again, but at least it should give back the lost space for the moment. Bye, Alexander. 
-- If a man had a child who'd gone anti-social, killed perhaps, he'd still tend to protect that child. -- McCoy, "The Ultimate Computer", stardate 4731.3 http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 09:47:26 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E63A3106566C; Mon, 28 Mar 2011 09:47:26 +0000 (UTC) (envelope-from petefrench@ingresso.co.uk) Received: from constantine.ingresso.co.uk (constantine.ingresso.co.uk [IPv6:2001:470:1f09:176e::3]) by mx1.freebsd.org (Postfix) with ESMTP id 845CF8FC13; Mon, 28 Mar 2011 09:47:26 +0000 (UTC) Received: from dilbert.london-internal.ingresso.co.uk ([10.64.50.6] helo=dilbert.ticketswitch.com) by constantine.ingresso.co.uk with esmtps (TLSv1:AES256-SHA:256) (Exim 4.73 (FreeBSD)) (envelope-from ) id 1Q492g-0004HU-EE; Mon, 28 Mar 2011 10:47:22 +0100 Received: from petefrench by dilbert.ticketswitch.com with local (Exim 4.74 (FreeBSD)) (envelope-from ) id 1Q492g-000CvZ-DI; Mon, 28 Mar 2011 10:47:22 +0100 To: fjwcash@gmail.com, trociny@freebsd.org In-Reply-To: <86zkogep2o.fsf@kopusha.home.net> Message-Id: From: Pete French Date: Mon, 28 Mar 2011 10:47:22 +0100 Cc: freebsd-fs@freebsd.org, pjd@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 09:47:27 -0000

> It is not a hastd crash, but a kernel crash triggered by the hastd process.
>
> I am not sure I got the same crash as you but apparently the race is possible
> in g_gate on device creation.
>
> I got the following crash starting many hast providers simultaneously:

This is very interesting to me - my successful ZFS+HAST setup only had a single drive, but in my new setup I am intending to use two HAST processes and then mirror across them under ZFS, so I am likely to hit this bug. Are the processes stable once launched?

I don't have a system on which to try your patch at the moment, but will do so when I get the opportunity!

From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 10:07:19 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5FD5C1065673 for ; Mon, 28 Mar 2011 10:07:19 +0000 (UTC) (envelope-from josef.karthauser@unitedlane.com) Received: from k2smtpout03-01.prod.mesa1.secureserver.net (k2smtpout03-01.prod.mesa1.secureserver.net [64.202.189.171]) by mx1.freebsd.org (Postfix) with SMTP id 2F2138FC16 for ; Mon, 28 Mar 2011 10:07:18 +0000 (UTC) Received: (qmail 24519 invoked from network); 28 Mar 2011 10:07:18 -0000 Received: from unknown (HELO ip-72.167.34.38.ip.secureserver.net) (72.167.34.38) by k2smtpout03-01.prod.mesa1.secureserver.net (64.202.189.171) with ESMTP; 28 Mar 2011 10:07:18 -0000 Received: (qmail 3235 invoked from network); 28 Mar 2011 06:06:42 -0400 Received: from unknown (HELO ?90.155.77.76?)
(90.155.77.76) by unitedlane.com with (AES128-SHA encrypted) SMTP; 28 Mar 2011 06:06:41 -0400 Mime-Version: 1.0 (Apple Message framework v1082) Content-Type: text/plain; charset=us-ascii From: Dr Josef Karthauser In-Reply-To: <20110328105726.1928377ryc8ppkis@webmail.leidinger.net> Date: Mon, 28 Mar 2011 11:07:40 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: References: <9CF23177-92D6-40C5-8C68-B7E2F88236E6@unitedlane.com> <20110326225430.00006a76@unknown> <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com> <20110327075814.GA71131@icarus.home.lan> <20110327084355.GA71864@icarus.home.lan> <094E71D9-B28B-46DB-8EA9-B11F17D5A32A@unitedlane.com> <20110327094121.GA72701@icarus.home.lan> <980F394D-36FC-42F2-9F3F-A3C44A385600@unitedlane.com> <20110328105726.1928377ryc8ppkis@webmail.leidinger.net> To: Alexander Leidinger X-Mailer: Apple Mail (2.1082) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS Problem - full disk, can't recover space :(. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 10:07:19 -0000 On 28 Mar 2011, at 09:57, Alexander Leidinger wrote:
> Quoting Dr Josef Karthauser (from Sun, 27 Mar 2011 11:01:04 +0100):
>
>> I'd really like my disk space back though please! I suspect that I'm
>> going to have to wait for 28 to have that happen though :(.
>
> As an intermediate action you could export the pool, boot a 9-current
> live-image, import the pool there and export it again. I do not know
> whether you need to do a scrub to recover the free space. AFAIK you do
> not need to update to v28; the new code should take care of the issue
> without an update.
>
> This will not prevent losing space again, but at least it should give
> back the lost space for the moment.

That looks like a plan. I'll give it a go.
Thanks Alex, Joe From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 10:52:52 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AD0AA106564A; Mon, 28 Mar 2011 10:52:52 +0000 (UTC) (envelope-from to.my.trociny@gmail.com) Received: from mail-ww0-f42.google.com (mail-ww0-f42.google.com [74.125.82.42]) by mx1.freebsd.org (Postfix) with ESMTP id B97608FC18; Mon, 28 Mar 2011 10:52:51 +0000 (UTC) Received: by wwk4 with SMTP id 4so1596784wwk.1 for ; Mon, 28 Mar 2011 03:52:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:from:to:cc:subject:organization:references :sender:date:in-reply-to:message-id:user-agent:mime-version :content-type; bh=KViQbIxGRkfFC1VESkTyldOT3bJ/nQHrGEjGbMnWNzo=; b=h4z5H97oRpkcl3MwDgITGaBtwOl3O7Xeao5wszwLSHwXo/IoPo4btPFc+/md6EdeVS mB883b5Hi0maPHsXKEaWrsA4FBXac9OxhnSw46Z2aiDgy3HNvovAcDE04xVVHb29cojc Suh6QKz6TMn8RZSJ+Zd2VwNqddX6XocVamXJc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:to:cc:subject:organization:references:sender:date:in-reply-to :message-id:user-agent:mime-version:content-type; b=VEXjXYf4Wg6bDy8B/eZu2YLO5mM0Lhezj9cqnBtPoIe30frrvvRDX9rzEyhkbQtIJD 7bbNWCxnexfNtKVSAYfAyz5LGn+x/9OxI1yVdjzCWCjjclT8tCSdlUXXggKhbKMYEEXO yMXxEYDzIkDTRig3u9m3kjuOnAiwODnJLNRWA= Received: by 10.216.9.200 with SMTP id 50mr2585158wet.83.1301309570572; Mon, 28 Mar 2011 03:52:50 -0700 (PDT) Received: from localhost ([94.27.39.186]) by mx.google.com with ESMTPS id c54sm1436080wer.30.2011.03.28.03.52.48 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 28 Mar 2011 03:52:49 -0700 (PDT) From: Mikolaj Golub To: Pete French Organization: TOA Ukraine References: Sender: Mikolaj Golub Date: Mon, 28 Mar 2011 13:52:45 +0300 In-Reply-To: (Pete French's message of "Mon, 28 Mar 2011 10:47:22 +0100") Message-ID: <86wrjj5xfm.fsf@in138.ua3> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.2 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org, freebsd-current@freebsd.org, pjd@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 10:52:52 -0000 On Mon, 28 Mar 2011 10:47:22 +0100 Pete French wrote:

>> It is not a hastd crash, but a kernel crash triggered by the hastd process.
>>
>> I am not sure I got the same crash as you but apparently the race is possible
>> in g_gate on device creation.
>>
>> I got the following crash starting many hast providers simultaneously:

PF> This is very interesting to me - my successful ZFS+HAST setup only had
PF> a single drive, but in my new setup I am intending to use two
PF> HAST processes and then mirror across them under ZFS, so I am
PF> likely to hit this bug. Are the processes stable once launched?

Yes, you should hit it only during hast device creation. The workaround is to avoid using 'hastctl role primary all' and to start the providers one by one instead.
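A minimal sketch of that workaround as a shell loop (the resource names disk0 through disk3 are hypothetical; substitute the resources from your own hast.conf):

    #!/bin/sh
    # Switch each HAST resource to primary one at a time instead of "all",
    # giving g_gate time to finish creating each /dev/hast/<name> device.
    for res in disk0 disk1 disk2 disk3; do
            hastctl role primary "$res"
            sleep 2
    done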
-- Mikolaj Golub From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 11:06:56 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2A4F51065673 for ; Mon, 28 Mar 2011 11:06:56 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id EFE638FC0C for ; Mon, 28 Mar 2011 11:06:55 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p2SB6tI4026633 for ; Mon, 28 Mar 2011 11:06:55 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p2SB6tfL026631 for freebsd-fs@FreeBSD.org; Mon, 28 Mar 2011 11:06:55 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 28 Mar 2011 11:06:55 GMT Message-Id: <201103281106.p2SB6tfL026631@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 11:06:56 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs o kern/155484 fs [ufs] GPT + UFS boot don't work well together o kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 o kern/154447 fs [zfs] [panic] Occasional panics - solaris assert somew f kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153847 fs [nfs] [panic] Kernel panic from incorrect m_free in nf o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small p kern/152488 fs [tmpfs] [patch] mtime of file updated when only inode o kern/152079 fs [msdosfs] [patch] Small cleanups from the other NetBSD o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o kern/151845 fs [smbfs] [patch] smbfs should be upgraded to support Un o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/151111 fs [zfs] vnodes leakage during zfs unmount o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/150207 fs zpool(1): zpool import -d /dev tries to open weird dev o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa f kern/149022 fs [hang] File system operations hangs with suspfs state o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o bin/148296 fs [zfs] [loader] [patch] Very slow probe in 
/usr/src/sys o kern/148204 fs [nfs] UDP NFS causes overload o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147790 fs [zfs] zfs set acl(mode|inherit) fails on existing zfs o kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an o bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142914 fs [zfs] ZFS performance degradation over time o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142401 fs [ntfs] [patch] Minor updates to NTFS from NetBSD o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140134 fs [msdosfs] write and fsck destroy filesystem integrity o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... 
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha f kern/120991 fs [panic] [ffs] [snapshot] System crashes when manipulat o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o kern/116170 fs [panic] Kernel panic when mounting /tmp o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use 
getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/33464 fs [ufs] soft update inconsistencies after system crash o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 220 problems total. 
From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 11:50:41 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2352210656D0; Mon, 28 Mar 2011 11:50:41 +0000 (UTC) (envelope-from avg@freebsd.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 02D9B8FC17; Mon, 28 Mar 2011 11:50:39 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA27894; Mon, 28 Mar 2011 14:50:38 +0300 (EEST) (envelope-from avg@freebsd.org) Message-ID: <4D90760D.70906@freebsd.org> Date: Mon, 28 Mar 2011 14:50:37 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.15) Gecko/20110309 Lightning/1.0b2 Thunderbird/3.1.9 MIME-Version: 1.0 To: lev@freebsd.org References: <895726715.20110328112007@serebryakov.spb.ru> In-Reply-To: <895726715.20110328112007@serebryakov.spb.ru> X-Enigmail-Version: 1.1.2 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Backup tool for ZFS with all "classic dump(8)" features -- what should I use? (or is there any way to make dump -L work well on large FFS2+SU?) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 11:50:41 -0000 on 28/03/2011 10:20 Lev Serebryakov said the following:
> Hello, Freebsd-stable.
>
> Now I'm backing up my HOME filesystem with dump(8). It works perfectly
> for the 80GiB FS and has many features: a snapshot for consistency,
> levels, the "nodump" flag (my users use it a lot!), the ability to extract
> a single removed file from a backup without restoring the full FS, a
> simple script wrap-up for the level schedule, etc.
>
> On the new server I have a huge HOME (500GiB). And even though it is
> filled with only 25GiB of data, creating a snapshot takes about 10 minutes,
> freezes all I/O, and sometimes FAILS (!!!).
>
> I'm thinking of transferring the HOME filesystem to ZFS. But I cannot find
> appropriate tools for backing it up. Here are some requirements:
>
> (1) One-file (one-stream) backup. Not a directory mirror. I need to
> store it on an FTP server and upload it with a single command.
>
> (2) Levels & incremental backups. Now I have a "Monthly (0) - Weekly
> (1,2,3) - daily (4,5,6,7,8,9)" scheme. I could accept other schemes,
> as long as they don't store a full backup every day and don't need a full
> backup more often than weekly.
>
> (3) Minimum of local metadata. Storing previous backups locally to
> calculate the next one is not an appropriate solution. "zfs send" needs
> previous snapshots for an incremental backup, for example.
>
> (4) Working from a snapshot (I think this is trivial in the case of ZFS).
>
> (5) Backup exclusions should be controlled by the users (not the super-user)
> themselves, like the "nodump" flag in the case of FFS/dump(8). "zfs send"
> cannot provide this. I have very responsible users, so a full backup
> now takes only up to 10GiB when the whole HOME FS is about 25GiB, which
> is a big help when the backup is sent over the Internet to another host.
>
> (6) Storing of ALL FS-specific information -- ACLs, etc.
>
> (7) Free :)
>
> Is there something like this for ZFS?
"zfs send" looks promising, > EXCEPT item (5) and, maybe, (3) :( > > gnu tar looks like everything but (6) :( I have a script built around zfs snapshot and star (archivers/star) that has functionality similar to your requirements. -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 12:54:54 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EA5B2106566B for ; Mon, 28 Mar 2011 12:54:54 +0000 (UTC) (envelope-from peterjeremy@acm.org) Received: from fallbackmx09.syd.optusnet.com.au (fallbackmx09.syd.optusnet.com.au [211.29.132.242]) by mx1.freebsd.org (Postfix) with ESMTP id 5EDAA8FC12 for ; Mon, 28 Mar 2011 12:54:53 +0000 (UTC) Received: from mail17.syd.optusnet.com.au (mail17.syd.optusnet.com.au [211.29.132.198]) by fallbackmx09.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p2SB0X2H022298 for ; Mon, 28 Mar 2011 22:00:33 +1100 Received: from server.vk2pj.dyndns.org (c220-239-116-103.belrs4.nsw.optusnet.com.au [220.239.116.103]) by mail17.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p2SB0TgB007893 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 28 Mar 2011 22:00:29 +1100 X-Bogosity: Ham, spamicity=0.000000 Received: from server.vk2pj.dyndns.org (localhost.vk2pj.dyndns.org [127.0.0.1]) by server.vk2pj.dyndns.org (8.14.4/8.14.4) with ESMTP id p2SB0SpN011447; Mon, 28 Mar 2011 22:00:28 +1100 (EST) (envelope-from peter@server.vk2pj.dyndns.org) Received: (from peter@localhost) by server.vk2pj.dyndns.org (8.14.4/8.14.4/Submit) id p2SB0RFi011437; Mon, 28 Mar 2011 22:00:27 +1100 (EST) (envelope-from peter) Date: Mon, 28 Mar 2011 22:00:26 +1100 From: Peter Jeremy To: Anders Andersson Message-ID: <20110328110026.GA96624@server.vk2pj.dyndns.org> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="cWoXeonUoKmBZSoM" Content-Disposition: inline In-Reply-To: X-PGP-Key: http://members.optusnet.com.au/peterjeremy/pubkey.asc User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: Recover a ufs2 filesystem from a reformat with another ufs2 filesystem X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 12:54:55 -0000 --cWoXeonUoKmBZSoM Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2011-Mar-26 15:27:09 +0100, Anders Andersson wrote: >Perhaps it would be beneficial if some of this information was spread >out at random for recover purpose, although I don't know what bad side >effects this would create. Well, the backup superblocks have to be at known locations so that fsck and/or the user can find them. Traditionally, they were offset by a track more than an integral number of cylinders so the loss of an entire platter wouldn't destroy all superblocks - the code still does this but CHS values are no longer meaningful. Much of the UFS code works on the assumption that cylinder groups and inodes are located at regular locations through the disk so they can be located using simple arithmetic - check out the various macros in . About the best you can do is to build your filesystems with non-default parameters so that if you accidently newfs it with default parameters, it won't overwrite all the superblock copies. 
Of course, it will write superblocks all over your data. OTOH, since UFS2 doesn't pre-allocate inodes, if you newfs with the same parameters, you have destroyed the CG-level metadata but the majority of the inodes and all of the data will remain untouched. They can't be recovered by fsck, but a custom process that looked at all the potential inodes associated with a cylinder group would find most of the files.

-- Peter Jeremy --cWoXeonUoKmBZSoM Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (FreeBSD) iEYEARECAAYFAk2QakoACgkQ/opHv/APuIfa1QCgpOT42OneHllPLO5I8m8uEBmJ 8iEAoJXP/y3SZRAHpHQY/0GdeF3FA7x5 =Puup -----END PGP SIGNATURE----- --cWoXeonUoKmBZSoM-- From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 14:48:13 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D1985106567A for ; Mon, 28 Mar 2011 14:48:13 +0000 (UTC) (envelope-from brde@optusnet.com.au) Received: from mail05.syd.optusnet.com.au (mail05.syd.optusnet.com.au [211.29.132.186]) by mx1.freebsd.org (Postfix) with ESMTP id 5538A8FC1E for ; Mon, 28 Mar 2011 14:48:12 +0000 (UTC) Received: from c122-106-155-58.carlnfd1.nsw.optusnet.com.au (c122-106-155-58.carlnfd1.nsw.optusnet.com.au [122.106.155.58]) by mail05.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id p2SElwd4031983 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 29 Mar 2011 01:48:00 +1100 Date: Tue, 29 Mar 2011 01:47:58 +1100 (EST) From: Bruce Evans X-X-Sender: bde@besplex.bde.org To: Kostik Belousov In-Reply-To: <20110326135211.GB78089@deviant.kiev.zoral.com.ua> Message-ID: <20110329012951.N779@besplex.bde.org> References: <20110326003818.GT78089@deviant.kiev.zoral.com.ua> <20110327000956.U1316@besplex.bde.org> <20110326135211.GB78089@deviant.kiev.zoral.com.ua> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@FreeBSD.org Subject: Re: tying down adaX to physical interfaces X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 14:48:13 -0000 On Sat, 26 Mar 2011, Kostik Belousov wrote:
> On Sun, Mar 27, 2011 at 12:24:03AM +1100, Bruce Evans wrote:
>> To hijack this thread a little, I'll ask how people handle removable media
>> changing the addresses of non-removable media. I use the following to
>> prevent USB drives stealing da0 from my 1 real SCSI disk on 1 machine:
>>
>> hint.scbus.0.at="sym0"
>> hint.da.0.at="scbus0"
>>
>> This works OK and is easy to manage with only 1 SCSI disk. But 1 of my
>> USB drives also steals cd0 from a not-so-real ATAPI drive under atapicam,
>> depending on whether the USB drive is present at boot time:
>>
>> USB drive not present at boot time:
>> ad* (no SCSI disks on this machine)
>> cd0 = acd0 (but no further ATAPI drives on this machine)
>> insert USB drive:
>> da1 (da0 was reserved by above)
>> cd1 (phantom ATAPI drive on the USB drive. Accessing this hangs
>> parts of the ata system but it doesn't get used since various
>> places point only to cd0)
>>
>> USB drive present at boot time:
>> ad*
>> da1 on USB
>> cd0 phantom on USB
>> cd1 = acd[0 or 1] (normal cd0). Accessing cd0 now hangs parts of the
>> ata system and this happens too easily since various places point
>> to cd0.
>>
>> How do people defend against random USB drives present or not at boot time?
>
> Wouldn't it be cd0 on scbusX on ahciY, and cd1 on scbusZ on umass-simT?
> I believe similar hints would wire the cd0/cd1 in your case.

That works of course. I was hoping for something more automatic and general. Maybe reserve lots of bus numbers for fixed devices. What seems to happen is that buses are allocated sequentially in probe order, and at least in my configuration, USB drives are probed before atapi drives, so the removable drives always renumber the fixed drives. However, the order of the probe messages is the opposite -- atapi drives before USB drives (with higher unit numbers for atapi drives!). E.g., acd0, cd1 (same physical drive as acd0), cd0 (USB drive). This is with an older kernel and usb.

Bruce From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 14:56:56 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7A4271065670; Mon, 28 Mar 2011 14:56:56 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-iy0-f182.google.com (mail-iy0-f182.google.com [209.85.210.182]) by mx1.freebsd.org (Postfix) with ESMTP id 29C498FC14; Mon, 28 Mar 2011 14:56:55 +0000 (UTC) Received: by iyj12 with SMTP id 12so4636588iyj.13 for ; Mon, 28 Mar 2011 07:56:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:sender:date:from:to:cc:subject:in-reply-to :message-id:references:user-agent:x-openpgp-key-id :x-openpgp-key-fingerprint:mime-version:content-type; bh=GmJjFzWnsUJ5WUybe5dkksWg4HRxgpI3VHk7KP7Tfys=; b=rsFSTlTWiziabrDgHfvg0rSM3FkWiBaLOsj1E5clRTDSNbuuerfBewDIxpSbeRpRVS 6zJYKmKIs3TqkmIoA0icfOQoN6RImAkOeIDv3msJJS+TDclvpym/hG0eHl+HUTlswKy0 IFIitjBdGL++IA28FJe8R9cJsnFNkQkEXQCNE= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=sender:date:from:to:cc:subject:in-reply-to:message-id:references :user-agent:x-openpgp-key-id:x-openpgp-key-fingerprint:mime-version :content-type; b=mXtMzwStFEYXLOxuqrANstllrM/GoyJOCBRUM5tWTQDZmME5QPcy4bLo/zZGmDvbcr XdeyA/McisgFs+Ef4DLtckRfcfLmnzIX3Y4u7rqIgX9unIfCiU3iyHPknKniiUYorKGl /SscDwo98Xvct8Ou+KbaNLmFw0M0mw8ipwIL8= Received: by 10.43.64.132 with SMTP id xi4mr6911027icb.165.1301322674321; Mon, 28 Mar 2011 07:31:14 -0700 (PDT) Received: from disbatch.dataix.local (adsl-99-181-153-110.dsl.klmzmi.sbcglobal.net [99.181.153.110]) by mx.google.com with ESMTPS id hc41sm1437406ibb.64.2011.03.28.07.31.11 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 28 Mar 2011 07:31:12 -0700 (PDT) Sender: "J. Hellenthal" Date: Mon, 28 Mar 2011 10:30:59 -0400 From: "J. Hellenthal" To: Lev Serebryakov In-Reply-To: <895726715.20110328112007@serebryakov.spb.ru> Message-ID: References: <895726715.20110328112007@serebryakov.spb.ru> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-OpenPGP-Key-Id: 0x89D8547E X-OpenPGP-Key-Fingerprint: 85EF E26B 07BB 3777 76BE B12A 9057 8789 89D8 547E MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Backup tool for ZFS with all "classic dump(8)" features -- what should I use? (or is there any way to make dump -L work well on large FFS2+SU?) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 14:56:56 -0000 On Mon, 28 Mar 2011 03:20, lev@ wrote:
> Hello, Freebsd-stable.
>
> Now I'm backing up my HOME filesystem with dump(8). It works perfectly
> for the 80GiB FS and has many features: a snapshot for consistency,
> levels, the "nodump" flag (my users use it a lot!), the ability to extract
> a single removed file from a backup without restoring the full FS, a
> simple script wrap-up for the level schedule, etc.
>
> On the new server I have a huge HOME (500GiB). And even though it is
> filled with only 25GiB of data, creating a snapshot takes about 10 minutes,
> freezes all I/O, and sometimes FAILS (!!!).
>
> I'm thinking of transferring the HOME filesystem to ZFS. But I cannot find
> appropriate tools for backing it up. Here are some requirements:
>
> (1) One-file (one-stream) backup. Not a directory mirror. I need to
> store it on an FTP server and upload it with a single command.
>
> (2) Levels & incremental backups. Now I have a "Monthly (0) - Weekly
> (1,2,3) - daily (4,5,6,7,8,9)" scheme. I could accept other schemes,
> as long as they don't store a full backup every day and don't need a full
> backup more often than weekly.
>
> (3) Minimum of local metadata. Storing previous backups locally to
> calculate the next one is not an appropriate solution. "zfs send" needs
> previous snapshots for an incremental backup, for example.
>
> (4) Working from a snapshot (I think this is trivial in the case of ZFS).
>
> (5) Backup exclusions should be controlled by the users (not the super-user)
> themselves, like the "nodump" flag in the case of FFS/dump(8). "zfs send"
> cannot provide this. I have very responsible users, so a full backup
> now takes only up to 10GiB when the whole HOME FS is about 25GiB, which
> is a big help when the backup is sent over the Internet to another host.
>
> (6) Storing of ALL FS-specific information -- ACLs, etc.
>
> (7) Free :)
>
> Is there something like this for ZFS? "zfs send" looks promising,
> EXCEPT item (5) and, maybe, (3) :(
>
> gnu tar looks like everything but (6) :(
>

There is information all over the place on this. I would suggest that you take the time to go over the required reading to understand ZFS and its concepts before you jump to conclusions. Here are some docs to start with.

http://download.oracle.com/docs/cd/E19253-01/819-5461/index.html
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

-- Regards, J.
Hellenthal (0x89D8547E) JJH48-ARIN From owner-freebsd-fs@FreeBSD.ORG Mon Mar 28 20:06:48 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 256071065670; Mon, 28 Mar 2011 20:06:48 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-yi0-f54.google.com (mail-yi0-f54.google.com [209.85.218.54]) by mx1.freebsd.org (Postfix) with ESMTP id 84CB38FC15; Mon, 28 Mar 2011 20:06:47 +0000 (UTC) Received: by yie12 with SMTP id 12so1504050yie.13 for ; Mon, 28 Mar 2011 13:06:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=NXg609SdF0hMNEGm0DRYYAT9/J8sljE139yLOzfweWM=; b=EphSr5rSpAZ0N6ouQZyNrNHVD3pCrDr2Ffa/xy0ONp5ufxEUmkQE/E1mSWXUMSBLmQ FDw6L+qhbqmlIFAlktfNphP44B2AC0xheJAprkS10uI9ktCpgLhAf/T/9J/yyUCjzBf0 y1ZCrFTU1CjaWmgj4jZBjiHYX7hgyw6/HkHNA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=dbDiTJTbG91Bg/o304EahLrqUjHw0gWOrI2ZcfaEb8lrnNMjYvFgcx+5WcrISa8tQX uWvDDWJSBOyLQNw6W5bNdXo6/xU5BfS21VtuPTFZEYQjFGfPsO/z+kxblUGfQd0dUF3i bTMp4ueWIsz2HJbSct8tJuiFjc3cehdLesxUQ= MIME-Version: 1.0 Received: by 10.91.76.2 with SMTP id d2mr4256904agl.208.1301342806632; Mon, 28 Mar 2011 13:06:46 -0700 (PDT) Received: by 10.90.100.10 with HTTP; Mon, 28 Mar 2011 13:06:46 -0700 (PDT) In-Reply-To: <86zkogep2o.fsf@kopusha.home.net> References: <20110325075541.GA1742@garage.freebsd.pl> <86zkogep2o.fsf@kopusha.home.net> Date: Mon, 28 Mar 2011 13:06:46 -0700 Message-ID: From: Freddie Cash To: Mikolaj Golub Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: FreeBSD Filesystems , FreeBSD Stable , FreeBSD-Current , Pawel Jakub Dawidek Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Mar 2011 20:06:48 -0000 On Sun, Mar 27, 2011 at 5:16 AM, Mikolaj Golub wrote:
> On Sat, 26 Mar 2011 10:52:08 -0700 Freddie Cash wrote:
>
> FC> hastd backtrace is here:
> FC> http://www.sd73.bc.ca/downloads/crash/hast-backtrace.png
>
> It is not a hastd crash, but a kernel crash triggered by the hastd process.

Ah, interesting.

> I am not sure I got the same crash as you but apparently the race is possible
> in g_gate on device creation.

95% of the time that it crashed, it was when creating the /dev/hast/* devices (switching to the primary role). Most of the crashes happened when doing "hastctl role primary all", but they would occasionally happen when doing it manually for each resource. Creating the resources by hand, one every 2 seconds or so, would usually create them all without crashing.

The other 5% of the time, the hastd crashes occurred either when importing the ZFS pool, or when running multiple parallel rsyncs to the pool. hastd was always shown as the last running process in the backtrace onscreen.
> I got the following crash starting many hast providers simultaneously:
>
> fault virtual address   = 0x0
>
> #8  0xc0c11adc in calltrap () at /usr/src/sys/i386/i386/exception.s:168
> #9  0xc086ac6b in g_gate_ioctl (dev=0xc6a24300, cmd=3374345472,
>     addr=0xc9fec000 "\002", flags=3, td=0xc7ff0b80)
>     at /usr/src/sys/geom/gate/g_gate.c:410
> #10 0xc0853c5b in devfs_ioctl_f (fp=0xc9b9e310, com=3374345472,
>     data=0xc9fec000, cred=0xc8c9c200, td=0xc7ff0b80)
>     at /usr/src/sys/fs/devfs/devfs_vnops.c:678
> #11 0xc09210cd in kern_ioctl (td=0xc7ff0b80, fd=3, com=3374345472,
>     data=0xc9fec000 "\002") at file.h:262
> #12 0xc0921254 in ioctl (td=0xc7ff0b80, uap=0xf5edbcec)
>     at /usr/src/sys/kern/sys_generic.c:679
> #13 0xc0916616 in syscallenter (td=0xc7ff0b80, sa=0xf5edbce4)
>     at /usr/src/sys/kern/subr_trap.c:315
> #14 0xc0c2b9ff in syscall (frame=0xf5edbd28)
>     at /usr/src/sys/i386/i386/trap.c:1086
> #15 0xc0c11b71 in Xint0x80_syscall ()
>     at /usr/src/sys/i386/i386/exception.s:266
>
> Or just creating many ggate devices simultaneously:
>
> for i in `jot 100`; do
>     ./ggiocreate $i&
> done
>
> ggiocreate.c is attached.
>
> In my case the kernel crashes in g_gate_create() when checking for name
> collisions in strcmp():
>
>        /* Check for name collision. */
>        for (unit = 0; unit < g_gate_maxunits; unit++) {
>                if (g_gate_units[unit] == NULL)
>                        continue;
>                if (strcmp(name, g_gate_units[unit]->sc_provider->name) != 0)
>                        continue;
>                mtx_unlock(&g_gate_units_lock);
>                mtx_destroy(&sc->sc_queue_mtx);
>                free(sc, M_GATE);
>                return (EEXIST);
>        }
>
> I think the issue is the following. When preparing sc we take
> g_gate_units_lock, check for a name collision, fill the sc fields except
> sc->sc_provider, and register sc in g_gate_units[unit]. sc_provider is
> filled in later, after g_gate_units_lock has been released. So the
> following scenario is possible:
>
> 1) Thread A registers sc in g_gate_units[unit] with
> g_gate_units[unit]->sc_provider still NULL and releases g_gate_units_lock.
>
> 2) Thread B traverses g_gate_units[] while checking for a name collision and
> crashes accessing g_gate_units[unit]->sc_provider->name.
>
> The attached patch fixes the issue in my case.

Patch applied cleanly to 8-STABLE with the ZFSv28 patch also applied. Just to be safe, I did a full buildworld/kernel cycle, running the GENERIC kernel.

So far, I have not been able to produce a crash in hastd, through several reboots, switching from primary to secondary and back, and just switching from primary to init and back.

So far, so good. Now to see if I can reproduce any of the ZFS crashes I had earlier.
--=20 Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Wed Mar 30 21:23:44 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EDF571065674 for ; Wed, 30 Mar 2011 21:23:44 +0000 (UTC) (envelope-from kungfujesus06@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id 845468FC20 for ; Wed, 30 Mar 2011 21:23:44 +0000 (UTC) Received: by fxm11 with SMTP id 11so1794641fxm.13 for ; Wed, 30 Mar 2011 14:23:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:date:message-id:subject:from:to :content-type; bh=oEZLmXYieYJgR1YLAhmXJ+tAJ5ducGcuWdw1hxLofzs=; b=E+53Y/pJRUiMeyux3TBc0RBwiShCZ1RwEemGgPfJvpm+Bz1FhDpgAsFdUNwWTVYQWB xk0ewB6PQgc6AodKqTI1lrkWsSyMjmgx1KI2WZQcXWe+izu38JhXYSukH0RFijU+FZyF ycP49yC82oqi7Yeh0h9QtBlsLj8SO+S6s2AYA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=p3WqgUfEM5isSrBwDAdXuKWn6KgmxVS5wkXymP1JQVNSkQmb3cSBtwIFXcAqDr9dmV 1YIY+ys0f5jxVQPLF9OxuMMz/0h6JU3JkcgRJCR+Dd9h5ig70whqDEuNKm1DkEfrYJdK y1TwQhvdQzEq18C9QNmiOk1eZNKVDQiZT5xjY= MIME-Version: 1.0 Received: by 10.223.160.5 with SMTP id l5mr1889423fax.85.1301519793925; Wed, 30 Mar 2011 14:16:33 -0700 (PDT) Received: by 10.223.110.147 with HTTP; Wed, 30 Mar 2011 14:16:33 -0700 (PDT) Date: Wed, 30 Mar 2011 17:16:33 -0400 Message-ID: From: Adam Stylinski To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: oops X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 30 Mar 2011 21:23:45 -0000 Sorry, wrong list. 
From owner-freebsd-fs@FreeBSD.ORG Wed Mar 30 21:40:23 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 45B5C106576A for ; Wed, 30 Mar 2011 21:40:22 +0000 (UTC) (envelope-from kungfujesus06@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id 3054A8FC1D for ; Wed, 30 Mar 2011 21:40:21 +0000 (UTC) Received: by fxm11 with SMTP id 11so1806467fxm.13 for ; Wed, 30 Mar 2011 14:40:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:date:message-id:subject:from:to :content-type; bh=TQK5LccUqQmSj7YqFxhFu+nYX2qp2dYId6keWL1a3eA=; b=bALvK/TdmtRqFFibVJ5fdUZc0B5GaLEJ8DaWdMupMFWnY5joT/Kt4qZGbwfrs0kOxH ncTdTWYG7gHhfx3+scDAwIzC2kDcLoXyLNJ13EGf34g7nkWbOU/M3WVMP5i8h0PpNIv/ L1+nwtnBLMByWeO7Y29QEdAd5bK5aV1UWCv5I= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=ODvhYG2w5XFopT9N6U/s8EDndDnsE4UlnsxYzTPwgoPEhBUWRcNTDJb5r6EcJzTT8Z DsLxuvr/My86ycJd6PccHfMxbhM7pvmuhrxZiXLBcXaAVMALzb93GI/ZX/ZYFOgUavtQ tDQZ3m6OwWk0TgTJw9H7phJgYnDVXMVS/iPds= MIME-Version: 1.0 Received: by 10.223.106.76 with SMTP id w12mr949580fao.104.1301519452282; Wed, 30 Mar 2011 14:10:52 -0700 (PDT) Received: by 10.223.110.147 with HTTP; Wed, 30 Mar 2011 14:10:52 -0700 (PDT) Date: Wed, 30 Mar 2011 17:10:52 -0400 Message-ID: From: Adam Stylinski To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: net80211 and interface requests X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 30 Mar 2011 21:40:23 -0000 Hello, This list has helped me before so I'll email again with the hopes that somebody has an answer. All is working well with my project, however for the life of me I cannot get the interface to inject the raw frames faster than 11mbps. I'm following the example given in /usr/src/tools/tools/net80211/wlaninject.c, and manually specifying parameters such as ucastrate, mcastrate, and mgmtrate within ifconfig. I'm putting the card into pureg mode, and yet I still can't inject any faster. I've even gone so far as to specify an ieee802211_txparam struct giving values of 255 both mcast and ucast rates within the struct (and of course anding them by 0xff). I then used the ioctl call to set the flags within the interface request. Any help would be greatly appreciated. I am doing nanosleeps in between transmissions as if I don't the bpf clone can't inject due to the buffer being too full. There's probably a better way of doing this, but I doubt the nanosleeps are the issue (afterall, I get almost exactly 11mbps). I should probably note I'm not doing any ACKs, this is pure transmits. If anybody cares enough to look at my unpolished code to get a better idea, look here: http://projhinternet.svn.sourceforge.net/ The idea is to allow unidirectional traffic so that with an FCC amateur license (yes I know I'm not currently broadcasting the call sign as of yet) you can broadcast unencrypted transmissions for miles (with a linear amplifier spec'd to 2.4ghz). With the license FCC part15 no longer applies and you can operate just like in any other amateur band. 
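The rate knobs being set here are ordinary ifconfig(8) wireless parameters, so the interface setup reduces to something like the sketch below; the device names are hypothetical, rates are in Mb/s, and whether the driver honours them for injected frames is exactly the open question in this mail:

    # create an injection-capable vap on a hypothetical Atheros NIC
    ifconfig wlan0 create wlandev ath0 wlanmode monitor
    # pin unicast/multicast/management transmit rates to 54 Mb/s OFDM
    ifconfig wlan0 mode 11g ucastrate 54 mcastrate 54 mgmtrate 54 up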
From owner-freebsd-fs@FreeBSD.ORG Thu Mar 31 06:34:35 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EE155106564A; Thu, 31 Mar 2011 06:34:35 +0000 (UTC) (envelope-from dnewman@networktest.com) Received: from mail3.networktest.com (mail3.networktest.com [69.55.234.104]) by mx1.freebsd.org (Postfix) with ESMTP id CD3678FC17; Thu, 31 Mar 2011 06:34:35 +0000 (UTC) Received: from localhost (localhost [69.55.234.104]) by mail3.networktest.com (Postfix) with ESMTP id 9EFAD2560D3; Wed, 30 Mar 2011 23:15:27 -0700 (PDT) Received: from mail3.networktest.com ([69.55.234.104]) by localhost (mail3.networktest.com [69.55.234.104]) (amavisd-maia, port 10024) with ESMTP id 63555-06; Wed, 30 Mar 2011 23:15:27 -0700 (PDT) Received: from sagan.local (unknown [12.229.246.2]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: dnewman@networktest.com) by mail3.networktest.com (Postfix) with ESMTPSA id 468362560D2; Wed, 30 Mar 2011 23:15:27 -0700 (PDT) Message-ID: <4D941BFF.6050807@networktest.com> Date: Wed, 30 Mar 2011 23:15:27 -0700 From: David Newman User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.15) Gecko/20110303 Lightning/1.0b2 Thunderbird/3.1.9 MIME-Version: 1.0 To: Martin Matuska References: <4C51ECAA.2070707@networktest.com> <4C51FE41.8030906@FreeBSD.org> In-Reply-To: <4C51FE41.8030906@FreeBSD.org> X-Enigmail-Version: 1.1.1 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Cc: fs@freebsd.org Subject: Re: fixing a busted ZFS upgrade X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 31 Mar 2011 06:34:36 -0000 On 7/29/10 3:18 PM, Martin Matuska wrote: > > For recovering a system that does not boot anymore, you can use mfsBSD > ISO's: > http://mfsbsd.vx.sk > > You can boot from the iso and repair the boot record. Nearly a year ago mfsBSD saved me from a munged 8.0->8.1 upgrade of a ZFS box and allowed me to revive a ZFS root partition. I've done the same stupid thing again in moving from 8.1 to 8.2, only now the server won't boot from the 8.2 mfsBSD ISO, or the 8.1 ISO. In both cases it hangs at loader.conf. Thanks in advance for any clues on reviving this system. dn I recommend you check your gpart partitions with "gpart show" and verify > discovered pools with "zpool import" > (without any flags or arguments) first. > > mm > > Dňa 29. 7. 2010 23:03, David Newman wrote / napísal(a): >> Attempting to upgrade an 8.0-RELEASE to 8.1-RELEASE failed on a system >> running a bootable ZFS partition. >> >> The system boots to the loader prompt and complains there's no bootable >> kernel. Running 'lsmod' shows there are four ZFS disks present. >> >> Thanks in advance for clues on fixing this, and also on the right way to >> upgrade FreeBSD systems with bootable ZFS partitions. >> >> Steps to reproduce: >> >> 1. Build 8.0-RELEASE system following the freebsd.org wiki: >> >> http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/RAIDZ1 >> >> In this case the system uses raidz1 across four SATA drives. >> >> 2. Upgrade to 8.1-RELEASE using the 'FreeBSD Update' directions: >> >> http://www.freebsd.org/releases/8.1R/announce.html >> >> 3. After first reboot, system boots to the loader prompt. 
>> >> dn >> >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Mar 31 11:23:42 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3D91D1065670; Thu, 31 Mar 2011 11:23:42 +0000 (UTC) (envelope-from buganini@gmail.com) Received: from mail-iy0-f182.google.com (mail-iy0-f182.google.com [209.85.210.182]) by mx1.freebsd.org (Postfix) with ESMTP id D36868FC14; Thu, 31 Mar 2011 11:23:41 +0000 (UTC) Received: by iyj12 with SMTP id 12so2904071iyj.13 for ; Thu, 31 Mar 2011 04:23:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=mLp+DbrUZtUrvCuDBkFeZDFJSocqQwVNhxpSOGX+z10=; b=m7iImlnPJH7HUuHQ9lBVTwlOPpkCUnNH4l12ayvIWxBIYDX+nTLOxGkRqmpRU/jRoe JvPrLmXKR+CJsVsA1IEzo6ZpN16Kfrc+80iED2GRqQX9fjhYt1E9afAL0/K8uGj6nYqd 4bdwkK+xGIF/motICDLDVu3kXj8NYflvsvbPs= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=dtOTq3Ml+TxU3TMoPcSQ19m8uKsL4yZnlxMzSCHf0Had2gp0dJPCOg99uSlg3vKUif L1QutFgnWEau+N69O0fnseQfcHlXxvq5CvZBVzpOkuSDrKs+9C2HFd/HRviV9cpx35kW bCBCaIJmyCvm0lVxwHL/KNZAHoHxHpLxvtjRM= MIME-Version: 1.0 Received: by 10.231.202.132 with SMTP id fe4mr2478235ibb.79.1301569012556; Thu, 31 Mar 2011 03:56:52 -0700 (PDT) Received: by 10.231.33.130 with HTTP; Thu, 31 Mar 2011 03:56:52 -0700 (PDT) In-Reply-To: References: <1289442296.2128.16.camel@monet> <20101111122455.GA2098@tops> Date: Thu, 31 Mar 2011 18:56:52 +0800 Message-ID: From: Buganini To: Antony Mawer Content-Type: text/plain; charset=UTF-8 Cc: freebsd-fs@freebsd.org, Kevin Lo , delphij@freebsd.org Subject: Re: patch: let msdosfs(vfat)/ntfs to support UTF-8 locale well X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 31 Mar 2011 11:23:42 -0000 http://security-hole.info/~buganini/patches/kiconv_msdosfs/ I've adapted the 4th patchset for CURRENT, but I've only test it with -ro on amd64. another i386 machine is on the way. 
Regards, Buganini From owner-freebsd-fs@FreeBSD.ORG Thu Mar 31 17:11:11 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 58665106567D for ; Thu, 31 Mar 2011 17:11:11 +0000 (UTC) (envelope-from ppaczyn@gmail.com) Received: from mail-iw0-f182.google.com (mail-iw0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id 247EA8FC2B for ; Thu, 31 Mar 2011 17:11:10 +0000 (UTC) Received: by iwn33 with SMTP id 33so3244212iwn.13 for ; Thu, 31 Mar 2011 10:11:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:date:message-id:subject:from:to :content-type:content-transfer-encoding; bh=aXHRKrqpvH37OConE2gHXRn3NPxvi7PeLZMPQ4vOJE0=; b=RyKo7A3iW9WYznewsa47Mm+SeqIR/rveRvDH560VTWhk6cR32D4/YQd5VqvOzWVvBK AmtHYXizgXy01geAbPdetCUXHa0BmxD1iqqvYota8wEk1Qn3bldmxYC1bpP9WuZUOIFL zQiT/IdLX7YgtWSNPIbYvLMDTdQi6jlR55Ibg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type :content-transfer-encoding; b=fWlpNMTeodtD+F227RhVEf9LBVfsjpokPYmaKC9ogKVZzJUh/hlXkROTUF9k2jDEBo FsclnuSBEpbkndpgdhuKHLKFnxwl4eQQ4X1uMcvn7IFItAUo+Ai7zln/byHnaMEA/rIm xZWQtHFIrlSUYW4pxwCt1BIYIUgRqWtcnBYyM= MIME-Version: 1.0 Received: by 10.42.146.135 with SMTP id j7mr1854323icv.198.1301590092557; Thu, 31 Mar 2011 09:48:12 -0700 (PDT) Received: by 10.42.172.201 with HTTP; Thu, 31 Mar 2011 09:48:12 -0700 (PDT) Date: Thu, 31 Mar 2011 18:48:12 +0200 Message-ID: From: Piotr Paczynski To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Subject: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 31 Mar 2011 17:11:11 -0000 Hi all, I urgently need help. After hard power-off (power cable disconnected) my FreeBSD 8.1-STABLE server fails to boot from ZFS with an I/O Error. I was able to boot to Fixit console from 8.2 LiveFS, prepare it for ZFS and mount the pool using "zpool import -Ff" command. Here are the results:

Fixit# zpool status
  pool: zroot
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
 scrub: none requested
config:

        NAME         STATE    READ WRITE CKSUM
        zroot        FAULTED     0     0     1  corrupted data
          gpt/array0 ONLINE      0     0     6

Also "zdb -l /dev/gpt/array0" shows 4 LABELS. Each has the same attributes, in particular:

  version=14
  state=0
  txg=4

Here are the screen-shots in case I missed something: https://picasaweb.google.com/113032262178118660549/ZfsFailure# Any pointers how to go about recovering data from this pool anyone?
--=20 Piotr Paczynski From owner-freebsd-fs@FreeBSD.ORG Thu Mar 31 17:23:02 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3746D106566B for ; Thu, 31 Mar 2011 17:23:02 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-qy0-f182.google.com (mail-qy0-f182.google.com [209.85.216.182]) by mx1.freebsd.org (Postfix) with ESMTP id E07688FC13 for ; Thu, 31 Mar 2011 17:23:01 +0000 (UTC) Received: by qyk27 with SMTP id 27so2094882qyk.13 for ; Thu, 31 Mar 2011 10:23:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=nU8ahM7x5SBDN5pn9Xo2PiaqULy+STVCFOLqS1pmmHI=; b=xx5ijbvuTTdgvvDhzW+n3UVS9beQ+b+OmmenS/Su4AmIwRSGu3f8aBWXPii4jx426Z AKjwL/Es9geOMzlsWeE5V65FCdM0Zcu71F44HCRsviFMk3zhciVZmoM6bXCcmRSWmWC+ hLpDNynCoSojloj+S9BVQwi6iWRGdRQYE8o0g= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=BK1oTapiKl/rJ5cBkonGTSdqjE5yc3G9ngTEIrAAM2aoX3pJYl/zGqGEGndgfoYJNa 6ajua9ez02hmWSy5uUZE2WLonv6xTYJA0vNhqFtQ8J4eaDNSq0TZYGvNxtM6HGeYv+Ru EuVv2eQO+JPCxjCln54GBj8jXn62r8tVlODLY= MIME-Version: 1.0 Received: by 10.229.63.229 with SMTP id c37mr2462973qci.212.1301592149485; Thu, 31 Mar 2011 10:22:29 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.229.233.195 with HTTP; Thu, 31 Mar 2011 10:22:29 -0700 (PDT) In-Reply-To: References: Date: Thu, 31 Mar 2011 10:22:29 -0700 X-Google-Sender-Auth: LKA6wGaVaf8doav3nTxXM-RLq0w Message-ID: From: Artem Belevich To: Piotr Paczynski Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 31 Mar 2011 17:23:02 -0000 On Thu, Mar 31, 2011 at 9:48 AM, Piotr Paczynski wrote: > Hi all, > > I urgently need help. After hard power-off (power cable disconnected) > my FreeBSD 8.1-STABLE server fails to boot from ZFS with an I/O Error. > I was able to boot to Fixit console from 8.2 LiveFS, prepare it for > ZFS and mount the pool using "zpool import -Ff" command. Here are the > results: > > Fixit# zpool status > =A0 pool: zroot > =A0state: FAULTED > status: The pool metadata is corrupted and the pool cannot be opened. > action: Destroy and re-create the pool from a backup source. > =A0=A0 see: http://www.sun.com/msg/ZFS-8000-72 > =A0scrub: none requested > config: > =A0=A0=A0=A0=A0=A0=A0 NAME=A0=A0=A0=A0=A0=A0=A0=A0 STATE=A0=A0 READ WRITE= CKSUM > =A0=A0=A0=A0=A0=A0=A0 zroot=A0=A0=A0=A0=A0=A0=A0 FAULTED=A0=A0=A0 0=A0=A0= =A0 0=A0=A0=A0=A0 1 corrupted data > =A0=A0=A0=A0=A0=A0=A0=A0=A0 gpt/array0 ONLINE=A0=A0=A0=A0 0=A0=A0=A0=A0 0= =A0=A0=A0=A0 6 > > Also "zdb -l /dev/gpt/array0" shows 4 LABELS. Each has the same > attributes, in particular: > =A0 version=3D14 > =A0 state=3D0 > =A0 txg=3D4 > Something like this could've happened if the drive lied about having data committed to platters. If power fails, you may end up with partially written data and inconsistent on-disk ZFS state. 
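When a drive or controller is suspected of acknowledging writes it never committed, one mitigation sometimes used on FreeBSD 8.x systems driven by ata(4) is to disable the on-drive write cache at boot; this is a general-practice sketch with a real performance cost, not advice given in this thread, and it assumes the disks are attached via ata(4):

    # /boot/loader.conf
    # Disable ATA write caching so a power cut cannot lose writes the
    # drive has already acknowledged (noticeably slower for ZFS).
    hw.ata.wc="0"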
You may need to boot into FreeBSD-9 with ZFS v28 or with OpenSolaris live CD and re-import the pool with "zpool import -F". http://solori.wordpress.com/2010/07/15/zfs-pool-import-fails-after-power-ou= tage/ --Artem From owner-freebsd-fs@FreeBSD.ORG Thu Mar 31 20:22:39 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7D08C1065670 for ; Thu, 31 Mar 2011 20:22:39 +0000 (UTC) (envelope-from sife.mailling@yahoo.com) Received: from nm21-vm0.bullet.mail.sp2.yahoo.com (nm21-vm0.bullet.mail.sp2.yahoo.com [98.139.91.220]) by mx1.freebsd.org (Postfix) with SMTP id 3C3878FC14 for ; Thu, 31 Mar 2011 20:22:39 +0000 (UTC) Received: from [98.139.91.62] by nm21.bullet.mail.sp2.yahoo.com with NNFMP; 31 Mar 2011 20:09:38 -0000 Received: from [98.139.91.10] by tm2.bullet.mail.sp2.yahoo.com with NNFMP; 31 Mar 2011 20:09:38 -0000 Received: from [127.0.0.1] by omp1010.mail.sp2.yahoo.com with NNFMP; 31 Mar 2011 20:09:38 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 603516.79691.bm@omp1010.mail.sp2.yahoo.com Received: (qmail 34010 invoked by uid 60001); 31 Mar 2011 20:09:38 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1301602178; bh=qDrt0l/ab43oVzz6zh+Gslnr/SfzvCtrykYA8FzRTLE=; h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:MIME-Version:Content-Type; b=EFKunxNgeJU4NxmVabidlDIToWTTdJzDBqOhNqWRVJfQuJytemmWjhEUT/LW3s0yu3T8aNvtFTguP5F4DFePY47TximNZAbNpSmpmWIgW2Bagjuu+UrNxJzXaGOPiiXYNx8guijscwFDVwSERXNaxw/Argqt/KtmymuOyDZh6Zw= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:MIME-Version:Content-Type; b=F3v0ol6JX3xvganIcnjZbkrQ1IN76GoTTGc6Vr3QLQpp/j042/cL6HwH/80zkzk0abD58VtfuPiMMVQpGZ/MwZY8bsRacx7GfJ2JHAgkT0YVsfWSDwEUi9XZoTI6g89TEmsff6l+KAZh7stw4EC1QMUknB2sD8IRjWsNqst3gP4=; Message-ID: <159073.32577.qm@web113102.mail.gq1.yahoo.com> X-YMail-OSG: RkciNEQVM1mvODmphcI4xWwE5OiMP2Yp_uhwpSJSVbtxUYz rXiwJHbOCu0vOqb2g8A.Ww5_VleXhf.So0lMaJcaG3Ng5mw.3iapnr9GNtm8 gCpQhWXEiAg8ddOTcdoc6DjxnEN1iMBEDqKxuY2JFtKQVkjaco73jcNuPAEz 7wf5P0n7ZbpPXYl8TXie6cn3BrSJpsJlZIuqfpdL6nBA0UfmvmjNw4TIXVpB .EeYpi83852Ts1_WvlP2IXWvIpY79bHivGa2apaup.5.OQbWEQSrCLEkkdMn 63UmS0trdUbSaTR0lZPn1jCZWTCci.eFyPT7lXVqZOpYCR5d8Wg-- Received: from [41.100.70.18] by web113102.mail.gq1.yahoo.com via HTTP; Thu, 31 Mar 2011 13:09:38 PDT X-Mailer: YahooMailClassic/12.0.2 YahooMailWebService/0.8.109.295617 Date: Thu, 31 Mar 2011 13:09:38 -0700 (PDT) From: Sife Mailling To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: fail to boot with zfs on root X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 31 Mar 2011 20:22:39 -0000 Recently I installed FreeBSD 8.2 AMD64 with zfs on root, the boot stop in this strange message: can't exec getty '/usr/libexec/getty' for port /dev/ttyv* no such file or directoryI tried to set setuid=on on /usr but it doesn't help. 
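The getty error above typically means /usr is not mounted when init spawns getty, or that the device nodes are missing; a few standard first checks, assuming a ZFS-on-root layout where /usr is a separate dataset (the dataset name below is an assumption):

    ls /dev/ttyv*                           # are the syscons terminals present?
    ls -l /usr/libexec/getty                # does the binary actually exist?
    df /usr                                 # is /usr really mounted, and from where?
    zfs get canmount,mountpoint zroot/usr   # dataset name assumed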
From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 00:36:48 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 901381065674 for ; Fri, 1 Apr 2011 00:36:48 +0000 (UTC) (envelope-from ppaczyn@gmail.com) Received: from mail-iw0-f182.google.com (mail-iw0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id 58E078FC0C for ; Fri, 1 Apr 2011 00:36:48 +0000 (UTC) Received: by iwn33 with SMTP id 33so3708413iwn.13 for ; Thu, 31 Mar 2011 17:36:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:content-type; bh=KVpR3m77GODsCqsRlNLcWj8x+Tt5ik6hWfZ6rITmq98=; b=B+ob6ubSUOgmPtYn4DI5nHXslWIxAuSJPIsfpZLHRRWXDMW/hpJL6Q8gh/Bt4C9HYn kpSTWfx/xli2gQPGKCKGDx/gEpZSweX8s2Ew52nJULPhHzHdxmEnCsbP5Sot4pHYQUGj nUTXmmu/JOLjq/FYGAy5rSzYt6okq29Of+Ggg= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; b=iQMgx+io3J45rB15CqM3s1xLUeXv1AbfRd6ap77xkqzltcZHlL6eNzW1kIIKEBAVgh pFoHJW4N5ze3bLddK/XdXM2qjOvOm0nPm3YrPP4fzpxcbrY8DDAlUNVYvs+JAs0TJL3x 8/ukkZsA/hTlQa0NH3U+b096VdBSKQ+fv7s4c= MIME-Version: 1.0 Received: by 10.43.131.195 with SMTP id hr3mr263024icc.268.1301618207482; Thu, 31 Mar 2011 17:36:47 -0700 (PDT) Received: by 10.42.172.201 with HTTP; Thu, 31 Mar 2011 17:36:47 -0700 (PDT) In-Reply-To: References: Date: Fri, 1 Apr 2011 00:36:47 +0000 Message-ID: From: Piotr Paczynski To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 00:36:48 -0000 > v28 was committed to -current on Feb 27th, so your snapshot is too > old. You should be able to fix the pool with OpenIndiana liveDVD or > liveUSB > http://openindiana.org/download/ OK, after fiddling for like 4 hours I managed to run the OpenIndiana LiveDVD (it wouldn't boot up, which eventually turned out to be my KVM's fault) and then get it to see my 3ware 9650 arrays (needed to install drivers from 3ware). The problem is that my corrupted pool is not visible to the zpool command. Now, I have two pools on the server: zroot and backup. The backup is visible (and not faulted) but zroot is not visible under Solaris - this is the one I have problems with, and also use as the boot disk in FreeBSD: root@openindiana:~# zpool import pool: backup id: 8416389847782759507 state: ONLINE status: The pool is formatted using an older on-disk version. action: The pool can be imported using its name or numeric identifier, though some features will not be available without an explicit 'zpool upgrade'.
config: backup ONLINE mirror-0 ONLINE c4t1d0p0 ONLINE c4t2d0s2 ONLINE root@openindiana:~# zpool import zroot cannot import 'zroot': no such pool available root@openindiana:~# zpool import -fFn zroot cannot import 'zroot': no such pool available But zroot is sort of visible by zdb: root@openindiana:~# zdb -l /dev/dsk/c4t0d0s2 -------------------------------------------- LABEL 0 -------------------------------------------- version: 14 name: 'zroot' state: 0 txg: 4 pool_guid: 2082617533358360017 hostname: '' top_guid: 1617266672942229358 guid: 1617266672942229358 vdev_tree: type: 'disk' id: 0 guid: 1617266672942229358 path: '/dev/gpt/array0' whole_disk: 0 metaslab_array: 23 metaslab_shift: 31 ashift: 9 asize: 1995678416896 is_log: 0 ... I presume custom FreeBSD partitioning or gpt is the problem here... How do I import it in Solaris? -- Piotr Paczynski From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 01:36:05 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 62991106564A for ; Fri, 1 Apr 2011 01:36:05 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta11.emeryville.ca.mail.comcast.net (qmta11.emeryville.ca.mail.comcast.net [76.96.27.211]) by mx1.freebsd.org (Postfix) with ESMTP id 497998FC16 for ; Fri, 1 Apr 2011 01:36:04 +0000 (UTC) Received: from omta22.emeryville.ca.mail.comcast.net ([76.96.30.89]) by qmta11.emeryville.ca.mail.comcast.net with comcast id S1LV1g0041vN32cAB1c43q; Fri, 01 Apr 2011 01:36:04 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta22.emeryville.ca.mail.comcast.net with comcast id S1c31g00e1t3BNj8i1c4VF; Fri, 01 Apr 2011 01:36:04 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 946D09B422; Thu, 31 Mar 2011 18:36:03 -0700 (PDT) Date: Thu, 31 Mar 2011 18:36:03 -0700 From: Jeremy Chadwick To: Piotr Paczynski Message-ID: <20110401013603.GA31034@icarus.home.lan> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 01:36:05 -0000 On Fri, Apr 01, 2011 at 12:36:47AM +0000, Piotr Paczynski wrote: > > v28 was committed to -current on Feb 27th, so your snapshot is too > > old. You should be able to fix the pool with OpenIndiana liveDVD or > > liveUSB > > http://openindiana.org/download/ > > OK, after fiddling for like 4 hours I managed to run OpenIndiana > LiveDVD (wouldnt boot up which eventually turned out to by my KVM > fault) and then get it to see my 3ware 9650 arrays (needed to install > drivers from 3ware). The problem is my corrupted pool is not visible > by zpool command. Now, I have two pools on the server: zroot and > backup. The backup is visible (and not faulted) but zroot is not > visible under Solaris - this is the one I have problems with, and also > use as boot disk in FreeBSD: > > root@openindiana:~# zpool import > pool: backup > id: 8416389847782759507 > state: ONLINE > status: The pool is formatted using an older on-disk version. > action: The pool can be imported using its name or numeric identifier, though > some features will not be available without an explicit 'zpool upgrade'. 
> config: > > backup ONLINE > mirror-0 ONLINE > c4t1d0p0 ONLINE > c4t2d0s2 ONLINE > root@openindiana:~# zpool import zroot > cannot import 'zroot': no such pool available > root@openindiana:~# zpool import -fFn zroot > cannot import 'zroot': no such pool available I believe the command here is wrong, and that you should be using "zpool import 8416389847782759507" or "zpool import 8416389847782759507 zroot". I've seen many cases where using the pool name doesn't work. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, USA | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 07:53:59 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C58DC106566B for ; Fri, 1 Apr 2011 07:53:59 +0000 (UTC) (envelope-from numisemis@gmail.com) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 524F88FC16 for ; Fri, 1 Apr 2011 07:53:58 +0000 (UTC) Received: by wyf23 with SMTP id 23so3231427wyf.13 for ; Fri, 01 Apr 2011 00:53:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:subject:mime-version:content-type:from :in-reply-to:date:cc:content-transfer-encoding:message-id:references :to:x-mailer; bh=KIEY3LjT4CBQkT+liyMSo9gEhmyRfP9qnuQlkHKYaio=; b=UC3oujjAOyefh6zMxTfXq/NkBqk28N6oozeOyoQFKObdmdQfCeWY7NyV0DaopmQKpC x3kenzrzu+HwLTkKo0bz+ZkEKgUiuHa5P1SiYI5VLkmaYiPPoAAcZNzo2AVt1qsNPXMm Oyw6AcHr3zpEq3A8ZqvFo35KLE0Gj5Io11Lyk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; b=EHuB/FmZdede6kvSkRL5FypBrIvGKS+jpXgWpCDUx0CpNU4Y54Km680Z56saAinQgc WkoG+vWmGAuvwrWj58JG7qD2oB+BKLKdDiolv9Wv6o1LrhPoUYvnZPixvAfmvm1fTuOR TLn9Jh7XGYxKibYYouFeRCQdhXWA/GrgQWuSA= Received: by 10.227.58.72 with SMTP id f8mr3848173wbh.181.1301644437377; Fri, 01 Apr 2011 00:53:57 -0700 (PDT) Received: from sime-imac.logos.hr ([213.147.110.159]) by mx.google.com with ESMTPS id o23sm1104341wbc.27.2011.04.01.00.53.56 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 01 Apr 2011 00:53:56 -0700 (PDT) Mime-Version: 1.0 (Apple Message framework v1084) Content-Type: text/plain; charset=us-ascii From: =?iso-8859-2?Q?=A9imun_Mikecin?= In-Reply-To: <20110401013603.GA31034@icarus.home.lan> Date: Fri, 1 Apr 2011 09:53:52 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <84DF4838-CE43-430E-8C3A-4CC7881E44BD@gmail.com> References: <20110401013603.GA31034@icarus.home.lan> To: Jeremy Chadwick X-Mailer: Apple Mail (2.1084) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 07:53:59 -0000 On 1. tra. 2011., at 03:36, Jeremy Chadwick wrote: > On Fri, Apr 01, 2011 at 12:36:47AM +0000, Piotr Paczynski wrote: >>> v28 was committed to -current on Feb 27th, so your snapshot is too >>> old. 
You should be able to fix the pool with OpenIndiana liveDVD or >>> liveUSB >>> http://openindiana.org/download/ >> >> OK, after fiddling for like 4 hours I managed to run OpenIndiana >> LiveDVD (wouldnt boot up which eventually turned out to by my KVM >> fault) and then get it to see my 3ware 9650 arrays (needed to install >> drivers from 3ware). The problem is my corrupted pool is not visible >> by zpool command. Now, I have two pools on the server: zroot and >> backup. The backup is visible (and not faulted) but zroot is not >> visible under Solaris - this is the one I have problems with, and also >> use as boot disk in FreeBSD: >> >> root@openindiana:~# zpool import >> pool: backup >> id: 8416389847782759507 >> state: ONLINE >> status: The pool is formatted using an older on-disk version. >> action: The pool can be imported using its name or numeric identifier, though >> some features will not be available without an explicit 'zpool upgrade'. >> config: >> >> backup ONLINE >> mirror-0 ONLINE >> c4t1d0p0 ONLINE >> c4t2d0s2 ONLINE >> root@openindiana:~# zpool import zroot >> cannot import 'zroot': no such pool available >> root@openindiana:~# zpool import -fFn zroot >> cannot import 'zroot': no such pool available > > I believe the command here is wrong, and that you should be using "zpool > import 8416389847782759507" or "zpool import 8416389847782759507 zroot". > I've seen many cases where using the pool name doesn't work. He has two pools: backup and zroot. Only backup is visible. So he shouldn't do as you suggested, because it will rename his backup pool to zroot, which will bring the confusion, because it is not the original zroot pool. To be able to import his original zroot pool, it needs to be visible to "zpool import" as a first step.
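When a pool's labels are readable with zdb but "zpool import" cannot see it, one trick that sometimes helps is pointing the import at an explicit directory of device nodes; a sketch for this situation, where the exact Solaris device name for the FreeBSD-written GPT disk is an assumption:

    # scan only a hand-picked set of device nodes for pool labels;
    # on Solaris the whole-disk p0 node covers partitions FreeBSD wrote
    mkdir /tmp/zdev
    ln -s /dev/dsk/c4t0d0p0 /tmp/zdev/
    zpool import -d /tmp/zdev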
From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 10:34:45 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 021B91065673 for ; Fri, 1 Apr 2011 10:34:45 +0000 (UTC) (envelope-from ppaczyn@gmail.com) Received: from mail-iy0-f182.google.com (mail-iy0-f182.google.com [209.85.210.182]) by mx1.freebsd.org (Postfix) with ESMTP id B96918FC15 for ; Fri, 1 Apr 2011 10:34:44 +0000 (UTC) Received: by iyj12 with SMTP id 12so4248605iyj.13 for ; Fri, 01 Apr 2011 03:34:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=4ElW7d7fg/pjX4VwvOfsuRxQ0ojVoVHTu9BB/oLwl8w=; b=UbT4tYA4MW202R+Dr7TpL/mv77cMx7+NkLH1d1+v/dhKY+p2hC6YTExBOEcOMEoMaZ 9AIKBzYeg3T3+t5fbQ36OtAklnfvaEC8eZmBLWUiyb+vjEuGgxZVpZrW8LW0nnzFkYqH 77yxA7rJjJVeT6n5F5ATrfSl1b/65HK2W+cdk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=sm+xuXlrvBLJvbf8ETtzHRw2/nShxmXmda9MuUKSsjysYYZwRaDA8jhE3nZ++RauIP bK8QL+NO8hiiINXWJQljp37kLigs07iufgLeHa5E5Vn4RaPW7U43oZj2htf07F0i3/eV hXprc53L2ECqndyoHpn7DEpTKhOij6/5yN7jk= MIME-Version: 1.0 Received: by 10.43.63.212 with SMTP id xf20mr1055427icb.265.1301654084071; Fri, 01 Apr 2011 03:34:44 -0700 (PDT) Received: by 10.42.172.201 with HTTP; Fri, 1 Apr 2011 03:34:44 -0700 (PDT) In-Reply-To: <84DF4838-CE43-430E-8C3A-4CC7881E44BD@gmail.com> References: <20110401013603.GA31034@icarus.home.lan> <84DF4838-CE43-430E-8C3A-4CC7881E44BD@gmail.com> Date: Fri, 1 Apr 2011 12:34:44 +0200 Message-ID: From: Piotr Paczynski To: =?UTF-8?Q?=C5=A0imun_Mikecin?= Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 10:34:45 -0000 >> I believe the command here is wrong, and that you should be using "zpool >> import 8416389847782759507" or "zpool import 8416389847782759507 zroot". >> I've seen many cases where using the pool name doesn't work. I can import the "backup" pool just fine using either its name or the guid. The zroot pool is the problem. > He has two pools: backup and zroot. Only backup is visible. > So he shouldn't do as you suggested, because it will rename his backup pool to zroot, which will bring the confusion, because it is not the original zroot pool. > To be able to import his original zroot pool, it needs to be visible to "zpool import" as a first step. Any idea how to make it visible to "zpool import"? 
I've tried importing the zroot pool using its various guids, as shown by "zdb -l /dev/dsk/c3t0d0s2":

  version: 14
  name: 'zroot'
  pool_guid: 2082617533358360017
  top_guid: 1617266672942229358

but still no luck:

root@openindiana:~# zpool import 2082617533358360017
cannot import '2082617533358360017': no such pool available
root@openindiana:~# zpool import 1617266672942229358
cannot import '1617266672942229358': no such pool available

From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 10:40:15 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2FF2E106566B; Fri, 1 Apr 2011 10:40:15 +0000 (UTC) (envelope-from petefrench@ingresso.co.uk) Received: from constantine.ingresso.co.uk (constantine.ingresso.co.uk [IPv6:2001:470:1f09:176e::3]) by mx1.freebsd.org (Postfix) with ESMTP id EB8E98FC13; Fri, 1 Apr 2011 10:40:14 +0000 (UTC) Received: from dilbert.london-internal.ingresso.co.uk ([10.64.50.6] helo=dilbert.ticketswitch.com) by constantine.ingresso.co.uk with esmtps (TLSv1:AES256-SHA:256) (Exim 4.73 (FreeBSD)) (envelope-from ) id 1Q5blz-000Czm-Of; Fri, 01 Apr 2011 11:40:11 +0100 Received: from petefrench by dilbert.ticketswitch.com with local (Exim 4.74 (FreeBSD)) (envelope-from ) id 1Q5blz-00084y-NW; Fri, 01 Apr 2011 11:40:11 +0100 To: petefrench@ingresso.co.uk, trociny@freebsd.org In-Reply-To: <86wrjj5xfm.fsf@in138.ua3> Message-Id: From: Pete French Date: Fri, 01 Apr 2011 11:40:11 +0100 Cc: freebsd-fs@freebsd.org, pjd@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 10:40:15 -0000 > Yes, you may hit it only on hast devices creation. The workaround is to avoid > using 'hastctl role primary all', start providers one by one instead. Interesting to note that I just hit a lockup in hast (the discs froze up - could not run hastctl or zpool import, and could not kill them). I have two hast devices instead of one, but I am starting them individually instead of using 'all'. The code includes all the latest patches which have gone into STABLE over the last few days, none of which look particularly controversial! I haven't tried your patch yet, nor been able to reproduce the lockup, but thought you might be interested to know that I also had problems with multiple providers. cheers, -pete.
From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 11:22:48 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 54B7D106564A; Fri, 1 Apr 2011 11:22:48 +0000 (UTC) (envelope-from petefrench@ingresso.co.uk) Received: from constantine.ingresso.co.uk (constantine.ingresso.co.uk [IPv6:2001:470:1f09:176e::3]) by mx1.freebsd.org (Postfix) with ESMTP id 11F808FC0A; Fri, 1 Apr 2011 11:22:48 +0000 (UTC) Received: from dilbert.london-internal.ingresso.co.uk ([10.64.50.6] helo=dilbert.ticketswitch.com) by constantine.ingresso.co.uk with esmtps (TLSv1:AES256-SHA:256) (Exim 4.73 (FreeBSD)) (envelope-from ) id 1Q5cRC-000EXF-KS; Fri, 01 Apr 2011 12:22:46 +0100 Received: from petefrench by dilbert.ticketswitch.com with local (Exim 4.74 (FreeBSD)) (envelope-from ) id 1Q5cRC-0000iz-JX; Fri, 01 Apr 2011 12:22:46 +0100 To: fjwcash@gmail.com, trociny@freebsd.org In-Reply-To: Message-Id: From: Pete French Date: Fri, 01 Apr 2011 12:22:46 +0100 Cc: freebsd-fs@freebsd.org, pjd@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 11:22:48 -0000 > The other 5% of the time, the hastd crashes occurred either when > importing the ZFS pool, or when running multiple parallel rsyncs to > the pool. hastd was always shown as the last running process in the > backtrace onscreen. This is what I am seeing - did you manage to reproduce this with the patch, or does it fix the issue for you? Am doing more tests now, with only a single hast device to see if it is stable. Am OK to run without mirroring across hast devices for now, but wouldn't like to do so long term! -pete.
From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 12:31:42 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7D79E106566C; Fri, 1 Apr 2011 12:31:42 +0000 (UTC) (envelope-from to.my.trociny@gmail.com) Received: from mail-ww0-f50.google.com (mail-ww0-f50.google.com [74.125.82.50]) by mx1.freebsd.org (Postfix) with ESMTP id 92F628FC17; Fri, 1 Apr 2011 12:31:41 +0000 (UTC) Received: by wwc33 with SMTP id 33so3862779wwc.31 for ; Fri, 01 Apr 2011 05:31:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:from:to:cc:subject:organization:references :sender:date:in-reply-to:message-id:user-agent:mime-version :content-type; bh=bqNf/7fEK7qxSIg9S8OIcCHJLt8Rcc3c2UBI+62+pAg=; b=k1Q+4VG66AllL4RQ0nd3iqdSFI+F8M9hyf1PCTJnkEcxC4aGqguTtx/fAYgsrxx7bX UujtrF8+iZX79XS8U+xjdz/HDjWmmIhXap0y7F99TQLegA1i4zI+we1TQLnIMFTHIEKo a7CwXxbqLwbpV10oOHnweFW07myExWI4cWPIw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:to:cc:subject:organization:references:sender:date:in-reply-to :message-id:user-agent:mime-version:content-type; b=Y51PgGxrnqR/YHYEkkRnfTh4Tn+P8I36EMZTMEppFMkytJn/xKEj8r566lHO6N0Pq7 KFq6wWa5k6v2pYcYlAq1zAt9jters7N5Mv6qNMpLCLY4T8gm63wEC7ZSj9teGCjZmJRR 0BPaAvj50FiCVloC6UOOWXDwOBcpFogXdEKyo= Received: by 10.216.121.208 with SMTP id r58mr771357weh.61.1301661100530; Fri, 01 Apr 2011 05:31:40 -0700 (PDT) Received: from localhost ([94.27.39.186]) by mx.google.com with ESMTPS id x1sm1240001wbh.53.2011.04.01.05.31.38 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 01 Apr 2011 05:31:39 -0700 (PDT) From: Mikolaj Golub To: Pete French Organization: TOA Ukraine References: Sender: Mikolaj Golub Date: Fri, 01 Apr 2011 15:31:36 +0300 In-Reply-To: (Pete French's message of "Fri, 01 Apr 2011 11:40:11 +0100") Message-ID: <86wrjei253.fsf@in138.ua3> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.2 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org, pjd@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 12:31:42 -0000 On Fri, 01 Apr 2011 11:40:11 +0100 Pete French wrote: >> Yes, you may hit it only on hast devices creation. The workaround is to avoid >> using 'hastctl role primary all', start providers one by one instead. PF> Interesting to note that I just hit a lockup in hast (the discs froze PF> up - could not run hastctl or zpool import, and could not kill PF> them). I have two hast devices instead of one, but I am starting them PF> individually instead of using 'all'. The copde includes all the latest PF> patches which have gone into STABLE over the last few days, none of which PF> look particularly controversial! PF> I havent tried your atch yet, nor been able to reporduce the lockup, but PF> thought you might be interested to know that I also had problems with PF> multiple providers. This looks like a different problem. If you have this again please provide the output of 'procstat -kka'. 
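If the lockup recurs, the requested snapshot can be captured for the list with a single command, provided a console is still responsive (the output path is arbitrary):

    procstat -kka > /var/tmp/hast-hang-procstat.txt   # kernel stacks of all threads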
-- Mikolaj Golub From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 12:32:45 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CCF0B1065672; Fri, 1 Apr 2011 12:32:45 +0000 (UTC) (envelope-from petefrench@ingresso.co.uk) Received: from constantine.ingresso.co.uk (constantine.ingresso.co.uk [IPv6:2001:470:1f09:176e::3]) by mx1.freebsd.org (Postfix) with ESMTP id 948488FC15; Fri, 1 Apr 2011 12:32:45 +0000 (UTC) Received: from dilbert.london-internal.ingresso.co.uk ([10.64.50.6] helo=dilbert.ticketswitch.com) by constantine.ingresso.co.uk with esmtps (TLSv1:AES256-SHA:256) (Exim 4.73 (FreeBSD)) (envelope-from ) id 1Q5dWs-000GvP-2N; Fri, 01 Apr 2011 13:32:42 +0100 Received: from petefrench by dilbert.ticketswitch.com with local (Exim 4.74 (FreeBSD)) (envelope-from ) id 1Q5dWs-0000xJ-1N; Fri, 01 Apr 2011 13:32:42 +0100 To: petefrench@ingresso.co.uk, trociny@freebsd.org In-Reply-To: <86wrjei253.fsf@in138.ua3> Message-Id: From: Pete French Date: Fri, 01 Apr 2011 13:32:42 +0100 Cc: freebsd-fs@freebsd.org, pjd@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 12:32:45 -0000 > This looks like a different problem. If you have this again please provide the > output of 'procstat -kka'. Will do... -pete. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 12:55:28 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 45CB4106566C for ; Fri, 1 Apr 2011 12:55:28 +0000 (UTC) (envelope-from Nicholas.Radonicich@cogeco.com) Received: from bupnmail1.cogeco.com (bupnmail1.cogeco.com [24.226.15.15]) by mx1.freebsd.org (Postfix) with ESMTP id 0D14D8FC19 for ; Fri, 1 Apr 2011 12:55:27 +0000 (UTC) Received: from bupnmail1.cogeco.com (localhost.localdomain [127.0.0.1]) by localhost (Email Security Appliance) with SMTP id 6758A1B9049F_D95C8B3B for ; Fri, 1 Apr 2011 12:44:35 +0000 (GMT) Received: from BUPWXMT1.cogeco.com (bupwxmt1.cogeco.com [10.1.1.241]) by bupnmail1.cogeco.com (Sophos Email Appliance) with ESMTP id 3DA8D1B903E0_D95C8B3F for ; Fri, 1 Apr 2011 12:44:35 +0000 (GMT) Received: from BUPWXDB1.cogeco.com ([10.1.1.240]) by BUPWXMT1.cogeco.com with Microsoft SMTPSVC(6.0.3790.3959); Fri, 1 Apr 2011 08:44:35 -0400 Priority: normal Importance: normal X-MimeOLE: Produced By Microsoft MimeOLE V6.00.3790.4721 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable x-cr-hashedpuzzle: VCQ= Ac3Y BHYD Ba0i CP7y CsvH DGqm DKLT ETJt Eve6 FULr Fkz4 GFe9 GZpH Hzqb H6F1; 1; ZgByAGUAZQBiAHMAZAAtAGYAcwBAAGYAcgBlAGUAYgBzAGQALgBvAHIAZwA=; Sosha1_v1; 7; {A1FB5B43-579C-41CB-9FD4-4BF7BBB807D6}; bgBpAGMAaABvAGwAYQBzAC4AcgBhAGQAbwBuAGkAYwBpAGMAaABAAGMAbwBnAGUAYwBvAC4AYwBvAG0A; Fri, 01 Apr 2011 12:43:46 GMT; VQBuAGEAYgBsAGUAIAB0AG8AIAByAGUAbQBvAHYAZQAgAFoASQBMACAAZgByAG8AbQAgAFoARgBTACAAdgAyADgA x-cr-puzzleid: {A1FB5B43-579C-41CB-9FD4-4BF7BBB807D6} Content-class: urn:content-classes:message Date: Fri, 1 Apr 2011 08:43:46 -0400 Message-ID: <4E9C445FE9190248B4F2CFB707137B5DCA4F5E@BUPWXDB1.cogeco.com> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: Unable to remove ZIL from ZFS v28 
Thread-Index: AcvwanFRR8KnoUKpQTib0LILzXfjVw== From: "Nicholas Radonicich" To: X-OriginalArrivalTime: 01 Apr 2011 12:44:35.0287 (UTC) FILETIME=[8E2F8670:01CBF06A] Subject: Unable to remove ZIL from ZFS v28 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 12:55:28 -0000 Hello, I tried to remove the log device I was using for testing ZFS; when I ran "zpool remove tank ad10" the system became unresponsive. I left it overnight, thinking the system was working things out, but there was no change. Below is my zpool info; both the cache and log drives are basic SATA discs. On reboot of the system I was unable to get past loading ZFS without booting into single-user mode and putting the log device offline. Once it is offline (and unmounted), if I try to remove the log device the system still hangs. I'm currently scrubbing the system, but any help on getting the device removed would be appreciated... or if more information is needed, that is no problem either.

FreeBSD less.cogeco.net 9.0-CURRENT FreeBSD 9.0-CURRENT #1: Thu Mar 24 08:26:39 UTC 2011 nick@:/usr/obj/usr/src/sys/GENERIC amd64

        NAME      STATE     READ WRITE CKSUM
        tank      DEGRADED     0     0     0
          aacd0   ONLINE       0     0     0
          aacd1   ONLINE       0     0     0
          aacd2   ONLINE       0     0     0
          aacd3   ONLINE       0     0     0
        logs
          ad10    OFFLINE      0     0     0
        cache
          ad6s2   ONLINE       0     0     0

errors: No known data errors
__________________________________________________________ From owner-freebsd-fs@FreeBSD.ORG Fri Apr 1 14:18:03 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9B1EE106566B; Fri, 1 Apr 2011 14:18:03 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id 0D0038FC13; Fri, 1 Apr 2011 14:18:02 +0000 (UTC) Received: by ywf9 with SMTP id 9so1644792ywf.13 for ; Fri, 01 Apr 2011 07:18:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=hhYgRZzxQZ9eSzDt4Ga4096kf+EKibrlQ4ZLg6EMPsc=; b=u/IqUM1L/fDTW+6aYoMgwJE3kD5UHvtcQiWuH8M8tavlJg4xN1BqGwGDfq4CyyDheY laAg5EJIk9BQ9kl6ctkiLIc1y3cqINdIFtOZKReJbIGPcWL8GqnuEVsrxhOKbVu1H2Ev zBBdW8T4rJ7O2HDP35yNW1RmVJQUfZKuyewLQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=gY108ENFj0Vm7+qvcuv8ZIL7xI8KbhTIlz2maolyu0xOZKN4OxYKE9tyVh+E9JlK1v XGz8vMHAY1E/4+m7fPMAGeQuVjgPRdHJzH6G8HgwZkHPLb73Pbw0wbFOImAOHBxLhMxB wBSS2QG9RyWW0TNgdyDvB9Q+R0RF22uH7AbOM= MIME-Version: 1.0 Received: by 10.91.56.2 with SMTP id i2mr1065604agk.19.1301667482163; Fri, 01 Apr 2011 07:18:02 -0700 (PDT) Received: by 10.90.100.10 with HTTP; Fri, 1 Apr 2011 07:18:01 -0700 (PDT) In-Reply-To: References: Date: Fri, 1 Apr 2011 07:18:01 -0700 Message-ID: From: Freddie Cash To: Pete French Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Cc: trociny@freebsd.org, freebsd-fs@freebsd.org, freebsd-current@freebsd.org, freebsd-stable@freebsd.org, pjd@freebsd.org Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 Apr 2011 14:18:03 -0000 On Fri, Apr 1, 2011 at 4:22 AM, Pete French wro= te: >> The other 5% of the time, the hastd crashes occurred either when >> importing the ZFS pool, or when running multiple parallel rsyncs to >> the pool. =C2=A0hastd was always shown as the last running process in th= e >> backtrace onscreen. > > This is what I am seeing - did you manage to reproduce this with the patc= h, > or does it fix the issue for you ? Am doing more test now, with only a si= ngle > hast device to see if it is stable. Am Ok to run without mirroring across > hast devices for now, but wouldnt like to do so long term! I have not been able to crash or hang the box since applying Mikolaj's patc= h. I've tried the following: - destroy pool - create pool - destroy hast providers - create hast providers - switch from master to slave via hastctl using "role secondary all" - switch from slave to master via hastctl using "role primary all" - switch roles via hast-carp-switch which does one provider per second - import/export pool I've been running 6 parallel rsyncs for the past 48 hours, getting a consistent 200 Mbps of transfers, with just under 2 TB of deduped data in the pool, without any lockups. So far, so good. 
-- Freddie Cash fjwcash@gmail.com From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 06:33:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6A210106566B for ; Sat, 2 Apr 2011 06:33:16 +0000 (UTC) (envelope-from pawel@dawidek.net) Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id 069258FC08 for ; Sat, 2 Apr 2011 06:33:14 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id 747FD45F21; Sat, 2 Apr 2011 08:33:12 +0200 (CEST) Received: from localhost (89-73-195-149.dynamic.chello.pl [89.73.195.149]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id B49C345CA6; Sat, 2 Apr 2011 08:33:06 +0200 (CEST) Date: Sat, 2 Apr 2011 08:33:03 +0200 From: Pawel Jakub Dawidek To: Sife Mailling Message-ID: <20110402063303.GA1849@garage.freebsd.pl> References: <159073.32577.qm@web113102.mail.gq1.yahoo.com> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="X1bOJ3K7DJ5YkBrT" Content-Disposition: inline In-Reply-To: <159073.32577.qm@web113102.mail.gq1.yahoo.com> X-OS: FreeBSD 9.0-CURRENT amd64 User-Agent: Mutt/1.5.21 (2010-09-15) X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: fail to boot with zfs on root X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 06:33:16 -0000 --X1bOJ3K7DJ5YkBrT Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Mar 31, 2011 at 01:09:38PM -0700, Sife Mailling wrote: > Recently I installed FreeBSD 8.2 AMD64 with zfs on root, the boot stop in this strange message: > can't exec getty '/usr/libexec/getty' for port /dev/ttyv* no such file or directory. I tried to set setuid=on on /usr but it doesn't help. Do you have /dev/ttyv entries? Do you have 'device sc' in your kernel config? Is the /usr file system mounted at that time, and does the /usr/libexec/getty file exist? -- Pawel Jakub Dawidek http://www.wheelsystems.com FreeBSD committer http://www.FreeBSD.org Am I Evil? Yes, I Am!
http://yomoli.com --X1bOJ3K7DJ5YkBrT Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAk2Wwx4ACgkQForvXbEpPzSLBQCgkoDjsxMQqgALWoEQGNVi/rxU VDQAoJhrVg5oUMra6xfJ4H1jd2oGVGgv =KVGZ -----END PGP SIGNATURE----- --X1bOJ3K7DJ5YkBrT-- From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 08:45:41 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9649010657C5; Sat, 2 Apr 2011 08:45:41 +0000 (UTC) (envelope-from pawel@dawidek.net) Received: from mail.garage.freebsd.pl (60.wheelsystems.com [83.12.187.60]) by mx1.freebsd.org (Postfix) with ESMTP id 40BC18FC13; Sat, 2 Apr 2011 08:45:40 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id DBE0F46B74; Sat, 2 Apr 2011 10:45:39 +0200 (CEST) Received: from localhost (89-73-195-149.dynamic.chello.pl [89.73.195.149]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id C3B3E45C9C; Sat, 2 Apr 2011 10:44:35 +0200 (CEST) Date: Sat, 2 Apr 2011 10:44:31 +0200 From: Pawel Jakub Dawidek To: Freddie Cash Message-ID: <20110402084431.GB1849@garage.freebsd.pl> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="s2ZSL+KKDSLx8OML" Content-Disposition: inline In-Reply-To: X-OS: FreeBSD 9.0-CURRENT amd64 User-Agent: Mutt/1.5.21 (2010-09-15) X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-0.6 required=4.5 tests=BAYES_00,RCVD_IN_SORBS_DUL autolearn=no version=3.0.4 Cc: FreeBSD Filesystems , FreeBSD-Current , FreeBSD Stable Subject: Re: Any success stories for HAST + ZFS? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 08:45:41 -0000 --s2ZSL+KKDSLx8OML Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Mar 24, 2011 at 01:36:32PM -0700, Freddie Cash wrote: > [Not sure which list is most appropriate since it's using HAST + ZFS > on -RELEASE, -STABLE, and -CURRENT. Feel free to trim the CC: on > replies.] >=20 > I'm having a hell of a time making this work on real hardware, and am > not ruling out hardware issues as yet, but wanted to get some > reassurance that someone out there is using this combination (FreeBSD > + HAST + ZFS) successfully, without kernel panics, without core dumps, > without deadlocks, without issues, etc. I need to know I'm not > chasing a dead rabbit. I just committed a fix for a problem that might look like a deadlock. With trociny@ patch and my last fix (to GEOM GATE and hastd) do you still have any issues? --=20 Pawel Jakub Dawidek http://www.wheelsystems.com FreeBSD committer http://www.FreeBSD.org Am I Evil? Yes, I Am! 
--s2ZSL+KKDSLx8OML Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (FreeBSD) iEYEARECAAYFAk2W4e8ACgkQForvXbEpPzT5MQCcCyNhQpd0Ql+wNhlciiNm1N+w m1YAoKJX8PAnwxzQy/U+myNAt0tIeUjU =xeh6 -----END PGP SIGNATURE----- --s2ZSL+KKDSLx8OML-- From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 09:45:44 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4160D106566B; Sat, 2 Apr 2011 09:45:44 +0000 (UTC) (envelope-from kraduk@gmail.com) Received: from mail-ww0-f50.google.com (mail-ww0-f50.google.com [74.125.82.50]) by mx1.freebsd.org (Postfix) with ESMTP id 9FD5C8FC1A; Sat, 2 Apr 2011 09:45:43 +0000 (UTC) Received: by wwc33 with SMTP id 33so4747076wwc.31 for ; Sat, 02 Apr 2011 02:45:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=zkpWQCK0b2xukR/XofHw5gCQIDwpILZfuK6LZJYJqpw=; b=TUyJK0NjiFuabK9j8b9Yr++vO2a7PjZ25CjjYTdYGtHHrPp0PRI7jlkfE9Tz5et3rx /vlvvCYRzBtjX5s2QSpivlDrSDsqoFCpJrd66zIjyfj+evcYSfzwFQcjxPmOHGVOtvfK io6nez+yIOJtJbjQZ5Qnoxhetx0SmfZpv7JBE= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=PXng/XSOrsNcdVxzatMzOM8Vzj2VjHSkOPU0vWIVWScVS0n5IKfxoVwEbRa7X92Fdq 3UNnjTXsme2RVy+4pmele8YeFyy+TJ700TimqBnsFt7OkbnbMT2knEXo3zjoydd9xzcV pjSmQaFRgvvTTdpWhd46ub94v4k3SeoNDdD6M= MIME-Version: 1.0 Received: by 10.216.82.142 with SMTP id o14mr4608724wee.114.1301736205779; Sat, 02 Apr 2011 02:23:25 -0700 (PDT) Received: by 10.216.187.16 with HTTP; Sat, 2 Apr 2011 02:23:25 -0700 (PDT) In-Reply-To: <4D941BFF.6050807@networktest.com> References: <4C51ECAA.2070707@networktest.com> <4C51FE41.8030906@FreeBSD.org> <4D941BFF.6050807@networktest.com> Date: Sat, 2 Apr 2011 10:23:25 +0100 Message-ID: From: krad To: David Newman Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: fs@freebsd.org Subject: Re: fixing a busted ZFS upgrade X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 09:45:44 -0000 On 31 March 2011 07:15, David Newman wrote: > On 7/29/10 3:18 PM, Martin Matuska wrote: > > > > > For recovering a system that does not boot anymore, you can use mfsBSD > > ISOs: > > http://mfsbsd.vx.sk > > > > You can boot from the iso and repair the boot record. > > Nearly a year ago mfsBSD saved me from a munged 8.0->8.1 upgrade of a > ZFS box and allowed me to revive a ZFS root partition. > > I've done the same stupid thing again in moving from 8.1 to 8.2, only > now the server won't boot from the 8.2 mfsBSD ISO, or the 8.1 ISO. In > both cases it hangs at loader.conf. > > Thanks in advance for any clues on reviving this system. > > dn > > > > I recommend you check your gpart partitions with "gpart show" and verify > > discovered pools with "zpool import" > > (without any flags or arguments) first. > > > > mm > > > > On 29. 7. 2010 23:03, David Newman wrote: > >> Attempting to upgrade an 8.0-RELEASE to 8.1-RELEASE failed on a system > >> running a bootable ZFS partition.
> >> > >> The system boots to the loader prompt and complains there's no bootable > >> kernel. Running 'lsmod' shows there are four ZFS disks present. > >> > >> Thanks in advance for clues on fixing this, and also on the right way to > >> upgrade FreeBSD systems with bootable ZFS partitions. > >> > >> Steps to reproduce: > >> > >> 1. Build 8.0-RELEASE system following the freebsd.org wiki: > >> > >> http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/RAIDZ1 > >> > >> In this case the system uses raidz1 across four SATA drives. > >> > >> 2. Upgrade to 8.1-RELEASE using the 'FreeBSD Update' directions: > >> > >> http://www.freebsd.org/releases/8.1R/announce.html > >> > >> 3. After first reboot, system boots to the loader prompt. > >> > >> dn > >> > >> _______________________________________________ > >> freebsd-fs@freebsd.org mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

I script my installs for upgrading a ZFS root, as it's safer that way. Here's my script for installing after everything is built:

#!/usr/local/bin/bash

if [ $UID != 0 ] ; then
echo "you're not root!!" ; exit 1
fi

date=`date '+%Y%m%d'`
oroot=`grep "vfs.root.mountfrom=\"zfs:system-4k/" /boot/loader.conf | sed -e "s#^.*\"zfs:system-4k/be/##" -e "s#\"##"`
nroot="root$date"
snap="autoup-$RANDOM"
zpool=system-4k

export DESTDIR=/$zpool/be/$nroot

if [ "$oroot" = "$nroot" ] ; then
echo "I can't update twice in one day"; exit 1
fi

echo building in $zpool/be/$nroot

# Clone the current boot environment, install the new kernel and world into
# the clone, point the clone's loader.conf at itself, then refresh the boot
# blocks on every disk in the pool.
zfs snapshot $zpool/be/$oroot@$snap &&
zfs send $zpool/be/$oroot@$snap | zfs receive -vv $zpool/be/$nroot &&
cd /usr/src &&
make installkernel &&
make installworld &&
sed -i -e "s#$zpool/be/$oroot#$zpool/be/$nroot#" /$zpool/be/$nroot/boot/loader.conf && \
echo "Installing boot records.." &&
zpool status system-4k | grep -A 2 mirror | grep ad | sed -e "s/p[0-9]//" |
while read a b; do
gpart bootcode -b /zfsboot/pmbr -p /zfsboot/gptzfsboot -i 1 $a;
done &&
cp -v /zfsboot/zfsloader /$zpool/be/$nroot/boot/. &&
echo -en "\n\nNow run these two commands to make the changes live, and reboot
zfs set mountpoint=legacy $zpool/be/$nroot
zpool set bootfs=$zpool/be/$nroot $zpool\n\n"

It assumes this kind of layout:

$ zfs list | grep be
system-4k/be                        35.7G  1.03T   156K  /system-4k/be
system-4k/be/current                1.40G  1.03T   924M  legacy
system-4k/be/root20110226           2.80G  1.03T   882M  legacy
system-4k/be/root20110302           3.24G  1.03T   882M  legacy
system-4k/be/root20110306           1.32G  1.03T   882M  legacy
system-4k/be/root20110312           1.36G  1.03T   923M  legacy
system-4k/be/tmp                     852K  1.03T   336K  /tmp
system-4k/be/usr-local              2.98G  1.03T  2.61G  /usr/local/
system-4k/be/usr-obj                5.10G  1.03T  2.10G  /usr/obj
system-4k/be/usr-ports              5.99G  1.03T  2.29G  /usr/ports
system-4k/be/usr-ports/distfiles    1.18G  1.03T   156K  /usr/ports/distfiles
system-4k/be/usr-src                1.53G  1.03T   999M  /usr/src
system-4k/be/var                    5.30G  1.03T   812M  /var
system-4k/be/var/log                4.21G  1.03T  2.67G  /var/log
system-4k/be/var/mysql              82.5M  1.03T  33.9M  /var/db/mysql

From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 10:18:00 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EC618106566C for ; Sat, 2 Apr 2011 10:18:00 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from fep14.mx.upcmail.net (fep14.mx.upcmail.net [62.179.121.34]) by mx1.freebsd.org (Postfix) with ESMTP id 07C9C8FC13 for ; Sat, 2 Apr 2011 10:17:59 +0000 (UTC) Received: from edge05.upcmail.net ([192.168.13.212]) by viefep14-int.chello.at (InterMail vM.8.01.02.02 201-2260-120-106-20100312) with ESMTP id <20110402101757.KWEA1458.viefep14-int.chello.at@edge05.upcmail.net>; Sat, 2 Apr 2011 12:17:57 +0200 Received: from pinky ([213.46.23.80]) by edge05.upcmail.net with edge id SaHv1g00g1jgp3H05aHwei; Sat, 02 Apr 2011 12:17:57 +0200 X-SourceIP: 213.46.23.80 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-stable@freebsd.org, freebsd-fs@freebsd.org, "Lev Serebryakov" References: <895726715.20110328112007@serebryakov.spb.ru> Date: Sat, 02 Apr 2011 12:18:00 +0200 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: <895726715.20110328112007@serebryakov.spb.ru> User-Agent: Opera Mail/11.01 (Win32) X-Cloudmark-Analysis: v=1.1 cv=CqMFsqQC4gx7bBgpmnW/wKYuJF/a5pXPeCAfngFtYkU= c=1 sm=0 a=CeLh-koh8aAA:10 a=2SeDAVI1De4A:10 a=bgpUlknNv7MA:10 a=kj9zAlcOel0A:10 a=6I5d2MoRAAAA:8 a=2U9UMn1IRtENjP8MFb8A:9 a=GVfkVPEfbwqaSSjG61AA:7 a=CjuIK1q_8ugA:10 a=SV7veod9ZcQA:10 a=HpAAvcLHHh0Zw7uRqdWCyQ==:117 Cc: Subject: Re: Backup tool for ZFS with all "classic dump(8)" features -- what should I use? (or is there any way to make dump -L work well on large FFS2+SU?) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 10:18:01 -0000 Have you looked at rsync or tarsnap? On Mon, 28 Mar 2011 09:20:07 +0200, Lev Serebryakov wrote: > Hello, Freebsd-stable. > > Now I'm backing up my HOME filesystem with dump(8). It works > perfectly for an 80GiB FS with many features: snapshot for consistency, > levels, "nodump" flag (my users use it a lot!), the ability to extract > only one removed file from backup without restoring the full FS, a simple > script wrap-up for the levels schedule, etc. > > On the new server I have a huge HOME (500GiB).
And even if it is filled > up only with 25GiB of data, creating a snapshot takes about 10 minutes, > freezes all I/O, and sometimes FAILS (!!!). > > I'm thinking of transferring my HOME filesystem to ZFS. But I cannot find > appropriate tools for backing it up. Here are some requirements: > > (1) One-file (one-stream) backup. Not a directory mirror. I need to > store it on an FTP server and upload it with a single command. > > (2) Levels & incremental backups. Now I have a "Monthly (0) - Weekly > (1,2,3) - daily (4,5,6,7,8,9)" scheme. I could accept other schemes, > as long as they don't store a full backup every day and don't need a full > backup more often than weekly. > > (3) A minimum of local metadata. Storing previous backups locally to > calculate the next one is not an appropriate solution. "zfs send" needs > previous snapshots for incremental backups, for example. > > (4) Working with snapshots (I think this is trivial in the case of ZFS). > > (5) Backup exclusions should be controlled by the users (not the super-user) > themselves, > like the "nodump" flag in the case of FFS/dump(8). "zfs send" cannot > provide this. I have very responsible users, so a full backup > now takes only up to 10GiB when the whole HOME FS is about 25GiB, so it > is a big help when the backup is sent over the Internet to another host. > > (6) Storing of ALL FS-specific information -- ACLs, etc. > > (7) Free :) > > Is there something like this for ZFS? "zfs send" looks promising, > EXCEPT for items (5) and, maybe, (3) :( > > gnu tar looks like everything but (6) :( From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 14:34:57 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6ECEB106566C; Sat, 2 Apr 2011 14:34:57 +0000 (UTC) (envelope-from dnewman@networktest.com) Received: from mail3.networktest.com (mail3.networktest.com [69.55.234.104]) by mx1.freebsd.org (Postfix) with ESMTP id 4D0908FC08; Sat, 2 Apr 2011 14:34:57 +0000 (UTC) Received: from localhost (localhost [69.55.234.104]) by mail3.networktest.com (Postfix) with ESMTP id BAFDF2560D3; Sat, 2 Apr 2011 07:34:54 -0700 (PDT) Received: from mail3.networktest.com ([69.55.234.104]) by localhost (mail3.networktest.com [69.55.234.104]) (amavisd-maia, port 10024) with ESMTP id 89438-01; Sat, 2 Apr 2011 07:34:51 -0700 (PDT) Received: from sagan.local (cpe-76-95-196-192.socal.res.rr.com [76.95.196.192]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: dnewman@networktest.com) by mail3.networktest.com (Postfix) with ESMTPSA id 6F4A92560D2; Sat, 2 Apr 2011 07:34:44 -0700 (PDT) Message-ID: <4D973400.9060404@networktest.com> Date: Sat, 02 Apr 2011 07:34:40 -0700 From: David Newman User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.15) Gecko/20110303 Lightning/1.0b2 Thunderbird/3.1.9 MIME-Version: 1.0 To: krad References: <4C51ECAA.2070707@networktest.com> <4C51FE41.8030906@FreeBSD.org> <4D941BFF.6050807@networktest.com> In-Reply-To: X-Enigmail-Version: 1.1.1 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: fs@freebsd.org Subject: Re: fixing a busted ZFS upgrade X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 14:34:57 -0000 On 4/2/11 2:23 AM, krad wrote: > > > On 31 March 2011 07:15, David Newman > wrote: > > On 7/29/10 3:18 PM, Martin Matuska wrote: > > >
For recovering a system that does not boot anymore, you can use mfsBSD > > ISOs: > > http://mfsbsd.vx.sk > > > > You can boot from the iso and repair the boot record. > > Nearly a year ago mfsBSD saved me from a munged 8.0->8.1 upgrade of a > ZFS box and allowed me to revive a ZFS root partition. > > I've done the same stupid thing again in moving from 8.1 to 8.2, only > now the server won't boot from the 8.2 mfsBSD ISO, or the 8.1 ISO. In > both cases it hangs at loader.conf. > > Thanks in advance for any clues on reviving this system. > I script my installs for upgrading a ZFS root, as it's safer that way. Here's my > script for installing after everything is built:
>
> #!/usr/local/bin/bash
>
> if [ $UID != 0 ] ; then
> echo "you're not root!!" ; exit 1
> fi
>
> date=`date '+%Y%m%d'`
> oroot=`grep "vfs.root.mountfrom=\"zfs:system-4k/" /boot/loader.conf |
> sed -e "s#^.*\"zfs:system-4k/be/##" -e "s#\"##"`
> nroot="root$date"
> snap="autoup-$RANDOM"
> zpool=system-4k
>
> export DESTDIR=/$zpool/be/$nroot
>
> if [ "$oroot" = "$nroot" ] ; then
> echo "I can't update twice in one day"; exit 1
> fi
>
> echo building in $zpool/be/$nroot
>
> zfs snapshot $zpool/be/$oroot@$snap &&
> zfs send $zpool/be/$oroot@$snap | zfs receive -vv $zpool/be/$nroot &&
> cd /usr/src &&
> make installkernel &&
> make installworld &&
> sed -i -e "s#$zpool/be/$oroot#$zpool/be/$nroot#"
> /$zpool/be/$nroot/boot/loader.conf && \
> echo "Installing boot records.." &&
> zpool status system-4k | grep -A 2 mirror | grep ad | sed -e "s/p[0-9]//" |
> while read a b; do
> gpart bootcode -b /zfsboot/pmbr -p /zfsboot/gptzfsboot -i 1 $a;
> done &&
> cp -v /zfsboot/zfsloader /$zpool/be/$nroot/boot/. &&
> echo -en "\n\nNow run these two commands to make the changes live, and
> reboot
> zfs set mountpoint=legacy $zpool/be/$nroot
> zpool set bootfs=$zpool/be/$nroot $zpool\n\n"
>
> It assumes this kind of layout:
>
> $ zfs list | grep be
> system-4k/be                        35.7G  1.03T   156K  /system-4k/be
> system-4k/be/current                1.40G  1.03T   924M  legacy
> system-4k/be/root20110226           2.80G  1.03T   882M  legacy
> system-4k/be/root20110302           3.24G  1.03T   882M  legacy
> system-4k/be/root20110306           1.32G  1.03T   882M  legacy
> system-4k/be/root20110312           1.36G  1.03T   923M  legacy
> system-4k/be/tmp                     852K  1.03T   336K  /tmp
> system-4k/be/usr-local              2.98G  1.03T  2.61G  /usr/local/
> system-4k/be/usr-obj                5.10G  1.03T  2.10G  /usr/obj
> system-4k/be/usr-ports              5.99G  1.03T  2.29G  /usr/ports
> system-4k/be/usr-ports/distfiles    1.18G  1.03T   156K  /usr/ports/distfiles
> system-4k/be/usr-src                1.53G  1.03T   999M  /usr/src
> system-4k/be/var                    5.30G  1.03T   812M  /var
> system-4k/be/var/log                4.21G  1.03T  2.67G  /var/log
> system-4k/be/var/mysql              82.5M  1.03T  33.9M  /var/db/mysql

Thanks for this script. Problem is, I can't get to the point where I can run it. I've tried booting from the 8.1 and 8.2 mfsBSD ISOs but both hang at loader.conf. Thanks for any clues on getting beyond this point...
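Following up on Martin's earlier advice for list readers: from a rescue environment, the two checks he recommends look roughly like this. The disk name, partition sizes, and pool name below are hypothetical, for illustration only:

    # gpart show ada0
    =>       34  976773101  ada0  GPT  (466G)
             34        128     1  freebsd-boot  (64K)
            162    8388608     2  freebsd-swap  (4.0G)
        8388770  968384365     3  freebsd-zfs  (462G)

    # zpool import          (no arguments: just lists pools the kernel can see)
      pool: zroot
        id: 1234567890123456789
     state: ONLINE
    action: The pool can be imported using its name or numeric identifier.

If "gpart show" reports the freebsd-boot partition intact and "zpool import" sees the pool, rewriting the boot code with something like "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0" is usually the next step.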
dn From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 21:33:56 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E64E8106568F; Sat, 2 Apr 2011 21:33:56 +0000 (UTC) (envelope-from antiduh@csh.rit.edu) Received: from brownstoat.csh.rit.edu (mail.csh.rit.edu [129.21.49.169]) by mx1.freebsd.org (Postfix) with ESMTP id B4DE08FC15; Sat, 2 Apr 2011 21:33:56 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by brownstoat.csh.rit.edu (Postfix) with ESMTP id 2C5977F902; Sat, 2 Apr 2011 17:16:12 -0400 (EDT) X-Virus-Scanned: Debian amavisd-new at csh.rit.edu Received: from brownstoat.csh.rit.edu ([127.0.0.1]) by localhost (brownstoat.csh.rit.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id lW4oiuj0hpnA; Sat, 2 Apr 2011 17:16:12 -0400 (EDT) Received: from corrugated.mshome.net (cpe-184-153-112-141.rochester.res.rr.com [184.153.112.141]) by brownstoat.csh.rit.edu (Postfix) with ESMTPSA id BB61E7F55B; Sat, 2 Apr 2011 17:16:11 -0400 (EDT) Content-Type: text/plain; charset=iso-8859-15; format=flowed; delsp=yes To: freebsd-stable@freebsd.org, freebsd-fs@freebsd.org, "Lev Serebryakov" References: <895726715.20110328112007@serebryakov.spb.ru> Date: Sat, 02 Apr 2011 17:18:54 -0400 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Kevin Thompson" Message-ID: In-Reply-To: <895726715.20110328112007@serebryakov.spb.ru> User-Agent: Opera Mail/11.01 (Win32) Cc: Subject: Re: Backup tool for ZFS with all "classic dump(8)" features -- what should I use? (or is there any way to make dump -L work well on large FFS2+SU?) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 21:33:57 -0000 On Mon, 28 Mar 2011 03:20:07 -0400, Lev Serebryakov wrote: > I'm thinking of transferring my HOME filesystem to ZFS. But I cannot find > appropriate tools for backing it up. Here are some requirements: Have you considered a full-up backup solution, like bacula? It's a client/server/server model backup system - there's a server process that coordinates all actions ('director'), various server processes that run on machines with the devices/mounts/disks for storing the backups ('storage daemons'), and then each client runs a little process to give access to the backup servers ('file daemons'). It is highly configurable. You can store backups to disk/file and to tape. If using disks/files, you can back up to the same file always, to files capped at 1 GB each, or to a new file each time, iirc. It has support for arbitrary schedules, with each schedule able to specify the dump level (full, incremental, differential). It uses a database in the director for metadata. And, iirc, it honors the nodump flag, stores ACLs, etc. Most importantly, it has support for pre- and post-backup hooks, so you can tell it to snapshot beforehand and then (probably, see below) use the post-hook to push the data where you want; a sketch of such a hook follows below. Reading about your requirement #1, I'm guessing that the backup data is being collected locally and then sent over ftp for permanent storage. Do you have control over this remote machine? Could you replace ftp with bacula's networked client/server model?
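A minimal sketch of that pre/post hook idea in a Bacula director Job resource; the job name, paths, and dataset are invented for illustration, and the other required Job directives (Client, FileSet, Schedule, Storage, Pool, Messages) are left out:

    Job {
      Name = "home-backup"
      Type = Backup
      # Snapshot the dataset before the job runs, so the file daemon reads
      # from a consistent, frozen view; drop the snapshot afterwards.
      ClientRunBeforeJob = "/sbin/zfs snapshot tank/home@bacula"
      ClientRunAfterJob = "/sbin/zfs destroy tank/home@bacula"
      # ... remaining required directives omitted ...
    }

The FileSet would then point at the snapshot path (e.g. /home/.zfs/snapshot/bacula) rather than at the live file system.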
This might be the one spot that would be hard to make bacula work for you; I'm not sure, since I haven't played with bacula in this configuration and I'm not exactly sure what your restrictions are. Even then, you could probably mount the FTP server as a 'file system' a la sshfs and have the storage daemon write directly to the mounted file system. And yeah, it's free. http://www.bacula.org If you want to give it a shot, you can set it up on a little test machine and have it back up to itself. I might recommend doing this anyway, since you'll want to be able to experiment with configuration and controls before trying it on your production machine. --Kevin From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 22:50:36 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8A2BE1065674 for ; Sat, 2 Apr 2011 22:50:36 +0000 (UTC) (envelope-from ppaczyn@gmail.com) Received: from mail-iw0-f182.google.com (mail-iw0-f182.google.com [209.85.214.182]) by mx1.freebsd.org (Postfix) with ESMTP id 4B9978FC12 for ; Sat, 2 Apr 2011 22:50:36 +0000 (UTC) Received: by iwn33 with SMTP id 33so5755529iwn.13 for ; Sat, 02 Apr 2011 15:50:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:cc:content-type; bh=p+XHqeF1BIQBV3k7KliIcZ2/VfsGKKUKD92zWhxGHaU=; b=Lthezzb7cj5W9zjqSan1UBUvzT3AnguI/lCXIu3XnF3dy1Z/jfk2Vj8qFG5FrqKOwJ DImvchbxUdrvj6C7Hy4TYXLnq4YGZiaeW5FzHUCOtOaIX2oiAB2Syl6Gm8dWbTxGHRvp Caccrt7CcSTUcfGEXvghO4HcEhXqD+jpWuuqQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:cc :content-type; b=Sj6GtMSpvxEXKXUqgHBozeZjEMdZSs/h8wthKNKXJGv6i1QKOiOl0BY6jXg6zGQP/e 1704ojG+gohG+LgC+EkgdmLG9zFGARSK2se+quyYhcPHhls8WibW2P+DHX939rv44Wnh sV/bzhWMmqVhMaKMpYwKtPGIhJAOKAmfREfkg= MIME-Version: 1.0 Received: by 10.43.131.195 with SMTP id hr3mr1706829icc.268.1301784635833; Sat, 02 Apr 2011 15:50:35 -0700 (PDT) Received: by 10.42.172.201 with HTTP; Sat, 2 Apr 2011 15:50:35 -0700 (PDT) In-Reply-To: References: <20110401013603.GA31034@icarus.home.lan> <84DF4838-CE43-430E-8C3A-4CC7881E44BD@gmail.com> Date: Sun, 3 Apr 2011 00:50:35 +0200 Message-ID: From: Piotr Paczynski Cc: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 22:50:36 -0000 Emergency over. I didn't manage to make Solaris see my pool. Instead, I've installed FreeBSD-8.2 on a separate disk, fetched the latest FreeBSD 9 sources, rebuilt world, and then used "zpool import -fF zroot" to recover the pool. Worked like a charm. Thank you for the help.
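For the archive: the recovery Piotr describes relies on the pool-rewind support in newer ZFS code. Reconstructed as commands (the pool name zroot is his; everything else is an assumption about his setup):

    # From the freshly installed system, with the damaged pool's disks attached:
    zpool import                  # no arguments: list pools that are visible
    zpool import -fF zroot        # -f: import even if marked active elsewhere
                                  # -F: recovery mode, discard the last few
                                  #     transactions to reach a consistent state

The -F recovery flag is not in the ZFS version that shipped with 8.2, which is presumably why building a FreeBSD 9 world first was necessary before the rewind could be performed.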
From owner-freebsd-fs@FreeBSD.ORG Sat Apr 2 23:45:06 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EB19A106566B for ; Sat, 2 Apr 2011 23:45:06 +0000 (UTC) (envelope-from daryl@isletech.net) Received: from lagoon.isletech.net (lagoon.isletech.net [64.235.98.66]) by mx1.freebsd.org (Postfix) with ESMTP id AEF798FC13 for ; Sat, 2 Apr 2011 23:45:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=isletech.net; s=isle; h=To:References:Message-Id:Content-Transfer-Encoding:Cc:Date:In-Reply-To:From:Content-Type:Mime-Version:Subject; bh=MwB3Sx0aHcL/yHsGk0sFtpESTk/d5vPP1zb600xrlmQ=; b=IjqtXbi8F6ZicsMvWF8V3ub5V2KeVxVqDnzaGh6eLj4cmjWtoDlbIoFKPgYUIrJAyh5RlYCcxYxBnCNG8ddl2A==; Received: from home.isletech.net ([206.248.171.193]:61844 helo=mac.home.isletech.net) by lagoon.isletech.net with esmtpsa (TLSv1:AES128-SHA:128) (Exim 4.74 (FreeBSD)) (envelope-from ) id 1Q69jJ-000AU9-PN; Sat, 02 Apr 2011 18:55:41 -0400 Mime-Version: 1.0 (Apple Message framework v1084) Content-Type: text/plain; charset=us-ascii From: Daryl Richards In-Reply-To: Date: Sat, 2 Apr 2011 18:55:41 -0400 Content-Transfer-Encoding: quoted-printable Message-Id: References: <20110401013603.GA31034@icarus.home.lan> <84DF4838-CE43-430E-8C3A-4CC7881E44BD@gmail.com> To: Piotr Paczynski X-Mailer: Apple Mail (2.1084) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS failed after hard power off X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 02 Apr 2011 23:45:07 -0000 On 2011-04-02, at 6:50 PM, Piotr Paczynski wrote: > Emergency over. I didn't manage to make Solaris see my pool. Instead, > I've installed FreeBSD-8.2 on a separate disk, fetched the latest FreeBSD 9 > sources, rebuilt world, and then used "zpool import -fF zroot" to > recover the pool. Worked like a charm. Thank you for the help. > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Good that you got it! For list reference, this is because Solaris doesn't understand FreeBSD's slice table, so it can't see the ZFS slice.
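To illustrate Daryl's point for the archive: when a pool lives inside a FreeBSD MBR slice with a nested BSD label, the ZFS vdev labels sit inside containers that only FreeBSD parses. A hypothetical layout, invented for illustration:

    # gpart show da0
    =>       63  156301425  da0  MBR  (75G)
             63  156301362    1  freebsd  [active]  (75G)

    # gpart show da0s1
    =>        0  156301362  da0s1  BSD  (75G)
              0    4194304      1  freebsd-ufs  (2.0G)
        4194304  152107058      2  freebsd-zfs  (73G)

Solaris stops at the "freebsd" MBR slice type, never reads the nested BSD label, and so never probes the freebsd-zfs partition inside for ZFS labels; a FreeBSD kernel has no such problem.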