From: Rumen Telbizov <telbizov@gmail.com>
To: jhell
Cc: freebsd-stable@freebsd.org, Artem Belevich
Date: Tue, 16 Nov 2010 13:15:37 -0800
Subject: Re: Degraded zpool cannot detach old/bad drive
List-Id: Production branch of FreeBSD source code <freebsd-stable@freebsd.org>
Hello everyone,

jhell, thanks for the advice. I am sorry I couldn't try it earlier, but the server was pretty busy and I only just found a window to test this. I think I'm pretty much there, but I'm still having a problem. Here's what I have:

I exported the pool, then hid the individual disks (except mfid0, which is my root) in /etc/devfs.rules like you suggested:

/etc/devfs.rules:
    add path 'mfid1' hide
    add path 'mfid1p1' hide
    ...

I checked that those are gone from /dev/. Then here's what happened when I tried to import the pool:

# zpool import
  pool: tank
    id: 13504509992978610301
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank                                            ONLINE
          raidz1                                        ONLINE
            gptid/a7fb11e8-dfc9-11df-8732-002590087f3a  ONLINE
            gptid/7a36f6f3-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/7a92d827-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/7b00dc15-b9fd-11df-8105-002590087f3a  ONLINE
          raidz1                                        ONLINE
            gptid/7b6c8c45-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/7bd9c888-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/7c5129ee-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/7cceb1b1-b9fd-11df-8105-002590087f3a  ONLINE
          raidz1                                        ONLINE
            gpt/disk-e1:s10                             ONLINE
            gpt/disk-e1:s11                             ONLINE
            gptid/7e61fd64-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/7ef18e3b-b9fd-11df-8105-002590087f3a  ONLINE
          raidz1                                        ONLINE
            gptid/7f881f2c-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/8024fa06-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/80c7ea1b-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/b20fe225-dfc9-11df-8732-002590087f3a  ONLINE
          raidz1                                        ONLINE
            gptid/82285e4b-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/82ec73bd-b9fd-11df-8105-002590087f3a  ONLINE
            gpt/disk-e1:s20                             ONLINE
            gpt/disk-e1:s21                             ONLINE
          raidz1                                        ONLINE
            gptid/851c6087-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/85e2ef76-b9fd-11df-8105-002590087f3a  ONLINE
            gpt/disk-e2:s0                              ONLINE
            gpt/disk-e2:s1                              ONLINE
          raidz1                                        ONLINE
            gptid/8855ae14-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/893243c7-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/8a1589fe-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/8b0125ce-b9fd-11df-8105-002590087f3a  ONLINE
          raidz1                                        ONLINE
            gptid/8bf0471b-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/8ce57ab9-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/8de3a927-b9fd-11df-8105-002590087f3a  ONLINE
            gptid/8ee44a55-b9fd-11df-8105-002590087f3a  ONLINE
        spares
          gpt/disk-e2:s11
          gptid/8fe55a60-b9fd-11df-8105-002590087f3a

Obviously ZFS forgot about the /dev/mfidXXp1 devices (great!), but now it picks up the gptids :( So I tried to disable gptid labels, hoping that ZFS would fall back to /dev/gpt/ only, but for some reason after setting kern.geom.label.gptid.enable=0 I still see all the /dev/gptid/XXX entries, and zpool import still catches the gptids. Here are my sysctls:

kern.geom.label.debug: 2
kern.geom.label.ext2fs.enable: 1
kern.geom.label.iso9660.enable: 1
kern.geom.label.msdosfs.enable: 1
kern.geom.label.ntfs.enable: 1
kern.geom.label.reiserfs.enable: 1
kern.geom.label.ufs.enable: 1
kern.geom.label.ufsid.enable: 0
kern.geom.label.gptid.enable: 0
kern.geom.label.gpt.enable: 1

It seems like kern.geom.label.gptid.enable=0 does not work anymore? I am pretty sure I was able to hide all the /dev/gptid/* entries with this sysctl before, but now it doesn't quite work for me. I feel pretty confident that if I manage to hide the gptids, ZFS will fall back to /dev/gpt and everything will be back to normal. As you suggested in your previous email, zpool import -d /dev/gpt doesn't make it any better (it doesn't find all the devices)!

Let me know if you have any ideas. All opinions are appreciated!

Thank you,
Rumen Telbizov

On Sat, Nov 6, 2010 at 8:59 PM, jhell wrote:

> On 10/31/2010 15:53, Rumen Telbizov wrote:
> > Hi Artem, everyone,
> >
> > Here's the latest update on my case.
> > I did upgrade the system to the latest stable: 8.1-STABLE FreeBSD 8.1-STABLE #0: Sun Oct 31 11:44:06 PDT 2010.
> > After that I did zpool upgrade and zfs upgrade -r on all the filesystems.
> > Currently I am running zpool version 15 and zfs version 4.
> > Everything went fine with the upgrade, but unfortunately my problem still
> > persists. There's no difference in this respect.
> > I still have mfid devices. I also tried chmod-ing the /dev/mfid
> > devices as you suggested, but zfs/zpool didn't seem to care and imported
> > the array regardless.
> >
> > So at this point, since no one else seems to have any ideas and we seem
> > to be stuck, I am almost ready to declare defeat on this one.
> > Although the pool is usable, I couldn't bring it back to exactly the same
> > state it was in before the disk replacements took place.
> > Disappointing indeed, although not a complete show stopper.
> >
> > I still think that if there's a way to edit the cache file and change the
> > devices, that might do the trick.
> >
> > Thanks for all the help,
> > Rumen Telbizov
> >
> > On Fri, Oct 29, 2010 at 5:01 PM, Artem Belevich wrote:
> >
> >> On Fri, Oct 29, 2010 at 4:42 PM, Rumen Telbizov wrote:
> >>> FreeBSD 8.1-STABLE #0: Sun Sep 5 00:22:45 PDT 2010
> >>> That's when I csuped and rebuilt world/kernel.
> >>
> >> There have been a lot of ZFS-related MFCs since then. I'd suggest
> >> updating to the most recent -stable and trying again.
> >>
> >> I've got another idea that may or may not work. Assuming that the GPT
> >> labels disappear because zpool opens one of the /dev/mfid* devices,
> >> you can try "chmod a-rw /dev/mfid*" on them and then try
> >> importing the pool again.
> >>
> >> --Artem
>
> The problem seems to be that it is just finding the actual disk before it
> finds the GPT labels. You should be able to export the pool and then
> re-import it after hiding the disks it is finding, via the
> /etc/devfs.rules file.
>
> Try adding something like the following to your devfs ruleset before
> re-importing the pool (WARNING: this will hide ALL mfi devices; adjust
> accordingly):
>
>     add path 'mfi*' hide
>
> That should make ZFS go wandering around /dev enough to find the
> appropriate GPT label for the disk it is trying to locate.
>
> It would seem to me that using '-d' in this case would not be effective,
> as ZFS would be looking for 'gpt/LABEL' within /dev/gpt/, if memory
> serves correctly, and obviously the path /dev/gpt/gpt/ would not exist.
> Also, even if it did find the correct GPT label, it would assume it is at
> a /dev path rather than under /dev/gpt/*, and would fall back to finding
> the mfi devices again after the next boot.
>
> --
> jhell,v

--
Rumen Telbizov
http://telbizov.com
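
For anyone hitting the same problem, here is a condensed sketch of the workaround discussed in this thread. The pool name, ruleset number, and device names below come from this thread; adjust them for your system. Note that kern.geom.label.gptid.enable is also a boot-time loader tunable: GEOM label providers are created when the disks are tasted at boot, so setting it in /boot/loader.conf and rebooting is what actually keeps the /dev/gptid/* entries from appearing, whereas flipping the sysctl at runtime leaves existing entries in place.

    # 1. Export the pool so its vdevs are closed:
    zpool export tank

    # 2. Hide the raw mfi disks in /etc/devfs.rules (do NOT hide mfid0,
    #    the root disk), e.g.:
    #
    #      [localrules=10]
    #      add path 'mfid1' hide
    #      add path 'mfid1p1' hide
    #      ...repeat for each data disk...
    #
    #    and activate the ruleset in /etc/rc.conf:
    #
    #      devfs_system_ruleset="localrules"

    # 3. Prevent gptid labels from being created, in /boot/loader.conf
    #    (takes effect on the next boot):
    #
    #      kern.geom.label.gptid.enable="0"

    # 4. After a reboot, re-import the pool; with the mfid* devices hidden
    #    and /dev/gptid/* gone, it should attach via the /dev/gpt labels:
    zpool import tank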