From: Matthew Pounsett
Date: Thu, 13 Sep 2018 16:28:39 -0700
Subject: Issues replacing a failed disk in a zfs pool
To: freebsd-questions@freebsd.org

A disk in one of my zfs pools failed a few weeks ago. I did a 'zpool
replace' to bring in one of the spares in its place. After the
resilvering I did a 'zpool offline' on the old drive. This is my current
zpool status (apologies for the wide paste):

  pool: pool5b
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in
        a degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 2.59T in 48h40m with 0 errors on Mon Aug  6 20:32:39 2018
config:

        NAME                             STATE     READ WRITE CKSUM
        pool5b                           DEGRADED     0     0     0
          raidz2-0                       ONLINE       0     0     0
            diskid/DISK-PK2331PAG6ZLMT   ONLINE       0     0     0  block size: 512B configured, 4096B native
            da10                         ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6ZVMT   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG728ET   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6YGXT   ONLINE       0     0     0  block size: 512B configured, 4096B native
          raidz2-1                       ONLINE       0     0     0
            diskid/DISK-WD-WMC1F0D2VV96  ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6ZV8T   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6Z3ST   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG70E0T   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6ZWUT   ONLINE       0     0     0  block size: 512B configured, 4096B native
          raidz2-2                       DEGRADED     0     0     0
            diskid/DISK-PN1334PBJPWU8S   ONLINE       0     0     0
            diskid/DISK-PK2331PAG6ZV2T   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6ZWHT   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG7280T   ONLINE       0     0     0  block size: 512B configured, 4096B native
            spare-4                      DEGRADED     0     0     0
              5996713305860302307        OFFLINE      0     0     0  was /dev/diskid/DISK-PK2331PAG704VT
              da23                       ONLINE       0     0     0  block size: 512B configured, 4096B native
          raidz2-3                       ONLINE       0     0     0
            diskid/DISK-PK2331PAG704PT   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6ZWAT   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG6ZZ0T   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG704ST   ONLINE       0     0     0  block size: 512B configured, 4096B native
            diskid/DISK-PK2331PAG704WT   ONLINE       0     0     0  block size: 512B configured, 4096B native
        spares
          12114494961187138794           INUSE     was /dev/da23
          da21                           AVAIL

I'm now at the data centre, and I expected, while here, to be able to do
a 'zpool remove'
on the old drive so that I can swap it for a new one. However, I'm being
told I can't do that:

% sudo zpool remove pool5b 5996713305860302307
cannot remove 5996713305860302307: only inactive hot spares, cache,
top-level, or log devices can be removed

I just tried bringing the disk back online, and zfs now says it's being
resilvered. I assume that's going to take longer to complete than I'm
going to be here, so the replacement will now probably have to wait for
my next visit.

I must have missed a step somewhere, but I've no idea what it was. What
am I missing?

Thanks.
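For reference, the sequence of commands I ran was roughly the following
(reconstructed from memory, so the exact device arguments are my best
guess based on the status output above):

```shell
# Bring in the hot spare (da23) in place of the failing disk; this
# kicked off the ~48h resilver shown in the status output.
sudo zpool replace pool5b diskid/DISK-PK2331PAG704VT da23

# After the resilver completed, take the old drive offline.
sudo zpool offline pool5b diskid/DISK-PK2331PAG704VT

# The step that now fails at the data centre; 5996713305860302307 is the
# guid zpool status reports for the offlined disk:
sudo zpool remove pool5b 5996713305860302307
```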