From: "Steven Hartland"
To: "Mark Martinec", freebsd-stable@freebsd.org
Subject: Re: zpool import hangs when out of space - Was: zfs pool import hangs on [tx->tx_sync_done_cv]
Date: Tue, 14 Oct 2014 12:40:45 +0100
----- Original Message -----
From: "Mark Martinec"

> On 10/14/2014 13:19, Steven Hartland wrote:
>> Well, interesting issue: I left this pool alone this morning, literally
>> doing nothing, and it's now out of space.
>>
>> zpool list
>> NAME       SIZE  ALLOC  FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
>> sys1boot  3.97G  3.97G  190K    0%         -  99%  1.00x  ONLINE  -
>> sys1copy  3.97G  3.97G    8K    0%         -  99%  1.00x  ONLINE  -
>>
>> There's something very wrong here, as nothing has been accessing the pool.
>>
>>   pool: zfs
>>  state: ONLINE
>> status: One or more devices are faulted in response to IO failures.
>> action: Make sure the affected devices are connected, then run 'zpool
>>         clear'.
>>    see: http://illumos.org/msg/ZFS-8000-HC
>>   scan: none requested
>> config:
>>
>>         NAME   STATE   READ WRITE CKSUM
>>         zfs    ONLINE     0     2     0
>>           md1  ONLINE     0     0     0
>>
>> I tried destroying the pool and even that failed, presumably because
>> the pool has suspended IO.
>
> That's exactly how the trouble started here. Got the
> "One or more devices are faulted in response to IO failures"
> on all three small cloned boot pools one day, out of the blue.
> There was no activity there, except for periodic snapshotting
> every 10 minutes.

Yeah, this isn't fragmentation; this is something else. I've started a thread
on the openzfs list to discuss it, as there's something quite odd going on.

Regards
Steve
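For anyone landing on this thread with the same symptoms: once a pool has suspended I/O, `zpool destroy` can hang until the error state is dealt with. A rough recovery sequence, assuming (as above) the faulted pool is named `zfs` and is backed by `md1` - this is an untested sketch of standard `zpool` subcommands, not a sequence anyone in the thread has confirmed:

```shell
# Ask ZFS to clear the error state; pending I/O is retried (or fails
# permanently if the backing device is really gone).
zpool clear zfs

# See whether the pool has left the suspended state.
zpool status zfs

# If the pool responds again, destroy it; -f forcibly unmounts any
# datasets that are still mounted.
zpool destroy -f zfs

# Last resort if destroy still hangs: force an export (or reboot) and
# recreate the md-backed pool from scratch.
zpool export -f zfs
```

Whether `zpool clear` can un-suspend the pool depends on the pool's `failmode` property and on the backing device actually being reachable again; with `failmode=wait` (the default) all I/O blocks until the error is cleared.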