From owner-freebsd-bugs@FreeBSD.ORG Mon Sep 29 22:41:09 2014
From: bugzilla-noreply@freebsd.org
To: freebsd-bugs@FreeBSD.org
Subject: [Bug 193875] [zfs] [panic] [reproducable] zfs/space_map.c: solaris assert: sm->sm_space + size <= sm->sm_size
Date: Mon, 29 Sep 2014 22:41:09 +0000
List-Id: Bug reports
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193875

--- Comment #4 from Palle Girgensohn ---

(In reply to Xin LI from comment #3)
> Is this reproducable on a newly created pool? It looks like you are using a
> pool formatted with old format and did not upgrade (DO NOT DO IT NOW!), and
> there may be existing damage with the space map -- in such case the only way
> to recover from the situation would be to copy all data off the pool,
> recreate it and restore the data.

Hi Xin Li, thanks for the reply!

I did not try a newly created pool. This is a large pool with data, one of two
redundant systems that we keep in sync with zfs send | ssh | zfs recv. The
other machine is still on 9.3, and we got this problem after updating one
system to 10.0, so we cannot really upgrade the pool just yet. Besides,
simply running an old pool version shouldn't cause such a big problem...?

But as you say, there seems to be something fishy with the pool, and maybe
there is nothing wrong with the kernel itself. Are you sure there are no other
ways to fix this but to recreate the pool? There are terabytes of data; it
will take a week... :-/ Is there no zdb magic, or a zpool export + scrub +
zpool import with vfs.zfs.recover=1, that could help?

-- 
You are receiving this mail because:
You are the assignee for the bug.
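For reference, the recovery avenues mentioned in the question might look
roughly like the sketch below. This is an untested outline, not a verified
fix: the pool name `tank` is a placeholder, and on FreeBSD the
`vfs.zfs.recover` tunable is normally set at boot time via
`/boot/loader.conf` rather than at runtime.

```shell
# Rough sketch of the read-only recovery attempt discussed above.
# "tank" is a placeholder pool name; run as root.

# 1. Export the pool cleanly, if the system stays up long enough.
zpool export tank

# 2. Enable ZFS recovery mode before re-importing. On FreeBSD this is
#    typically a loader tunable, i.e. add to /boot/loader.conf and reboot:
#        vfs.zfs.recover=1
#    Importing read-only avoids writing through the damaged space map:
zpool import -o readonly=on tank

# 3. Inspect pool metadata with zdb; -b traverses block allocations and
#    can surface space-map inconsistencies (leaked or doubly allocated
#    blocks):
zdb -b tank

# 4. A scrub verifies checksums but cannot repair space-map damage, and
#    it requires a read-write import, so at best it confirms the
#    corruption rather than fixing it:
zpool scrub tank
```

Even if a read-only import succeeds, this only buys time to copy the data
off; as Xin Li notes, rebuilding the pool and restoring the data remains
the reliable fix for space-map damage.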