From: Andriy Gapon <avg@FreeBSD.org>
To: Karl Denninger, freebsd-stable@FreeBSD.org, killing@multiplay.co.uk
Subject: Re: Panic on BETA1 in the ZFS subsystem
Date: Thu, 21 Jul 2016 15:52:35 +0300

On 21/07/2016 15:25, Karl Denninger wrote:
> The crash occurred while a backup script was running; the script does
> (roughly) the following:
>
>   zpool import -N backup        (mount the pool to copy to)
>
>   iterate over a list of zfs filesystems and...
>
>   zfs rename fs@zfs-base fs@zfs-old
>   zfs snapshot fs@zfs-base
>   zfs send -RI fs@zfs-old fs@zfs-base | zfs receive -Fudv backup
>   zfs destroy -vr fs@zfs-old
>
> The first filesystem to be done is the rootfs; that is when it
> panicked, and from the traceback it appears that the zios in there
> are from the backup volume, so the answer to your question is "yes".

I think that what happened here was that a quite large number of TRIM
requests was queued by ZFS before it had a chance to learn that the
target vdev in the backup pool did not support TRIM.  So, when the
first request failed with ENOTSUP, the vdev was marked as not
supporting TRIM, and after that all subsequent requests were failed
without being sent down the storage stack.  But the way this is done
means that all of those requests were processed by nested
zio_execute() calls on the same kernel stack, and that led to the
stack overflow.

Steve, do you think that this is a correct description of what
happened?

The state of the pools that you described below probably contributed
to the avalanche of TRIMs that caused the problem.
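Just to make sure that I read your script correctly, here is the loop
as I understand it, written out as a minimal sh sketch.  The
filesystem list and the variable names are placeholders of mine, not
taken from your actual script:

  #!/bin/sh
  # Sketch of the backup loop described above; error handling and the
  # real filesystem list are omitted.
  BACKUP_POOL=backup
  FILESYSTEMS="zroot zroot/usr zroot/var"   # placeholder list

  # import the pool to copy to
  zpool import -N "$BACKUP_POOL" || exit 1

  for fs in $FILESYSTEMS; do
      # shift the previous base snapshot aside
      zfs rename "${fs}@zfs-base" "${fs}@zfs-old"
      # take a new base snapshot
      zfs snapshot "${fs}@zfs-base"
      # replicate everything between the two snapshots
      zfs send -RI "${fs}@zfs-old" "${fs}@zfs-base" | \
          zfs receive -Fudv "$BACKUP_POOL"
      # destroying the old snapshots frees blocks on both pools; the
      # resulting frees are what can queue a large number of TRIMs on
      # the receiving side
      zfs destroy -vr "${fs}@zfs-old"
  done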
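If the description above is right, then a workaround until the
recursion is fixed would be to disable TRIM before re-running the
backup, so that the TRIM zios are never created in the first place
and the nested zio_execute() processing cannot happen.  From memory,
on stable/10 and 11.0 the relevant knobs and counters are the
following; please double-check the exact names on your system:

  # Inspect TRIM state and activity (names as I remember them on
  # stable/10 and 11.0; verify with: sysctl -a | grep -i trim):
  sysctl vfs.zfs.trim.enabled
  sysctl kstat.zfs.misc.zio_trim.success
  sysctl kstat.zfs.misc.zio_trim.unsupported

  # vfs.zfs.trim.enabled is, if I remember correctly, a boot-time
  # tunable, so to disable TRIM set it in /boot/loader.conf and
  # reboot before retrying the backup:
  echo 'vfs.zfs.trim.enabled=0' >> /boot/loader.conf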
> This is a different panic from the one I used to get on 10.2 (the
> other one was always in dounmount), and the former symptom was also
> not immediately reproducible; whatever was blowing it up before was
> in-core, and a reboot would clear it.  This one is not: I
> (foolishly) believed that the operation would succeed after the
> reboot and re-attempted it, only to get an immediate repeat of the
> same panic (with an essentially identical traceback).
>
> What allowed the operation to succeed was removing *all* of the
> snapshots (other than the base filesystem, of course) from both the
> source *and* the backup destination zpools, then re-running the
> operation.  That causes a "base" copy to be taken (zfs snapshot
> fs@zfs-base followed by a straight send of that snapshot instead of
> an incremental), which was successful.
>
> The only thing that was odd about the zfs filesystem in question is
> that, as the boot environment that was my roll-forward to 11.0, its
> "origin" was a clone of 10.2 taken before the install was done, so
> that snapshot was present in the zfs snapshot list.  However, it had
> been present for several days without incident, so I doubt its
> presence was involved in creating the circumstances that led to the
> panic.

-- 
Andriy Gapon