From owner-freebsd-fs@freebsd.org Sun Jul 26 02:33:28 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Sun, 26 Jul 2015 02:33:27 +0000
X-Bugzilla-Product: Base System
X-Bugzilla-Component: misc
X-Bugzilla-Version: 10.1-STABLE
X-Bugzilla-Who: linimon@FreeBSD.org
X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org
X-Bugzilla-Changed-Fields: assigned_to
List-Id: Filesystems
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

Mark Linimon changed:

 What            |Removed                     |Added
----------------------------------------------------------------------------
 Assignee        |freebsd-bugs@FreeBSD.org    |freebsd-fs@FreeBSD.org

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Sun Jul 26 21:00:29 2015
From: bugzilla-noreply@FreeBSD.org
To: freebsd-fs@FreeBSD.org
Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention
Date: Sun, 26 Jul 2015 21:00:28 +0000
Message-Id: <201507262100.t6QL0S72092996@kenobi.freebsd.org>

To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).
The following is a listing of current problems submitted by FreeBSD users,
which need special attention. These represent problem reports covering all
versions, including experimental development code and obsolete releases.

Status      | Bug Id    | Description
------------+-----------+---------------------------------------------------
Open        | 136470    | [nfs] Cannot mount / in read-only, over NFS
Open        | 139651    | [nfs] mount(8): read-only remount of NFS volume d
Open        | 144447    | [zfs] sharenfs fsunshare() & fsshare_main() non f

3 problems total for which you should take action.

From owner-freebsd-fs@freebsd.org Tue Jul 28 03:06:07 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201912] panic in smbfs during mount
Date: Tue, 28 Jul 2015 03:06:07 +0000
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 10.1-RELEASE
X-Bugzilla-Who: linimon@FreeBSD.org
X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201912

Mark Linimon changed:

 What            |Removed                     |Added
----------------------------------------------------------------------------
 Assignee        |freebsd-bugs@FreeBSD.org    |freebsd-fs@FreeBSD.org

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Jul 28 08:39:10 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 08:39:09 +0000
X-Bugzilla-Who: smh@FreeBSD.org

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

Steven Hartland changed:

 What            |Removed                     |Added
----------------------------------------------------------------------------
 CC              |                            |smh@FreeBSD.org

--- Comment #2 from Steven Hartland ---
You can't run ZFS with a GENERIC kernel on i386: you need the custom option

    options KSTACK_PAGES=4

in your kernel config. See the UPDATING entry dated 20121223.

-- 
You are receiving this mail because:
You are the assignee for the bug.
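The option Steven names goes into a custom kernel configuration derived from GENERIC. A minimal sketch, assuming a config file named ZFSI386 (the file name is illustrative; the option itself is the one from the comment and the UPDATING entry 20121223):

```
# sys/i386/conf/ZFSI386 -- illustrative custom config for ZFS on i386
include GENERIC
ident   ZFSI386
# Larger per-thread kernel stacks so ZFS's deep call chains fit:
options KSTACK_PAGES=4
```

After rebuilding and booting the new kernel, `sysctl kern.kstack_pages` should report 4.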
From owner-freebsd-fs@freebsd.org Tue Jul 28 09:50:04 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 09:50:04 +0000
X-Bugzilla-Who: licho@protonmail.ch

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

--- Comment #3 from Licho ---
(In reply to Steven Hartland from comment #2)

>> FreeBSD/i386 10.1-RELEASE configured with a multi-disk ZFS dataset
>> (mirror, raidz1, raidz2, raidz3) may crash during boot when the ZFS pool
>> mount is attempted while booting an unmodified GENERIC kernel.
>> (https://www.freebsd.org/releases/10.1R/errata.html)

It's a single-drive pool. Also, the panics happen only after a machine
reboot; since I set kern.shutdown.poweroff_delay=60000, shutdown followed
by a cold boot now works fine every time.

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Jul 28 09:55:56 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 09:55:56 +0000
X-Bugzilla-Who: fk@fabiankeil.de

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

fk@fabiankeil.de changed:

 What            |Removed                     |Added
----------------------------------------------------------------------------
 CC              |                            |fk@fabiankeil.de

--- Comment #4 from fk@fabiankeil.de ---
Given how many users run into this, I think the zfs module should emit a
warning when KSTACK_PAGES is too low, just as it already does when there is
less RAM than recommended in general or for prefetching.

-- 
You are receiving this mail because:
You are the assignee for the bug.
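The check comment #4 proposes can be modelled in a few lines. This is a hypothetical user-space sketch, not the change later committed to FreeBSD (r285947); the 4-page minimum comes from the UPDATING entry 20121223 cited in comment #2, and the message text is an assumption:

```python
# Hypothetical model of a load-time KSTACK_PAGES sanity check for the
# zfs module; NOT the actual FreeBSD code. The minimum of 4 pages is
# taken from the UPDATING entry 20121223 referenced in comment #2.

ZFS_MIN_KSTACK_PAGES = 4

def zfs_kstack_warning(kstack_pages):
    """Return a warning string when the stack is too small, else None."""
    if kstack_pages < ZFS_MIN_KSTACK_PAGES:
        return ("ZFS WARNING: KSTACK_PAGES is %d, expected at least %d; "
                "expect kernel panics under ZFS load"
                % (kstack_pages, ZFS_MIN_KSTACK_PAGES))
    return None

print(zfs_kstack_warning(2))   # too-small stack: prints the warning
print(zfs_kstack_warning(4))   # patched configuration: prints None
```

The point of such a check is exactly the one fk@ makes: the module already warns about insufficient RAM at load time, so a stack-size warning would surface the misconfiguration before the first panic.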
From owner-freebsd-fs@freebsd.org Tue Jul 28 09:58:39 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 09:58:39 +0000
X-Bugzilla-Who: smh@FreeBSD.org

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

--- Comment #5 from Steven Hartland ---
It could still be stack space; it's impossible to tell without the panic
information. So try with the correct kernel options and see if that fixes
the issue. If it still fails, please attach the stack trace from the panic.

You may need to set the following in /etc/rc.conf to get a crash dump with
the full panic information:

dumpdev="AUTO"

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Jul 28 11:23:01 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 11:23:02 +0000
X-Bugzilla-Who: smh@FreeBSD.org

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

--- Comment #6 from Steven Hartland ---
(In reply to fk from comment #4)

Agreed, committed as https://svnweb.freebsd.org/changeset/base/285947

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Jul 28 12:47:57 2015
From: Ahmed Kamal <email.ahmedkamal@googlemail.com>
To: Rick Macklem
Cc: Graham Allan, Ahmed Kamal via freebsd-fs
Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!)
Date: Tue, 28 Jul 2015 14:47:35 +0200
In-Reply-To: <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca>

Hi again Rick,

Seems that I'm still being unlucky with nfs :/ I caught one of the newly
installed RHEL6 boxes having high CPU usage and bombarding the BSD NFS box
with 10Mbps of traffic. I caught a tcpdump as you mentioned; you can
download it here:

https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2

I didn't restart the client yet, so if you catch me in the next few hours
and want me to run any diagnostics, let me know. Thanks a lot all for
helping.

On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem wrote:
> Ahmed Kamal wrote:
> > Can you please let me know the ultimate packet trace command I'd need to
> > run in case of any nfs4 troubles .. I guess this should be comprehensive
> > even at the expense of a larger output size (which we can trim later)..
> > Thanks a lot for the help!
>
> tcpdump -s 0 -w <file>.pcap host <client>
> (<file> refers to a file name you choose and <client> refers to the host
> name of a client generating traffic.)
> --> But you won't be able to allow this to run for long during the storm
>     or the file will be huge.
>
> Then you look at <file>.pcap in wireshark, which knows NFS.
>
> rick
>
> > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem wrote:
> > > Graham Allan wrote:
> > > > For our part, the user whose code triggered the pathological
> > > > behaviour on SL5 reran it on SL6 without incident - I still see
> > > > lots of sequence-id errors in the logs, but nothing bad happened.
> > > >
> > > > I'd still like to ask them to rerun again on SL5 to see if the
> > > > "accept skipped seqid" patch had any effect, though I think we
> > > > expect not. Maybe it would be nice if I could get set up to capture
> > > > rolling tcpdumps of the nfs traffic before they run that though...
> > > >
> > > > Graham
> > > >
> > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote:
> > > > > Hi folks,
> > > > >
> > > > > I've upgraded a test client to rhel6 today, and I'll keep an eye
> > > > > on it to see what happens.
> > > > >
> > > > > During the process, I made the (I guess) mistake of zfs send |
> > > > > recv to a locally attached usb disk for backup purposes .. long
> > > > > story short, the sharenfs property on the received filesystem was
> > > > > causing some nfs/mountd errors in logs .. I wasn't too happy with
> > > > > what I got .. I destroyed the backup datasets and the whole pool
> > > > > eventually .. and then rebooted the whole nas box .. After reboot
> > > > > my logs are still flooded with
> > > > >
> > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session
> > > > > Jul 21 05:13:07 nas last message repeated 7536 times
> > > > > Jul 21 05:15:08 nas last message repeated 29664 times
> > > > >
> > > > > Not sure what that means .. or how it can be stopped ..
> > > > > Anyway, will keep you posted on progress.
> > >
> > > Oh, I didn't see the part about "reboot" before. Unfortunately, it
> > > sounds like the client isn't recovering after the session is lost.
> > > When the server reboots, the client(s) will get NFS4ERR_BAD_SESSION
> > > errors back because the server reboot has deleted all sessions. The
> > > NFS4ERR_BAD_SESSION should trigger state recovery on the client.
> > > (It doesn't sound like the clients went into recovery, starting with
> > > a Create_session operation, but without a packet trace, I can't be
> > > sure?)
> > >
> > > rick
> > >
> > > > --
> > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu
> > > > School of Physics and Astronomy - University of Minnesota

From owner-freebsd-fs@freebsd.org Tue Jul 28 17:45:17 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 17:45:17 +0000
X-Bugzilla-Who: licho@protonmail.ch

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

--- Comment #7 from Licho ---
Seems like KSTACK_PAGES works for me. Is it possible that this will be the
default behavior on GENERIC kernels in the next release (10.2 or 11.0)?

-- 
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@freebsd.org Tue Jul 28 17:55:36 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 201859] Kernel panic after every reboot (ZFS)
Date: Tue, 28 Jul 2015 17:55:36 +0000
X-Bugzilla-Who: gjb@FreeBSD.org

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859

--- Comment #8 from Glen Barber ---
(In reply to Licho from comment #7)
> Seems like KSTACK_PAGES works for me. Is it possible that this will be
> default behavior on generic kernels in next release (10.2 or 11.0)?

(After referring to email archives from the 10.1-RELEASE cycle...)

This cannot be made the default for i386, because increasing the stack
pages significantly limits the number of userland threads, which will
eventually lead to KVA exhaustion.

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Jul 28 22:49:40 2015
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 154930] [zfs] cannot delete/unlink file from full volume -> ENOSPC
Date: Tue, 28 Jul 2015 22:49:41 +0000
X-Bugzilla-Version: 8.2-PRERELEASE
X-Bugzilla-Status: In Progress

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154930

--- Comment #5 from commit-hook@freebsd.org ---
A commit references this bug:

Author: bdrewery
Date: Tue Jul 28 22:48:59 UTC 2015
New revision: 285990
URL: https://svnweb.freebsd.org/changeset/base/285990

Log:
  unlink(2): Note the possibility for ENOSPC to be returned on ZFS.

  PR: 154930

Changes:
  head/lib/libc/sys/unlink.2

-- 
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Tue Jul 28 23:39:30 2015
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Ahmed Kamal
Cc: Graham Allan, Ahmed Kamal via freebsd-fs
Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!)
Date: Tue, 28 Jul 2015 19:39:20 -0400 (EDT)
Message-ID: <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca>
Thread-Index: JCK7//ijhGF2vsViuu2eSg5UNrGl8w== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 28 Jul 2015 23:39:30 -0000 ------=_Part_4709690_1367398941.1438126760800 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Ahmed Kamal wrote: > Hi again Rick, > > Seems that I'm still being unlucky with nfs :/ I caught one of the newly > installed RHEL6 boxes having high CPU usage, and bombarding the BSD NFS box > with 10Mbps traffic .. I caught a tcpdump as you mentioned .. You can > download it here: > > https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2 > Ok, the packet trace suggests that the NFSv4 server is broken (it is replying with NFS4ERR_STALE_CLIENTID for a recently generated ClientID). Now, I can't be sure, but the only explanation I can come up with is... - For some arches (I only have i386, so I wouldn't have seen this during testing), time_t is 64bits (uint64_t). --> If time_seconds somehow doesn't fit in the low order 32bits, then the code would be busted for these arches because nfsrvboottime is set to time_seconds when the server is started and then there are comparisons like: if (nfsrvboottime != clientid.lval[0]) return (NFSERR_STALECLIENTID); /* where clientid.lval[0] is a uint32_t */ Anyhow, if this is what is happening, the attached simple patch should fix it. (I don't know how time_seconds would exceed 4billion, but the clock code is pretty convoluted, so I can't say if it can possibly happen?) rick ps: Hmm, on i386 time_seconds ends up at 1438126486, so maybe it can exceed 4*1024*1024*1024 - 1 on amd64? > I didn't restart the client yet .. so if you catch me in the next few hours > and want me to run any diagnostics, let me know. 
Thanks a lot all for > helping > > On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem wrote: > > > Ahmed Kamal wrote: > > > Can you please let me know the ultimate packet trace command I'd need to > > > run in case of any nfs4 troubles .. I guess this should be comprehensive > > > even at the expense of a larger output size (which we can trim later).. > > > Thanks a lot for the help! > > > > > tcpdump -s 0 -w <file>.pcap host <client> > > (<file> refers to a file name you choose and <client> refers to > > the host name of a client generating traffic.) > > --> But you won't be able to allow this to run for long during the storm > > or the > > file will be huge. > > > > Then you look at <file>.pcap in wireshark, which knows NFS. > > > > rick > > > > > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem > > wrote: > > > > > > > Graham Allan wrote: > > > > > For our part, the user whose code triggered the pathological > > behaviour > > > > > on SL5 reran it on SL6 without incident - I still see lots of > > > > > sequence-id errors in the logs, but nothing bad happened. > > > > > > > > > > I'd still like to ask them to rerun again on SL5 to see if the > > "accept > > > > > skipped seqid" patch had any effect, though I think we expect not. > > Maybe > > > > > it would be nice if I could get set up to capture rolling tcpdumps of > > > > > the nfs traffic before they run that though... > > > > > > > > > > Graham > > > > > > > > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote: > > > > > > Hi folks, > > > > > > > > > > > > I've upgraded a test client to rhel6 today, and I'll keep an eye > > on it > > > > > > to see what happens. > > > > > > > > > > > > During the process, I made the (I guess mistake) of zfs send | > > recv to > > > > a > > > > > > locally attached usb disk for backup purposes .. long story short, > > > > > > sharenfs property on the received filesystem was causing some > > > > nfs/mountd > > > > > > errors in logs .. I wasn't too happy with what I got .. 
I > > destroyed the > > > > > > backup datasets and the whole pool eventually .. and then rebooted > > the > > > > > > whole nas box .. After reboot my logs are still flooded with > > > > > > > > > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session > > > > > > Jul 21 05:13:07 nas last message repeated 7536 times > > > > > > Jul 21 05:15:08 nas last message repeated 29664 times > > > > > > > > > > > > Not sure what that means .. or how it can be stopped .. Anyway, > > will > > > > > > keep you posted on progress. > > > > > > > > > Oh, I didn't see the part about "reboot" before. Unfortunately, it > > sounds > > > > like the > > > > client isn't recovering after the session is lost. When the server > > > > reboots, the > > > > client(s) will get NFS4ERR_BAD_SESSION errors back because the server > > > > reboot has > > > > deleted all sessions. The NFS4ERR_BAD_SESSION should trigger state > > > > recovery on the client. > > > > (It doesn't sound like the clients went into recovery, starting with a > > > > Create_session > > > > operation, but without a packet trace, I can't be sure?) 
> > > > > > > > rick > > > > > > > > > > > > > > -- > > > > > > > ------------------------------------------------------------------------- > > > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu > > > > > School of Physics and Astronomy - University of Minnesota > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > > > > > > > > > > > > ------=_Part_4709690_1367398941.1438126760800 Content-Type: text/x-patch; name=64bitboottime.patch Content-Disposition: attachment; filename=64bitboottime.patch Content-Transfer-Encoding: base64 LS0tIGZzL25mc3NlcnZlci9uZnNfbmZzZHN0YXRlLmMuc2F2CTIwMTUtMDctMjggMTg6NTQ6MDYu NTYxNDU0MDAwIC0wNDAwDQorKysgZnMvbmZzc2VydmVyL25mc19uZnNkc3RhdGUuYwkyMDE1LTA3 LTI4IDE5OjAwOjIwLjM1MTA4OTAwMCAtMDQwMA0KQEAgLTQ4Nyw3ICs0ODcsNyBAQCBuZnNydl9n ZXRjbGllbnQobmZzcXVhZF90IGNsaWVudGlkLCBpbnQgDQogCWlmIChjbHBwKQ0KIAkJKmNscHAg PSBOVUxMOw0KIAlpZiAoKG5kID09IE5VTEwgfHwgKG5kLT5uZF9mbGFnICYgTkRfTkZTVjQxKSA9 PSAwIHx8DQotCSAgICBvcGZsYWdzICE9IENMT1BTX1JFTkVXKSAmJiBuZnNydmJvb3R0aW1lICE9 IGNsaWVudGlkLmx2YWxbMF0pIHsNCisJICAgIG9wZmxhZ3MgIT0gQ0xPUFNfUkVORVcpICYmICh1 aW50MzJfdCluZnNydmJvb3R0aW1lICE9IGNsaWVudGlkLmx2YWxbMF0pIHsNCiAJCWVycm9yID0g TkZTRVJSX1NUQUxFQ0xJRU5USUQ7DQogCQlnb3RvIG91dDsNCiAJfQ0KQEAgLTY4Myw3ICs2ODMs NyBAQCBuZnNydl9kZXN0cm95Y2xpZW50KG5mc3F1YWRfdCBjbGllbnRpZCwgDQogCXN0cnVjdCBu ZnNjbGllbnRoYXNoaGVhZCAqaHA7DQogCWludCBlcnJvciA9IDAsIGksIGlnb3Rsb2NrOw0KIA0K LQlpZiAobmZzcnZib290dGltZSAhPSBjbGllbnRpZC5sdmFsWzBdKSB7DQorCWlmICgodWludDMy X3QpbmZzcnZib290dGltZSAhPSBjbGllbnRpZC5sdmFsWzBdKSB7DQogCQllcnJvciA9IE5GU0VS Ul9TVEFMRUNMSUVOVElEOw0KIAkJZ290byBvdXQ7DQogCX0NCkBAIC0zOTk2LDExICszOTk2LDEx IEBAIG5mc3J2X2NoZWNrcmVzdGFydChuZnNxdWFkX3QgY2xpZW50aWQsIHUNCiAJICovDQogCWlm IChmbGFncyAmDQogCSAgICAoTkZTTENLX09QRU4gfCBORlNMQ0tfVEVTVCB8IE5GU0xDS19SRUxF QVNFIHwgTkZTTENLX0RFTEVHUFVSR0UpKSB7DQotCQlpZiAoY2xpZW50aWQubHZhbFswXSAhPSBu ZnNydmJvb3R0aW1lKSB7DQorCQlpZiAoY2xpZW50aWQubHZhbFswXSAhPSAodWludDMyX3QpbmZz 
cnZib290dGltZSkgew0KIAkJCXJldCA9IE5GU0VSUl9TVEFMRUNMSUVOVElEOw0KIAkJCWdvdG8g b3V0Ow0KIAkJfQ0KLQl9IGVsc2UgaWYgKHN0YXRlaWRwLT5vdGhlclswXSAhPSBuZnNydmJvb3R0 aW1lICYmDQorCX0gZWxzZSBpZiAoc3RhdGVpZHAtPm90aGVyWzBdICE9ICh1aW50MzJfdCluZnNy dmJvb3R0aW1lICYmDQogCQlzcGVjaWFsaWQgPT0gMCkgew0KIAkJcmV0ID0gTkZTRVJSX1NUQUxF U1RBVEVJRDsNCiAJCWdvdG8gb3V0Ow0K ------=_Part_4709690_1367398941.1438126760800-- From owner-freebsd-fs@freebsd.org Wed Jul 29 07:44:54 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 69D9B9AB72A for ; Wed, 29 Jul 2015 07:44:54 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 56BD32F2 for ; Wed, 29 Jul 2015 07:44:54 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t6T7iscv017530 for ; Wed, 29 Jul 2015 07:44:54 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 201912] panic in smbfs during mount Date: Wed, 29 Jul 2015 07:44:54 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: martin@sugioarto.com X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit 
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 07:44:54 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201912 --- Comment #3 from martin@sugioarto.com --- (In reply to Andrey V. Elsukov from comment #2) Yes, I am using the GENERIC kernel. ident says there are "no id keywords". Are you sure that IDs are in GENERIC kernels? Maybe it's because I check out from the Git repository on Github (https://github.com/freebsd/freebsd). uname -a: FreeBSD sugioarto.phiscience.local 10.1-RELEASE-p14 FreeBSD 10.1-RELEASE-p14 #0 r284985+86de4e2(releng/10.1): Fri Jul 10 11:54:22 CEST 2015 root@sugioarto.phiscience.local:/usr/obj/usr/src/sys/GENERIC amd64 Usually I remove all of /usr/obj and build the entire world. The timestamps in /boot/kernel are also consistent. I saw this bug for the first time and reported it immediately. I'll rebuild the world+kernel once again now with the latest patches, if you say so. Ok, let's close this PR for now. I'll reopen it when I see this crash again. I use smbfs a lot and won't change my configuration of it for a long time, I think. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@freebsd.org Wed Jul 29 08:19:42 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0127B9AD691 for ; Wed, 29 Jul 2015 08:19:42 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E22051ABC for ; Wed, 29 Jul 2015 08:19:41 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t6T8Jf4A090546 for ; Wed, 29 Jul 2015 08:19:41 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 201859] Kernel panic after every reboot (ZFS) Date: Wed, 29 Jul 2015 08:19:41 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: misc X-Bugzilla-Version: 10.1-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 08:19:42 -0000 
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201859 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- Status|New |Closed Resolution|--- |Not A Bug --- Comment #9 from Steven Hartland --- Closing as it was a KSTACK_PAGES issue, which is documented. -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@freebsd.org Wed Jul 29 08:40:05 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A88F89ADDE1 for ; Wed, 29 Jul 2015 08:40:05 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 94F56949 for ; Wed, 29 Jul 2015 08:40:05 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t6T8e5Wa016336 for ; Wed, 29 Jul 2015 08:40:05 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 154930] [zfs] cannot delete/unlink file from full volume -> ENOSPC Date: Wed, 29 Jul 2015 08:40:05 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 8.2-PRERELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status resolution cc Message-ID: In-Reply-To: References: Content-Type: 
text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 08:40:05 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=154930 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Progress |Closed Resolution|--- |Not A Bug CC| |smh@FreeBSD.org --- Comment #6 from Steven Hartland --- In addition later ZFS versions reserve more privileged space for this very reason see: https://svnweb.freebsd.org/base?view=revision&revision=268473 -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@freebsd.org Wed Jul 29 09:33:51 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4B56C9AEAD4 for ; Wed, 29 Jul 2015 09:33:51 +0000 (UTC) (envelope-from email.ahmedkamal@googlemail.com) Received: from mail-wi0-x234.google.com (mail-wi0-x234.google.com [IPv6:2a00:1450:400c:c05::234]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D0542B0 for ; Wed, 29 Jul 2015 09:33:50 +0000 (UTC) (envelope-from email.ahmedkamal@googlemail.com) Received: by wicmv11 with SMTP id mv11so211141915wic.0 for ; Wed, 29 Jul 2015 02:33:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=YYrwGru4OcsS6FrjFcKJZbvlA6G1YJJWApz6W1HxKMM=; 
b=Ifb95BQXFZRYgDNUh6aMgONbdq6y7RHhF0sfuygvUzDE5PUEBcXPSZ8KTZJ0fEbqy1 cMlBfNA0z6aoBLgljQ7wj/FJb7E2blne8xqukF6lAjaCOTHv+SCL8A+I8ZFFyuyBiv9q spEEr45KYnIxyiG4Q5uApwWQv45dTHrhN6N/6F/OiDZ+aPhp+Bngi2an4gVNrqrY4W9u 7zxYp5Gmobiavo+ucIdG0FCvlEcWoHb5ZDb3exYQp8G7tWEFUuQN7NtS436pve61JGHS SPWwSWAGXuy3MXJjwMd2S3shEJWMnEOaNdGOYx+oo9b2UjpzMY5Bk/zumSn7sWVua8CW dP2w== X-Received: by 10.181.13.36 with SMTP id ev4mr4081676wid.65.1438162428824; Wed, 29 Jul 2015 02:33:48 -0700 (PDT) MIME-Version: 1.0 Received: by 10.28.6.143 with HTTP; Wed, 29 Jul 2015 02:33:29 -0700 (PDT) In-Reply-To: <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca> References: <684628776.2772174.1435793776748.JavaMail.zimbra@uoguelph.ca> <184170291.10949389.1437161519387.JavaMail.zimbra@uoguelph.ca> <55B12EB7.6030607@physics.umn.edu> <1935759160.2320694.1437688383362.JavaMail.zimbra@uoguelph.ca> <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca> <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca> From: Ahmed Kamal Date: Wed, 29 Jul 2015 11:33:29 +0200 Message-ID: Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!) To: Rick Macklem Cc: Graham Allan , Ahmed Kamal via freebsd-fs Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 09:33:51 -0000 hmm, if I understand you correctly, this time_seconds value is the number of seconds till the box booted ? 
If so, I guess this is not really the cause of what we're seeing as the box is only up for 8 days bsd# uptime 11:28AM up 8 days, 6:20, 6 users, load averages: 0.94, 0.91, 0.84 The NFS client box's uptime is linux# uptime 11:31:39 up 8 days, 5:51, 11 users, load average: 87.74, 87.43, 87.35 and yes the huge load is most likely due to this NFS bug On Wed, Jul 29, 2015 at 1:39 AM, Rick Macklem wrote: > Ahmed Kamal wrote: > > Hi again Rick, > > > > Seems that I'm still being unlucky with nfs :/ I caught one of the newly > > installed RHEL6 boxes having high CPU usage, and bombarding the BSD NFS > box > > with 10Mbps traffic .. I caught a tcpdump as you mentioned .. You can > > download it here: > > > > > https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2 > > > Ok, the packet trace suggests that the NFSv4 server is broken (it is > replying > with NFS4ERR_STALE_CLIENTID for a recently generated ClientID). > Now, I can't be sure, but the only explanation I can come up with is... > - For some arches (I only have i386, so I wouldn't have seen this during > testing), > time_t is 64bits (uint64_t). > --> If time_seconds somehow doesn't fit in the low order 32bits, then > the code > would be busted for these arches because nfsrvboottime is set to > time_seconds > when the server is started and then there are comparisons like: > if (nfsrvboottime != clientid.lval[0]) > return (NFSERR_STALECLIENTID); > /* where clientid.lval[0] is a uint32_t */ > Anyhow, if this is what is happening, the attached simple patch should fix > it. > (I don't know how time_seconds would exceed 4billion, but the clock code is > pretty convoluted, so I can't say if it can possibly happen?) > > rick > ps: Hmm, on i386 time_seconds ends up at 1438126486, so maybe it can exceed > 4*1024*1024*1024 - 1 on amd64? > > > I didn't restart the client yet .. so if you catch me in the next few > hours > > and want me to run any diagnostics, let me know. 
Thanks a lot all for > > helping > > > > On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem > wrote: > > > > > Ahmed Kamal wrote: > > > > Can you please let me know the ultimate packet trace command I'd > need to > > > > run in case of any nfs4 troubles .. I guess this should be > comprehensive > > > > even at the expense of a larger output size (which we can trim > later).. > > > > Thanks a lot for the help! > > > > > > > tcpdump -s 0 -w .pcap host > > > ( refers to a file name you choose and refers > to > > > the host name of a client generating traffic.) > > > --> But you won't be able to allow this to run for long during the > storm > > > or the > > > file will be huge. > > > > > > Then you look at .pcap in wireshark, which knows NFS. > > > > > > rick > > > > > > > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem > > > > wrote: > > > > > > > > > Graham Allan wrote: > > > > > > For our part, the user whose code triggered the pathological > > > behaviour > > > > > > on SL5 reran it on SL6 without incident - I still see lots of > > > > > > sequence-id errors in the logs, but nothing bad happened. > > > > > > > > > > > > I'd still like to ask them to rerun again on SL5 to see if the > > > "accept > > > > > > skipped seqid" patch had any effect, though I think we expect > not. > > > Maybe > > > > > > it would be nice if I could get set up to capture rolling > tcpdumps of > > > > > > the nfs traffic before they run that though... > > > > > > > > > > > > Graham > > > > > > > > > > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote: > > > > > > > Hi folks, > > > > > > > > > > > > > > I've upgraded a test client to rhel6 today, and I'll keep an > eye > > > on it > > > > > > > to see what happens. > > > > > > > > > > > > > > During the process, I made the (I guess mistake) of zfs send | > > > recv to > > > > > a > > > > > > > locally attached usb disk for backup purposes .. 
long story > short, > > > > > > > sharenfs property on the received filesystem was causing some > > > > > nfs/mountd > > > > > > > errors in logs .. I wasn't too happy with what I got .. I > > > destroyed the > > > > > > > backup datasets and the whole pool eventually .. and then > rebooted > > > the > > > > > > > whole nas box .. After reboot my logs are still flooded with > > > > > > > > > > > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session > > > > > > > Jul 21 05:13:07 nas last message repeated 7536 times > > > > > > > Jul 21 05:15:08 nas last message repeated 29664 times > > > > > > > > > > > > > > Not sure what that means .. or how it can be stopped .. Anyway, > > > will > > > > > > > keep you posted on progress. > > > > > > > > > > > Oh, I didn't see the part about "reboot" before. Unfortunately, it > > > sounds > > > > > like the > > > > > client isn't recovering after the session is lost. When the server > > > > > reboots, the > > > > > client(s) will get NFS4ERR_BAD_SESSION errors back because the > server > > > > > reboot has > > > > > deleted all sessions. The NFS4ERR_BAD_SESSION should trigger state > > > > > recovery on the client. > > > > > (It doesn't sound like the clients went into recovery, starting > with a > > > > > Create_session > > > > > operation, but without a packet trace, I can't be sure?) 
> > > > > > > > > > rick > > > > > > > > > > > > > > > > > -- > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu > > > > > > School of Physics and Astronomy - University of Minnesota > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > From owner-freebsd-fs@freebsd.org Wed Jul 29 11:35:28 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A12919AECB7 for ; Wed, 29 Jul 2015 11:35:28 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 365FD819 for ; Wed, 29 Jul 2015 11:35:27 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2CqBAB6ubhV/61jaINbg2lpBoMduGqCA4V3AoIPEgEBAQEBAQGBCoQjAQEBAwEjBFIFCwIBCA4KAgINGQICVwIEE4gmCA24fJYFAQEBAQEBAQECAQEBAQEBARcEgSKKLIQbIQkONAeCaYFDBYcXhTGIKIR6gmKGLYQgk0kCJoQZIjEBAYEFQYEEAQEB X-IronPort-AV: E=Sophos;i="5.15,570,1432612800"; d="scan'208";a="228894874" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 29 Jul 2015 07:35:26 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id F2AC415F542; Wed, 29 Jul 2015 07:35:25 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id bvW2imZAWBJu; Wed, 29 Jul 2015 07:35:24 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id B1E1015F55D; Wed, 29 Jul 2015 07:35:24 -0400 (EDT) X-Virus-Scanned: 
amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id nZ2_QjLRKugM; Wed, 29 Jul 2015 07:35:24 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 7CEC215F542; Wed, 29 Jul 2015 07:35:24 -0400 (EDT) Date: Wed, 29 Jul 2015 07:35:24 -0400 (EDT) From: Rick Macklem To: Ahmed Kamal Cc: Graham Allan , Ahmed Kamal via freebsd-fs Message-ID: <1603742210.4824721.1438169724361.JavaMail.zimbra@uoguelph.ca> In-Reply-To: References: <684628776.2772174.1435793776748.JavaMail.zimbra@uoguelph.ca> <55B12EB7.6030607@physics.umn.edu> <1935759160.2320694.1437688383362.JavaMail.zimbra@uoguelph.ca> <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca> <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca> Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!) MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: Linux NFSv4 clients are getting (bad sequence-id error!) Thread-Index: KB+lVMviB2CyfbdENQ+YzLz+Rlbo0A== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 11:35:28 -0000 Ahmed Kamal wrote: > hmm, if I understand you correctly, this time_seconds value is the number > of seconds till the box booted ? No, it is the number of seconds when the box booted. Once, it was supposed to be the number of seconds since Jan. 1, 1970, but I don't know if that is still the case. (For my i386, it is about 1.4billion when it boots, so I'd guess they still use Jan. 1, 1970. 3600*24*365*45 = 1419120000. Yea, I didn't bother with leap years, etc.) 
Now, I don't know if the clock could somehow set it to a value > 4billion when the nfsd starts (it copies time_seconds to nfsrvboottime as it starts up), but the current clock code is pretty convoluted stuff, so?? rick ps: From the NFSv4 server's point of view, it only needs a number that is unique and changes every time the server reboots. As such, using the low order 32bits of it would be sufficient, even if it exceeds 4billion. However, the code incorrectly assumes it won't exceed 4*1024*1024*1024 - 1 unless you apply the patch. > If so, I guess this is not really the > cause of what we're seeing as the box is only up for 8 days > > bsd# uptime > 11:28AM up 8 days, 6:20, 6 users, load averages: 0.94, 0.91, 0.84 > > The NFS client box's uptime is > linux# uptime > 11:31:39 up 8 days, 5:51, 11 users, load average: 87.74, 87.43, 87.35 > > and yes the huge load is most likely due to this NFS bug > > On Wed, Jul 29, 2015 at 1:39 AM, Rick Macklem wrote: > > > Ahmed Kamal wrote: > > > Hi again Rick, > > > > > > Seems that I'm still being unlucky with nfs :/ I caught one of the newly > > > installed RHEL6 boxes having high CPU usage, and bombarding the BSD NFS > > box > > > with 10Mbps traffic .. I caught a tcpdump as you mentioned .. You can > > > download it here: > > > > > > > > https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2 > > > > > Ok, the packet trace suggests that the NFSv4 server is broken (it is > > replying > > with NFS4ERR_STALE_CLIENTID for a recently generated ClientID). > > Now, I can't be sure, but the only explanation I can come up with is... > > - For some arches (I only have i386, so I wouldn't have seen this during > > testing), > > time_t is 64bits (uint64_t). 
> > --> If time_seconds somehow doesn't fit in the low order 32bits, then > > the code > > would be busted for these arches because nfsrvboottime is set to > > time_seconds > > when the server is started and then there are comparisons like: > > if (nfsrvboottime != clientid.lval[0]) > > return (NFSERR_STALECLIENTID); > > /* where clientid.lval[0] is a uint32_t */ > > Anyhow, if this is what is happening, the attached simple patch should fix > > it. > > (I don't know how time_seconds would exceed 4billion, but the clock code is > > pretty convoluted, so I can't say if it can possibly happen?) > > > > rick > > ps: Hmm, on i386 time_seconds ends up at 1438126486, so maybe it can exceed > > 4*1024*1024*1024 - 1 on amd64? > > > > > I didn't restart the client yet .. so if you catch me in the next few > > hours > > > and want me to run any diagnostics, let me know. Thanks a lot all for > > > helping > > > > > > On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem > > wrote: > > > > > > > Ahmed Kamal wrote: > > > > > Can you please let me know the ultimate packet trace command I'd > > need to > > > > > run in case of any nfs4 troubles .. I guess this should be > > comprehensive > > > > > even at the expense of a larger output size (which we can trim > > later).. > > > > > Thanks a lot for the help! > > > > > > > > > tcpdump -s 0 -w .pcap host > > > > ( refers to a file name you choose and refers > > to > > > > the host name of a client generating traffic.) > > > > --> But you won't be able to allow this to run for long during the > > storm > > > > or the > > > > file will be huge. > > > > > > > > Then you look at .pcap in wireshark, which knows NFS. 
> > > > > > > > rick > > > > > > > > > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem > > > > > > wrote: > > > > > > > > > > > Graham Allan wrote: > > > > > > > For our part, the user whose code triggered the pathological > > > > behaviour > > > > > > > on SL5 reran it on SL6 without incident - I still see lots of > > > > > > > sequence-id errors in the logs, but nothing bad happened. > > > > > > > > > > > > > > I'd still like to ask them to rerun again on SL5 to see if the > > > > "accept > > > > > > > skipped seqid" patch had any effect, though I think we expect > > not. > > > > Maybe > > > > > > > it would be nice if I could get set up to capture rolling > > tcpdumps of > > > > > > > the nfs traffic before they run that though... > > > > > > > > > > > > > > Graham > > > > > > > > > > > > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote: > > > > > > > > Hi folks, > > > > > > > > > > > > > > > > I've upgraded a test client to rhel6 today, and I'll keep an > > eye > > > > on it > > > > > > > > to see what happens. > > > > > > > > > > > > > > > > During the process, I made the (I guess mistake) of zfs send | > > > > recv to > > > > > > a > > > > > > > > locally attached usb disk for backup purposes .. long story > > short, > > > > > > > > sharenfs property on the received filesystem was causing some > > > > > > nfs/mountd > > > > > > > > errors in logs .. I wasn't too happy with what I got .. I > > > > destroyed the > > > > > > > > backup datasets and the whole pool eventually .. and then > > rebooted > > > > the > > > > > > > > whole nas box .. After reboot my logs are still flooded with > > > > > > > > > > > > > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session > > > > > > > > Jul 21 05:13:07 nas last message repeated 7536 times > > > > > > > > Jul 21 05:15:08 nas last message repeated 29664 times > > > > > > > > > > > > > > > > Not sure what that means .. or how it can be stopped .. Anyway, > > > > will > > > > > > > > keep you posted on progress. 
> > > > > > > > > > > > > Oh, I didn't see the part about "reboot" before. Unfortunately, it > > > > sounds > > > > > > like the > > > > > > client isn't recovering after the session is lost. When the server > > > > > > reboots, the > > > > > > client(s) will get NFS4ERR_BAD_SESSION errors back because the > > server > > > > > > reboot has > > > > > > deleted all sessions. The NFS4ERR_BAD_SESSION should trigger state > > > > > > recovery on the client. > > > > > > (It doesn't sound like the clients went into recovery, starting > > with a > > > > > > Create_session > > > > > > operation, but without a packet trace, I can't be sure?) > > > > > > > > > > > > rick > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu > > > > > > > School of Physics and Astronomy - University of Minnesota > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From owner-freebsd-fs@freebsd.org Wed Jul 29 11:51:28 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id CCB2A9AB29A for ; Wed, 29 Jul 2015 11:51:28 +0000 (UTC) (envelope-from email.ahmedkamal@googlemail.com) Received: from mail-wi0-x22a.google.com (mail-wi0-x22a.google.com [IPv6:2a00:1450:400c:c05::22a]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5784FD46 for ; Wed, 29 Jul 2015 11:51:28 +0000 (UTC) (envelope-from email.ahmedkamal@googlemail.com) Received: by wibud3 with SMTP id ud3so217222357wib.1 for ; Wed, 29 Jul 2015 04:51:26 -0700 (PDT) DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=JMLZBAfVLiDXHqRVzOZfEfwjWo7gg8HuMj1kdB3c/2A=; b=H/pZToaIT8A2c2nyHh4HJHln5bWSMGAzYyRcb7tSUwEyJSNS7EvAQAB1GeoWEgvOYS KEdzDHbwN5fNnYb5gWahxn+ig9jzp2gv6AjJZcpgClgpg1/iZOyVALiSZTEbjMNuosUP s5Qnsl5kPkyO3/0srEDNaR6y21yhB+RbQcvsFwnsOs1elj3ecLf5uXLxvmHErv9MmuV4 9Jsgu8h5Jse8UUdQJzs8WkloJNZOkAdMKU44PGSuNfuuKL+trmSi3kq7C0qmii9aN2u6 7e94HzHUieHGctMAZU2zU+ZmbtVs8wTh69tV5HT05IPIzG2OxleBCwf3u1ie7KPtqUPW aGrQ== X-Received: by 10.180.21.244 with SMTP id y20mr16751363wie.65.1438170686789; Wed, 29 Jul 2015 04:51:26 -0700 (PDT) MIME-Version: 1.0 Received: by 10.28.6.143 with HTTP; Wed, 29 Jul 2015 04:51:07 -0700 (PDT) In-Reply-To: <1603742210.4824721.1438169724361.JavaMail.zimbra@uoguelph.ca> References: <684628776.2772174.1435793776748.JavaMail.zimbra@uoguelph.ca> <55B12EB7.6030607@physics.umn.edu> <1935759160.2320694.1437688383362.JavaMail.zimbra@uoguelph.ca> <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca> <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca> <1603742210.4824721.1438169724361.JavaMail.zimbra@uoguelph.ca> From: Ahmed Kamal Date: Wed, 29 Jul 2015 13:51:07 +0200 Message-ID: Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!) To: Rick Macklem Cc: Graham Allan , Ahmed Kamal via freebsd-fs Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 11:51:29 -0000 hmm Thanks Rick .. You mentioned the error appears when nfsrvboottime != clientid.lval[0] .. I understand nfsrvboottime is number of seconds since the epoch (1970) .. Can you please explain what clientid.lval[0] is, and (if it comes from the client?) 
what guarantees it should equal nfsrvboottime ? Apart from trying to understand the problem. Can you send me a small c program that runs the same code that computes nfsrvboottime and writes that to the terminal window. I would like to avoid testing a kernel patch on this system since it runs in production. And last time I rebooted the nfs server, I ended up having to reboot all clients (every single workstation) so that was painful .. So if we just want to know if the number if bigger than 4 billion or not, I think this small app can help us get this value right ? On Wed, Jul 29, 2015 at 1:35 PM, Rick Macklem wrote: > Ahmed Kamal wrote: > > hmm, if I understand you correctly, this time_seconds value is the number > > of seconds till the box booted ? > No, it is the number of seconds when the box booted. Once, it was supposed > to > be the number of seconds since Jan. 1, 1970, but I don't if that is still > the > case. (For my i386, it is about 1.4billion when it boots, so I'd guess they > still use Jan. 1, 1970. 3600*24*365*45 = 1419120000. Yea, I didn't bother > with > leap years, etc.) > > Now, I don't know if the clock could somehow set it to a value > 4billion > when > the nfsd starts (it copies time_seconds to nfsrvboottime as it starts up), > but the > current clock code is pretty convoluted stuff, so?? > > rick > ps: From the NFSv4 server's point of view, it only needs a number that is > unique and > changes every time the server reboots. As such, using the low order > 32bits of > it would be sufficient, even if it exceeds 4billion. However, the code > incorrectly > assumes it won't exceed 4*1024*1024*1024 - 1 unless you apply the > patch. 
> > > If so, I guess this is not really the > > cause of what we're seeing as the box is only up for 8 days > > > > bsd# uptime > > 11:28AM up 8 days, 6:20, 6 users, load averages: 0.94, 0.91, 0.84 > > > > The NFS client box's uptime is > > linux# uptime > > 11:31:39 up 8 days, 5:51, 11 users, load average: 87.74, 87.43, 87.35 > > > > and yes the huge load is most likely due to this NFS bug > > > > On Wed, Jul 29, 2015 at 1:39 AM, Rick Macklem > wrote: > > > > > Ahmed Kamal wrote: > > > > Hi again Rick, > > > > > > > > Seems that I'm still being unlucky with nfs :/ I caught one of the > newly > > > > installed RHEL6 boxes having high CPU usage, and bombarding the BSD > NFS > > > box > > > > with 10Mbps traffic .. I caught a tcpdump as you mentioned .. You can > > > > download it here: > > > > > > > > > > > > https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2 > > > > > > > Ok, the packet trace suggests that the NFSv4 server is broken (it is > > > replying > > > with NFS4ERR_STALE_CLIENTID for a recently generated ClientID). > > > Now, I can't be sure, but the only explanation I can come up with is... > > > - For some arches (I only have i386, so I wouldn't have seen this > during > > > testing), > > > time_t is 64bits (uint64_t). > > > --> If time_seconds somehow doesn't fit in the low order 32bits, then > > > the code > > > would be busted for these arches because nfsrvboottime is set to > > > time_seconds > > > when the server is started and then there are comparisons like: > > > if (nfsrvboottime != clientid.lval[0]) > > > return (NFSERR_STALECLIENTID); > > > /* where clientid.lval[0] is a uint32_t */ > > > Anyhow, if this is what is happening, the attached simple patch should > fix > > > it. > > > (I don't know how time_seconds would exceed 4billion, but the clock > code is > > > pretty convoluted, so I can't say if it can possibly happen?) 
> > > > > > rick > > > ps: Hmm, on i386 time_seconds ends up at 1438126486, so maybe it can > exceed > > > 4*1024*1024*1024 - 1 on amd64? > > > > > > > I didn't restart the client yet .. so if you catch me in the next few > > > hours > > > > and want me to run any diagnostics, let me know. Thanks a lot all for > > > > helping > > > > > > > > On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem > > > > wrote: > > > > > > > > > Ahmed Kamal wrote: > > > > > > Can you please let me know the ultimate packet trace command I'd > > > need to > > > > > > run in case of any nfs4 troubles .. I guess this should be > > > comprehensive > > > > > > even at the expense of a larger output size (which we can trim > > > later).. > > > > > > Thanks a lot for the help! > > > > > > > > > > > tcpdump -s 0 -w .pcap host > > > > > ( refers to a file name you choose and > refers > > > to > > > > > the host name of a client generating traffic.) > > > > > --> But you won't be able to allow this to run for long during the > > > storm > > > > > or the > > > > > file will be huge. > > > > > > > > > > Then you look at .pcap in wireshark, which knows NFS. > > > > > > > > > > rick > > > > > > > > > > > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem < > rmacklem@uoguelph.ca > > > > > > > > > wrote: > > > > > > > > > > > > > Graham Allan wrote: > > > > > > > > For our part, the user whose code triggered the pathological > > > > > behaviour > > > > > > > > on SL5 reran it on SL6 without incident - I still see lots of > > > > > > > > sequence-id errors in the logs, but nothing bad happened. > > > > > > > > > > > > > > > > I'd still like to ask them to rerun again on SL5 to see if > the > > > > > "accept > > > > > > > > skipped seqid" patch had any effect, though I think we expect > > > not. > > > > > Maybe > > > > > > > > it would be nice if I could get set up to capture rolling > > > tcpdumps of > > > > > > > > the nfs traffic before they run that though... 
> > > > > > > > > > > > > > > > Graham > > > > > > > > > > > > > > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote: > > > > > > > > > Hi folks, > > > > > > > > > > > > > > > > > > I've upgraded a test client to rhel6 today, and I'll keep > an > > > eye > > > > > on it > > > > > > > > > to see what happens. > > > > > > > > > > > > > > > > > > During the process, I made the (I guess mistake) of zfs > send | > > > > > recv to > > > > > > > a > > > > > > > > > locally attached usb disk for backup purposes .. long story > > > short, > > > > > > > > > sharenfs property on the received filesystem was causing > some > > > > > > > nfs/mountd > > > > > > > > > errors in logs .. I wasn't too happy with what I got .. I > > > > > destroyed the > > > > > > > > > backup datasets and the whole pool eventually .. and then > > > rebooted > > > > > the > > > > > > > > > whole nas box .. After reboot my logs are still flooded > with > > > > > > > > > > > > > > > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session > > > > > > > > > Jul 21 05:13:07 nas last message repeated 7536 times > > > > > > > > > Jul 21 05:15:08 nas last message repeated 29664 times > > > > > > > > > > > > > > > > > > Not sure what that means .. or how it can be stopped .. > Anyway, > > > > > will > > > > > > > > > keep you posted on progress. > > > > > > > > > > > > > > > Oh, I didn't see the part about "reboot" before. > Unfortunately, it > > > > > sounds > > > > > > > like the > > > > > > > client isn't recovering after the session is lost. When the > server > > > > > > > reboots, the > > > > > > > client(s) will get NFS4ERR_BAD_SESSION errors back because the > > > server > > > > > > > reboot has > > > > > > > deleted all sessions. The NFS4ERR_BAD_SESSION should trigger > state > > > > > > > recovery on the client. 
> > > > > > > (It doesn't sound like the clients went into recovery, starting > > > with a > > > > > > > Create_session > > > > > > > operation, but without a packet trace, I can't be sure?) > > > > > > > > > > > > > > rick > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu > > > > > > > > School of Physics and Astronomy - University of Minnesota > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From owner-freebsd-fs@freebsd.org Wed Jul 29 21:00:39 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B94609AE819 for ; Wed, 29 Jul 2015 21:00:39 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 4ABCACC5 for ; Wed, 29 Jul 2015 21:00:38 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2AeBQCePblV/61jaINcg25pBoMdunWFdwKCFhEBAQEBAQEBgQqEIwEBAQMBIwRSBQsCAQgOCgICDRkCAlcCBBOIJggNuQSVewEBAQEBAQEDAQEBAQEBGASBIooshBsJEQEGCQ40B4JpgUMFhxeFMYgohHqCYoYthCCTSQImgg0dgW8iMQEBgQUHFyOBBAEBAQ X-IronPort-AV: E=Sophos;i="5.15,572,1432612800"; d="scan'208";a="227358330" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 29 Jul 2015 17:00:36 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 749A215F542; Wed, 29 Jul 2015 17:00:36 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost 
(zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id sYv31StfkDHJ; Wed, 29 Jul 2015 17:00:35 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 6E2C315F55D; Wed, 29 Jul 2015 17:00:35 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 50utDo2GEAfB; Wed, 29 Jul 2015 17:00:35 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 4FCEB15F542; Wed, 29 Jul 2015 17:00:35 -0400 (EDT) Date: Wed, 29 Jul 2015 17:00:35 -0400 (EDT) From: Rick Macklem To: Ahmed Kamal Cc: Graham Allan , Ahmed Kamal via freebsd-fs Message-ID: <234027637.5255468.1438203635109.JavaMail.zimbra@uoguelph.ca> In-Reply-To: References: <684628776.2772174.1435793776748.JavaMail.zimbra@uoguelph.ca> <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca> <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca> <1603742210.4824721.1438169724361.JavaMail.zimbra@uoguelph.ca> Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!) MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: Linux NFSv4 clients are getting (bad sequence-id error!) Thread-Index: 3esZ3L8g9BFWftoiBLbBjmhyJXhgMg== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 21:00:39 -0000 Ahmed Kamal wrote: > hmm Thanks Rick .. > > You mentioned the error appears when nfsrvboottime != clientid.lval[0] .. I > understand nfsrvboottime is number of seconds since the epoch (1970) .. 
Can > you please explain what clientid.lval[0] is, and (if it comes from the > client?) what guarantees it should equal nfsrvboottime ? > The low order 32bits of nfsrvboottime is assigned to clientid.lval[0] when a new clientid is generated and then sent to a client. The client simply puts the clientid in requests (the bits are opaque to the client). Since nfsrvboottime was assumed by the code to fit in 32bits and does not change (it is set from time_seconds once during startup), this check detects if the clientid that the client has was acquired from the server before the most recent reboot of the server. (Since the server didn't reboot between ExchangeID and Create_session in the packet trace, the value should be the same as what is in nfsrvboottime.) However, if nfsrvboottime is 64bits (which is what time_t is on some arches) and somehow has a non-zero high order 32bits, the value will be != because clientid.lval[0] is a uint32_t. (The patch just casts nfsrvboottime to (uint32_t) to make the comparison ignore the high order bits of nfsrvboottime.) > Apart from trying to understand the problem. Can you send me a small c > program that runs the same code that computes nfsrvboottime and writes that > to the terminal window. I would like to avoid testing a kernel patch on > this system since it runs in production. And last time I rebooted the nfs > server, I ended up having to reboot all clients (every single workstation) > so that was painful .. So if we just want to know if the number if bigger > than 4 billion or not, I think this small app can help us get this value > right ? > Since it is in the kernel and declared "static" I don't know of a way for a C app to manipulate it. (I just booted with a printf in the kernel for it, to see what it was set to.) I just spotted a place in the code that allocates clientids (nfsrv_setclient()) that looks broken when the entry exists on the last hash table element. I will email you a patch for this. 
To be honest, the only way this will get resolved is if you can test the patch(es). I understand that rebooting the server isn't attractive (and ideally is done only when all clients are dismounted from it), but this is all I can suggest. (Maybe create a kernel with the patches on the server and then reboot with the new kernel when it crashes or has to be rebooted for some other reason? This is no rush for me, so it depends on how problematic the issue is for you?) rick > On Wed, Jul 29, 2015 at 1:35 PM, Rick Macklem wrote: > > > Ahmed Kamal wrote: > > > hmm, if I understand you correctly, this time_seconds value is the number > > > of seconds till the box booted ? > > No, it is the number of seconds when the box booted. Once, it was supposed > > to > > be the number of seconds since Jan. 1, 1970, but I don't if that is still > > the > > case. (For my i386, it is about 1.4billion when it boots, so I'd guess they > > still use Jan. 1, 1970. 3600*24*365*45 = 1419120000. Yea, I didn't bother > > with > > leap years, etc.) > > > > Now, I don't know if the clock could somehow set it to a value > 4billion > > when > > the nfsd starts (it copies time_seconds to nfsrvboottime as it starts up), > > but the > > current clock code is pretty convoluted stuff, so?? > > > > rick > > ps: From the NFSv4 server's point of view, it only needs a number that is > > unique and > > changes every time the server reboots. As such, using the low order > > 32bits of > > it would be sufficient, even if it exceeds 4billion. However, the code > > incorrectly > > assumes it won't exceed 4*1024*1024*1024 - 1 unless you apply the > > patch. 
> > > > > If so, I guess this is not really the > > > cause of what we're seeing as the box is only up for 8 days > > > > > > bsd# uptime > > > 11:28AM up 8 days, 6:20, 6 users, load averages: 0.94, 0.91, 0.84 > > > > > > The NFS client box's uptime is > > > linux# uptime > > > 11:31:39 up 8 days, 5:51, 11 users, load average: 87.74, 87.43, 87.35 > > > > > > and yes the huge load is most likely due to this NFS bug > > > > > > On Wed, Jul 29, 2015 at 1:39 AM, Rick Macklem > > wrote: > > > > > > > Ahmed Kamal wrote: > > > > > Hi again Rick, > > > > > > > > > > Seems that I'm still being unlucky with nfs :/ I caught one of the > > newly > > > > > installed RHEL6 boxes having high CPU usage, and bombarding the BSD > > NFS > > > > box > > > > > with 10Mbps traffic .. I caught a tcpdump as you mentioned .. You can > > > > > download it here: > > > > > > > > > > > > > > > > https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2 > > > > > > > > > Ok, the packet trace suggests that the NFSv4 server is broken (it is > > > > replying > > > > with NFS4ERR_STALE_CLIENTID for a recently generated ClientID). > > > > Now, I can't be sure, but the only explanation I can come up with is... > > > > - For some arches (I only have i386, so I wouldn't have seen this > > during > > > > testing), > > > > time_t is 64bits (uint64_t). > > > > --> If time_seconds somehow doesn't fit in the low order 32bits, then > > > > the code > > > > would be busted for these arches because nfsrvboottime is set to > > > > time_seconds > > > > when the server is started and then there are comparisons like: > > > > if (nfsrvboottime != clientid.lval[0]) > > > > return (NFSERR_STALECLIENTID); > > > > /* where clientid.lval[0] is a uint32_t */ > > > > Anyhow, if this is what is happening, the attached simple patch should > > fix > > > > it. 
> > > > (I don't know how time_seconds would exceed 4billion, but the clock > > code is > > > > pretty convoluted, so I can't say if it can possibly happen?) > > > > > > > > rick > > > > ps: Hmm, on i386 time_seconds ends up at 1438126486, so maybe it can > > exceed > > > > 4*1024*1024*1024 - 1 on amd64? > > > > > > > > > I didn't restart the client yet .. so if you catch me in the next few > > > > hours > > > > > and want me to run any diagnostics, let me know. Thanks a lot all for > > > > > helping > > > > > > > > > > On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem > > > > > > wrote: > > > > > > > > > > > Ahmed Kamal wrote: > > > > > > > Can you please let me know the ultimate packet trace command I'd > > > > need to > > > > > > > run in case of any nfs4 troubles .. I guess this should be > > > > comprehensive > > > > > > > even at the expense of a larger output size (which we can trim > > > > later).. > > > > > > > Thanks a lot for the help! > > > > > > > > > > > > > tcpdump -s 0 -w .pcap host > > > > > > ( refers to a file name you choose and > > refers > > > > to > > > > > > the host name of a client generating traffic.) > > > > > > --> But you won't be able to allow this to run for long during the > > > > storm > > > > > > or the > > > > > > file will be huge. > > > > > > > > > > > > Then you look at .pcap in wireshark, which knows NFS. > > > > > > > > > > > > rick > > > > > > > > > > > > > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem < > > rmacklem@uoguelph.ca > > > > > > > > > > > wrote: > > > > > > > > > > > > > > > Graham Allan wrote: > > > > > > > > > For our part, the user whose code triggered the pathological > > > > > > behaviour > > > > > > > > > on SL5 reran it on SL6 without incident - I still see lots of > > > > > > > > > sequence-id errors in the logs, but nothing bad happened. 
> > > > > > > > > > > > > > > > > > I'd still like to ask them to rerun again on SL5 to see if > > the > > > > > > "accept > > > > > > > > > skipped seqid" patch had any effect, though I think we expect > > > > not. > > > > > > Maybe > > > > > > > > > it would be nice if I could get set up to capture rolling > > > > tcpdumps of > > > > > > > > > the nfs traffic before they run that though... > > > > > > > > > > > > > > > > > > Graham > > > > > > > > > > > > > > > > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote: > > > > > > > > > > Hi folks, > > > > > > > > > > > > > > > > > > > > I've upgraded a test client to rhel6 today, and I'll keep > > an > > > > eye > > > > > > on it > > > > > > > > > > to see what happens. > > > > > > > > > > > > > > > > > > > > During the process, I made the (I guess mistake) of zfs > > send | > > > > > > recv to > > > > > > > > a > > > > > > > > > > locally attached usb disk for backup purposes .. long story > > > > short, > > > > > > > > > > sharenfs property on the received filesystem was causing > > some > > > > > > > > nfs/mountd > > > > > > > > > > errors in logs .. I wasn't too happy with what I got .. I > > > > > > destroyed the > > > > > > > > > > backup datasets and the whole pool eventually .. and then > > > > rebooted > > > > > > the > > > > > > > > > > whole nas box .. After reboot my logs are still flooded > > with > > > > > > > > > > > > > > > > > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session > > > > > > > > > > Jul 21 05:13:07 nas last message repeated 7536 times > > > > > > > > > > Jul 21 05:15:08 nas last message repeated 29664 times > > > > > > > > > > > > > > > > > > > > Not sure what that means .. or how it can be stopped .. > > Anyway, > > > > > > will > > > > > > > > > > keep you posted on progress. > > > > > > > > > > > > > > > > > Oh, I didn't see the part about "reboot" before. 
> > Unfortunately, it > > > > > > sounds > > > > > > > > like the > > > > > > > > client isn't recovering after the session is lost. When the > > server > > > > > > > > reboots, the > > > > > > > > client(s) will get NFS4ERR_BAD_SESSION errors back because the > > > > server > > > > > > > > reboot has > > > > > > > > deleted all sessions. The NFS4ERR_BAD_SESSION should trigger > > state > > > > > > > > recovery on the client. > > > > > > > > (It doesn't sound like the clients went into recovery, starting > > > > with a > > > > > > > > Create_session > > > > > > > > operation, but without a packet trace, I can't be sure?) > > > > > > > > > > > > > > > > rick > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu > > > > > > > > > School of Physics and Astronomy - University of Minnesota > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From owner-freebsd-fs@freebsd.org Wed Jul 29 21:12:02 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4E1C09AEB1E for ; Wed, 29 Jul 2015 21:12:02 +0000 (UTC) (envelope-from javocado@gmail.com) Received: from mail-lb0-x230.google.com (mail-lb0-x230.google.com [IPv6:2a00:1450:4010:c04::230]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CDB7213A5 for ; Wed, 29 Jul 2015 21:12:01 +0000 (UTC) (envelope-from javocado@gmail.com) Received: by lbbst4 with SMTP id st4so14820866lbb.1 for ; Wed, 
29 Jul 2015 14:11:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=RJ48jI/24zAwZST9h6k7yWneTkRfFeGzWe+S9WGruE8=; b=MJylOO4orIesyRrfbfWbD82rdsJyIUDmKCDZng0GjlOAAnzi3Pu3Nd7aAx52j37Mic Tjx4QLugJDVM9hxSki/b6mwhQOq4I6pCMTmEZEtra05sYHagjIlngf6xySOpQauGZp7v D6zIyyjM2j3Re7OxbeO0SHokLpaI9ef06COG8V0ORWL9o5giFy9W2clIMoCFwMUKBtAH w6AFs+H6QjoXukpA9knZBuhdV9JSP+VTLjsqubcPWXThW+0x5Z2CiJY1cUPzH3IBhz8n GRKN8+Aa/vrkUX6aVjDW80hCqrFlUG715WnFWufjdBND6TN8u4ir9MS12tWznsMLc8d+ PcZA== MIME-Version: 1.0 X-Received: by 10.112.137.164 with SMTP id qj4mr40724648lbb.105.1438204319587; Wed, 29 Jul 2015 14:11:59 -0700 (PDT) Received: by 10.114.96.8 with HTTP; Wed, 29 Jul 2015 14:11:59 -0700 (PDT) Date: Wed, 29 Jul 2015 14:11:59 -0700 Message-ID: Subject: pw operations slow under zfs load From: javocado To: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 21:12:02 -0000 Hi, We have a pretty busy ZFS pool running on an 8.3 AMD system. 
We are noticing that when the pool is busy pw-related operations seem to take a long time to complete: # time pw unlock 1000 0.007u 0.036s 0:39.72 0.0% 45+1953k 0+113io 0pf+0w # time pw lock 1000 0.032u 0.022s 1:09.63 0.0% 24+1132k 0+114io 0pf+0w Wile the command is running, we note that the process is locked in the D state: root 85051 0.0 0.0 5832 960 0 D+ 1:53PM 0:00.02 /usr/sbin/pwd_mkdb -u 1000 /etc/master.passwd We also note that there is next to 0 disk activity on the boot volume: # gstat -f ad dT: 1.005s w: 1.000s filter: ad L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name 0 0 0 0 0.0 0 0 0.0 0.0| ad6 0 0 0 0 0.0 0 0 0.0 0.0| ad8 0 0 0 0 0.0 0 0 0.0 0.0| ad10 0 0 0 0 0.0 0 0 0.0 0.0| ad12 And plenty of free mem: Mem: 400M Active, 3391M Inact, 128G Wired, 1935M Cache, 14G Buf, 6055M Free So, what's going on here? How does a busy pool with it's own set of drives (which operate off an HBA) affect the speed of operations involving the boot volume (an SSD connected to the mobo)? From owner-freebsd-fs@freebsd.org Wed Jul 29 21:13:51 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 35CF99AEB61 for ; Wed, 29 Jul 2015 21:13:51 +0000 (UTC) (envelope-from javocado@gmail.com) Received: from mail-la0-x231.google.com (mail-la0-x231.google.com [IPv6:2a00:1450:4010:c03::231]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id AFB70147D for ; Wed, 29 Jul 2015 21:13:50 +0000 (UTC) (envelope-from javocado@gmail.com) Received: by lagw2 with SMTP id w2so13890873lag.3 for ; Wed, 29 Jul 2015 14:13:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; 
bh=j4hO9STThkXuLOvWJ9WMAi2Cux6SicywGcdyJGTbebA=; b=M4lYuI4GZmJiTHbh8kmteHTKbzSmHNA9K7z/boKwr/CPqSvpV/gvu7IQVqtjfRsrPD 8d4S5sGk4FfML3aGkh/mDEPT1rMqb3oLaGMJpzXd1JJPNl6/jYyGWmSHewLD50rV0XyS cPo9N2o80VkgfaAf7qz/hPKg4M4sg9yQgz26e1vz4Ees3Crp186QOq1Ga5c6fyXtCaPQ G975zDiuTdM3jMv22Hs3vaUilk+HmtZf+8WJqDr7hK5+4mQOhzfVZioHzFP02b5cDoJp WapELGh2IN5FUiNr0tilngH2XobGI62Fj9Y7ACk7Vi8YaI7w6VltWQOtTAILwXGkvZYr obhQ== MIME-Version: 1.0 X-Received: by 10.112.219.70 with SMTP id pm6mr40044213lbc.41.1438204428768; Wed, 29 Jul 2015 14:13:48 -0700 (PDT) Received: by 10.114.96.8 with HTTP; Wed, 29 Jul 2015 14:13:48 -0700 (PDT) In-Reply-To: References: Date: Wed, 29 Jul 2015 14:13:48 -0700 Message-ID: Subject: Re: pw operations slow under zfs load From: javocado To: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 21:13:51 -0000 Sorry, the boot disk (SSD) is plain old ufs. On Wed, Jul 29, 2015 at 2:11 PM, javocado wrote: > Hi, > > We have a pretty busy ZFS pool running on an 8.3 AMD system. 
We are > noticing that when the pool is busy pw-related operations seem to take a > long time to complete: > > # time pw unlock 1000 > 0.007u 0.036s 0:39.72 0.0% 45+1953k 0+113io 0pf+0w > > # time pw lock 1000 > 0.032u 0.022s 1:09.63 0.0% 24+1132k 0+114io 0pf+0w > > While the command is running, we note that the process is locked in the D > state: > > root 85051 0.0 0.0 5832 960 0 D+ 1:53PM 0:00.02 > /usr/sbin/pwd_mkdb -u 1000 /etc/master.passwd > > We also note that there is next to 0 disk activity on the boot volume: > > # gstat -f ad > > dT: 1.005s w: 1.000s filter: ad > L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name > 0 0 0 0 0.0 0 0 0.0 0.0| ad6 > 0 0 0 0 0.0 0 0 0.0 0.0| ad8 > 0 0 0 0 0.0 0 0 0.0 0.0| ad10 > 0 0 0 0 0.0 0 0 0.0 0.0| ad12 > > And plenty of free mem: > > Mem: 400M Active, 3391M Inact, 128G Wired, 1935M Cache, 14G Buf, 6055M Free > > So, what's going on here? How does a busy pool with its own set of drives > (which operate off an HBA) affect the speed of operations involving the > boot volume (an SSD connected to the mobo)?
> > From owner-freebsd-fs@freebsd.org Wed Jul 29 21:37:24 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5D1FE9AEFD5 for ; Wed, 29 Jul 2015 21:37:24 +0000 (UTC) (envelope-from karl@denninger.net) Received: from fs.denninger.net (wsip-70-169-168-7.pn.at.cox.net [70.169.168.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "NewFS.denninger.net", Issuer "NewFS.denninger.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 238DF766 for ; Wed, 29 Jul 2015 21:37:23 +0000 (UTC) (envelope-from karl@denninger.net) Received: from [192.168.1.40] (localhost [127.0.0.1]) by fs.denninger.net (8.15.2/8.14.8) with ESMTP id t6TLbGnP037779 for ; Wed, 29 Jul 2015 16:37:16 -0500 (CDT) (envelope-from karl@denninger.net) Received: from [192.168.1.40] [192.168.1.40] (Via SSLv3 AES128-SHA) ; by Spamblock-sys (LOCAL/AUTH) Wed Jul 29 16:37:16 2015 Message-ID: <55B9477C.2050302@denninger.net> Date: Wed, 29 Jul 2015 16:37:00 -0500 From: Karl Denninger User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: pw operations slow under zfs load References: In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms040103040106040808090908" X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 21:37:24 -0000 This is a cryptographically signed message in MIME format. 
--------------ms040103040106040808090908 Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable On 7/29/2015 16:13, javocado wrote: > Sorry, the boot disk (SSD) is plain old ufs. > > On Wed, Jul 29, 2015 at 2:11 PM, javocado wrote: > >> Hi, >> >> We have a pretty busy ZFS pool running on an 8.3 AMD system. We are >> noticing that when the pool is busy pw-related operations seem to take a >> long time to complete: >> >> # time pw unlock 1000 >> 0.007u 0.036s 0:39.72 0.0% 45+1953k 0+113io 0pf+0w >> >> # time pw lock 1000 >> 0.032u 0.022s 1:09.63 0.0% 24+1132k 0+114io 0pf+0w >> >> While the command is running, we note that the process is locked in the D >> state: >> >> root 85051 0.0 0.0 5832 960 0 D+ 1:53PM 0:00.02 >> /usr/sbin/pwd_mkdb -u 1000 /etc/master.passwd >> >> We also note that there is next to 0 disk activity on the boot volume: >> >> # gstat -f ad >> >> dT: 1.005s w: 1.000s filter: ad >> L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name >> 0 0 0 0 0.0 0 0 0.0 0.0| ad6 >> 0 0 0 0 0.0 0 0 0.0 0.0| ad8 >> 0 0 0 0 0.0 0 0 0.0 0.0| ad10 >> 0 0 0 0 0.0 0 0 0.0 0.0| ad12 >> >> And plenty of free mem: >> >> Mem: 400M Active, 3391M Inact, 128G Wired, 1935M Cache, 14G Buf, 6055M Free >> >> So, what's going on here? How does a busy pool with its own set of drives >> (which operate off an HBA) affect the speed of operations involving the >> boot volume (an SSD connected to the mobo)? >> >> >> A very common manifestation of the way ZFS and the VM system interact, unfortunately. It's better in 9.x and 10.x, and the patch I have against them makes it even better, but it still isn't completely resolved.
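When a process is wedged in the D state like pwd_mkdb above, the kernel-side wait channel usually names the contended resource. A hypothetical FreeBSD session (the PID is the one from the ps output above; procstat and the mwchan keyword are standard FreeBSD tools, shown here only as a sketch of the diagnosis):

```sh
# Show the wait channel (mwchan) of the stuck process; under ZFS/VM
# pressure this is often something like "kmem arena" or an ARC lock.
ps -O mwchan -p 85051

# Dump the kernel stack to see exactly where the thread is sleeping.
procstat -kk 85051
```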
-- Karl Denninger karl@denninger.net /The Market Ticker/ /[S/MIME encrypted email preferred]/ --------------ms040103040106040808090908 Content-Type: application/pkcs7-signature; name="smime.p7s" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7s" Content-Description: S/MIME Cryptographic Signature --------------ms040103040106040808090908-- From owner-freebsd-fs@freebsd.org Wed Jul 29 22:47:47 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5064B9AEE16 for ; Wed, 29 Jul 2015 22:47:47 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id C9950AA5 for ; Wed, 29 Jul 2015 22:47:46 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) X-IronPort-Anti-Spam-Filtered: true X-IronPort-AV: E=Sophos;i="5.15,573,1432612800"; d="scan'208";a="229267073" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 29 Jul 2015 18:47:44 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 58A2815F542; Wed, 29 Jul 2015 18:47:44 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id lHTtAlGUbxbh; Wed, 29 Jul 2015 18:47:42 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id D1F8315F561; Wed, 29 Jul 2015 18:47:42 -0400
(EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 6LVdxa886sqq; Wed, 29 Jul 2015 18:47:42 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id B2D5015F542; Wed, 29 Jul 2015 18:47:42 -0400 (EDT) Date: Wed, 29 Jul 2015 18:47:42 -0400 (EDT) From: Rick Macklem To: Ahmed Kamal Cc: Graham Allan , Ahmed Kamal via freebsd-fs Message-ID: <399737523.5288680.1438210062715.JavaMail.zimbra@uoguelph.ca> In-Reply-To: References: <684628776.2772174.1435793776748.JavaMail.zimbra@uoguelph.ca> <576106597.2326662.1437688749018.JavaMail.zimbra@uoguelph.ca> <1089316279.4709692.1438126760802.JavaMail.zimbra@uoguelph.ca> <1603742210.4824721.1438169724361.JavaMail.zimbra@uoguelph.ca> Subject: Re: Linux NFSv4 clients are getting (bad sequence-id error!) MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_5288678_109992561.1438210062713" X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: Linux NFSv4 clients are getting (bad sequence-id error!) Thread-Index: HVutiPFL6x9b3DPqgnqlg/Xjk8t7mw== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jul 2015 22:47:47 -0000 ------=_Part_5288678_109992561.1438210062713 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Ahmed Kamal wrote: > hmm Thanks Rick .. > > You mentioned the error appears when nfsrvboottime != clientid.lval[0] .. I > understand nfsrvboottime is number of seconds since the epoch (1970) .. Can > you please explain what clientid.lval[0] is, and (if it comes from the > client?) what guarantees it should equal nfsrvboottime ? 
> > Apart from trying to understand the problem, can you send me a small C > program that runs the same code that computes nfsrvboottime and writes that > to the terminal window? I would like to avoid testing a kernel patch on > this system since it runs in production. And last time I rebooted the nfs > server, I ended up having to reboot all clients (every single workstation) > so that was painful .. So if we just want to know if the number is bigger > than 4 billion or not, I think this small app can help us get this value > right ? > Ok, I took a closer look at the packet trace (thanks for creating it) and I saw that the clientid value being returned by the server was bogus. (The field that shouldn't change except when the server is rebooted was changing.) Given the above, I think I did find the bug and the attached patch should fix it. (This is NFSv4.1 specific and has nothing to do with the NFSv4.0 seqid issue.) The patch actually fixes 3 things, although I don't think the other 2 would affect you in practice: 1 - When a confirmed clientid already exists, nfsrv_setclient() wasn't setting the clientidp argument, so the reply included garbage off the stack. I think this is what caused your problem. 2 - If the high order 32bits of the nfsrvboottime is non-zero, the comparisons with clientid.lval[0] would fail. I don't think this should actually happen until the year 2106, but it is fixed in the patch. 3 - It was possible to leave an unconfirmed duplicate clientid structure in the list when the match in nfsrv_setclient() found it in the last hash list. I do not think this would have caused any problem, since the new one would be at the head of the list. The old one would eventually have been scavenged and cleared out, although it would have occupied storage until then. The attached patch fixes all of these and I think will fix your problem. Maybe you can create a patched kernel and then find a time to reboot the server someday?
If any other reader is using the NFSv4 server, please test this patch if possible. Thanks for creating the packet trace and sorry about this bug causing you grief, rick ps: I'd guess you are one of the first serious users of the NFSv4.1 server, but hopefully it will behave ok for you with this patch. > On Wed, Jul 29, 2015 at 1:35 PM, Rick Macklem wrote: > > Ahmed Kamal wrote: > > > hmm, if I understand you correctly, this time_seconds value is the number > > > of seconds till the box booted ? > > No, it is the number of seconds when the box booted. Once, it was supposed > > to > > be the number of seconds since Jan. 1, 1970, but I don't know if that is still > > the > > case. (For my i386, it is about 1.4billion when it boots, so I'd guess they > > still use Jan. 1, 1970. 3600*24*365*45 = 1419120000. Yea, I didn't bother > > with > > leap years, etc.) > > > > Now, I don't know if the clock could somehow set it to a value > 4billion > > when > > the nfsd starts (it copies time_seconds to nfsrvboottime as it starts up), > > but the > > current clock code is pretty convoluted stuff, so?? > > > > rick > > ps: From the NFSv4 server's point of view, it only needs a number that is > > unique and > > changes every time the server reboots. As such, using the low order > > 32bits of > > it would be sufficient, even if it exceeds 4billion. However, the code > > incorrectly > > assumes it won't exceed 4*1024*1024*1024 - 1 unless you apply the > > patch.
> > > > > If so, I guess this is not really the > > > cause of what we're seeing as the box is only up for 8 days > > > > > > bsd# uptime > > > 11:28AM up 8 days, 6:20, 6 users, load averages: 0.94, 0.91, 0.84 > > > > > > The NFS client box's uptime is > > > linux# uptime > > > 11:31:39 up 8 days, 5:51, 11 users, load average: 87.74, 87.43, 87.35 > > > > > > and yes the huge load is most likely due to this NFS bug > > > > > > On Wed, Jul 29, 2015 at 1:39 AM, Rick Macklem > > wrote: > > > > > > > Ahmed Kamal wrote: > > > > > Hi again Rick, > > > > > > > > > > Seems that I'm still being unlucky with nfs :/ I caught one of the > > newly > > > > > installed RHEL6 boxes having high CPU usage, and bombarding the BSD > > NFS > > > > box > > > > > with 10Mbps traffic .. I caught a tcpdump as you mentioned .. You can > > > > > download it here: > > > > > > > > > > > > > > > > https://dl.dropboxusercontent.com/u/51939288/nfs41-high-client-cpu.pcap.bz2 > > > > > > > > > Ok, the packet trace suggests that the NFSv4 server is broken (it is > > > > replying > > > > with NFS4ERR_STALE_CLIENTID for a recently generated ClientID). > > > > Now, I can't be sure, but the only explanation I can come up with is... > > > > - For some arches (I only have i386, so I wouldn't have seen this > > during > > > > testing), > > > > time_t is 64bits (uint64_t). > > > > --> If time_seconds somehow doesn't fit in the low order 32bits, then > > > > the code > > > > would be busted for these arches because nfsrvboottime is set to > > > > time_seconds > > > > when the server is started and then there are comparisons like: > > > > if (nfsrvboottime != clientid.lval[0]) > > > > return (NFSERR_STALECLIENTID); > > > > /* where clientid.lval[0] is a uint32_t */ > > > > Anyhow, if this is what is happening, the attached simple patch should > > fix > > > > it. 
> > > > (I don't know how time_seconds would exceed 4billion, but the clock > > code is > > > > pretty convoluted, so I can't say if it can possibly happen?) > > > > > > > > rick > > > > ps: Hmm, on i386 time_seconds ends up at 1438126486, so maybe it can > > exceed > > > > 4*1024*1024*1024 - 1 on amd64? > > > > > > > > > I didn't restart the client yet .. so if you catch me in the next few > > > > hours > > > > > and want me to run any diagnostics, let me know. Thanks a lot all for > > > > > helping > > > > > > > > > > On Thu, Jul 23, 2015 at 11:59 PM, Rick Macklem > > > > > > wrote: > > > > > > > > > > > Ahmed Kamal wrote: > > > > > > > Can you please let me know the ultimate packet trace command I'd > > > > need to > > > > > > > run in case of any nfs4 troubles .. I guess this should be > > > > comprehensive > > > > > > > even at the expense of a larger output size (which we can trim > > > > later).. > > > > > > > Thanks a lot for the help! > > > > > > > > > > > > > tcpdump -s 0 -w .pcap host > > > > > > ( refers to a file name you choose and > > refers > > > > to > > > > > > the host name of a client generating traffic.) > > > > > > --> But you won't be able to allow this to run for long during the > > > > storm > > > > > > or the > > > > > > file will be huge. > > > > > > > > > > > > Then you look at .pcap in wireshark, which knows NFS. > > > > > > > > > > > > rick > > > > > > > > > > > > > On Thu, Jul 23, 2015 at 11:53 PM, Rick Macklem < > > rmacklem@uoguelph.ca > > > > > > > > > > > wrote: > > > > > > > > > > > > > > > Graham Allan wrote: > > > > > > > > > For our part, the user whose code triggered the pathological > > > > > > behaviour > > > > > > > > > on SL5 reran it on SL6 without incident - I still see lots of > > > > > > > > > sequence-id errors in the logs, but nothing bad happened. 
> > > > > > > > > > > > > > > > > > I'd still like to ask them to rerun again on SL5 to see if > > the > > > > > > "accept > > > > > > > > > skipped seqid" patch had any effect, though I think we expect > > > > not. > > > > > > Maybe > > > > > > > > > it would be nice if I could get set up to capture rolling > > > > tcpdumps of > > > > > > > > > the nfs traffic before they run that though... > > > > > > > > > > > > > > > > > > Graham > > > > > > > > > > > > > > > > > > On 7/20/2015 10:26 PM, Ahmed Kamal wrote: > > > > > > > > > > Hi folks, > > > > > > > > > > > > > > > > > > > > I've upgraded a test client to rhel6 today, and I'll keep > > an > > > > eye > > > > > > on it > > > > > > > > > > to see what happens. > > > > > > > > > > > > > > > > > > > > During the process, I made the (I guess mistake) of zfs > > send | > > > > > > recv to > > > > > > > > a > > > > > > > > > > locally attached usb disk for backup purposes .. long story > > > > short, > > > > > > > > > > sharenfs property on the received filesystem was causing > > some > > > > > > > > nfs/mountd > > > > > > > > > > errors in logs .. I wasn't too happy with what I got .. I > > > > > > destroyed the > > > > > > > > > > backup datasets and the whole pool eventually .. and then > > > > rebooted > > > > > > the > > > > > > > > > > whole nas box .. After reboot my logs are still flooded > > with > > > > > > > > > > > > > > > > > > > > Jul 21 05:12:36 nas kernel: nfsrv_cache_session: no session > > > > > > > > > > Jul 21 05:13:07 nas last message repeated 7536 times > > > > > > > > > > Jul 21 05:15:08 nas last message repeated 29664 times > > > > > > > > > > > > > > > > > > > > Not sure what that means .. or how it can be stopped .. > > Anyway, > > > > > > will > > > > > > > > > > keep you posted on progress. > > > > > > > > > > > > > > > > > Oh, I didn't see the part about "reboot" before. 
> > Unfortunately, it > > > > > > sounds > > > > > > > > like the > > > > > > > > client isn't recovering after the session is lost. When the > > server > > > > > > > > reboots, the > > > > > > > > client(s) will get NFS4ERR_BAD_SESSION errors back because the > > > > server > > > > > > > > reboot has > > > > > > > > deleted all sessions. The NFS4ERR_BAD_SESSION should trigger > > state > > > > > > > > recovery on the client. > > > > > > > > (It doesn't sound like the clients went into recovery, starting > > > > with a > > > > > > > > Create_session > > > > > > > > operation, but without a packet trace, I can't be sure?) > > > > > > > > > > > > > > > > rick > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > Graham Allan - gta@umn.edu - allan@physics.umn.edu > > > > > > > > > School of Physics and Astronomy - University of Minnesota > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ------=_Part_5288678_109992561.1438210062713 Content-Type: text/x-patch; name=nfsv41exch.patch Content-Disposition: attachment; filename=nfsv41exch.patch Content-Transfer-Encoding: base64 LS0tIGZzL25mc3NlcnZlci9uZnNfbmZzZHN0YXRlLmMuc2F2CTIwMTUtMDctMjggMTg6NTQ6MDYu NTYxNDU0MDAwIC0wNDAwCisrKyBmcy9uZnNzZXJ2ZXIvbmZzX25mc2RzdGF0ZS5jCTIwMTUtMDct MjkgMTg6MDc6NTMuMDAwMDAwMDAwIC0wNDAwCkBAIC0yMjAsNyArMjIwLDggQEAgbmZzcnZfc2V0 Y2xpZW50KHN0cnVjdCBuZnNydl9kZXNjcmlwdCAqbgogCQkJYnJlYWs7CiAJCX0KIAkgICAgfQot CSAgICBpKys7CisJICAgIGlmIChnb3RpdCA9PSAwKQorCQlpKys7CiAJfQogCWlmICghZ290aXQg fHwKIAkgICAgKGNscC0+bGNfZmxhZ3MgJiAoTENMX05FRURTQ09ORklSTSB8IExDTF9BRE1JTlJF Vk9LRUQpKSkgewpAQCAtNDAwLDkgKzQwMSwxMiBAQCBuZnNydl9zZXRjbGllbnQoc3RydWN0IG5m 
c3J2X2Rlc2NyaXB0ICpuCiAJfQogCiAJLyogRm9yIE5GU3Y0LjEsIG1hcmsgdGhhdCB3ZSBmb3Vu ZCBhIGNvbmZpcm1lZCBjbGllbnRpZC4gKi8KLQlpZiAoKG5kLT5uZF9mbGFnICYgTkRfTkZTVjQx KSAhPSAwKQorCWlmICgobmQtPm5kX2ZsYWcgJiBORF9ORlNWNDEpICE9IDApIHsKKwkJY2xpZW50 aWRwLT5sdmFsWzBdID0gY2xwLT5sY19jbGllbnRpZC5sdmFsWzBdOworCQljbGllbnRpZHAtPmx2 YWxbMV0gPSBjbHAtPmxjX2NsaWVudGlkLmx2YWxbMV07CisJCWNvbmZpcm1wLT5sdmFsWzBdID0g MDsJLyogSWdub3JlZCBieSBjbGllbnQgKi8KIAkJY29uZmlybXAtPmx2YWxbMV0gPSAxOwotCWVs c2UgeworCX0gZWxzZSB7CiAJCS8qCiAJCSAqIGlkIGFuZCB2ZXJpZmllciBtYXRjaCwgc28gdXBk YXRlIHRoZSBuZXQgYWRkcmVzcyBpbmZvCiAJCSAqIGFuZCBnZXQgcmlkIG9mIGFueSBleGlzdGlu ZyBjYWxsYmFjayBhdXRoZW50aWNhdGlvbgpAQCAtNDg3LDcgKzQ5MSw3IEBAIG5mc3J2X2dldGNs aWVudChuZnNxdWFkX3QgY2xpZW50aWQsIGludCAKIAlpZiAoY2xwcCkKIAkJKmNscHAgPSBOVUxM OwogCWlmICgobmQgPT0gTlVMTCB8fCAobmQtPm5kX2ZsYWcgJiBORF9ORlNWNDEpID09IDAgfHwK LQkgICAgb3BmbGFncyAhPSBDTE9QU19SRU5FVykgJiYgbmZzcnZib290dGltZSAhPSBjbGllbnRp ZC5sdmFsWzBdKSB7CisJICAgIG9wZmxhZ3MgIT0gQ0xPUFNfUkVORVcpICYmICh1aW50MzJfdClu ZnNydmJvb3R0aW1lICE9IGNsaWVudGlkLmx2YWxbMF0pIHsKIAkJZXJyb3IgPSBORlNFUlJfU1RB TEVDTElFTlRJRDsKIAkJZ290byBvdXQ7CiAJfQpAQCAtNjgzLDcgKzY4Nyw3IEBAIG5mc3J2X2Rl c3Ryb3ljbGllbnQobmZzcXVhZF90IGNsaWVudGlkLCAKIAlzdHJ1Y3QgbmZzY2xpZW50aGFzaGhl YWQgKmhwOwogCWludCBlcnJvciA9IDAsIGksIGlnb3Rsb2NrOwogCi0JaWYgKG5mc3J2Ym9vdHRp bWUgIT0gY2xpZW50aWQubHZhbFswXSkgeworCWlmICgodWludDMyX3QpbmZzcnZib290dGltZSAh PSBjbGllbnRpZC5sdmFsWzBdKSB7CiAJCWVycm9yID0gTkZTRVJSX1NUQUxFQ0xJRU5USUQ7CiAJ CWdvdG8gb3V0OwogCX0KQEAgLTM5OTYsMTEgKzQwMDAsMTEgQEAgbmZzcnZfY2hlY2tyZXN0YXJ0 KG5mc3F1YWRfdCBjbGllbnRpZCwgdQogCSAqLwogCWlmIChmbGFncyAmCiAJICAgIChORlNMQ0tf T1BFTiB8IE5GU0xDS19URVNUIHwgTkZTTENLX1JFTEVBU0UgfCBORlNMQ0tfREVMRUdQVVJHRSkp IHsKLQkJaWYgKGNsaWVudGlkLmx2YWxbMF0gIT0gbmZzcnZib290dGltZSkgeworCQlpZiAoY2xp ZW50aWQubHZhbFswXSAhPSAodWludDMyX3QpbmZzcnZib290dGltZSkgewogCQkJcmV0ID0gTkZT RVJSX1NUQUxFQ0xJRU5USUQ7CiAJCQlnb3RvIG91dDsKIAkJfQotCX0gZWxzZSBpZiAoc3RhdGVp 
ZHAtPm90aGVyWzBdICE9IG5mc3J2Ym9vdHRpbWUgJiYKKwl9IGVsc2UgaWYgKHN0YXRlaWRwLT5v dGhlclswXSAhPSAodWludDMyX3QpbmZzcnZib290dGltZSAmJgogCQlzcGVjaWFsaWQgPT0gMCkg ewogCQlyZXQgPSBORlNFUlJfU1RBTEVTVEFURUlEOwogCQlnb3RvIG91dDsK ------=_Part_5288678_109992561.1438210062713-- From owner-freebsd-fs@freebsd.org Thu Jul 30 11:30:24 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9AEFF9AE256 for ; Thu, 30 Jul 2015 11:30:24 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 62113BA4 for ; Thu, 30 Jul 2015 11:30:24 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 506DD2D87 for ; Thu, 30 Jul 2015 14:30:14 +0300 (MSK) Date: Thu, 30 Jul 2015 14:30:08 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <164833736.20150730143008@serebryakov.spb.ru> To: freebsd-fs@freebsd.org Subject: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 11:30:24 -0000 Hello Freebsd-fs, I'm migrating my NAS from geom_raid5 + UFS to ZFS raidz. My main storage is 5x2Tb HDDs. Additionaly, I have 2x3Tb HDDs attached to hold my data when I re-make my main storage. 
So, I have now two ZFS pools: ztemp mirror ada0 ada1 [both are 3Tb HDDs] zstor raidz ada3 ada4 ada5 ada6 ada7 [all of them are 2Tb] ztemp contains one filesystem with 2.1Tb of my data. ztemp was populated with my data from the old geom_raid5 + UFS installation via "rsync" and it was FAST (HDD-speed). zstor contains several empty file systems (one per user), like: zstor/home/lev zstor/home/sveta zstor/home/nsvn zstor/home/torrents zstor/home/storage Deduplication IS TURNED OFF. atime is turned off. Record size is set to 1M as I have a lot of big files (movies, RAW photos from DSLR, etc). Compression is turned off. When I try to copy all my data from the temporary HDDs (ztemp pool) to my new shiny RAID (zstor pool) with cd /ztemp/fs && rsync -avH lev sveta nsvn storage /usr/home/ rsync pauses for tens of minutes (!) after several hundred files. ^T and top shows state "[*kmem arena]". When I stop rsync with ^C and try to do "zfs list" it waits forever, in state "[*kmem arena]" again. This server is equipped with 6GiB of RAM. It looks like FreeBSD had a bug about a year ago which led to this behavior, but the mailing lists say it was fixed in r272221, 10 months ago.
-- Best regards, Lev mailto:lev@FreeBSD.org From owner-freebsd-fs@freebsd.org Thu Jul 30 11:49:27 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A7E769AE600 for ; Thu, 30 Jul 2015 11:49:27 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wi0-f180.google.com (mail-wi0-f180.google.com [209.85.212.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 47DB715CF for ; Thu, 30 Jul 2015 11:49:26 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by wicgb10 with SMTP id gb10so240264418wic.1 for ; Thu, 30 Jul 2015 04:49:19 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=2PmLpfiz+GyxBunUuDlXDWpm5qERBRfL5Z/BVBFjNBM=; b=h2X3ZN7VjAr6IZdFutF0P4RjlQzUU6fvQrMgtzc8+9ooPFVj5oP/qMF/NC5ejEiy8/ hdpIWx2gYdPTsK+lL7zwEUTbPKebYrUKreEPw/Xm7bjte+5rzz2BCOtzjpCWU2+FNjNq 9Hj/h9S7ymx85fumwDmkqLDGhGkRUpOUI40IK5GJ7ZxkA6IDZ+TjYKLhajC1qKNSK8hp h6is+MT5o+yktLHfXr4xOm6fHeW9mhksEDqGKXrU+anIXX2PbKxiw9FtTYNNkWqEjtzA FMwUVnLrAZoURn1iM4cNk6/8wRd09UeRY3/vjNIvYAwQD1aM3So1owFBwf/tPuMfslZK EufQ== X-Gm-Message-State: ALoCoQmzc6sF0z1I5yPs5FX7vG5nHVpJTPHeuMegSPIVg/XYJrKDZ1C3GJiXYRuKqKQ+GYYlOKt9 X-Received: by 10.180.97.7 with SMTP id dw7mr5664145wib.74.1438256959326; Thu, 30 Jul 2015 04:49:19 -0700 (PDT) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. 
[82.69.141.170]) by smtp.gmail.com with ESMTPSA id u7sm29492757wif.3.2015.07.30.04.49.18 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 30 Jul 2015 04:49:18 -0700 (PDT) Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] To: freebsd-fs@freebsd.org References: <164833736.20150730143008@serebryakov.spb.ru> From: Steven Hartland Message-ID: <55BA0F41.6070508@multiplay.co.uk> Date: Thu, 30 Jul 2015 12:49:21 +0100 User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <164833736.20150730143008@serebryakov.spb.ru> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 11:49:27 -0000 On 30/07/2015 12:30, Lev Serebryakov wrote: > Hello Freebsd-fs, > > > I'm migrating my NAS from geom_raid5 + UFS to ZFS raidz. My main storage > is 5x2Tb HDDs. Additionaly, I have 2x3Tb HDDs attached to hold my data when > I re-make my main storage. > > So, I have now two ZFS pools: > > ztemp mirror ada0 ada1 [both are 3Tb HDDS] > zstor raidz ada3 ada4 ada5 ada6 ada7 [all of them are 2Tb] > > ztemp contain one filesystem with 2.1Tb of my data. ztemp was populated > with my data from old geom_raid5 + UFS installation via "rsync" and it was > FAST (HDD-speed). > > zstor contains several empty file systems (one per user), like: > > zstor/home/lev > zstor/home/sveta > zstor/home/nsvn > zstor/home/torrents > zstor/home/storage > > Deduplication IS TURNED OFF. atime is turned off. Record size set to 1M as > I have a lot of big files (movies, RAW photo from DSLR, etc). Compression is > turned off. 
You don't need to do that, as record set size is a min, not a max; if you don't force it, large files will still be stored efficiently. > When I try to copy all my data from temporary HDDs (ztemp pool) to my new > shiny RAID (zstor pool) with > > cd /ztemp/fs && rsync -avH lev sveta nsvn storage /usr/home/ > > rsync pauses for tens of minutes (!) after several hundreds of files. ^T > and top shows state "[*kmem arena]". When I stop rsync with ^C and try to do > "zfs list" it waits forever, in state "[*kmem arena]" again. > > This server is equipped with 6GiB of RAM. > > It looks like FreeBSD contained a bug about a year ago which led to this behavior, > but the mailing lists say it was fixed in r272221, 10 months ago. When this happens, what is the state of memory on the machine? Top will give a good overview, while sysctl vm.stats.vm and vmstat -z will provide some detail. If you're seeing significant memory pressure, which could well be the case with a mixed ZFS/UFS system during this transfer (they use competing memory resource pools), then you could try limiting the ARC via vfs.zfs.arc_max. You could also see if the patch on https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 helps.
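[Editorial note: the ARC cap suggested above is typically applied as a boot-time tunable. A minimal sketch, assuming you want to cap the ARC at half of this machine's 6 GiB of RAM; the right value is workload-dependent.]

```
# /boot/loader.conf -- cap the ZFS ARC at 3 GiB (value in bytes)
# 3 * 1024 * 1024 * 1024 = 3221225472
vfs.zfs.arc_max="3221225472"
```

After a reboot, `sysctl vfs.zfs.arc_max` should report the new limit, and `top` should show the ARC settling at or below it.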
Regards Steve From owner-freebsd-fs@freebsd.org Thu Jul 30 12:18:48 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EEC999AD4A7 for ; Thu, 30 Jul 2015 12:18:48 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6432AAFF; Thu, 30 Jul 2015 12:18:48 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.15.2/8.15.2) with ESMTPS id t6UCIeKu095846 (version=TLSv1 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO); Thu, 30 Jul 2015 15:18:40 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.9.2 kib.kiev.ua t6UCIeKu095846 Received: (from kostik@localhost) by tom.home (8.15.2/8.15.2/Submit) id t6UCIeNZ095845; Thu, 30 Jul 2015 15:18:40 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Thu, 30 Jul 2015 15:18:40 +0300 From: Konstantin Belousov To: Lev Serebryakov Cc: freebsd-fs@freebsd.org Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] Message-ID: <20150730121840.GS2072@kib.kiev.ua> References: <164833736.20150730143008@serebryakov.spb.ru> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <164833736.20150730143008@serebryakov.spb.ru> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.1 X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on tom.home X-BeenThere: 
freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 12:18:49 -0000 On Thu, Jul 30, 2015 at 02:30:08PM +0300, Lev Serebryakov wrote: > Hello Freebsd-fs, > > > I'm migrating my NAS from geom_raid5 + UFS to ZFS raidz. My main storage > is 5x2Tb HDDs. Additionally, I have 2x3Tb HDDs attached to hold my data when > I re-make my main storage. > > So, I now have two ZFS pools: > > ztemp mirror ada0 ada1 [both are 3Tb HDDs] > zstor raidz ada3 ada4 ada5 ada6 ada7 [all of them are 2Tb] > > ztemp contains one filesystem with 2.1Tb of my data. ztemp was populated > with my data from the old geom_raid5 + UFS installation via "rsync" and it was > FAST (HDD-speed). > > zstor contains several empty file systems (one per user), like: > > zstor/home/lev > zstor/home/sveta > zstor/home/nsvn > zstor/home/torrents > zstor/home/storage > > Deduplication IS TURNED OFF. atime is turned off. Record size is set to 1M as > I have a lot of big files (movies, RAW photos from DSLR, etc). Compression is > turned off. > > When I try to copy all my data from temporary HDDs (ztemp pool) to my new > shiny RAID (zstor pool) with > > cd /ztemp/fs && rsync -avH lev sveta nsvn storage /usr/home/ > > rsync pauses for tens of minutes (!) after several hundreds of files. ^T > and top shows state "[*kmem arena]". When I stop rsync with ^C and try to do > "zfs list" it waits forever, in state "[*kmem arena]" again. Show the output of sysctl debug.vmem_check. > > This server is equipped with 6GiB of RAM. > > It looks like FreeBSD contained a bug about a year ago which led to this behavior, > but the mailing lists say it was fixed in r272221, 10 months ago.
> > -- > Best regards, > Lev mailto:lev@FreeBSD.org > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@freebsd.org Thu Jul 30 12:38:07 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C50869ADB5D for ; Thu, 30 Jul 2015 12:38:07 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id 8C63C15FD for ; Thu, 30 Jul 2015 12:38:07 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 82CC22D9F; Thu, 30 Jul 2015 15:38:04 +0300 (MSK) Date: Thu, 30 Jul 2015 15:37:58 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <843537366.20150730153758@serebryakov.spb.ru> To: Steven Hartland CC: freebsd-fs@freebsd.org Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] In-Reply-To: <55BA0F41.6070508@multiplay.co.uk> References: <164833736.20150730143008@serebryakov.spb.ru> <55BA0F41.6070508@multiplay.co.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 12:38:07 -0000 Hello Steven, Thursday, July 30, 2015, 2:49:21 PM, you wrote: > You don't need to do that as record set size 
is a min, not a max; if you > don't force it, large files will still be stored efficiently. Oh, I thought it was the max after reading some blog posts and forum threads (this one, for example: https://forums.freebsd.org/threads/1mb-recordsize-performance-recordsize-discussion.50414/ and this one https://blogs.oracle.com/roch/entry/tuning_zfs_recordsize and this https://www.joyent.com/blog/bruning-questions-zfs-record-size) Does that mean files cannot occupy less than 128K with the default setting? Anyway, I'll recreate the pool. >> It looks like FreeBSD contained a bug about a year ago which led to this behavior, >> but the mailing lists say it was fixed in r272221, 10 months ago. > When this happens, what is the state of memory on the machine? Right now I'm in single-user mode and am having some trouble analyzing the massive output of "vmstat -z" :) top shows 4.5-5G of total ARC size and very low (~150Mb) free memory. > Top will give a good overview, while sysctl vm.stats.vm and vmstat -z > will provide some detail. > If you're seeing significant memory pressure, which could well be the > case with a mixed ZFS/UFS system during this transfer (they use Now I have a pure ZFS situation, with a ZFS to ZFS transfer. The mixed scenario with a UFS to ZFS transfer didn't show this behavior. > competing memory resource pools), then you could try limiting the ARC via > vfs.zfs.arc_max I'll try to set it to 3G... > You could also see if the patch on > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594 helps. I should try this, but for now I need to migrate as quickly as possible & return my spare HDDs, which were lent to me for a short time...
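[Editorial note: recreating the whole pool should not be necessary just to change this. recordsize is a per-dataset property that can be changed at any time, although it only affects blocks written after the change; existing files keep their block size until rewritten. A hedged sketch, using dataset names from this thread:]

```
# revert to the default 128K recordsize on one of the zstor datasets;
# only data written after this point is affected
zfs set recordsize=128K zstor/home/lev
zfs get recordsize zstor/home/lev
```

Since rsync writes every file fresh on the target, setting the property before the copy has the same effect as recreating the datasets.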
-- Best regards, Lev mailto:lev@FreeBSD.org From owner-freebsd-fs@freebsd.org Thu Jul 30 12:39:12 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D9CDF9ADBB2 for ; Thu, 30 Jul 2015 12:39:12 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id A18A0167E for ; Thu, 30 Jul 2015 12:39:12 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 2E0212DA1; Thu, 30 Jul 2015 15:39:11 +0300 (MSK) Date: Thu, 30 Jul 2015 15:39:05 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <34202837.20150730153905@serebryakov.spb.ru> To: Konstantin Belousov CC: freebsd-fs@freebsd.org Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] In-Reply-To: <20150730121840.GS2072@kib.kiev.ua> References: <164833736.20150730143008@serebryakov.spb.ru> <20150730121840.GS2072@kib.kiev.ua> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 12:39:12 -0000 Hello Konstantin, Thursday, July 30, 2015, 3:18:40 PM, you wrote: >> rsync pauses for tens of minutes (!) after several hundreds of files. ^T >> and top shows state "[*kmem arena]". When I stop rsync with ^C and try to do >> "zfs list" it waits forever, in state "[*kmem arena]" again. 
> Show the output of sysctl debug.vmem_check. It is 1. -- Best regards, Lev mailto:lev@FreeBSD.org From owner-freebsd-fs@freebsd.org Thu Jul 30 12:48:34 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id E56159ADE65 for ; Thu, 30 Jul 2015 12:48:34 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from kib.kiev.ua (kib.kiev.ua [IPv6:2001:470:d5e7:1::1]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 764281D96; Thu, 30 Jul 2015 12:48:34 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from tom.home (kostik@localhost [127.0.0.1]) by kib.kiev.ua (8.15.2/8.15.2) with ESMTPS id t6UCmTec008810 (version=TLSv1 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO); Thu, 30 Jul 2015 15:48:30 +0300 (EEST) (envelope-from kostikbel@gmail.com) DKIM-Filter: OpenDKIM Filter v2.9.2 kib.kiev.ua t6UCmTec008810 Received: (from kostik@localhost) by tom.home (8.15.2/8.15.2/Submit) id t6UCmT9b008809; Thu, 30 Jul 2015 15:48:29 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: tom.home: kostik set sender to kostikbel@gmail.com using -f Date: Thu, 30 Jul 2015 15:48:29 +0300 From: Konstantin Belousov To: Lev Serebryakov Cc: freebsd-fs@freebsd.org Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] Message-ID: <20150730124829.GV2072@kib.kiev.ua> References: <164833736.20150730143008@serebryakov.spb.ru> <20150730121840.GS2072@kib.kiev.ua> <34202837.20150730153905@serebryakov.spb.ru> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <34202837.20150730153905@serebryakov.spb.ru> User-Agent: Mutt/1.5.23 (2014-03-12) X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,BAYES_00, 
DKIM_ADSP_CUSTOM_MED,FREEMAIL_FROM,NML_ADSP_CUSTOM_MED autolearn=no autolearn_force=no version=3.4.1 X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on tom.home X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 12:48:35 -0000 On Thu, Jul 30, 2015 at 03:39:05PM +0300, Lev Serebryakov wrote: > Hello Konstantin, > > Thursday, July 30, 2015, 3:18:40 PM, you wrote: > > >> rsync pauses for tens of minutes (!) after several hundreds of files. ^T > >> and top shows state "[*kmem arena]". When I stop rsync with ^C and try to do > >> "zfs list" it waits forever, in state "[*kmem arena]" again. > > Show the output of sysctl debug.vmem_check. > It is 1. So set it to zero. From owner-freebsd-fs@freebsd.org Thu Jul 30 12:55:09 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 17DEC9AE03A for ; Thu, 30 Jul 2015 12:55:09 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id D2F49267 for ; Thu, 30 Jul 2015 12:55:08 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 69DD22DAA; Thu, 30 Jul 2015 15:55:07 +0300 (MSK) Date: Thu, 30 Jul 2015 15:55:01 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <1617907428.20150730155501@serebryakov.spb.ru> To: Konstantin Belousov CC: freebsd-fs@freebsd.org Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem 
arena] In-Reply-To: <20150730124829.GV2072@kib.kiev.ua> References: <164833736.20150730143008@serebryakov.spb.ru> <20150730121840.GS2072@kib.kiev.ua> <34202837.20150730153905@serebryakov.spb.ru> <20150730124829.GV2072@kib.kiev.ua> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 12:55:09 -0000 Hello Konstantin, Thursday, July 30, 2015, 3:48:29 PM, you wrote: >> >> rsync pauses for tens of minutes (!) after several hundreds of files. ^T >> >> and top shows state "[*kmem arena]". When I stop rsync with ^C and try to do >> >> "zfs list" it waits forever, in state "[*kmem arena]" again. >> > Show the output of sysctl debug.vmem_check. >> It is 1. > So set it to zero. That improved the situation by itself, but limiting the ARC to 3GiB helped much more. It looks like ARC auto-sizing should be tuned anyway (PR 187594? It is already more than a year old!)
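[Editorial note: for anyone hitting the same stalls, the runtime workaround above can be made persistent across reboots. A sketch, assuming a 10-STABLE system where the debug.vmem_check sysctl exists as discussed in this thread:]

```
# /etc/sysctl.conf -- disable vmem consistency checking,
# which aggravated the [*kmem arena] stalls here
debug.vmem_check=0
```

The ARC limit, being a loader tunable, goes in /boot/loader.conf instead.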
-- Best regards, Lev mailto:lev@FreeBSD.org From owner-freebsd-fs@freebsd.org Thu Jul 30 14:41:23 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A24099AF774 for ; Thu, 30 Jul 2015 14:41:23 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: from mail-qg0-f54.google.com (mail-qg0-f54.google.com [209.85.192.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 64EF4DC9 for ; Thu, 30 Jul 2015 14:41:23 +0000 (UTC) (envelope-from paul@kraus-haus.org) Received: by qgii95 with SMTP id i95so25438501qgi.2 for ; Thu, 30 Jul 2015 07:41:22 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:content-type:mime-version:subject:from :in-reply-to:date:content-transfer-encoding:message-id:references:to; bh=nglfaL0Asb6tmuOYaaY4EsNjhtq8TkM+79tN2D0Ij/0=; b=kar7fvq5m9sTWq0ratWI2OtqhpVLHvIWcau8HYx/KsZ73ppTX2oAKzK9v4f3Q1BsZA AZQv1/kv5z32erelO5jEGXTkF6gMp1YT+7Ur6SM6wkHfXJ8ZpjFJUjT1d0vF51598W2W dBR81xZ4TLRDqxS9r3IZBMruwNxieBVPQ/bVxnYBJSHgdtrZMb5oIrvBH8wIytckyNwU XphucEn9/jGLah2ZZ9JXa1KS4l6k11HKHncPyBoJNgTO+37stq02jBI9qQk3U5hHhQe4 niSo/PNKKiBmrjj8E3L8qYfw1qRNIH2pb79s/fzZhdDGhiycaAIQWWwpkDlpW5/e8VGC cDSA== X-Gm-Message-State: ALoCoQl555dNoN//yyynblrPeRxO/aT8MUCxNTgGY3JaWJCyRZ9ZkBzizkI20NkQgAxGPMEZAESq X-Received: by 10.140.201.204 with SMTP id w195mr6013114qha.16.1438267282166; Thu, 30 Jul 2015 07:41:22 -0700 (PDT) Received: from mbp-1.thecreativeadvantage.com (mail.thecreativeadvantage.com. 
[96.236.20.34]) by smtp.gmail.com with ESMTPSA id 34sm545986qkz.38.2015.07.30.07.41.20 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 30 Jul 2015 07:41:20 -0700 (PDT) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] From: Paul Kraus In-Reply-To: <55BA0F41.6070508@multiplay.co.uk> Date: Thu, 30 Jul 2015 10:41:19 -0400 Content-Transfer-Encoding: quoted-printable Message-Id: <26DA7547-3258-44CC-A3EA-338AFA13640E@kraus-haus.org> References: <164833736.20150730143008@serebryakov.spb.ru> <55BA0F41.6070508@multiplay.co.uk> To: FreeBSD Filesystems X-Mailer: Apple Mail (2.1878.6) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 14:41:23 -0000 On Jul 30, 2015, at 7:49, Steven Hartland wrote: > On 30/07/2015 12:30, Lev Serebryakov wrote: >> >> Deduplication IS TURNED OFF. atime is turned off. Record size set to 1M as >> I have a lot of big files (movies, RAW photo from DSLR, etc). Compression is >> turned off. > You don't need to do that as record set size is a min not a max, if you don't force it large files will still be stored efficiently. Can you point to documentation for that? I really hope that the 128KB default is not a minimum record size, or a 1KB file will take up 128 KB of FS space. As far as I know, zfs recordsize has always, since the very beginning of ZFS under Solaris, been the MAX recordsize, but it is also a hint and not a fixed value. ZFS will write any size records (powers of 2) from 512 bytes (4 KB in the case of an ashift=12 pool) up to recordsize.
Tuning of recordsize has been frowned upon since the beginning unless you _know_ the size of your writes and they are fixed (like 8 KB database records). Also note that ZFS will fit the write to the pool in the case of RAIDz; see Matt Ahrens' blog entry here: http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/ -- Paul Kraus paul@kraus-haus.org From owner-freebsd-fs@freebsd.org Thu Jul 30 15:03:16 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C36C59AFBF0 for ; Thu, 30 Jul 2015 15:03:16 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 8A1DC1CF8 for ; Thu, 30 Jul 2015 15:03:16 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id B89FF2DD9; Thu, 30 Jul 2015 18:03:13 +0300 (MSK) Date: Thu, 30 Jul 2015 18:03:08 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <37782304.20150730180308@serebryakov.spb.ru> To: kpneal@pobox.com CC: freebsd-fs@freebsd.org Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] In-Reply-To: <20150730140713.GA84864@neutralgood.org> References: <164833736.20150730143008@serebryakov.spb.ru> <20150730140713.GA84864@neutralgood.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015
15:03:16 -0000 Hello Kpneal, Thursday, July 30, 2015, 5:07:13 PM, you wrote: >> This server is equipped with 6GiB of RAM. > Another idea would be to use zfs send | zfs receive to transfer the data > from one pool to another. source pool is one-FS-for-everything and I want to split it to different FSes on target pool... Looks like, my problem was fixed (or at least addressed) in r282361, r283310. -- Best regards, Lev mailto:lev@FreeBSD.org From owner-freebsd-fs@freebsd.org Thu Jul 30 15:41:19 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B0C359AE406 for ; Thu, 30 Jul 2015 15:41:19 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: from mail-wi0-f182.google.com (mail-wi0-f182.google.com [209.85.212.182]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 6CD391156 for ; Thu, 30 Jul 2015 15:41:19 +0000 (UTC) (envelope-from killing@multiplay.co.uk) Received: by wicmv11 with SMTP id mv11so26200252wic.0 for ; Thu, 30 Jul 2015 08:41:18 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:subject:to:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-type :content-transfer-encoding; bh=bKUa3WsjGS9w6kLWWBeURWG90n3j6gW8zrK83XQ3v/8=; b=g4EDnQZ1/OImqAqZ27Q92WWdcYHKzSM4tuCIWiHn5G5M7evi27VohVvUT1mGtXxGZe hmbvtApXjUprCzcvL7O/1Ec7F5sXimn1c1eZKZMWfid3N4TQ7FUc//9OgfFkH9vdZaA4 bbrvBGjWTs0lW4vO640aibIgwStKHZlMTECFIf8Ft0cCVS1VGqGig4byfiqYXH8LHzOG E1Z4MGVvHz+nDKLATlkOVwS934NHAyxzR76jqa7fCqh3IMCqClojTBNkg+pksZqeJP2a thfufPBIGBWZYSui3is5wbPpxjUv2/qCsxqQfsqV3i1yN+mJldFYE5imgwxyRqMZyAls yfVQ== X-Gm-Message-State: ALoCoQncMWsK/3/V2lSLsHcDHETvbEH5zWr5eWWitQCSCNTk4boVqF6SHXOeJqCN0VtQoXfUF1Xg 
X-Received: by 10.194.23.194 with SMTP id o2mr92368120wjf.63.1438270877785; Thu, 30 Jul 2015 08:41:17 -0700 (PDT) Received: from [10.10.1.68] (82-69-141-170.dsl.in-addr.zen.co.uk. [82.69.141.170]) by smtp.gmail.com with ESMTPSA id uo6sm2485990wjc.1.2015.07.30.08.41.17 for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 30 Jul 2015 08:41:17 -0700 (PDT) Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] To: freebsd-fs@freebsd.org References: <164833736.20150730143008@serebryakov.spb.ru> <55BA0F41.6070508@multiplay.co.uk> <26DA7547-3258-44CC-A3EA-338AFA13640E@kraus-haus.org> From: Steven Hartland Message-ID: <55BA45A0.508@multiplay.co.uk> Date: Thu, 30 Jul 2015 16:41:20 +0100 User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <26DA7547-3258-44CC-A3EA-338AFA13640E@kraus-haus.org> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 15:41:19 -0000 On 30/07/2015 15:41, Paul Kraus wrote: > On Jul 30, 2015, at 7:49, Steven Hartland wrote: > >> On 30/07/2015 12:30, Lev Serebryakov wrote: >>> Deduplication IS TURNED OFF. atime is turned off. Record size set to 1M as >>> I have a lot of big files (movies, RAW photo from DSLR, etc). Compression is >>> turned off. >> You don't need to do that as record set size is a min not a max, if you don't force it large files will still be stored efficiently. > Can you point to documentation for that ? Ignore my previous comment there I was clearly having a special moment. recordsize sets the suggested block size which is effectively the largest block size for a given file. 
It's generally not about efficient storage but about efficient access, so that's what you usually want to consider except in extreme cases. If you set recordsize to 1MB you get large block support, which is detailed here: https://reviews.csiden.org/r/51/ Key info from this: Recommended uses center around improving performance of random reads of large blocks (>= 128KB): - files that are randomly read in large chunks (e.g. video files when streaming many concurrent streams such that prefetch can not effectively cache data); performance will be improved in this case because random 1MB reads from rotating disks have higher bandwidth than random 128KB reads. - typically, performance of scrub/resilver is improved, especially with RAID-Z The tradeoffs to consider when using large blocks include: - accessing large blocks tends to increase latency of all operations, because even small reads will need to get in line behind large reads/writes - sub-block writes (i.e. write to 128KB of a 1MB block) will incur an even larger read-modify-write penalty - the last, partially-filled block of each file will be larger, wasting memory, and if compression is not enabled, disk space (expected waste is 1/2 the recordsize per file, assuming random file length) recordsize is documented in the man page: https://www.freebsd.org/cgi/man.cgi?query=zfs&apropos=0&sektion=8&manpath=FreeBSD+10.2-stable&arch=default&format=html > I really hope that the 128KB default is not a minimum record size or a 1KB file will take up 128 KB of FS space. Setting recordsize sets the suggested block size, so if you set 1MB then the minimum size a file can occupy is 1MB even if it's only a 512-byte file. > As far as I know, zfs recordsize has always, since the very beginning of ZFS under Solaris, been the MAX recordsize, but it is also a hint and not a fixed value. ZFS will write any size records (powers of 2) from 512 bytes (4 KB in the case of an ashift=12 pool) up to recordsize.
Tuning of recordsize has been frowned upon since the beginning unless you _know_ the size of your writes and they are fixed (like 8 KB database records). > > Also note that ZFS will fit the write to the pool in the case of RAIDz, see Matt Ahrens' blog entry here: http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/ Another nice article on this can be found here: https://www.joyent.com/blog/bruning-questions-zfs-record-size Regards Steve From owner-freebsd-fs@freebsd.org Thu Jul 30 21:41:30 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EA4BF9AF115 for ; Thu, 30 Jul 2015 21:41:30 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D679F10D for ; Thu, 30 Jul 2015 21:41:30 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t6ULfU5T046265 for ; Thu, 30 Jul 2015 21:41:30 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 172334] [unionfs] unionfs permits recursive union mounts; causes panic quickly Date: Thu, 30 Jul 2015 21:41:30 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.0-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: brd@FreeBSD.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID:
In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 21:41:31 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=172334 Brad Davis changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |brd@FreeBSD.org --- Comment #4 from Brad Davis --- ping -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@freebsd.org Thu Jul 30 21:54:04 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C6BD49AF25B for ; Thu, 30 Jul 2015 21:54:04 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: from mail-ig0-x22f.google.com (mail-ig0-x22f.google.com [IPv6:2607:f8b0:4001:c05::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 8D5217C8 for ; Thu, 30 Jul 2015 21:54:04 +0000 (UTC) (envelope-from grarpamp@gmail.com) Received: by iggf3 with SMTP id f3so5170771igg.1 for ; Thu, 30 Jul 2015 14:54:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=KPbc6wMJGXonTwS9ZK50BRtbC3xXTC5+mWRmE5BxAxI=; b=MG8H2xNXhmkExWVe9JPeq5DHdjBPWIthV+i1vNaZ5IhbIweQ/sRJpcT3MYYtlZFMhh Jp9NF+UUyE15ibWv+a2C2l/HhD+m5cjGABRWhVDZ0jYPWTTyT44w91JF/vA9kl4zctFS Uv6W//zIPzWqQx2mZCjkHF/TfNqzTZF9vE4r77SmuLhAmLXeA50pO9QkTjY43TlNJhnW 
YyPBj+aW08CpRQ+Uahzqe1rbjU47dYQDKO8zQVRU73KXPe620u64jdoKZN4pMihGRGPc s8oPOkAKGDBJ71QhEkdXTP0J6hX5coit4Jf9FMqDLYKbN+CshpcGY7xFOASyc4mnKdNT HDkw== MIME-Version: 1.0 X-Received: by 10.50.108.98 with SMTP id hj2mr320622igb.52.1438293243907; Thu, 30 Jul 2015 14:54:03 -0700 (PDT) Received: by 10.36.44.69 with HTTP; Thu, 30 Jul 2015 14:54:03 -0700 (PDT) In-Reply-To: References: Date: Thu, 30 Jul 2015 17:54:03 -0400 Message-ID: Subject: Re: TRIM erases user data From: grarpamp To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jul 2015 21:54:04 -0000 Update... http://www.spinics.net/lists/raid/msg49440.html http://www.spinics.net/lists/raid/msg49452.html https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f3f5da624e0a891c34d8cd513c57f1d9b0c7dadc From owner-freebsd-fs@freebsd.org Fri Jul 31 09:10:01 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 909DF9AE60A for ; Fri, 31 Jul 2015 09:10:01 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 6FBBF1C0B for ; Fri, 31 Jul 2015 09:10:01 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: by mailman.ysv.freebsd.org (Postfix) id 6C2D29AE609; Fri, 31 Jul 2015 09:10:01 +0000 (UTC) Delivered-To: fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6BB9C9AE607 for ; Fri, 31 Jul 2015 09:10:01 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: from 
From: Baptiste Daroussin <baptiste.daroussin@gmail.com>
Date: Fri, 31 Jul 2015 11:09:56 +0200
To: fs@FreeBSD.org
Subject: Activating GEOM_SUNLABEL by default in GENERIC

Hi,

I think it would be a good idea to add GEOM_SUNLABEL to the GENERIC kernel.

This would let the kernel recognize Sun partitions by default, meaning one
could easily import zpools created on top of those partitions, so migrating
from Illumos/Solaris to FreeBSD would be a bit simpler.

Are there any known issues with GEOM_SUNLABEL that would make it a bad idea
to activate it by default?

Best regards,
Bapt

From owner-freebsd-fs@freebsd.org Fri Jul 31 09:12:12 2015
From: "Andrey V. Elsukov" <ae@FreeBSD.org>
Date: Fri, 31 Jul 2015 12:11:46 +0300
To: Baptiste Daroussin, fs@FreeBSD.org
Subject: Re: Activating GEOM_SUNLABEL by default in GENERIC

On 31.07.2015 12:09, Baptiste Daroussin wrote:
> Hi,
>
> I think it would be a good idea to add GEOM_SUNLABEL to the GENERIC
> kernel.
>
> This would let the kernel recognize Sun partitions by default, meaning
> one could easily import zpools created on top of those partitions, so
> migrating from Illumos/Solaris to FreeBSD would be a bit simpler.
>
> Are there any known issues with GEOM_SUNLABEL that would make it a bad
> idea to activate it by default?

It is deprecated; use GEOM_PART_VTOC8 instead.

--
WBR, Andrey V. Elsukov

From owner-freebsd-fs@freebsd.org Fri Jul 31 09:20:47 2015
From: Steven Hartland <killing@multiplay.co.uk>
Date: Fri, 31 Jul 2015 10:20:47 +0100
To: freebsd-fs@freebsd.org
Subject: Re: TRIM erases user data

So it was a bug in Linux, which explains why we never saw it :)

On 30/07/2015 22:54, grarpamp wrote:
> Update...
> http://www.spinics.net/lists/raid/msg49440.html
> http://www.spinics.net/lists/raid/msg49452.html
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f3f5da624e0a891c34d8cd513c57f1d9b0c7dadc
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@freebsd.org Fri Jul 31 09:28:19 2015
From: Baptiste Daroussin <baptiste.daroussin@gmail.com>
Date: Fri, 31 Jul 2015 11:28:14 +0200
To: "Andrey V. Elsukov" <ae@FreeBSD.org>
Cc: fs@FreeBSD.org
Subject: Re: Activating GEOM_SUNLABEL by default in GENERIC

On Fri, Jul 31, 2015 at 12:11:46PM +0300, Andrey V. Elsukov wrote:
> > Are there any known issues with GEOM_SUNLABEL that would make it a bad
> > idea to activate it by default?
>
> It is deprecated; use GEOM_PART_VTOC8 instead.

Ah yes, sorry, that is the one I meant (and the one I used), but somehow I
couldn't find the name anymore :( Thanks for the correction.

Are there any known issues with GEOM_PART_VTOC8?

Best regards,
Bapt

From owner-freebsd-fs@freebsd.org Fri Jul 31 09:46:20 2015
From: Lev Serebryakov <lev@FreeBSD.org>
Date: Fri, 31 Jul 2015 12:46:11 +0300
To: freebsd-fs@freebsd.org
Subject: ZFS with large blocks on 10.2-PRERELEASE r286065: still have big problems with [kmem arena]

Hello freebsd-fs,

I've rebuilt the OS to the ~latest version to pick up the VM-related fixes,
and I still have problems with processes stuck in the "kmem arena" state
when they access files on ZFS with large blocks.

I have ARC limited to 3GiB (out of 6GiB of physical memory), and ZFS with
large files (1GB+) and large blocks (16MiB) on a raidz vdev (5 disks). A
simple "dd if=large.file of=/dev/null bs=1m" can sit in the "kmem arena"
state for long stretches. Such large blocks look like a bad idea, but
still: 500Kb/s linear reading? Really?

What is strange is that the CPU is totally idle while the reading process
is stuck. No kernel threads do any work for tens of minutes!

I'm rebuilding the pool with more reasonable block sizes (and letting it
waste space and time on metadata), but this situation (no work getting done
while the CPU is totally idle) looks weird and ugly anyway.

--
Best regards,
 Lev                          mailto:lev@FreeBSD.org
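For reference, an ARC cap like the one described above is normally set with a
loader tunable; a minimal sketch, where the 3GiB value simply mirrors the
figure mentioned in the message rather than Lev's actual configuration:

```shell
# /boot/loader.conf -- cap the ZFS ARC at boot.
# The 3G value is an assumption taken from the message text above.
vfs.zfs.arc_max="3G"
```

The limit takes effect at the next boot; on a running system the current
value can be inspected with `sysctl vfs.zfs.arc_max`.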
From owner-freebsd-fs@freebsd.org Fri Jul 31 09:48:10 2015
From: Karl Denninger <karl@denninger.net>
Date: Fri, 31 Jul 2015 04:47:42 -0500
To: freebsd-fs@freebsd.org
Subject: Panic in ZFS during zfs recv (while snapshots being destroyed)
I have an automated script that runs zfs send/recv copies nightly to bring
a backup data set into congruence with the running copies. The source has
automated snapshots running on a fairly frequent basis through
zfs-auto-snapshot.

Recently a panic has started showing up about once a week during the backup
run, but it's inconsistent: it is always in the same place, yet I cannot
force it to repeat. The trap itself is a page fault in kernel mode in the
ZFS code at zfs_unmount_snap(); the traceback is at
http://www.denninger.net/kvmimage.png (apologies for the image link, but
this is coming from a remote KVM and I don't have a better option right
now). I'll try to get a dump; this is a production machine with encrypted
swap, so dumps are not normally turned on.

Note that the pool that appears to be involved (the backup pool) has passed
a scrub, and thus I would assume the on-disk structure is ok... but that
might be an unfair assumption. The panic always occurs in the same dataset,
although there are a half-dozen that are sync'd -- if this one (the first
one) completes successfully during the run, then all the rest do as well
(that is, whenever I restart the process it has always failed here). The
source pool is also clean and passes a scrub.

I first saw this on 10.1-STABLE and it is still happening on FreeBSD
10.2-PRERELEASE #9 r285890M, which I updated to in an attempt to see if the
problem was something that had already been addressed.

--
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
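For reference, getting the dump mentioned above usually only requires a
configured dump device; a minimal sketch (the device name is a placeholder,
not this machine's layout; note that a dump written at panic time lands on
the raw device unencrypted, which is presumably why it is kept off here):

```shell
# Point crash dumps at a swap partition for the current session
# (placeholder device name -- adjust to the actual disk layout):
dumpon /dev/ada0p3

# Or persistently, in /etc/rc.conf:
#   dumpdev="AUTO"
# After the next panic, savecore(8) collects the dump into /var/crash.
```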
From owner-freebsd-fs@freebsd.org Fri Jul 31 10:01:08 2015
From: "Andrey V. Elsukov" <ae@FreeBSD.org>
Date: Fri, 31 Jul 2015 13:00:37 +0300
To: Baptiste Daroussin
Cc: fs@FreeBSD.org
Subject: Re: Activating GEOM_SUNLABEL by default in GENERIC

On 31.07.2015 12:28, Baptiste Daroussin wrote:
>>> Are there any known issues with GEOM_SUNLABEL that would make it a bad
>>> idea to activate it by default?
>>
>> It is deprecated; use GEOM_PART_VTOC8 instead.
>
> Ah yes, sorry, that is the one I meant (and the one I used), but somehow
> I couldn't find the name anymore :( Thanks for the correction.
>
> Are there any known issues with GEOM_PART_VTOC8?

I don't know about any problems, but what prevents you from using the kld?
It should work.

--
WBR, Andrey V. Elsukov
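For reference, the kld route suggested above would look roughly like this; a
minimal sketch (the module name follows the stock geom_part module naming,
and "tank" is a placeholder pool name):

```shell
# Load VTOC8 (Sun label) partition scheme support as a module:
kldload geom_part_vtoc8

# The kernel should now recognize the Sun-labelled disks; list and import:
zpool import        # shows pools found on the newly recognized partitions
zpool import tank   # "tank" is a placeholder pool name

# To make it permanent, add to /boot/loader.conf:
#   geom_part_vtoc8_load="YES"
```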
(verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5CD42139F; Fri, 31 Jul 2015 10:07:22 +0000 (UTC) (envelope-from baptiste.daroussin@gmail.com) Received: by wicgj17 with SMTP id gj17so10682226wic.1; Fri, 31 Jul 2015 03:07:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=EQwP63E4gs/3oVExcMYRXDr7o7OFvi9+Ge3cHVdCzDo=; b=jov/3lXqN6lIRgpjehW/RCMS9p4PmkcUOpOkOMXhOHRo6ACshhnL8C9nAYcNbw5sF7 tUZ+gHuTPxbN6emSb6UOJOs4P1YDjpoH38ndBaQ/I+0aH4Ak6fe3gIeqTu2Hbiak3/3P Bq+zQxQIpBi1nLxziqAk6ksdVQEInNV1IQeC0pO4b3obgF5Y9AF3/dwBxllcT8cbz3ZT RaCjNsGre6I98ZRMOlgrQ4FbBwY2A0lYe/75e4hP9DDgQpjim+5gkkCdS/8aC3a4PKjq 89rPtct9E3/00AIzhIMGrAabMhTOtr6IJVDfQIKxQThS1xr0aJynpC1DA+6VgsdpDO2+ nRuQ== X-Received: by 10.194.87.69 with SMTP id v5mr4080313wjz.140.1438337239930; Fri, 31 Jul 2015 03:07:19 -0700 (PDT) Received: from ivaldir.etoilebsd.net ([2001:41d0:8:db4c::1]) by smtp.gmail.com with ESMTPSA id bg6sm6384739wjc.13.2015.07.31.03.07.18 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 31 Jul 2015 03:07:18 -0700 (PDT) Sender: Baptiste Daroussin Date: Fri, 31 Jul 2015 12:07:16 +0200 From: Baptiste Daroussin To: "Andrey V. 
Elsukov" Cc: fs@FreeBSD.org Subject: Re: Activating GEOM_SUNLABEL by default in GENERIC Message-ID: <20150731100716.GJ57604@ivaldir.etoilebsd.net> References: <20150731090956.GG57604@ivaldir.etoilebsd.net> <55BB3BD2.1000509@FreeBSD.org> <20150731092814.GH57604@ivaldir.etoilebsd.net> <55BB4745.7020502@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="+Hr//EUsa8//ouuB" Content-Disposition: inline In-Reply-To: <55BB4745.7020502@FreeBSD.org> User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 31 Jul 2015 10:07:23 -0000 --+Hr//EUsa8//ouuB Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Fri, Jul 31, 2015 at 01:00:37PM +0300, Andrey V. Elsukov wrote: > On 31.07.2015 12:28, Baptiste Daroussin wrote: > >>> Is there any know issues with GEOM_SUNLABEL that would make it a bad = idea to > >>> activate it by default? > >> > >> It is deprecated, use GEOM_PART_VTOC8 instead. > >=20 > > Ah yes sorry that is the one I meant, and I used but somehow I couldn't= find the > > name anymore :( > >=20 > > Thanks for correction > >=20 > > Any issues known with GEOM_PART_VTOC8? >=20 > I don't know about any problems, but what prevents you use kld? It > should work. >=20 Nothing just I received feedbacks from people which were not able to find h= owto import those, so I bet having them by default will help. 
The name of the module is not obvious, and no documentation describes it (or none that is easy to find); the right answer in that case should probably be to add documentation :) but it was easier to modify the kernel config :) Bapt --+Hr//EUsa8//ouuB Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iEYEARECAAYFAlW7SNQACgkQ8kTtMUmk6Ey9mQCdFjTq13VeEHaDitjLHKs0BRfP X/cAoKt3dJCrnkBJLlYv/UtUIKlQCmvu =p+MX -----END PGP SIGNATURE----- --+Hr//EUsa8//ouuB-- From owner-freebsd-fs@freebsd.org Fri Jul 31 16:42:00 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A5EB09AFB8E for ; Fri, 31 Jul 2015 16:42:00 +0000 (UTC) (envelope-from alex.bakhtin@gmail.com) Received: from mail-la0-x22a.google.com (mail-la0-x22a.google.com [IPv6:2a00:1450:4010:c03::22a]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CC32F1C28 for ; Fri, 31 Jul 2015 16:41:59 +0000 (UTC) (envelope-from alex.bakhtin@gmail.com) Received: by labiq1 with SMTP id iq1so13009665lab.3 for ; Fri, 31 Jul 2015 09:41:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:from:date:message-id:subject:to:content-type; bh=5FX/Fl6dOuwVOqJg/2OF5ZSHgEKOYlmxrfzQBelKirw=; b=PE0rEzJPflJXVUpcvlDI7VODHGuoBEFdOFG38ylZ6bTiUmEY4vajCK3ruX144ndDna yR5t8paezsto2ZsreAViIrS8mwn2P8JTQYTwN5EnDYVvEKaRm7GKcUQ0yREyFk9WZIAY NZhqx32StJDmchZZCjrxwnudkJLpuUhAUuC9Somd23VZ07rUsCZW9/OUo27nffmd9ngC 6G8T2v00DlG+TDGZeGbTQRTNJ0XA7OfrBIfRxFPi9nzQzA5p+VDDOCrDU3vv05MxD8ZS ZTrlBgabKeemUia7U58XepcShVUy+P1JNmyRiaV1OZH00aPb6bwhMkLeTmc2qHV1zRr3 3GsA== X-Received: by 10.112.142.105 with SMTP id rv9mr3951039lbb.11.1438360917982; Fri, 31 Jul 2015 09:41:57 -0700 (PDT) MIME-Version: 1.0 Received: 
by 10.25.39.77 with HTTP; Fri, 31 Jul 2015 09:41:18 -0700 (PDT) From: Alex Bakhtin Date: Fri, 31 Jul 2015 19:41:18 +0300 Message-ID: Subject: [zfs] 10.1-R i386 deadlock To: freebsd-fs@freebsd.org Content-Type: multipart/mixed; boundary=001a11c3775c93ac95051c2e7ffe X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 31 Jul 2015 16:42:00 -0000 --001a11c3775c93ac95051c2e7ffe Content-Type: text/plain; charset=UTF-8 Hello, I'm experiencing problems with ZFS locks on my i386 buildbox. This is a VM running under vSphere with 4 vCPU (non-HT) and 4G RAM. I hit a deadlock sometimes during the normal build process (poudriere), and right now I have found a way to reproduce it easily: make index in /usr/ports hangs (tried several times, no success). To troubleshoot this I moved the VM from a very slow NFS datastore to a fast SSD-based local one (the idea was that if the problem were timing-related, this would fix it). It didn't help. Right now I have the latest 10.1-RELEASE: FreeBSD builder-i386 10.1-RELEASE-p16 FreeBSD 10.1-RELEASE-p16 #0: Tue Jul 28 11:41:12 UTC 2015 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC i386 I read the recommendations in https://wiki.freebsd.org/AvgZfsDeadlockDebug and I really do see some threads locked in *zfs_freebsd_read* and *arc_read.* procstat -kk -a output attached. The ufs-based root is fine. To recover I usually have to reset the VM from vSphere. I have the recommended ZFS tunables in loader.conf: ====================================== vm.kmem_size="512M" vm.kmem_size_max="512M" vfs.zfs.arc_max="40M" vfs.zfs.vdev.cache.size="5M" vfs.zfs.prefetch_disable=0 # Decrease ZFS txg timeout value from 30 (default) to 5 seconds. This # should increase throughput and decrease the "bursty" stalls that # happen during immense I/O with ZFS. 
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html # default in FreeBSD since ZFS v28 vfs.zfs.txg.timeout="5" ====================================== ============================== bakhtin@builder-i386:~ % zpool status pool: storage state: ONLINE scan: scrub repaired 0 in 0h16m with 0 errors on Fri Jun 12 09:43:00 2015 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 da1 ONLINE 0 0 0 errors: No known data errors ============================== I have no ideas. Because of this problem I have to reset VM and restart poudriere build several times to build packages I need. Moving to amd64 is not an option because I need 32bit host to build packages for armv6 target. The same setup of amd64 host on the same vSphere is working pretty fine. -- --- Alex Bakhtin --001a11c3775c93ac95051c2e7ffe Content-Type: text/plain; charset=US-ASCII; name="procstat.new.txt" Content-Disposition: attachment; filename="procstat.new.txt" Content-Transfer-Encoding: base64 X-Attachment-Id: f_icruennx0 ICBQSUQgICAgVElEIENPTU0gICAgICAgICAgICAgVEROQU1FICAgICAgICAgICBLU1RBQ0sgICAg ICAgICAgICAgICAgICAgICAgIAogICAgMCAxMDAwMDAga2VybmVsICAgICAgICAgICBzd2FwcGVy ICAgICAgICAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV90aW1l ZHdhaXQrMHgzZiBfc2xlZXArMHgyODIgc3dhcHBlcisweDJjMCBiZWdpbisweDJjIAogICAgMCAx MDAwMTYga2VybmVsICAgICAgICAgICBmaXJtd2FyZSB0YXNrcSAgIG1pX3N3aXRjaCsweDEyMiBz bGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1 ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAog ICAgMCAxMDAwMTgga2VybmVsICAgICAgICAgICBhY3BpX3Rhc2tfMCAgICAgIG1pX3N3aXRjaCsw eDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgbXNsZWVwX3NwaW5fc2J0 KzB4MWQ0IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDE1YyBmb3JrX2V4aXQrMHhhMyBmb3JrX3Ry YW1wb2xpbmUrMHg4IAogICAgMCAxMDAwMTkga2VybmVsICAgICAgICAgICBhY3BpX3Rhc2tfMSAg 
ICAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2Yg bXNsZWVwX3NwaW5fc2J0KzB4MWQ0IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDE1YyBmb3JrX2V4 aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAwMjAga2VybmVsICAgICAgICAg ICBhY3BpX3Rhc2tfMiAgICAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNs ZWVwcV93YWl0KzB4M2YgbXNsZWVwX3NwaW5fc2J0KzB4MWQ0IHRhc2txdWV1ZV90aHJlYWRfbG9v cCsweDE1YyBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAwMjMg a2VybmVsICAgICAgICAgICB0aHJlYWQgdGFza3EgICAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFf c3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJl YWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAx MDAwMjQga2VybmVsICAgICAgICAgICBmZnNfdHJpbSB0YXNrcSAgIG1pX3N3aXRjaCsweDEyMiBz bGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1 ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAog ICAgMCAxMDAwMjYga2VybmVsICAgICAgICAgICBrcXVldWUgdGFza3EgICAgIG1pX3N3aXRjaCsw eDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRh c2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUr MHg4IAogICAgMCAxMDAwMzMga2VybmVsICAgICAgICAgICB2bXgwIHRhc2txICAgICAgIG1pX3N3 aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4 MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1w b2xpbmUrMHg4IAogICAgMCAxMDAwNDAga2VybmVsICAgICAgICAgICBDQU0gdGFza3EgICAgICAg IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3Ns ZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3Jr X3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAwNTMga2VybmVsICAgICAgICAgICBzeXN0ZW1fdGFz a3FfMCAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4 M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhh MyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAwNTQga2VybmVsICAgICAgICAgICBzeXN0 
ZW1fdGFza3FfMSAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93 YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4 aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAwNTUga2VybmVsICAgICAgICAg ICBzeXN0ZW1fdGFza3FfMiAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNs ZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBm b3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAwNTYga2VybmVsICAg ICAgICAgICBzeXN0ZW1fdGFza3FfMyAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4 MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsw eDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAxOTgga2Vy bmVsICAgICAgICAgICB6aW9fbnVsbF9pc3N1ZSAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dp dGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRf bG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAx OTkga2VybmVsICAgICAgICAgICB6aW9fbnVsbF9pbnRyICAgIG1pX3N3aXRjaCsweDEyMiBzbGVl cHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90 aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAg MCAxMDAyMDAga2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pc3N1ZV8wIG1pX3N3aXRjaCsweDEy MiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2tx dWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4 IAogICAgMCAxMDAyMDEga2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pc3N1ZV8xIG1pX3N3aXRj aCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFl IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xp bmUrMHg4IAogICAgMCAxMDAyMDIga2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pc3N1ZV8yIG1p X3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVw KzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3Ry YW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMDMga2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pc3N1 
ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2Yg X3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBm b3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMDQga2VybmVsICAgICAgICAgICB6aW9fcmVh ZF9pc3N1ZV80IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0 KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQr MHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMDUga2VybmVsICAgICAgICAgICB6 aW9fcmVhZF9pc3N1ZV81IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVw cV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3Jr X2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMDYga2VybmVsICAgICAg ICAgICB6aW9fcmVhZF9pc3N1ZV82IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTVi IHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDEx YiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMDcga2VybmVs ICAgICAgICAgICB6aW9fcmVhZF9pc3N1ZV83IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNo KzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9v cCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMDgg a2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pbnRyXzAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFf c3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJl YWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAx MDAyMDkga2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pbnRyXzEgIG1pX3N3aXRjaCsweDEyMiBz bGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1 ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAog ICAgMCAxMDAyMTAga2VybmVsICAgICAgICAgICB6aW9fcmVhZF9pbnRyXzIgIG1pX3N3aXRjaCsw eDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRh c2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUr MHg4IAogICAgMCAxMDAyMTEga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaXNzdWVfIG1pX3N3 
aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4 MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1w b2xpbmUrMHg4IAogICAgMCAxMDAyMTIga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaXNzdWVf IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3Ns ZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3Jr X3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMTMga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVf aXNzdWVfIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4 M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhh MyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMTQga2VybmVsICAgICAgICAgICB6aW9f d3JpdGVfaXNzdWVfIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93 YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4 aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMTUga2VybmVsICAgICAgICAg ICB6aW9fd3JpdGVfaXNzdWVfIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNs ZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBm b3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMTYga2VybmVsICAg ICAgICAgICB6aW9fd3JpdGVfaXNzdWVfIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4 MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsw eDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMTcga2Vy bmVsICAgICAgICAgICB6aW9fd3JpdGVfaXNzdWVfIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dp dGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRf bG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAy MTgga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaXNzdWVfIG1pX3N3aXRjaCsweDEyMiBzbGVl cHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90 aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAg MCAxMDAyMTkga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl8wIG1pX3N3aXRjaCsweDEy 
MiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2tx dWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4 IAogICAgMCAxMDAyMjAga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl8xIG1pX3N3aXRj aCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFl IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xp bmUrMHg4IAogICAgMCAxMDAyMjEga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl8yIG1p X3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVw KzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3Ry YW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMjIga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50 cl8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2Yg X3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBm b3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMjMga2VybmVsICAgICAgICAgICB6aW9fd3Jp dGVfaW50cl80IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0 KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQr MHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMjQga2VybmVsICAgICAgICAgICB6 aW9fd3JpdGVfaW50cl81IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVw cV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3Jr X2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMjUga2VybmVsICAgICAg ICAgICB6aW9fd3JpdGVfaW50cl82IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTVi IHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDEx YiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMjYga2VybmVs ICAgICAgICAgICB6aW9fd3JpdGVfaW50cl83IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNo KzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9v cCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMjcg a2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl9oIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFf 
c3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJl YWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAx MDAyMjgga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl9oIG1pX3N3aXRjaCsweDEyMiBz bGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1 ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAog ICAgMCAxMDAyMjkga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl9oIG1pX3N3aXRjaCsw eDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRh c2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUr MHg4IAogICAgMCAxMDAyMzAga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl9oIG1pX3N3 aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4 MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1w b2xpbmUrMHg4IAogICAgMCAxMDAyMzEga2VybmVsICAgICAgICAgICB6aW9fd3JpdGVfaW50cl9o IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3Ns ZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3Jr X3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMzIga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9p c3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4 M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhh MyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMzMga2VybmVsICAgICAgICAgICB6aW9f ZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93 YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4 aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMzQga2VybmVsICAgICAgICAg ICB6aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNs ZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBm b3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMzUga2VybmVsICAg ICAgICAgICB6aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4 
MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsw eDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyMzYga2Vy bmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dp dGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRf bG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAy Mzcga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVl cHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90 aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAg MCAxMDAyMzgga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEy MiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2tx dWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4 IAogICAgMCAxMDAyMzkga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRj aCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFl IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xp bmUrMHg4IAogICAgMCAxMDAyNDAga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8wIG1p X3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVw KzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3Ry YW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNDEga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1 ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2Yg X3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBm b3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNDIga2VybmVsICAgICAgICAgICB6aW9fZnJl ZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0 KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQr MHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNDMga2VybmVsICAgICAgICAgICB6 aW9fZnJlZV9pc3N1ZV8wIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVw 
cV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3Jr X2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNDQga2VybmVsICAgICAg ICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTVi IHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDEx YiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNDUga2VybmVs ICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNo KzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9v cCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNDYg a2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFf c3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJl YWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAx MDAyNDcga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBz bGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1 ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAog ICAgMCAxMDAyNDgga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsw eDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRh c2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUr MHg4IAogICAgMCAxMDAyNDkga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3 aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4 MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1w b2xpbmUrMHg4IAogICAgMCAxMDAyNTAga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8x IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3Ns ZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3Jr X3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNTEga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9p c3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4 
M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhh MyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNTIga2VybmVsICAgICAgICAgICB6aW9f ZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93 YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4 aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNTMga2VybmVsICAgICAgICAg ICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNs ZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBm b3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNTQga2VybmVsICAg ICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4 MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsw eDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNTUga2Vy bmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8xIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dp dGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRf bG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAy NTYga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVl cHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90 aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAg MCAxMDAyNTcga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEy MiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2tx dWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4 IAogICAgMCAxMDAyNTgga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRj aCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFl IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xp bmUrMHg4IAogICAgMCAxMDAyNTkga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1p X3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVw 
KzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3Ry YW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNjAga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1 ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2Yg X3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBm b3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNjEga2VybmVsICAgICAgICAgICB6aW9fZnJl ZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0 KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQr MHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNjIga2VybmVsICAgICAgICAgICB6 aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVw cV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3Jr X2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNjMga2VybmVsICAgICAg ICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTVi IHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDEx YiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNjQga2VybmVs ICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNo KzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9v cCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNjUg a2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFf c3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJl YWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAx MDAyNjYga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsweDEyMiBz bGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1 ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAog ICAgMCAxMDAyNjcga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8yIG1pX3N3aXRjaCsw eDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRh 
c2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUr MHg4IAogICAgMCAxMDAyNjgga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3 aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4 MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1w b2xpbmUrMHg4IAogICAgMCAxMDAyNjkga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8z IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3Ns ZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3Jr X3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNzAga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9p c3N1ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4 M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhh MyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNzEga2VybmVsICAgICAgICAgICB6aW9f ZnJlZV9pc3N1ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93 YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4 aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNzIga2VybmVsICAgICAgICAg ICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNs ZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBm b3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNzMga2VybmVsICAg ICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4 MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsw eDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNzQga2Vy bmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dp dGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRf bG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAy NzUga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVl cHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90 
aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAg MCAxMDAyNzYga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3aXRjaCsweDEy MiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2tx dWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4 IAogICAgMCAxMDAyNzcga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1pX3N3aXRj aCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFl IHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xp bmUrMHg4IAogICAgMCAxMDAyNzgga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV8zIG1p X3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVw KzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBmb3JrX3Ry YW1wb2xpbmUrMHg4IAogICAgMCAxMDAyNzkga2VybmVsICAgICAgICAgICB6aW9fZnJlZV9pc3N1 ZV8zIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0KzB4M2Yg X3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQrMHhhMyBm b3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyODAga2VybmVsICAgICAgICAgICB6aW9fZnJl ZV9pc3N1ZV80IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVwcV93YWl0 KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3JrX2V4aXQr MHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyODEga2VybmVsICAgICAgICAgICB6 aW9fZnJlZV9pc3N1ZV80IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTViIHNsZWVw cV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDExYiBmb3Jr X2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyODIga2VybmVsICAgICAg ICAgICB6aW9fZnJlZV9pc3N1ZV80IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4MTVi IHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9vcCsweDEx YiBmb3JrX2V4aXQrMHhhMyBmb3JrX3RyYW1wb2xpbmUrMHg4IAogICAgMCAxMDAyODMga2VybmVs ICAgICAgICAgICB6aW9fZnJlZV9pc3N1ZV80IG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNo KzB4MTViIHNsZWVwcV93YWl0KzB4M2YgX3NsZWVwKzB4MmFlIHRhc2txdWV1ZV90aHJlYWRfbG9v 
p+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100284 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100285 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100286 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100287 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100288 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100289 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100290 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100291 kernel           zio_free_issue_4 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100292 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100293 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100294 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100295 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100296 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100297 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100298 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100299 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100300 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100301 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100302 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100303 kernel           zio_free_issue_5 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100304 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100305 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100306 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100307 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100308 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100309 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100310 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100311 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100312 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100313 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100314 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100315 kernel           zio_free_issue_6 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100316 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100317 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100318 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100319 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100320 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100321 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100322 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100323 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100324 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100325 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100326 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100327 kernel           zio_free_issue_7 mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100328 kernel           zio_free_intr    mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100329 kernel           zio_claim_issue  mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100330 kernel           zio_claim_intr   mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100331 kernel           zio_ioctl_issue  mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100332 kernel           zio_ioctl_intr   mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100334 kernel           metaslab_group_t mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100335 kernel           metaslab_group_t mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100336 kernel           zfs_vn_rele_task mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100347 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100348 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100349 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100350 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100351 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100352 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100353 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100354 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100355 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100356 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100357 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100358 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100359 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100360 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100361 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100362 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100363 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100364 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100365 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100366 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100367 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    0 100368 kernel           zil_clean        mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae taskqueue_thread_loop+0x11b fork_exit+0xa3 fork_trampoline+0x8
    1 100002 init             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
    2 100027 cam              doneq0           mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae xpt_done_td+0xce fork_exit+0xa3 fork_trampoline+0x8
    2 100041 cam              scanner          mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae xpt_scanner_thread+0xcc fork_exit+0xa3 fork_trampoline+0x8
    3 100031 mpt_recovery0    -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae mpt_recovery_thread+0xd6 fork_exit+0xa3 fork_trampoline+0x8
    4 100038 fdc0             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _sleep+0x282 fdc_thread+0x877 fork_exit+0xa3 fork_trampoline+0x8
    5 100039 sctp_iterator    -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae sctp_iterator_thread+0x9c fork_exit+0xa3 fork_trampoline+0x8
    6 100042 pagedaemon       -                mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _sleep+0x282 vm_pageout+0x2ab fork_exit+0xa3 fork_trampoline+0x8
    7 100043 vmdaemon         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae vm_daemon+0xcf fork_exit+0xa3 fork_trampoline+0x8
    8 100044 pagezero         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _sleep+0x282 vm_pagezero+0xd2 fork_exit+0xa3 fork_trampoline+0x8
    9 100045 bufdaemon        -                mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _sleep+0x282 buf_daemon+0xac fork_exit+0xa3 fork_trampoline+0x8
    9 100343 bufdaemon        / worker         mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _sleep+0x282 softdep_flush+0x1fa fork_exit+0xa3 fork_trampoline+0x8
   10 100001 audit            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _cv_wait+0x182 audit_worker+0xa4 fork_exit+0xa3 fork_trampoline+0x8
   11 100003 idle             idle: cpu0       <running>
   11 100004 idle             idle: cpu1       mi_switch+0x122 sched_idletd+0x3cf fork_exit+0xa3 fork_trampoline+0x8
   11 100005 idle             idle: cpu2       <running>
   11 100006 idle             idle: cpu3       <running>
   12 100007 intr             swi3: vm
   12 100008 intr             swi4: clock      mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100009 intr             swi4: clock
   12 100010 intr             swi4: clock
   12 100011 intr             swi4: clock
   12 100012 intr             swi1: netisr 0   mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100021 intr             swi6: task queue mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100022 intr             swi6: Giant task mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100025 intr             swi5: fast taskq
   12 100028 intr             irq14: ata0
   12 100029 intr             irq15: ata1      mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100030 intr             irq17: mpt0      mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100032 intr             irq256: vmx0     mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100034 intr             irq1: atkbd0     mi_switch+0x122 ithread_loop+0x1b1 fork_exit+0xa3 fork_trampoline+0x8
   12 100035 intr             irq12: psm0
   12 100036 intr             irq7: ppc0
   12 100037 intr             swi0: uart uart
   13 100013 geom             g_event          mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae g_run_events+0x62 fork_exit+0xa3 fork_trampoline+0x8
   13 100014 geom             g_up             mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae g_io_schedule_up+0xd5 g_up_procbody+0x6d fork_exit+0xa3 fork_trampoline+0x8
   13 100015 geom             g_down           mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _sleep+0x2ae g_io_schedule_down+0x5c g_down_procbody+0x6d fork_exit+0xa3 fork_trampoline+0x8
   14 100017 rand_harvestq    -                mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f msleep_spin_sbt+0x1c0 random_kthread+0x2a2 fork_exit+0xa3 fork_trampoline+0x8
   15 100046 vnlru            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _sleep+0x282 vnlru_proc+0xcf fork_exit+0xa3 fork_trampoline+0x8
   16 100047 syncer           -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _cv_wait+0x182 vmem_xalloc+0x16f vmem_alloc+0x5d kmem_malloc+0x3d page_alloc+0x28 keg_alloc_slab+0xe4 keg_fetch_slab+0x16e zone_fetch_slab+0x80 zone_import+0x39 uma_zalloc_arg+0x35a malloc+0x112 softdep_disk_io_initiation+0x978 ffs_geom_strategy+0xe0 ufs_strategy+0xb4 VOP_STRATEGY_APV+0xa1
   35 100057 zfskern          arc_reclaim_thre mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _cv_timedwait_sbt+0x1a7 arc_reclaim_thread+0x3b2 fork_exit+0xa3 fork_trampoline+0x8
   35 100058 zfskern          l2arc_feed_threa mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _cv_timedwait_sbt+0x1a7 l2arc_feed_thread+0x272 fork_exit+0xa3 fork_trampoline+0x8
   35 100333 zfskern          trim storage     mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _cv_timedwait_sbt+0x1a7 trim_thread+0xc9 fork_exit+0xa3 fork_trampoline+0x8
   35 100339 zfskern          txg_thread_enter mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _cv_wait+0x182 txg_quiesce_thread+0x43d fork_exit+0xa3 fork_trampoline+0x8
   35 100340 zfskern          txg_thread_enter mi_switch+0x122 sleepq_switch+0x15b sleepq_timedwait+0x3f _cv_timedwait_sbt+0x1a7 txg_sync_thread+0x28b fork_exit+0xa3 fork_trampoline+0x8
  303 100374 dhclient         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
  339 100371 dhclient         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1a7 seltdwait+0xc1 sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
  340 100373 devd             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1a7 seltdwait+0xc1 kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  419 100380 syslogd          -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  544 100381 vmtoolsd         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1a7 seltdwait+0xc1 sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
  575 100375 ntpd             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  616 100379 sshd             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  619 100370 sendmail         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1a7 seltdwait+0xc1 kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  622 100385 sendmail         -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_sigsuspend+0x137 sys_sigsuspend+0x58 syscall+0x48b Xint0x80_syscall+0x21
  626 100377 cron             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _sleep+0x26b kern_nanosleep+0x14b sys_nanosleep+0x69 syscall+0x48b Xint0x80_syscall+0x21
  668 100048 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  669 100394 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  670 100395 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  671 100396 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  672 100397 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  673 100398 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  674 100399 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  675 100400 getty            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c tty_wait+0x1f ttydisc_read+0x31c ttydev_read+0x8f devfs_read_f+0xbf dofileread+0x9e kern_readv+0x96 sys_read+0x5c syscall+0x48b Xint0x80_syscall+0x21
  676 100401 sshd             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
  678 100388 sshd             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  679 100402 tcsh             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_sigsuspend+0x137 sys_sigsuspend+0x58 syscall+0x48b Xint0x80_syscall+0x21
  681 100382 screen           -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_sigsuspend+0x137 sys_sigsuspend+0x58 syscall+0x48b Xint0x80_syscall+0x21
  682 100392 screen           -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
  683 100403 tcsh             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_sigsuspend+0x137 sys_sigsuspend+0x58 syscall+0x48b Xint0x80_syscall+0x21
24036 100410 sudo             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
24037 100412 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
24086 100428 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
24089 100404 sh               -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
24091 100426 sh               -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
24092 100420 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1a7 seltdwait+0xc1 sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
25199 100430 tcsh             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_sigsuspend+0x137 sys_sigsuspend+0x58 syscall+0x48b Xint0x80_syscall+0x21
27240 100456 top              -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1a7 seltdwait+0xc1 kern_select+0x8c2 sys_select+0x69 syscall+0x48b Xint0x80_syscall+0x21
27452 100465 tcsh             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_sigsuspend+0x137 sys_sigsuspend+0x58 syscall+0x48b Xint0x80_syscall+0x21
29719 100468 sudo             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _cv_wait_sig+0x17c seltdwait+0xcf sys_poll+0x472 syscall+0x48b Xint0x80_syscall+0x21
29723 100476 gstat            -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_timedwait_sig+0x14 _sleep+0x26b kern_nanosleep+0x14b sys_nanosleep+0x69 syscall+0x48b Xint0x80_syscall+0x21
37306 100453 sh               -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
37308 100464 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
37321 100419 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
38075 100345 sh               -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _cv_wait+0x182 vmem_xalloc+0x16f vmem_alloc+0x5d kmem_malloc+0x3d page_alloc+0x28 keg_alloc_slab+0xe4 keg_fetch_slab+0x16e zone_fetch_slab+0x80 zone_import+0x39 uma_zalloc_arg+0x35a malloc+0x112 zfs_kmem_alloc+0x20 zio_buf_alloc+0x54 arc_get_data_buf+0x452 arc_read+0x5bc
65755 100438 sh               -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
65756 100427 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
65767 100429 make             -                mi_switch+0x122 sleepq_switch+0x15b sleepq_catch_signals+0x5be sleepq_wait_sig+0x14 _sleep+0x29b kern_wait6+0x725 sys_wait4+0x94 syscall+0x48b Xint0x80_syscall+0x21
65880 100466 sh               -                mi_switch+0x122 sleepq_switch+0x15b sleepq_wait+0x3f _cv_wait+0x182 vmem_xalloc+0x16f vmem_alloc+0x5d kmem_malloc+0x3d page_alloc+0x28 keg_alloc_slab+0xe4 keg_fetch_slab+0x16e zone_fetch_slab+0x80 zone_import+0x39 uma_zalloc_arg+0x35a malloc+0x11
MiB6ZnNfa21lbV9hbGxvYysweDIwIHppb19yZWFkX2JwX2luaXQrMHhhYSB6aW9fZXhlY3V0ZSsw eDEzOSBhcmNfcmVhZCsweGE4YiAKNzY1NjYgMTAwNDIyIHNoICAgICAgICAgICAgICAgLSAgICAg ICAgICAgICAgICBtaV9zd2l0Y2grMHgxMjIgc2xlZXBxX3N3aXRjaCsweDE1YiBzbGVlcHFfY2F0 Y2hfc2lnbmFscysweDViZSBzbGVlcHFfd2FpdF9zaWcrMHgxNCBfc2xlZXArMHgyOWIga2Vybl93 YWl0NisweDcyNSBzeXNfd2FpdDQrMHg5NCBzeXNjYWxsKzB4NDhiIFhpbnQweDgwX3N5c2NhbGwr MHgyMSAKNzY1NjggMTAwNDU1IG1ha2UgICAgICAgICAgICAgLSAgICAgICAgICAgICAgICBtaV9z d2l0Y2grMHgxMjIgc2xlZXBxX3N3aXRjaCsweDE1YiBzbGVlcHFfY2F0Y2hfc2lnbmFscysweDVi ZSBzbGVlcHFfd2FpdF9zaWcrMHgxNCBfc2xlZXArMHgyOWIga2Vybl93YWl0NisweDcyNSBzeXNf d2FpdDQrMHg5NCBzeXNjYWxsKzB4NDhiIFhpbnQweDgwX3N5c2NhbGwrMHgyMSAKNzY1ODEgMTAw Mzc2IG1ha2UgICAgICAgICAgICAgLSAgICAgICAgICAgICAgICBtaV9zd2l0Y2grMHgxMjIgc2xl ZXBxX3N3aXRjaCsweDE1YiBzbGVlcHFfY2F0Y2hfc2lnbmFscysweDViZSBzbGVlcHFfd2FpdF9z aWcrMHgxNCBfc2xlZXArMHgyOWIga2Vybl93YWl0NisweDcyNSBzeXNfd2FpdDQrMHg5NCBzeXNj YWxsKzB4NDhiIFhpbnQweDgwX3N5c2NhbGwrMHgyMSAKNzY3NjUgMTAwNDM3IHNoICAgICAgICAg ICAgICAgLSAgICAgICAgICAgICAgICBtaV9zd2l0Y2grMHgxMjIgc2xlZXBxX3N3aXRjaCsweDE1 YiBzbGVlcHFfd2FpdCsweDNmIF9jdl93YWl0KzB4MTgyIHZtZW1feGFsbG9jKzB4MTZmIHZtZW1f YWxsb2MrMHg1ZCBrbWVtX21hbGxvYysweDNkIHVtYV9sYXJnZV9tYWxsb2MrMHg0MiBtYWxsb2Mr MHgzZSB6ZnNfa21lbV9hbGxvYysweDIwIHppb19idWZfYWxsb2MrMHg1NCBhcmNfZ2V0X2RhdGFf YnVmKzB4NDUyIGFyY19yZWFkKzB4NWJjIGRidWZfcmVhZCsweDdlMiBkbXVfYnVmX2hvbGQrMHg2 OCB6YXBfbG9ja2RpcisweDUyIHphcF9sb29rdXBfbm9ybSsweDQ5IHphcF9sb29rdXArMHg2YyAK OTkyODIgMTAwNDc3IHNoICAgICAgICAgICAgICAgLSAgICAgICAgICAgICAgICBtaV9zd2l0Y2gr MHgxMjIgc2xlZXBxX3N3aXRjaCsweDE1YiBzbGVlcHFfY2F0Y2hfc2lnbmFscysweDViZSBzbGVl cHFfd2FpdF9zaWcrMHgxNCBfc2xlZXArMHgyOWIga2Vybl93YWl0NisweDcyNSBzeXNfd2FpdDQr MHg5NCBzeXNjYWxsKzB4NDhiIFhpbnQweDgwX3N5c2NhbGwrMHgyMSAKOTkyODMgMTAwNDcyIG1h a2UgICAgICAgICAgICAgLSAgICAgICAgICAgICAgICBtaV9zd2l0Y2grMHgxMjIgc2xlZXBxX3N3 aXRjaCsweDE1YiBzbGVlcHFfY2F0Y2hfc2lnbmFscysweDViZSBzbGVlcHFfd2FpdF9zaWcrMHgx 
NCBfc2xlZXArMHgyOWIga2Vybl93YWl0NisweDcyNSBzeXNfd2FpdDQrMHg5NCBzeXNjYWxsKzB4 NDhiIFhpbnQweDgwX3N5c2NhbGwrMHgyMSAKOTkyOTkgMTAwNDYxIG1ha2UgICAgICAgICAgICAg LSAgICAgICAgICAgICAgICBtaV9zd2l0Y2grMHgxMjIgc2xlZXBxX3N3aXRjaCsweDE1YiBzbGVl cHFfd2FpdCsweDNmIF9zeF94bG9ja19oYXJkKzB4NTMzIF9zeF94bG9jaysweDc5IGJ1Zl9oYXNo X2ZpbmQrMHhjMiBhcmNfcmVhZCsweDdiIGRidWZfcmVhZCsweDdlMiBkbXVfYnVmX2hvbGQrMHg2 OCB6YXBfZ2V0X2xlYWZfYnlibGsrMHg2YSBmemFwX2N1cnNvcl9yZXRyaWV2ZSsweDE4NCB6YXBf Y3Vyc29yX3JldHJpZXZlKzB4MjRiIHpmc19mcmVlYnNkX3JlYWRkaXIrMHgzYjQgVk9QX1JFQURE SVJfQVBWKzB4OTUga2Vybl9nZXRkaXJlbnRyaWVzKzB4MWY4IHN5c19nZXRkaXJlbnRyaWVzKzB4 NDIgc3lzY2FsbCsweDQ4YiBYaW50MHg4MF9zeXNjYWxsKzB4MjEgCjk5ODg3IDEwMDQ0MiB0Y3No ICAgICAgICAgICAgIC0gICAgICAgICAgICAgICAgbWlfc3dpdGNoKzB4MTIyIHNsZWVwcV9zd2l0 Y2grMHgxNWIgc2xlZXBxX2NhdGNoX3NpZ25hbHMrMHg1YmUgc2xlZXBxX3dhaXRfc2lnKzB4MTQg X3NsZWVwKzB4MjliIGtlcm5fc2lnc3VzcGVuZCsweDEzNyBzeXNfc2lnc3VzcGVuZCsweDU4IHN5 c2NhbGwrMHg0OGIgWGludDB4ODBfc3lzY2FsbCsweDIxIAo5OTg5MiAxMDAzNjkgc3VkbyAgICAg ICAgICAgICAtICAgICAgICAgICAgICAgIG1pX3N3aXRjaCsweDEyMiBzbGVlcHFfc3dpdGNoKzB4 MTViIHNsZWVwcV9jYXRjaF9zaWduYWxzKzB4NWJlIHNsZWVwcV93YWl0X3NpZysweDE0IF9jdl93 YWl0X3NpZysweDE3YyBzZWx0ZHdhaXQrMHhjZiBzeXNfcG9sbCsweDQ3MiBzeXNjYWxsKzB4NDhi IFhpbnQweDgwX3N5c2NhbGwrMHgyMSAKOTk4OTMgMTAwMzg5IHByb2NzdGF0ICAgICAgICAgLSAg ICAgICAgICAgICAgICA8cnVubmluZz4gICAgICAgICAgICAgICAgICAgIAo= --001a11c3775c93ac95051c2e7ffe-- From owner-freebsd-fs@freebsd.org Fri Jul 31 21:27:21 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D00219B0C29 for ; Fri, 31 Jul 2015 21:27:21 +0000 (UTC) (envelope-from truckman@FreeBSD.org) Received: from gw.catspoiler.org (cl-1657.chi-02.us.sixxs.net [IPv6:2001:4978:f:678::2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7C8921C48; Fri, 
31 Jul 2015 21:27:21 +0000 (UTC) (envelope-from truckman@FreeBSD.org) Received: from FreeBSD.org (mousie.catspoiler.org [192.168.101.2]) by gw.catspoiler.org (8.13.3/8.13.3) with ESMTP id t6VLRAsE074782; Fri, 31 Jul 2015 14:27:14 -0700 (PDT) (envelope-from truckman@FreeBSD.org) Message-Id: <201507312127.t6VLRAsE074782@gw.catspoiler.org> Date: Fri, 31 Jul 2015 14:27:10 -0700 (PDT) From: Don Lewis Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for minutes in state [*kmem arena] To: kostikbel@gmail.com cc: lev@FreeBSD.org, freebsd-fs@FreeBSD.org In-Reply-To: <20150730121840.GS2072@kib.kiev.ua> MIME-Version: 1.0 Content-Type: TEXT/plain; charset=us-ascii X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 31 Jul 2015 21:27:22 -0000

On 30 Jul, Konstantin Belousov wrote:
> On Thu, Jul 30, 2015 at 02:30:08PM +0300, Lev Serebryakov wrote:
>> Hello Freebsd-fs,
>>
>> I'm migrating my NAS from geom_raid5 + UFS to ZFS raidz. My main storage
>> is 5x2Tb HDDs. Additionally, I have 2x3Tb HDDs attached to hold my data
>> while I re-make my main storage.
>>
>> So, I now have two ZFS pools:
>>
>> ztemp mirror ada0 ada1 [both are 3Tb HDDs]
>> zstor raidz ada3 ada4 ada5 ada6 ada7 [all of them are 2Tb]
>>
>> ztemp contains one filesystem with 2.1Tb of my data. ztemp was populated
>> with my data from the old geom_raid5 + UFS installation via "rsync", and
>> it was FAST (HDD-speed).
>>
>> zstor contains several empty file systems (one per user), like:
>>
>> zstor/home/lev
>> zstor/home/sveta
>> zstor/home/nsvn
>> zstor/home/torrents
>> zstor/home/storage
>>
>> Deduplication IS TURNED OFF. atime is turned off. Record size is set to
>> 1M, as I have a lot of big files (movies, RAW photos from a DSLR, etc). Compression is
>> turned off.
>>
>> When I try to copy all my data from the temporary HDDs (ztemp pool) to my
>> new shiny RAID (zstor pool) with
>>
>> cd /ztemp/fs && rsync -avH lev sveta nsvn storage /usr/home/
>>
>> rsync pauses for tens of minutes (!) after several hundred files. ^T
>> and top show the state "[*kmem arena]". When I stop rsync with ^C and try
>> to do "zfs list", it waits forever, in state "[*kmem arena]" again.
> Show the output of sysctl debug.vmem_check.
>
>> This server is equipped with 6GiB of RAM.
>>
>> It looks like FreeBSD contained a bug about a year ago which led to this
>> behavior, but the mailing lists say that it was fixed in r272221, 10 months ago.

I think I may have gotten bitten by this yesterday on a fairly recent 10.2-PRERELEASE machine with 8 GB of RAM. It's nominally a zfs-only machine, but I had some data on a couple of UFS drives that I needed to copy over to a zfs filesystem. I connected one of the drives to a SATA-to-USB adapter and plugged it into the machine, then ran rsync to transfer the contents of a ~100 GB filesystem. I had a number of active programs running, including a rather bloated firefox process that had gobbled lots of ram.

In my case, ARC stayed small (< 1 GB), inactive memory was a couple of GB, and several GB of data got pushed to swap. Free memory got very low, bouncing around in the tens of MB for a while before the machine locked up. It wasn't totally dead, because my X11 desktop is configured in focus-follows-mouse mode and I could see the window focus change when I moved the mouse around. Eventually I did something that provoked the window manager and/or the Xorg server into locking up as well. I wasn't able to switch to console mode. I eventually gave up and hit the reset button.
%sysctl debug.vmem_check
sysctl: unknown oid 'debug.vmem_check': No such file or directory

With the same set of processes running, but no UFS, this is what top says about memory usage:

Mem: 1156M Active, 3403M Inact, 1682M Wired, 31M Cache, 1631M Free
ARC: 1129M Total, 588M MFU, 492M MRU, 54K Anon, 10M Header, 39M Other
Swap: 40G Total, 40G Free

From owner-freebsd-fs@freebsd.org Sat Aug 1 00:37:20 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 10C439B0C7A for ; Sat, 1 Aug 2015 00:37:20 +0000 (UTC) (envelope-from quartz@sneakertech.com) Received: from douhisi.pair.com (douhisi.pair.com [209.68.5.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E918F14A5 for ; Sat, 1 Aug 2015 00:37:19 +0000 (UTC) (envelope-from quartz@sneakertech.com) Received: from [10.2.2.1] (pool-173-48-121-235.bstnma.fios.verizon.net [173.48.121.235]) by douhisi.pair.com (Postfix) with ESMTPSA id 1108D3F71D for ; Fri, 31 Jul 2015 20:37:12 -0400 (EDT) Message-ID: <55BC14B7.9010009@sneakertech.com> Date: Fri, 31 Jul 2015 20:37:11 -0400 From: Quartz MIME-Version: 1.0 To: FreeBSD FS Subject: ZFS: Disabling ARC? Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 00:37:20 -0000

Can someone help clear up a few ZFS basics for me?

A few recent threads about ARC issues and memory-induced panics have made me realize I'm not 100% sure I understand ARC as well as I thought I did.

Say you have a ZFS file server that houses very large single files which are very infrequently accessed.
For the sake of argument, let's say you're using ZFS on a home server for your family, and it holds exclusively a whole bunch of multi-gig bluray rips or whatever (nothing else). When someone wants to watch something, they copy the file to their desktop and watch it there. Although the family will watch several videos each day, any given file will only be accessed maybe once every couple months. (I know streaming would make more sense in real life, and that this example is kinda silly in general, but ignore that for now). If I understand ARC correctly this would be a worst case scenario, right? Besides hogging ram, would ARC cause any problems here? Would disabling ARC and devoting the ram to other things be a wise idea? Is disabling ARC ever a wise idea? From owner-freebsd-fs@freebsd.org Sat Aug 1 11:04:44 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EFE799B0E09 for ; Sat, 1 Aug 2015 11:04:44 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [46.4.40.135]) by mx1.freebsd.org (Postfix) with ESMTP id ADEC310CB for ; Sat, 1 Aug 2015 11:04:44 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 5C8FD2040 for ; Sat, 1 Aug 2015 14:04:36 +0300 (MSK) Date: Sat, 1 Aug 2015 14:04:29 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <795246861.20150801140429@serebryakov.spb.ru> To: FreeBSD FS Subject: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client? 
MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 11:04:45 -0000

Hello FreeBSD,

I had "/usr/home" UFS exported to several hosts (all of them are FreeBSD), and it worked as intended: a remote host mounted "server:/usr/home" and got all the user home dirs.

Now I converted "/usr/home" to ZFS and created one FS per user (so there are now FSes "zhome/lev", "zhome/sveta", etc., on pool "zhome").

When a client mounts "server:/usr/home" now, it gets all the user directories, but all of them are empty, because NFS sees every user home dir as a different FS!

How can I export this whole tree in one piece now? I don't want to have multiple NFS mounts (one per user) on each host that needs home directories.

-- Best regards, Lev mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 11:21:26 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id DC71B9B01C2 for ; Sat, 1 Aug 2015 11:21:26 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 8483C18C7; Sat, 1 Aug 2015 11:21:26 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2AHAwB0q7xV/61jaINbGQEBAYNSaQaDHbkhCYFaIAqFL0oCgVoUAQEBAQEBAYEKhCQBAQEDAQEBICsgCxACAQgYAgINFgMCAicBCRURAgwHBAEcBIgNDbJDlXUBAQEHAQEBAQEdgSKKLYQ2AQEFFzQHF4JSgUMFlHmEe4RzhGuXOQImgj+BWiIxB4EHOoEEAQEB X-IronPort-AV: E=Sophos;i="5.15,591,1432612800"; d="scan'208";a="228605875" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with
ESMTP; 01 Aug 2015 07:21:11 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 0340C15F542; Sat, 1 Aug 2015 07:21:11 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id 4ADdDVSF945a; Sat, 1 Aug 2015 07:21:10 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 8300915F55D; Sat, 1 Aug 2015 07:21:10 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id d0EfNklRCV2S; Sat, 1 Aug 2015 07:21:10 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 6879215F542; Sat, 1 Aug 2015 07:21:10 -0400 (EDT) Date: Sat, 1 Aug 2015 07:21:10 -0400 (EDT) From: Rick Macklem To: lev@FreeBSD.org Cc: FreeBSD FS Message-ID: <1363497421.7238055.1438428070047.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <795246861.20150801140429@serebryakov.spb.ru> References: <795246861.20150801140429@serebryakov.spb.ru> Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client? MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client? 
Thread-Index: 9YIBfPvqQLmueClj3iWjhTuEvg65+w== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 11:21:27 -0000

Lev wrote:
> Hello FreeBSD,
>
> I had "/usr/home" UFS exported to several hosts (all of them are FreeBSD),
> and it worked as intended: a remote host mounted "server:/usr/home" and got
> all the user home dirs.
>
> Now I converted "/usr/home" to ZFS and created one FS per user (so there are
> now FSes "zhome/lev", "zhome/sveta", etc., on pool "zhome").
>
> When a client mounts "server:/usr/home" now, it gets all the user directories,
> but all of them are empty, because NFS sees every user home dir as a different FS!
>
> How can I export this whole tree in one piece now? I don't want to have
> multiple NFS mounts (one per user) on each host that needs home
> directories.
>
To mount multiple file systems as one mount, you'll need to use NFSv4. I believe you will have to have a separate export entry on the server for each of the file systems.
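For illustration, a sketch of such an /etc/exports (the paths, dataset layout, and network values are hypothetical, not taken from this thread):

```shell
# Hypothetical /etc/exports for a /usr/home tree with one ZFS filesystem per
# user: NFSv3 clients need one line per filesystem, while the single V4: line
# lets an NFSv4 client reach the whole tree through one mount.
/usr/home/lev   -maproot=root -network 192.168.1.0 -mask 255.255.255.0
/usr/home/sveta -maproot=root -network 192.168.1.0 -mask 255.255.255.0
V4: /usr/home -sec=sys
```

With a setup along these lines, an NFSv4 client mounts paths relative to the V4: root, e.g. `mount -t nfs -o nfsv4 server:/ /usr/home`, and the per-user filesystem boundaries are crossed on the server side.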
rick > -- > Best regards, > Lev mailto:lev@FreeBSD.org > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Sat Aug 1 11:31:02 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 87CB09B0505 for ; Sat, 1 Aug 2015 11:31:02 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id 495501B5D for ; Sat, 1 Aug 2015 11:31:02 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id 125DC204C; Sat, 1 Aug 2015 14:31:00 +0300 (MSK) Date: Sat, 1 Aug 2015 14:30:52 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <1593307781.20150801143052@serebryakov.spb.ru> To: Rick Macklem CC: FreeBSD FS Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client? 
In-Reply-To: <1363497421.7238055.1438428070047.JavaMail.zimbra@uoguelph.ca> References: <795246861.20150801140429@serebryakov.spb.ru> <1363497421.7238055.1438428070047.JavaMail.zimbra@uoguelph.ca> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 11:31:02 -0000 Hello Rick, Saturday, August 1, 2015, 2:21:10 PM, you wrote: > To mount multiple file systems as one mount, you'll need to use NFSv4. I believe > you will have to have a separate export entry in the server for each of the file > systems. So, /etc/exports needs to have BOTH v3-style exports & V4: root of tree line? -- Best regards, Lev mailto:lev@FreeBSD.org From owner-freebsd-fs@freebsd.org Sat Aug 1 11:36:54 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 8C27B9B06E2 for ; Sat, 1 Aug 2015 11:36:54 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) (using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 18F2F1EB1 for ; Sat, 1 Aug 2015 11:36:53 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from outgoing.leidinger.net (p549CD545.dip0.t-ipconnect.de [84.156.213.69]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id E02E483E402; Sat, 1 Aug 2015 13:36:35 +0200 (CEST) Received: from localhost (Titan.Leidinger.net [192.168.1.17]) by outgoing.leidinger.net (Postfix) with ESMTP id 4C5252B75; Sat, 1 Aug 
2015 13:36:33 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=leidinger.net; s=outgoing-alex; t=1438428993; bh=GVXm/WSJeT+xWAxY61n9W4o0g5pCucDYuT6YX3cVQ2Q=; h=Date:From:To:Cc:Subject:In-Reply-To:References; b=0/cvWu590QHf/J4EL4IGVxOV6iZIo3L0dihfUcxqjLq1RfKlua2sH4O6XLzgvrkei bJzPd6ZobU6guoluMgPKp/DlzGnHgnFKe13A0bdZDPTyfo1XRex8h5aWomNX43Bz8r /aJD345Udr/Ukdv+KPsj45m+3kELmehHr3F9GR2aecPeZq6mJsEpgIc7mf3A4c1XkM 2FXoWeqyAuDLhDAmgSxlE9+e2hZXuZV6xdD92f7XabyvmCudjGYNwTcSb41qSS2gTM SYgW7dE8SqpRX89MLqE1JfCN+mgBG7/coPDjnPtMMrfKq7nwrxglkNeYmZjNeMQjsR Y/7nZazUpP9Fw== Date: Sat, 1 Aug 2015 13:36:35 +0200 From: Alexander Leidinger To: Quartz Cc: FreeBSD FS Subject: Re: ZFS: Disabling ARC? Message-ID: <20150801133635.00002ecc@Leidinger.net> In-Reply-To: <55BC14B7.9010009@sneakertech.com> References: <55BC14B7.9010009@sneakertech.com> X-Mailer: Claws Mail 3.10.1 (GTK+ 2.16.6; i586-pc-mingw32msvc) MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: E02E483E402.A3B83 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-1.023, required 6, autolearn=disabled, ALL_TRUSTED -1.00, DKIM_SIGNED 0.10, DKIM_VALID -0.10, DKIM_VALID_AU -0.10, TW_ZF 0.08) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1439033799.74557@uWyi+f9Z7RPRRqyA/EYoPg X-EBL-Spam-Status: No X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 11:36:54 -0000 On Fri, 31 Jul 2015 20:37:11 -0400 Quartz wrote: > Can someone help clear up a few ZFS basics for me? > > A few recent threads about ARC issues and memory-induced panics have > made me realize I'm not 100% sure I understand ARC as well as I > thought I did. 
>
> Say you have a ZFS file server that houses very large single files
> which are very infrequently accessed. For the sake of argument, let's
> say you're using ZFS on a home server for your family, and it holds
> exclusively a whole bunch of multi-gig bluray rips or whatever
> (nothing else). When someone wants to watch something, they copy the
> file to their desktop and watch it there. Although the family will
> watch several videos each day, any given file will only be accessed
> maybe once every couple months. (I know streaming would make more
> sense in real life, and that this example is kinda silly in general,
> but ignore that for now).

No matter whether you stream or copy, it's the same operation: a read once in a while.

> If I understand ARC correctly this would be a worst case scenario,
> right? Besides hogging ram, would ARC cause any problems here? Would
> disabling ARC and devoting the ram to other things be a wise idea? Is
> disabling ARC ever a wise idea?

You can tune how the ARC is used:

# zfs get all space/export/Movies | grep cache
space/export/Movies  primarycache    metadata  local
space/export/Movies  secondarycache  none      local

"primarycache" is the ARC in RAM; "secondarycache" is a cache device / L2ARC (SSD). "metadata" covers directory listings, file sizes, access permissions, and the like. So the example above means that metadata is allowed to go into the ARC in RAM, while none of the actual data in this dataset will be cached anywhere at all (neither in a cache device nor in RAM).
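The tuning described above can be applied with `zfs set`; the dataset name `tank/media` here is a placeholder, not one from this thread:

```shell
# Cache only metadata in RAM for a dataset of large, rarely re-read files.
zfs set primarycache=metadata tank/media
# Keep this dataset's data off any L2ARC cache device as well.
zfs set secondarycache=none tank/media
# Confirm the settings took effect.
zfs get primarycache,secondarycache tank/media
```

Both properties also accept `all` (the default) and `none`, so the same mechanism can disable caching for one dataset entirely without touching the rest of the pool.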
-- http://www.Leidinger.net Alexander@Leidinger.net: PGP 0xC773696B3BAC17DC http://www.FreeBSD.org netchild@FreeBSD.org : PGP 0xC773696B3BAC17DC From owner-freebsd-fs@freebsd.org Sat Aug 1 12:01:31 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9734D9AE01A for ; Sat, 1 Aug 2015 12:01:31 +0000 (UTC) (envelope-from email.ahmedkamal@googlemail.com) Received: from mail-wi0-x22d.google.com (mail-wi0-x22d.google.com [IPv6:2a00:1450:400c:c05::22d]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 3DFA3CF1; Sat, 1 Aug 2015 12:01:31 +0000 (UTC) (envelope-from email.ahmedkamal@googlemail.com) Received: by wibxm9 with SMTP id xm9so64861854wib.0; Sat, 01 Aug 2015 05:01:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type; bh=0Zfhbfek68M7qpRGgLzn9UC8EPE/B5v6ot49xwuiDkI=; b=Zzjrx1D8g2ty+OhyuD+H2za4jgUO5Q+bb9zeKdXTdD6EemgpZF10040Zfj16Qt7+8L qMSTXlEELGc88etYLN65hEAMsYIQjWCaFZuLPSPpMluIUm+Z9MuJr8waGs1dvEzfwrP5 4U85pxV41ZoTVZ1/Wpy3bIed8GHBrj05RzpFfe88LzQ5qKwdiFzGZ9W3sMpVHzCOvgH6 jc1yCNApEVC/yttKhIK8BQhR6DXpXUwpR3N/9Mm8j+H125+aYsS7KYsQoxUlxCzx1a3J kRJTAumpuxvhDudFk0dEIzTpApOOTZKUyh77AQl8B7Gm3zk5g9YW4HRC82fabDlGIq7H 2erQ== X-Received: by 10.194.184.232 with SMTP id ex8mr6431442wjc.42.1438430489193; Sat, 01 Aug 2015 05:01:29 -0700 (PDT) MIME-Version: 1.0 Received: by 10.28.6.143 with HTTP; Sat, 1 Aug 2015 05:01:09 -0700 (PDT) In-Reply-To: <1593307781.20150801143052@serebryakov.spb.ru> References: <795246861.20150801140429@serebryakov.spb.ru> <1363497421.7238055.1438428070047.JavaMail.zimbra@uoguelph.ca> <1593307781.20150801143052@serebryakov.spb.ru> From: 
Ahmed Kamal Date: Sat, 1 Aug 2015 14:01:09 +0200 Message-ID: Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client? To: lev@freebsd.org Cc: Rick Macklem , FreeBSD FS Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 12:01:31 -0000

My setup is:

# cat /etc/exports
V4: / -sec=sys
# zfs get sharenfs sitank/home
NAME         PROPERTY  VALUE  SOURCE
sitank/home  sharenfs  on     local

and that's it .. All children zfs datasets under sitank/home inherit the sharenfs=on property and life works. The integration between zfs and sharing nfs is not too smooth .. so ensure it's the last step you do

On Sat, Aug 1, 2015 at 1:30 PM, Lev Serebryakov wrote:
> Hello Rick,
>
> Saturday, August 1, 2015, 2:21:10 PM, you wrote:
>
> > To mount multiple file systems as one mount, you'll need to use NFSv4. I believe
> > you will have to have a separate export entry in the server for each of the file
> > systems.
> So, /etc/exports needs to have BOTH v3-style exports & V4: root of tree
> line?
> > -- > Best regards, > Lev mailto:lev@FreeBSD.org > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Sat Aug 1 12:03:52 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2DB869AE191 for ; Sat, 1 Aug 2015 12:03:52 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from onlyone.friendlyhosting.spb.ru (onlyone.friendlyhosting.spb.ru [IPv6:2a01:4f8:131:60a2::2]) by mx1.freebsd.org (Postfix) with ESMTP id DD24CDEA for ; Sat, 1 Aug 2015 12:03:51 +0000 (UTC) (envelope-from lev@FreeBSD.org) Received: from lion.home.serebryakov.spb.ru (unknown [IPv6:2001:470:923f:1:2924:7e01:7d9c:bbfe]) (Authenticated sender: lev@serebryakov.spb.ru) by onlyone.friendlyhosting.spb.ru (Postfix) with ESMTPSA id AE5572053 for ; Sat, 1 Aug 2015 15:03:50 +0300 (MSK) Date: Sat, 1 Aug 2015 15:03:43 +0300 From: Lev Serebryakov Reply-To: lev@FreeBSD.org Organization: FreeBSD X-Priority: 3 (Normal) Message-ID: <130767529.20150801150343@serebryakov.spb.ru> To: FreeBSD FS Subject: Multiple entries in ZFS "sharenfs" property? MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 01 Aug 2015 12:03:52 -0000 Hello FreeBSD, Is it possible to put multiple entries (for multiple networks) into "sharenfs" property for ZFS filesystem? 
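For context, FreeBSD passes the sharenfs value to mountd as an exports(5)-style option string, so the single-network form looks like this (the dataset name and network here are hypothetical):

```shell
# One exports(5) option string per dataset; whether several network specs
# with *different* options can be combined in this one property value is
# exactly the open question above.
zfs set sharenfs="-maproot=root -network 192.168.1.0 -mask 255.255.255.0" zhome/home
```
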
--
Best regards,
Lev                          mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 12:07:50 2015
From: Lev Serebryakov
Reply-To: lev@FreeBSD.org
Date: Sat, 1 Aug 2015 15:07:41 +0300
Message-ID: <1402689794.20150801150741@serebryakov.spb.ru>
To: Ahmed Kamal via freebsd-fs
Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client?
In-Reply-To: <795246861.20150801140429@serebryakov.spb.ru>

Hello Ahmed,

Saturday, August 1, 2015, 3:01:09 PM, you wrote:

> My setup is:
>
>   # cat /etc/exports
>   V4: / -sec=sys
>
>   # zfs get sharenfs sitank/home
>   NAME         PROPERTY  VALUE  SOURCE
>   sitank/home  sharenfs  on     local
>
> and that's it. All children ZFS datasets under sitank/home inherit the
> sharenfs=on property and life works.
> The integration between ZFS and sharing NFS is not too smooth, so ensure
> it's the last step you do.

Problem is, I need different "maproot" and "ro/rw" options for different
networks. Looks like I need to mention each filesystem (for each network!)
manually in /etc/exports.

--
Best regards,
Lev                          mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 12:23:20 2015
Date: Sat, 1 Aug 2015 08:23:12 -0400 (EDT)
From: Rick Macklem
To: lev@FreeBSD.org
Cc: FreeBSD FS
Message-ID: <2119966434.7262831.1438431792809.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <1593307781.20150801143052@serebryakov.spb.ru>
Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client?

Lev wrote:
> Hello Rick,
>
> Saturday, August 1, 2015, 2:21:10 PM, you wrote:
>
> > To mount multiple file systems as one mount, you'll need to use NFSv4.
> > I believe you will have to have a separate export entry in the server
> > for each of the file systems.
> So, /etc/exports needs to have BOTH v3-style exports & a "V4:" root-of-tree
> line?
>
Yes, if you are doing it in /etc/exports. (The "V4:" line does not export
any file systems.) I know nothing about ZFS or the sharenfs property it
has, so others will have to chime in on that.
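[To make the answer above concrete, here is a sketch of an /etc/exports combining per-filesystem v3-style lines — repeated per network, each with its own options, as Lev needs — with the NFSv4 root line. Paths, networks, and options are illustrative, not from the thread:

```
# v3-style export lines: one per filesystem, repeated for each client
# network, each with its own options:
/tank/home  -maproot=root -network 192.168.1.0 -mask 255.255.255.0
/tank/home  -ro           -network 10.0.0.0    -mask 255.255.0.0
/tank/media -ro           -network 192.168.1.0 -mask 255.255.255.0

# NFSv4 root of the exported tree; this line by itself exports nothing:
V4: / -sec=sys
```

Each exported ZFS dataset needs its own line because NFS exports stop at filesystem boundaries.]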
rick

> --
> Best regards,
> Lev                          mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 13:07:31 2015
From: Ahmed Kamal
Date: Sat, 1 Aug 2015 15:07:09 +0200
To: lev@freebsd.org
Cc: Ahmed Kamal via freebsd-fs
Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client?
In-Reply-To: <1402689794.20150801150741@serebryakov.spb.ru>

Check this bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=147881

AFAICT, what you need is not really possible today, although this bug has a
couple of patches that solve this limitation. No idea why no one picks up
those gems and merges them :)

On Sat, Aug 1, 2015 at 2:07 PM, Lev Serebryakov wrote:
> Hello Ahmed,
>
> Saturday, August 1, 2015, 3:01:09 PM, you wrote:
>
> > My setup is:
> >
> >   # cat /etc/exports
> >   V4: / -sec=sys
> >
> >   # zfs get sharenfs sitank/home
> >   NAME         PROPERTY  VALUE  SOURCE
> >   sitank/home  sharenfs  on     local
> >
> > and that's it. All children ZFS datasets under sitank/home inherit the
> > sharenfs=on property and life works. The integration between ZFS and
> > sharing NFS is not too smooth, so ensure it's the last step you do.
> Problem is, I need different "maproot" and "ro/rw" options for different
> networks. Looks like I need to mention each filesystem (for each network!)
> manually in /etc/exports.
> --
> Best regards,
> Lev                          mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 15:12:06 2015
Message-ID: <55BCE1C3.50307@sneakertech.com>
Date: Sat, 01 Aug 2015 11:12:03 -0400
From: Quartz
To: FreeBSD FS
Subject: Re: ZFS: Disabling ARC?
In-Reply-To: <20150801133635.00002ecc@Leidinger.net>

> You can tune how the ARC is used:

I know *how* to tune it; what I don't know is *when*. Basically, under what
sorts of use cases would I want to mess with it, and why? Given my contrived
media-server example, would adjusting the ARC for those datasets make a
noticeable difference either way?
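[One per-dataset knob that fits the media-server case is the caching policy, and the ARC hit/miss counters let you check whether a change helps. A hedged command sketch — the dataset name "tank/media" is made up:

```
# Watch ARC effectiveness while the workload runs (FreeBSD):
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# Large streaming files are rarely re-read, so caching only metadata for
# that dataset may leave more ARC for data that benefits from it:
zfs set primarycache=metadata tank/media

# Revert to the inherited default if it makes no measurable difference:
zfs inherit primarycache tank/media
```

Re-running the workload and comparing the hit/miss counters before and after is the "measure it" step suggested in the reply below.]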
From owner-freebsd-fs@freebsd.org Sat Aug 1 16:06:20 2015
Date: Sat, 1 Aug 2015 09:06:18 -0700
From: Artem Belevich
To: Quartz
Cc: FreeBSD FS
Subject: Re: ZFS: Disabling ARC?
In-Reply-To: <55BCE1C3.50307@sneakertech.com>

On Sat, Aug 1, 2015 at 8:12 AM, Quartz wrote:
> > You can tune how the ARC is used:
>
> I know *how* to tune it; what I don't know is *when*. Basically, under
> what sorts of use cases would I want to mess with it, and why? Given my
> contrived media-server example, would adjusting the ARC for those datasets
> make a noticeable difference either way?

Measure it. Find something that matters to you and see whether particular
ARC parameters make any difference in *your* workload.

--Artem

From owner-freebsd-fs@freebsd.org Sat Aug 1 17:29:53 2015
From: Lev Serebryakov
Reply-To: lev@FreeBSD.org
Date: Sat, 1 Aug 2015 20:29:43 +0300
Message-ID: <177858616.20150801202943@serebryakov.spb.ru>
To: Ahmed Kamal via freebsd-fs
Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client?
Hello Ahmed,

Saturday, August 1, 2015, 4:07:09 PM, you wrote:

> Check this bug report:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=147881
> AFAICT, what you need is not really possible today ..

It works if you manually add all FSes to /etc/exports (not
/etc/zfs/exports).

> Although this bug has a couple of patches that solve this limitation.
> No idea why no one picks up those gems and merges them :)

And these patches allow doing this trick via the ZFS property!
--
Best regards,
Lev                          mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 18:06:31 2015
From: Lev Serebryakov
Reply-To: lev@FreeBSD.org
Date: Sat, 1 Aug 2015 21:06:20 +0300
Message-ID: <704572885.20150801210620@serebryakov.spb.ru>
To: Rick Macklem
Cc: FreeBSD FS
Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client?
In-Reply-To: <2119966434.7262831.1438431792809.JavaMail.zimbra@uoguelph.ca>

Hello Rick,

Saturday, August 1, 2015, 3:23:12 PM, you wrote:

> Yes, if you are doing it in /etc/exports. (The "V4:" line does not export
> any file systems.) I know nothing about ZFS or the sharenfs property it
> has, so others will have to chime in on that.

And how does one disable NFSv3 on the server in such a case?

--
Best regards,
Lev                          mailto:lev@FreeBSD.org

From owner-freebsd-fs@freebsd.org Sat Aug 1 20:49:58 2015
Date: Sat, 1 Aug 2015 16:49:56 -0400 (EDT)
From: Rick Macklem
To: lev@FreeBSD.org
Cc: FreeBSD FS
Message-ID: <2088541681.7479791.1438462196036.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <704572885.20150801210620@serebryakov.spb.ru>
Subject: Re: NFS & ZFS: how to export whole FS hierarhy to mount it with one command on client?

Lev wrote:
> Hello Rick,
>
> Saturday, August 1, 2015, 3:23:12 PM, you wrote:
>
> > Yes, if you are doing it in /etc/exports. (The "V4:" line does not
> > export any file systems.) I know nothing about ZFS or the sharenfs
> > property it has, so others will have to chime in on that.
> And how does one disable NFSv3 on the server in such a case?
>
Set

  sysctl vfs.nfsd.server_min_nfsvers=4

before the nfsd daemon is started. The FreeBSD client will then require an
explicit "vers=4" option (or "nfsv4" option) on the mount. I don't know
what Linux does by default; "nfsstat -m" on the client tells you what it
is actually using.

rick

> --
> Best regards,
> Lev                          mailto:lev@FreeBSD.org
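[To make the suggestion above survive a reboot, the sysctl can go in the server's /etc/sysctl.conf; the server name and mount point below are illustrative:

```
# /etc/sysctl.conf on the server -- refuse NFSv3 and older:
vfs.nfsd.server_min_nfsvers=4

# On a FreeBSD client, NFSv4 must then be requested explicitly:
#   mount -t nfs -o nfsv4 server:/ /mnt
# Afterwards, "nfsstat -m" on the client confirms which NFS version
# the mount is actually using.
```

Since the "V4:" root in this thread's /etc/exports is "/", the client mounts the tree relative to that root.]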