From: rainer@ultra-secure.de
To: Fabian Keil
Cc: FreeBSD Filesystems, owner-freebsd-fs@freebsd.org
Date: Tue, 17 May 2016 11:08:14 +0200
Subject: Re: zfs receive stalls whole system
In-Reply-To: <20160517102757.135c1468@fabiankeil.de>

On 2016-05-17 10:27, Fabian Keil wrote:
> Rainer Duffner wrote:
>
>> I have two servers that were running FreeBSD 10.1-AMD64 for a long
>> time, one zfs-sending to the other (via zxfer). Both are NFS servers
>> and MySQL slaves; the sender is actively used as an NFS server, the
>> recipient is just a warm standby, in case something serious happens
>> and we don’t want to wait for a day until the restore is back in
>> place. The MySQL slaves are actively used as read-only servers (at
>> the application level; Python’s SQLAlchemy does that, apparently).
>>
>> They are HP DL380 G8s (one CPU, hexacore) with over 128 GB RAM (I
>> think one has 144, the other 192). While they were running 10.1,
>> they used HP P420 RAID controllers with 12 individual RAID0 volumes
>> that I pooled into two 6-disk RAIDZ2 vdevs. I use zfsnap to do
>> hourly, daily and weekly snapshots.
> [...]
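For context, the snapshots and the replication run from cron and look
roughly like this (flags and schedule quoted from memory; "standby" and
the pool names are placeholders, not my real setup):

    # hourly zfsnap snapshots on the sender, expired after a week
    0 * * * * root /usr/local/sbin/zfSnap -a 1w -r tank

    # push the datasets (and their snapshots) to the warm standby
    30 * * * * root /usr/local/sbin/zxfer -dFkPv -T root@standby -R tank tank
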
>> Now, when I do a zxfer, sometimes the whole system stalls while the
>> data is sent over, especially if the delta is large or if something
>> else is reading from the disk at the same time (the backup agent).
>>
>> I had this before, on 10.0 (I believe; we didn’t have it in 9.1
>> either, IIRC), and it went away in 10.1.
>
> Do you use geli for swap device(s)?

Yes, I do:

    /dev/mirror/swap.eli    none    swap    sw    0    0

Bad idea?

>> It’s very difficult (well, impossible) to debug, because the system
>> totally hangs and doesn’t accept any keypresses.
>
> You could try reducing ZFS's deadman timeout to get a panic.
> On systems with local disks I usually use:
>
> vfs.zfs.deadman_enabled: 1
> vfs.zfs.deadman_checktime_ms: 5000
> vfs.zfs.deadman_synctime_ms: 10000

Too bad I don't have a spare system I could use to test this ;-)
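For the record, in case I ever get a window to try it: my understanding
is that these are loader tunables, so they would go into
/boot/loader.conf on the receiving box, along these lines:

    # panic (instead of hanging forever) if a ZFS I/O is stuck too long
    vfs.zfs.deadman_enabled="1"
    vfs.zfs.deadman_checktime_ms="5000"
    vfs.zfs.deadman_synctime_ms="10000"

(vfs.zfs.deadman_enabled should also be flippable at runtime via
sysctl; the two *_ms values only take effect at boot, AFAIK.)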