From owner-freebsd-fs@freebsd.org Sun Nov 29 05:17:22 2015
From: "Mikhail T." <mi+thun@aldan.algebra.com>
Date: Sun, 29 Nov 2015 00:16:46 -0500
To: Jilles Tjoelker
Cc: stable@freebsd.org, freebsd-fs
Subject: Re: cp from NFS to ZFS hung in "fifoor"

On 28.11.2015 17:41, Jilles Tjoelker wrote:
> Although cp -R will normally copy a fifo by calling mkfifo at the
> destination, it may open one if a regular file is replaced with a fifo
> between the time it reads the directory and it copies that file.

The sole fifo under /home here was mi/.licq/licq_fifo, created in 2003.
I echoed something into it (on the NFS-client side) and the cp-process
resumed.

I then performed a simple test:

1. Create a fifo in an NFS-exported directory and try to copy it with
   the -R flag:

     mi@narawntapu:/cache/src (792) mkfifo /green/tmp/test
     mi@narawntapu:/cache/src (793) cp -Rpn /green/tmp/test /tmp/
     mi@narawntapu:/cache/src (794) ls -l /tmp/test
     prw-r--r--  1 mi  wheel  0 29 лис 00:05 /tmp/test

   The above worked fine.

2. Now, when I try to do the same thing via an NFS mount, I get the
   same hang in fifoor:

     root@aldan:ports/x11/kde4 (475) cp -Rpn /green/tmp/test /tmp/
     load: 0.42  cmd: cp 38299 [fifoor] 1.15r 0.00u 0.00s 0% 1868k

So, the good news is, this is not ZFS' fault. The bad news is, there is
still a bug... Unless, of course, this is some known "feature" of the NFS...
Compare, for example, how stat(1) describes the same named pipe from both
machines:

Local FS:
    92 74636334 prw-r--r-- 1 mi wheel 0 0 "Nov 29 00:05:51 2015"
    "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015"
    16384 0 0 /green/tmp/test
NFS-client:
    973143811 74636334 ?rw-r--r-- 1 mi wheel 4294967295 0
    "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015"
    "Dec 31 18:59:59 1969" 16384 0 0 /green/tmp/test

That question-mark in the node-type (instead of the "p") is, I guess,
what confuses cp into trying to read from it instead of creating a fifo.
Should I file a PR? Thank you!

    -mi

From owner-freebsd-fs@freebsd.org Sun Nov 29 09:22:41 2015
From: Konstantin Belousov <kostikbel@gmail.com>
Date: Sun, 29 Nov 2015 11:22:35 +0200
To: Rick Macklem
Cc: FreeBSD FS
Subject: Re: should mutexes be uniquely named?

On Sat, Nov 28, 2015 at 04:40:44PM -0500, Rick Macklem wrote:
> Kostik wrote:
> > On Sat, Nov 28, 2015 at 08:29:55AM -0500, Rick Macklem wrote:
> > > Hi,
> > >
> > > I think the patches I posted last week that add "-manage-gids" are
> > > about ready for a commit to head.
> > >
> > > However, there is one place in the code where I'm not sure which is
> > > better to do:
> > > --> The code replaces a single mutex with one for each hash list
> > >     head (table entry).
> > > I currently use MTX_DUPOK and call them all the same thing.
> > > or
> > > I could add a "lockname" field to the hash table entry structure and
> > > give each one a unique name (similar to what Garrett Wollman did in
> > > the kernel rpc).
> > > The only downside to this is 16 bytes of storage for each hash table
> > > entry.
> > > (Admittedly, I don't think many sites would need to set the hash
> > > table size greater than a few thousand, so this isn't a lot of
> > > malloc()'d memory.)
> > Question is, why do you need to acquire two mutexes simultaneously?
> > If mutexes protect the hash list rooted in head, then this is somewhat
> > unusual.
> >
> There are two hash tables, one hashed on names and the other on uid/gid.
> The entries are linked into both of these lists.
> I suppose that I could use a different name for the "name" hash table
> entries vs the "uid/gid" ones, which would avoid the duplication for the
> common cases.
I think this is the easiest, together with ...

> There are also a couple of infrequent cases (when new entries are being
> added to the cache) where, to avoid a LOR in mutex locking the above 2
> hash tables, the code locks all the table entries in the one hash table
> before doing the other hash table. In this case, you will still end up
> with duplicates unless each lock is uniquely named.
... using mtx_lock_flags(MTX_DUPOK), to only shut up witness where it is
necessary.

> Maybe I should use a different name for the "user/group name" hash table
> than the "uid/gid" one, but still allow duplicates for the infrequent
> cases?
Exactly.

> Thanks for any help, rick

> > Downside is not only the name, but also a witness overhead in the
> > non-production kernels.
> > >
> > > So, what do you think. Should I add the code to make the mutex names
> > > unique?
> > >
> > > Thanks in advance for any comments, rick
> > > ps: The coding change is trivial. It just involves using more
> > > malloc()'d memory.
From owner-freebsd-fs@freebsd.org Sun Nov 29 12:01:05 2015
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
Date: Sun, 29 Nov 2015 15:00:54 +0300
To: "Mikhail T."
Cc: stable@freebsd.org, freebsd-fs
Subject: Re: cp from NFS to ZFS hung in "fifoor"

On Sat, Nov 28, 2015 at 10:42:28AM -0500, Mikhail T. wrote:
> I was copying /home from an old server (narawntapu) to a new one
> (aldan). The narawntapu:/home is mounted on aldan as /mnt with flags
> ro,intr. On narawntapu /home was simply located on an SSD, but on aldan
> I created a ZFS filesystem for it.
>
> The copying was started thus:
>
>     root@aldan:/home (435) cp -Rpn /mnt/* .
>
> For a while this was proceeding at a decent clip, with cp making
> newnfsreq requests:
>
>     load: 0.78  cmd: cp 38711 [newnfsreq] 802.84r 1.57u 140.63s 20% 10768k
>     /mnt/mi/.kde/share/apps/kmail/dimap/.42838394.directory/sent/cur/1219621413.32392.hd8cl:2,S
>     -> ./mi/.kde/share/apps/kmail/dimap/.42838394.directory/sent/cur/1219621413.32392.hd8cl:2,S 100%
>     load: 1.23  cmd: cp 38711 [newnfsreq] 874.19r 1.66u 154.74s 17% 4576k
>     /mnt/mi/.kde/share/apps/kmail/dimap/.42838394.directory/ML/cur/1219595347.32392.rMDFf:2,S
>     -> ./mi/.kde/share/apps/kmail/dimap/.42838394.directory/ML/cur/1219595347.32392.rMDFf:2,S 100%
>
> ZFS on the destination was compressing and writing stuff out, and the
> traffic between the two was ranging from 30 to 50 Mb/s (according to
> systat), but then something happened and the cp process is now hung:
>
>     load: 0.55  cmd: cp 38711 [fifoor] 1107.67r 2.09u 194.12s 0% 3300k
>     load: 0.50  cmd: cp 38711 [fifoor] 1112.66r 2.09u 194.12s 0% 3300k
>     load: 0.22  cmd: cp 38711 [fifoor] 1642.37r 2.09u 194.12s 0% 3300k
>
>     # grep -r fifoor /usr/src/
>     /usr/src/sys/fs/fifofs/fifo_vnops.c: PDROP | PCATCH | PSOCK, "fifoor", 0);

Maybe cp is trying to copy the fifo's contents because it incorrectly
detects the type of the special file?
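[Digest note: the behaviour Slawa suggests matches what a cp-style copy
loop does when the source is not reported as a fifo. The sketch below is
illustrative only -- it is not the actual bin/cp source, and copy_node()
is an invented name -- but it shows the two paths: when S_ISFIFO() is
true the node is recreated with mkfifo(), otherwise the code falls into
the regular-file path, where open(2) of a fifo for reading blocks until
a writer appears (the "fifoor" wait channel in
sys/fs/fifofs/fifo_vnops.c that the grep above found).]

    /* Illustrative sketch only -- not the actual bin/cp code. */
    #include <sys/stat.h>

    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    static void
    copy_node(const char *src, const char *dst)
    {
            struct stat sb;
            int from;

            if (lstat(src, &sb) == -1)
                    err(1, "lstat %s", src);

            if (S_ISFIFO(sb.st_mode)) {
                    /* cp -R recreates the fifo instead of copying data. */
                    if (mkfifo(dst, sb.st_mode & 0777) == -1)
                            err(1, "mkfifo %s", dst);
                    return;
            }

            /*
             * Regular-file path.  If the client reported '?' (no file
             * type bits) for what is really a fifo, we end up here, and
             * this open() sleeps until another process opens the fifo
             * for writing -- which is why echoing something into the
             * fifo made the hung cp resume.
             */
            from = open(src, O_RDONLY);
            if (from == -1)
                    err(1, "open %s", src);
            /* ... read()/write() copy loop elided ... */
            close(from);
    }

    int
    main(int argc, char *argv[])
    {
            if (argc != 3)
                    errx(1, "usage: copy_node <src> <dst>");
            copy_node(argv[1], argv[2]);
            return (0);
    }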
From owner-freebsd-fs@freebsd.org Sun Nov 29 13:36:16 2015
From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Sun, 29 Nov 2015 08:36:13 -0500 (EST)
To: Konstantin Belousov
Cc: FreeBSD FS
Subject: Re: should mutexes be uniquely named?
Kostik wrote:
> On Sat, Nov 28, 2015 at 04:40:44PM -0500, Rick Macklem wrote:
> > Kostik wrote:
> > > On Sat, Nov 28, 2015 at 08:29:55AM -0500, Rick Macklem wrote:
> > > > Hi,
> > > >
> > > > I think the patches I posted last week that add "-manage-gids" are
> > > > about ready for a commit to head.
> > > >
> > > > However, there is one place in the code where I'm not sure which is
> > > > better to do:
> > > > --> The code replaces a single mutex with one for each hash list
> > > >     head (table entry).
> > > > I currently use MTX_DUPOK and call them all the same thing.
> > > > or
> > > > I could add a "lockname" field to the hash table entry structure
> > > > and give each one a unique name (similar to what Garrett Wollman
> > > > did in the kernel rpc).
> > > > The only downside to this is 16 bytes of storage for each hash
> > > > table entry.
> > > > (Admittedly, I don't think many sites would need to set the hash
> > > > table size greater than a few thousand, so this isn't a lot of
> > > > malloc()'d memory.)
> > > Question is, why do you need to acquire two mutexes simultaneously?
> > > If mutexes protect the hash list rooted in head, then this is
> > > somewhat unusual.
> > >
> > There are two hash tables, one hashed on names and the other on
> > uid/gid. The entries are linked into both of these lists.
> > I suppose that I could use a different name for the "name" hash table
> > entries vs the "uid/gid" ones, which would avoid the duplication for
> > the common cases.
> I think this is the easiest, together with ...
> >
> > There are also a couple of infrequent cases (when new entries are
> > being added to the cache) where, to avoid a LOR in mutex locking the
> > above 2 hash tables, the code locks all the table entries in the one
> > hash table before doing the other hash table. In this case, you will
> > still end up with duplicates unless each lock is uniquely named.
> ... using mtx_lock_flags(MTX_DUPOK), to only shut up witness where it
> is necessary.
> >
> > Maybe I should use a different name for the "user/group name" hash
> > table than the "uid/gid" one, but still allow duplicates for the
> > infrequent cases?
> Exactly.

Thanks, that's what I will do unless others post with a differing opinion.

rick

> > Thanks for any help, rick
> >
> > > Downside is not only the name, but also a witness overhead in the
> > > non-production kernels.
> > > >
> > > > So, what do you think. Should I add the code to make the mutex
> > > > names unique?
> > > >
> > > > Thanks in advance for any comments, rick
> > > > ps: The coding change is trivial. It just involves using more
> > > > malloc()'d memory.
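[Digest note: for readers following the thread, the two options being
weighed look roughly like the sketch below. It is illustrative only --
the structure, field and lock names are invented here, and this is not
the patch under discussion. Option (a) gives every bucket lock the same
literal name and creates it with MTX_DUPOK so WITNESS tolerates holding
two such locks at once; option (b) spends ~16 bytes per entry on a
malloc(9)'d string so each lock name is unique.]

    /* Illustrative sketch only -- not the actual -manage-gids patch. */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/malloc.h>

    struct name_bucket {
            /* ... hash list head elided ... */
            struct mtx       nb_lock;
            char            *nb_lockname;   /* used only by option (b) */
    };

    /*
     * (a) One shared name; MTX_DUPOK keeps WITNESS quiet when two bucket
     *     locks are held simultaneously (the infrequent add-to-cache case).
     */
    static void
    bucket_locks_init_shared(struct name_bucket *tbl, int n)
    {
            int i;

            for (i = 0; i < n; i++)
                    mtx_init(&tbl[i].nb_lock, "nfsrc_name", NULL,
                        MTX_DEF | MTX_DUPOK);
    }

    /*
     * (b) A unique malloc(9)'d name per bucket.  The name must stay
     *     allocated for the lifetime of the mutex, which is why it costs
     *     storage in every hash table entry.
     */
    static void
    bucket_locks_init_unique(struct name_bucket *tbl, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    tbl[i].nb_lockname = malloc(16, M_TEMP, M_WAITOK);
                    snprintf(tbl[i].nb_lockname, 16, "nfsrc_nm%d", i);
                    mtx_init(&tbl[i].nb_lock, tbl[i].nb_lockname, NULL,
                        MTX_DEF);
            }
    }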
From owner-freebsd-fs@freebsd.org Sun Nov 29 14:38:33 2015
From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Sun, 29 Nov 2015 09:38:24 -0500 (EST)
To: "Mikhail T."
Cc: Jilles Tjoelker, freebsd-fs, stable@freebsd.org
Subject: Re: cp from NFS to ZFS hung in "fifoor"
Mikhail T. wrote:
> On 28.11.2015 17:41, Jilles Tjoelker wrote:
> > Although cp -R will normally copy a fifo by calling mkfifo at the
> > destination, it may open one if a regular file is replaced with a fifo
> > between the time it reads the directory and it copies that file.
>
> The sole fifo under /home here was mi/.licq/licq_fifo, created in 2003.
> I echoed something into it (on the NFS-client side) and the cp-process
> resumed.
>
> I then performed a simple test:
>
> 1. Create a fifo in an NFS-exported directory and try to copy it with
>    the -R flag:
>
>      mi@narawntapu:/cache/src (792) mkfifo /green/tmp/test
>      mi@narawntapu:/cache/src (793) cp -Rpn /green/tmp/test /tmp/
>      mi@narawntapu:/cache/src (794) ls -l /tmp/test
>      prw-r--r--  1 mi  wheel  0 29 лис 00:05 /tmp/test
>
>    The above worked fine.
>
> 2. Now, when I try to do the same thing via an NFS mount, I get the
>    same hang in fifoor:
>
>      root@aldan:ports/x11/kde4 (475) cp -Rpn /green/tmp/test /tmp/
>      load: 0.42  cmd: cp 38299 [fifoor] 1.15r 0.00u 0.00s 0% 1868k
>
> So, the good news is, this is not ZFS' fault. The bad news is, there is
> still a bug... Unless, of course, this is some known "feature" of the
> NFS... Compare, for example, how stat(1) describes the same named pipe
> from both machines:
>
> Local FS:
>     92 74636334 prw-r--r-- 1 mi wheel 0 0 "Nov 29 00:05:51 2015"
>     "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015"
>     16384 0 0 /green/tmp/test
> NFS-client:
>     973143811 74636334 ?rw-r--r-- 1 mi wheel 4294967295 0
>     "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015"
>     "Dec 31 18:59:59 1969" 16384 0 0 /green/tmp/test
>

I just tried a trivial test (using a fairly old FreeBSD9 and a pretty
recent FreeBSD-head) and wasn't able to reproduce the problem.
For my tests, "ls -l" in the NFS client showed "p" and the "cp -R" worked.
I only have UFS file systems and tested with those.

I can only think of a couple of explanations:
1 - ZFS didn't fill the v_type in as FIFO. The NFS server uses the v_type
    field to determine it is a fifo and not the high order bits of
    va_mode (the S_IFMT bits). I don't have ZFS to test with.
2 - You somehow used an NFSv2 mount. (NFSv2 didn't have support for FIFOs,
    if I recall correctly.) You can check your mount options, including
    which version is in use, via "nfsstat -m" unless you have a pretty old
    system.

If you have a UFS file system on the NFS server, maybe you could try
exporting that and run a test, to see if it happens for a UFS export?
rick

> That question-mark in the node-type (instead of the "p") is, I guess,
> what confuses cp into trying to read from it instead of creating a fifo.
> Should I file a PR? Thank you!
>
>     -mi

From owner-freebsd-fs@freebsd.org Sun Nov 29 14:42:12 2015
From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Sun, 29 Nov 2015 09:42:10 -0500 (EST)
To: "Mikhail T."
Cc: Jilles Tjoelker, freebsd-fs, stable@freebsd.org
Subject: Re: cp from NFS to ZFS hung in "fifoor"
Mikhail T. wrote:
> On 28.11.2015 17:41, Jilles Tjoelker wrote:
> > Although cp -R will normally copy a fifo by calling mkfifo at the
> > destination, it may open one if a regular file is replaced with a fifo
> > between the time it reads the directory and it copies that file.
>
> The sole fifo under /home here was mi/.licq/licq_fifo, created in 2003.
> I echoed something into it (on the NFS-client side) and the cp-process
> resumed.
>
> I then performed a simple test:
>
> 1. Create a fifo in an NFS-exported directory and try to copy it with
>    the -R flag:
>
>      mi@narawntapu:/cache/src (792) mkfifo /green/tmp/test
>      mi@narawntapu:/cache/src (793) cp -Rpn /green/tmp/test /tmp/
>      mi@narawntapu:/cache/src (794) ls -l /tmp/test
>      prw-r--r--  1 mi  wheel  0 29 лис 00:05 /tmp/test
>
>    The above worked fine.
>
> 2. Now, when I try to do the same thing via an NFS mount, I get the
>    same hang in fifoor:
>
>      root@aldan:ports/x11/kde4 (475) cp -Rpn /green/tmp/test /tmp/
>      load: 0.42  cmd: cp 38299 [fifoor] 1.15r 0.00u 0.00s 0% 1868k
>
> So, the good news is, this is not ZFS' fault. The bad news is, there is
> still a bug... Unless, of course, this is some known "feature" of the
> NFS... Compare, for example, how stat(1) describes the same named pipe
> from both machines:
>
> Local FS:
>     92 74636334 prw-r--r-- 1 mi wheel 0 0 "Nov 29 00:05:51 2015"
>     "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015"
>     16384 0 0 /green/tmp/test
> NFS-client:
>     973143811 74636334 ?rw-r--r-- 1 mi wheel 4294967295 0
>     "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015" "Nov 29 00:05:51 2015"
>     "Dec 31 18:59:59 1969" 16384 0 0 /green/tmp/test
>

The other thing you could do is capture packets for the "ls -l" from the
NFS client:

    tcpdump -s 0 -w fifo.pcap host

run on the client while doing the "ls -l" should be sufficient. (Doing it
just after mounting will avoid any attribute cache hit.)
You could then look at the fifo.pcap in wireshark (or email it to me as an
attachment and I can look) and see if the file type attribute is FIFO.
(If it isn't, then the NFS server is broken somehow.)

rick

> That question-mark in the node-type (instead of the "p") is, I guess,
> what confuses cp into trying to read from it instead of creating a fifo.
> Should I file a PR? Thank you!
>
>     -mi
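[Digest note: a toy illustration of the v_type point Rick makes above.
The NFSv3 protocol (RFC 1813) carries an explicit file-type enum
(NF3FIFO and friends), and Rick's description is that the server derives
it from the vnode type rather than from the S_IFMT bits of the mode.
The sketch below is not the FreeBSD NFS server code -- the enums are
redefined locally so the example stands alone -- but it shows why a
filesystem that never sets the vnode type to VFIFO would leave the
client with an unclassifiable node, which ls(1) and stat(1) render as
"?".]

    /*
     * Toy model: the wire type comes from the vnode type, not from the
     * S_IFMT bits.  Enum values mirror sys/vnode.h and RFC 1813 but are
     * redefined here so the example compiles standalone.
     */
    #include <stdio.h>

    enum vtype { VNON, VREG, VDIR, VBLK, VCHR, VLNK, VSOCK, VFIFO, VBAD };
    enum ftype3 { NF3REG = 1, NF3DIR, NF3BLK, NF3CHR, NF3LNK, NF3SOCK,
        NF3FIFO };

    static int
    vtype_to_nf3(enum vtype vt, enum ftype3 *ftp)
    {
            switch (vt) {
            case VREG:  *ftp = NF3REG;  return (0);
            case VDIR:  *ftp = NF3DIR;  return (0);
            case VBLK:  *ftp = NF3BLK;  return (0);
            case VCHR:  *ftp = NF3CHR;  return (0);
            case VLNK:  *ftp = NF3LNK;  return (0);
            case VSOCK: *ftp = NF3SOCK; return (0);
            case VFIFO: *ftp = NF3FIFO; return (0);
            default:    return (-1); /* no sensible type to put on the wire */
            }
    }

    int
    main(void)
    {
            enum ftype3 ft;

            /*
             * Only a vnode typed VFIFO maps to NF3FIFO; if the exporting
             * filesystem left the type unset, the mode's S_IFMT bits are
             * never consulted and the client sees a bogus type -- the '?'
             * in the stat(1) output earlier in the thread.
             */
            if (vtype_to_nf3(VFIFO, &ft) == 0)
                    printf("VFIFO -> NF3 file type %d (NF3FIFO)\n", ft);
            return (0);
    }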
From owner-freebsd-fs@freebsd.org Sun Nov 29 22:34:14 2015
From: bugzilla-noreply@freebsd.org
Date: Sun, 29 Nov 2015 22:34:14 +0000
To: freebsd-fs@FreeBSD.org
Subject: [Bug 204898] zfs root fails to boot

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204898

Mark Linimon changed:

           What      |Removed                  |Added
----------------------------------------------------------------------------
           Assignee  |freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@freebsd.org Sun Nov 29 22:34:35 2015
From: bugzilla-noreply@freebsd.org
Date: Sun, 29 Nov 2015 22:34:35 +0000
To: freebsd-fs@FreeBSD.org
Subject: [Bug 204892] zpool refuses to work with device symlinks

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204892

Mark Linimon changed:

           What      |Removed                  |Added
----------------------------------------------------------------------------
           Assignee  |freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@freebsd.org Mon Nov 30 09:17:36 2015
From: krad <kraduk@gmail.com>
Date: Mon, 30 Nov 2015 09:17:33 +0000
To: Kai Gallasch
Cc: FreeBSD FS
Subject: Re: High fragmentation on zpool log

Fragmentation isn't really a big issue on SSDs, as there are no heads to
move around like on magnetic drives. Also, due to wear levelling, you
actually have no idea where a block actually is on a memory cell, as the
drive only gives a logical representation of the layout of blocks, not an
actual true mapping.

On 27 November 2015 at 15:24, Kai Gallasch wrote:
>
> Hi.
>
> Today I had a look at the zpool of a server (FreeBSD 10.2, GENERIC
> kernel, 100d uptime, 96GB RAM) I recently installed.
>
> The pool has eight SAS drives in a raid 10 setup (concatenated mirror
> pairs) and uses a cache and a mirrored log.
>
> The log and cache both are on a pair of Intel SSDs.
>
> # gpart show -l da9
> =>        34  195371501  da9  GPT  (93G)
>           34          6       - free -  (3.0K)
>           40   16777216    1  log-BTTV5234003K100FGN  (8.0G)
>     16777256  178594272    2  cache-BTTV5234003K100FGN  (85G)
>    195371528          7       - free -  (3.5K)
>
> Is 85% fragmentation of the log device something to worry about?
>
> Why does zpool list show so unrealistic values for FREE and CAP?
> Is this normal?
>
> Attached: Some output of zpool list.
>
> Regards,
> Kai.
> (zpool list -v output; omitted columns: EXPANDSZ, DEDUP, HEALTH, ALTROOT)
>
> NAME                              SIZE  ALLOC   FREE  FRAG  CAP
> rpool                            7.25T   440G  6.82T    4%   5%
>   mirror                         1.81T   110G  1.71T    4%   5%
>     gpt/rpool-WMC160D0SVZE           -      -      -     -    -
>     gpt/rpool-WMC160D8MJPD           -      -      -     -    -
>   mirror                         1.81T   110G  1.70T    4%   5%
>     gpt/rpool-WMC160D9DLL2           -      -      -     -    -
>     gpt/rpool-WMC160D23CWA           -      -      -     -    -
>   mirror                         1.81T   110G  1.71T    4%   5%
>     gpt/rpool-WMC160D94930           -      -      -     -    -
>     gpt/rpool-WMC160D9V5LW           -      -      -     -    -
>   mirror                         1.81T   110G  1.71T    4%   5%
>     gpt/rpool-WMC160D9ZV0S           -      -      -     -    -
>     gpt/rpool-WMC160D5HFT6           -      -      -     -    -
>   mirror                         7.94G  43.2M  7.90G   85%   0%
>     gpt/log-BTTV523401U4100FGN       -      -      -     -    -
>     gpt/log-BTTV5234003K100FGN       -      -      -     -    -
> cache                                -      -      -     -    -
>   gpt/cache-BTTV5234003K100FGN  85.2G   142G  16.0E    0%  166%
>   gpt/cache-BTTV523401U4100FGN  85.2G   172G  16.0E    0%  202%
>
> --
> PGP-KeyID = 0x70654D7C4FB1F588

From owner-freebsd-fs@freebsd.org Mon Nov 30 12:33:56 2015
From: bugzilla-noreply@freebsd.org
Date: Mon, 30 Nov 2015 12:33:53 +0000
To: freebsd-fs@FreeBSD.org
Subject: [Bug 194513] zfs recv hangs in state kmem arena

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513

Steven Hartland changed:

           What       |Removed      |Added
----------------------------------------------------------------------------
           Resolution |---          |FIXED
           Status     |In Progress  |Closed

--- Comment #12 from Steven Hartland ---
Also likely that the following commit should have helped prevent this
issue: https://svnweb.freebsd.org/base?view=revision&revision=282690

I'm closing this for now; if anyone sees this in stable/10 newer than
r283310, or in 10.2-RELEASE or above, feel free to reopen.

--
You are receiving this mail because:
You are the assignee for the bug.
From owner-freebsd-fs@freebsd.org Mon Nov 30 12:49:40 2015
From: Steven Hartland <killing@multiplay.co.uk>
Date: Mon, 30 Nov 2015 12:49:34 +0000
To: freebsd-fs@freebsd.org
Subject: Re: ZFS on 10-STABLE r281159: programs, accessing ZFS pauses for
 minutes in state [*kmem arena]
On 31/07/2015 22:27, Don Lewis wrote:
> On 30 Jul, Konstantin Belousov wrote:
>> On Thu, Jul 30, 2015 at 02:30:08PM +0300, Lev Serebryakov wrote:
>>> Hello Freebsd-fs,
>>>
>>> I'm migrating my NAS from geom_raid5 + UFS to ZFS raidz. My main
>>> storage is 5x2Tb HDDs. Additionally, I have 2x3Tb HDDs attached to
>>> hold my data while I re-make my main storage.
>>>
>>> So, I have now two ZFS pools:
>>>
>>>   ztemp  mirror ada0 ada1                [both are 3Tb HDDs]
>>>   zstor  raidz ada3 ada4 ada5 ada6 ada7  [all of them are 2Tb]
>>>
>>> ztemp contains one filesystem with 2.1Tb of my data. ztemp was
>>> populated with my data from the old geom_raid5 + UFS installation via
>>> "rsync" and it was FAST (HDD speed).
>>>
>>> zstor contains several empty file systems (one per user), like:
>>>
>>>   zstor/home/lev
>>>   zstor/home/sveta
>>>   zstor/home/nsvn
>>>   zstor/home/torrents
>>>   zstor/home/storage
>>>
>>> Deduplication IS TURNED OFF. atime is turned off. Record size is set
>>> to 1M as I have a lot of big files (movies, RAW photos from a DSLR,
>>> etc). Compression is turned off.
>>>
>>> When I try to copy all my data from the temporary HDDs (ztemp pool)
>>> to my new shiny RAID (zstor pool) with
>>>
>>>   cd /ztemp/fs && rsync -avH lev sveta nsvn storage /usr/home/
>>>
>>> rsync pauses for tens of minutes (!) after several hundreds of files.
>>> ^T and top show state "[*kmem arena]". When I stop rsync with ^C and
>>> try to do "zfs list" it waits forever, in state "[*kmem arena]" again.
>> Show the output of sysctl debug.vmem_check.
>>
>>> This server is equipped with 6GiB of RAM.
>>>
>>> It looks like FreeBSD contained a bug about a year ago which leads to
>>> this behavior, but the mailing lists say it was fixed in r272221,
>>> 10 months ago.
> I think I may have gotten bitten by this yesterday on a fairly recent
> 10.2-PRERELEASE machine with 8 GB of RAM. It's nominally a zfs-only
> machine, but I had some data on a couple of UFS drives that I needed to
> copy over to a zfs filesystem. I connected one of the drives to a sata
> to usb adapter and plugged it into the machine, then ran rsync to
> transfer the contents of a ~100 GB filesystem. I had a number of active
> programs running, including a rather bloated firefox process that had
> gobbled lots of ram. In my case, arc stayed small (< 1 GB), inactive
> memory was a couple of GB, and several GB of data got pushed to swap.
> Free memory got very low, bouncing around in the 10's of MB for a while
> before the machine locked.
> It wasn't totally dead because my X11 desktop is configured in
> focus-follows-mouse mode and I could see the window focus change when I
> moved the mouse around. Eventually I did something to provoke the window
> manager and/or the Xorg server into locking up as well. I wasn't able to
> switch to console mode. I eventually gave up and hit the reset button.
>
> %sysctl debug.vmem_check
> sysctl: unknown oid 'debug.vmem_check': No such file or directory
>
> With the same set of processes running, but no UFS, this is what top
> says about memory usage:
>
> Mem: 1156M Active, 3403M Inact, 1682M Wired, 31M Cache, 1631M Free
> ARC: 1129M Total, 588M MFU, 492M MRU, 54K Anon, 10M Header, 39M Other
> Swap: 40G Total, 40G Free

This is a little late, but I just wanted to note for the record that the
combination of r281026, r281108, r281109 (MFC10: r282361) and r282690
(MFC10: r283310) should address this issue, as noted on the PR:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513

If anyone still experiences this, please let us know on the PR.

Regards
Steve

From owner-freebsd-fs@freebsd.org Mon Nov 30 18:25:43 2015
From: krad <kraduk@gmail.com>
Date: Mon, 30 Nov 2015 18:25:41 +0000
To: kpneal@pobox.com
Cc: Kai Gallasch, FreeBSD FS
Subject: Re: High fragmentation on zpool log

That's true, log devices do only hold a few seconds' worth of data, but
how much data that is will vary depending on throughput to the array.
It's only sync data that is written to the log as well, so a simple rsync
wouldn't in normal circumstances generate sync writes, just async.
That is assuming you aren't running it over an NFS mount.

On 30 November 2015 at 16:09, kpneal@pobox.com wrote:
> On Mon, Nov 30, 2015 at 09:17:33AM +0000, krad wrote:
> > Fragmentation isn't really a big issue on SSD's as there are no heads
> > to move around like on magnetic drives. Also due to wear levelling,
> > you actually have no idea where a block actually is on a memory cell,
> > as the drive only gives a logical representation of the layout of
> > blocks, not an actual true mapping.
>
> Well, all of this is true, but I'm not convinced that was the real
> question. My interpretation was that the OP was asking how a log device
> 8GB in size can get to be 85% fragmented.
>
> My guess was that 85% fragmentation of a log device may be a sign of a
> log device that is too small. But I thought log devices only held a few
> seconds of activity, so I'm a little confused about how a log device can
> get to be 85% fragmented. Is this pool really moving a gigabyte a second
> or faster?
>
> > On 27 November 2015 at 15:24, Kai Gallasch wrote:
> > >
> > > Hi.
> > >
> > > Today I had a look at the zpool of a server (FreeBSD 10.2, GENERIC
> > > kernel, 100d uptime, 96GB RAM) I recently installed.
> > >
> > > The pool has eight SAS drives in a raid 10 setup (concatenated
> > > mirror pairs) and uses a cache and a mirrored log.
> > >
> > > The log and cache both are on a pair of Intel SSDs.
> > >
> > > # gpart show -l da9
> > > =>        34  195371501  da9  GPT  (93G)
> > >           34          6       - free -  (3.0K)
> > >           40   16777216    1  log-BTTV5234003K100FGN  (8.0G)
> > >     16777256  178594272    2  cache-BTTV5234003K100FGN  (85G)
> > >    195371528          7       - free -  (3.5K)
> > >
> > > Is 85% fragmentation of the log device something to worry about?
> > >
> > > Why does zpool list show so unrealistic values for FREE and CAP?
> > > Is this normal?
> > >
> > > Attached: Some output of zpool list.
> > >
> > > Regards,
> > > Kai.
> > >
> > > (zpool list -v output; omitted columns: EXPANDSZ, DEDUP, HEALTH,
> > > ALTROOT)
> > >
> > > NAME                              SIZE  ALLOC   FREE  FRAG  CAP
> > > rpool                            7.25T   440G  6.82T    4%   5%
> > >   mirror                         1.81T   110G  1.71T    4%   5%
> > >     gpt/rpool-WMC160D0SVZE           -      -      -     -    -
> > >     gpt/rpool-WMC160D8MJPD           -      -      -     -    -
> > >   mirror                         1.81T   110G  1.70T    4%   5%
> > >     gpt/rpool-WMC160D9DLL2           -      -      -     -    -
> > >     gpt/rpool-WMC160D23CWA           -      -      -     -    -
> > >   mirror                         1.81T   110G  1.71T    4%   5%
> > >     gpt/rpool-WMC160D94930           -      -      -     -    -
> > >     gpt/rpool-WMC160D9V5LW           -      -      -     -    -
> > >   mirror                         1.81T   110G  1.71T    4%   5%
> > >     gpt/rpool-WMC160D9ZV0S           -      -      -     -    -
> > >     gpt/rpool-WMC160D5HFT6           -      -      -     -    -
> > >   mirror                         7.94G  43.2M  7.90G   85%   0%
> > >     gpt/log-BTTV523401U4100FGN       -      -      -     -    -
> > >     gpt/log-BTTV5234003K100FGN       -      -      -     -    -
> > > cache                                -      -      -     -    -
> > >   gpt/cache-BTTV5234003K100FGN  85.2G   142G  16.0E    0%  166%
> > >   gpt/cache-BTTV523401U4100FGN  85.2G   172G  16.0E    0%  202%
> --
> Kevin P. Neal                               http://www.pobox.com/~kpn/
>
> Seen on bottom of IBM part number 1887724:
> DO NOT EXPOSE MOUSE PAD TO DIRECT SUNLIGHT FOR EXTENDED PERIODS OF TIME.
From owner-freebsd-fs@freebsd.org Mon Nov 30 21:52:54 2015
From: bugzilla-noreply@freebsd.org
Date: Mon, 30 Nov 2015 21:52:54 +0000
To: freebsd-fs@FreeBSD.org
Subject: [Bug 155615] [zfs] zfs v28 broken on sparc64 -current

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=155615

Michael Moll changed:

           What      |Removed                 |Added
----------------------------------------------------------------------------
           CC        |                        |mmoll@freebsd.org
           Assignee  |freebsd-fs@FreeBSD.org  |mmoll@freebsd.org

--- Comment #2 from Michael Moll ---
take.

--
You are receiving this mail because:
You are the assignee for the bug.

From owner-freebsd-fs@freebsd.org Mon Nov 30 22:53:19 2015
From: Kai Gallasch <k@free.de>
Date: Mon, 30 Nov 2015 23:53:10 +0100
To: kpneal@pobox.com, krad
Cc: FreeBSD FS
Subject: Re: High fragmentation on zpool log
Message-ID: <565CD356.3010108@free.de> Date: Mon, 30 Nov 2015 23:53:10 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0 MIME-Version: 1.0 In-Reply-To: <20151130160949.GA7354@neutralgood.org> Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="wiUTfqigaX0vUbVUjbfiaoKeWOuGlHRrH" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 30 Nov 2015 22:53:20 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --wiUTfqigaX0vUbVUjbfiaoKeWOuGlHRrH Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable On 30.11.2015 17:09 kpneal@pobox.com wrote: > On Mon, Nov 30, 2015 at 09:17:33AM +0000, krad wrote: >> Fragmentation isn't really a big issue on SSD's as there are no heads to >> move around like on magnetic drives. Also due to wear levelling, you >> actually have no idea where a block actually is on a memory cell, as the >> drive only gives a logical representation of the layout of blocks, not an >> actual true mapping. > > Well, all of this is true, but I'm not convinced that was the real question. > My interpretation was that the OP was asking how a log device 8GB in size > can get to be 85% fragmented. Yes. I am wondering why fragmentation with a more than sufficient size of the log and given the high throughput of SSDs is happening at all. Maybe because the log and cache are on the same pair of SSDs? > My guess was that 85% fragmentation of a log device may be a sign of a log > device that is too small. But I thought log devices only held a few seconds > of activity, so I'm a little confused about how a log device can get to > be 85% fragmented. Is this pool really moving a gigabyte a second or faster? No, far from that. The pool is mostly read from and is used for local storage of roundabout 50 jails. The write rate of the log is in the region < 20 MB/s ave. >>> cache - - - - - >>> gpt/cache-BTTV5234003K100FGN 85.2G 142G 16.0E 0% 166% >>> gpt/cache-BTTV523401U4100FGN 85.2G 172G 16.0E 0% 202% Any theories why zpool list -v shows so funny values for FREE and CAP of the cache? Kai.
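If L2ARC compression is what is skewing those numbers, it should show up as a gap between the logical and the physically allocated L2ARC size; a minimal sketch using the FreeBSD 10.x kstat names (worth double-checking on the running release, since these counters have moved around between versions):

  # logical bytes held in L2ARC vs. bytes actually allocated on the
  # cache devices after compression
  sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_asize

If l2_size is well above l2_asize, the ALLOC reported for the cache devices can end up larger than their physical size, which would also fit the CAP values over 100% and a FREE of 16.0E (presumably a negative number printed as an unsigned quantity).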
--=20 PGP-KeyID =3D 0x70654D7C4FB1F588 --wiUTfqigaX0vUbVUjbfiaoKeWOuGlHRrH Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) iQIcBAEBCgAGBQJWXNNWAAoJEHBlTXxPsfWIRKwQAJkY+5j/eXS/x+uk7ot2mviB BMCHV/Jls9fme6mW6ryjRGE8FiFvwRigixy+5b6GkHsYHMtoRYE09+DYp291ph8/ MsbXqbKDMSuG6NwU4SUkQkrAe0yQUGRCpWOb0qxIqIcJ0GWeeu9SIVy61CFTlb5u WLei1YRY3ntxpcs/JRqefR8sbVTu51OjFxkBVbMgeYZZsgS4OWZY6mH0CCgrI59F 4jhIC5hbemJ8MgW4sOPTLH+GaX00RPLN52rB8GQ+X2NZpcFhgP/6KkV0junc/fI2 ZvRvpSRwWUjAoH26mOkaiQok+he9KKUTIhBvRez+qEn0J6dGEeLIBd6EhEZSlMsl 2ZTQ8nqUgylD2Vg44m1KZVYYFs7W2xN35uzlurOFDQxZ/IDyACrG6R62KjtSyttQ z9IU7wKIqPHEeenQdeeNEnWp6hqkJaauDaMGl4S5XnPMFuKNbuQWQTsKAM0dca1k AlMPnA5dGojruHNHArSVRn+m3aGtN1V+KKK/3oqVEUP2RFaUnfeXbGSV3MF9Bj1U 2RhfawVkgjjKBE+GaNMzRtf/ePAE/ohhrA5HN65autyhhiaMFiVXQDWs1nnsWA84 8BQLnvCSemZ0H6+5WqjlzynRbjBsAkZBN9T1DkLLdZy25oD4LyBAH0Cg54Aw1/0D Zs29k7vCtStMRTOX7beL =W04V -----END PGP SIGNATURE----- --wiUTfqigaX0vUbVUjbfiaoKeWOuGlHRrH-- From owner-freebsd-fs@freebsd.org Tue Dec 1 00:59:41 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 115C2A3AAD3 for ; Tue, 1 Dec 2015 00:59:41 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E965A1845 for ; Tue, 1 Dec 2015 00:59:40 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB10xeIN030116 for ; Tue, 1 Dec 2015 00:59:40 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204898] zfs root fails to boot Date: Tue, 01 Dec 2015 00:59:41 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: mikhail.rokhin@gmail.com X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 00:59:41 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204898 --- Comment #1 from mikhail.rokhin@gmail.com --- Either i386 9.3-release fails. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@freebsd.org Tue Dec 1 01:24:33 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5DD5CA3D12E for ; Tue, 1 Dec 2015 01:24:33 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4A063139A for ; Tue, 1 Dec 2015 01:24:33 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB11OXSA015956 for ; Tue, 1 Dec 2015 01:24:33 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 194513] zfs recv hangs in state kmem arena Date: Tue, 01 Dec 2015 01:24:30 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: mgoroff@quorum.net X-Bugzilla-Status: Closed X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 01:24:33 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513 mgoroff@quorum.net changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mgoroff@quorum.net --- Comment #13 from mgoroff@quorum.net --- We are encountering what appears to be the same issue on a large production server running 10.2-RELEASE. The machine has 128G of RAM and vfs.zfs.arc_max is set to 100G. I have a zfs recv currently stuck waiting on kmem arena. When this problem occurs, the ARC starts rapidly shrinking down to vfs.zfs.arc_min (13G in our case) but the process will remain blocked for a long period of time (hours) even though top shows free memory at 104G. ps -l for the process shows: UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND 0 37892 37814 0 20 0 42248 3428 kmem are D 1 0:04.16 zfs recv -u -d ezdata2 while procstat -kk -p 37892 shows: PID TID COMM TDNAME KSTACK 37892 101149 zfs - mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d vmem_xalloc+0x568 vmem_alloc+0x3d kmem_malloc+0x33 uma_large_malloc+0x49 malloc+0x43 dmu_recv_stream+0x114 zfs_ioc_recv+0x955 zfsdev_ioctl+0x5ca devfs_ioctl_f+0x139 kern_ioctl+0x255 sys_ioctl+0x140 amd64_syscall+0x357 Xfast_syscall+0xfb -- You are receiving this mail because: You are the assignee for the bug. 
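For anyone else chasing this, the useful numbers to capture while the receive is wedged are the ARC bounds versus its current size and the state of the kernel memory arena; a minimal sketch with FreeBSD 10.x sysctl names (the vm.kmem_map_* pair may not be present on every build, and the pgrep pattern is only an example):

  # ARC limits and current size
  sysctl vfs.zfs.arc_min vfs.zfs.arc_max kstat.zfs.misc.arcstats.size

  # kmem arena usage; plenty of free space here while the thread is
  # still stuck in vmem_xalloc points at fragmentation, not exhaustion
  sysctl vm.kmem_size vm.kmem_map_size vm.kmem_map_free

  # kernel stack of the blocked receive, as in comment #13
  procstat -kk $(pgrep -f 'zfs recv')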
From owner-freebsd-fs@freebsd.org Tue Dec 1 01:37:26 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9DFC9A3D409 for ; Tue, 1 Dec 2015 01:37:26 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8A3CC1B16 for ; Tue, 1 Dec 2015 01:37:26 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB11bQLW039303 for ; Tue, 1 Dec 2015 01:37:26 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204898] zfs root fails to boot Date: Tue, 01 Dec 2015 01:37:26 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: mikhail.rokhin@gmail.com X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 01:37:26 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204898 --- Comment #2 from mikhail.rokhin@gmail.com --- But !!)) FreeBSD-11.0-CURRENT-i386-20151119-r291085-bootonly.iso installs & boots fine into auto-zfs-root. What has changed since 9.3 & 10.2 ? -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@freebsd.org Tue Dec 1 08:59:34 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9CE14A358AA for ; Tue, 1 Dec 2015 08:59:34 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8893011D9 for ; Tue, 1 Dec 2015 08:59:34 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB18xYLC088105 for ; Tue, 1 Dec 2015 08:59:34 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 194513] zfs recv hangs in state kmem arena Date: Tue, 01 Dec 2015 08:59:31 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: Open X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: resolution bug_status Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 08:59:34 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|FIXED |--- Status|Closed |Open --- Comment #14 from Steven Hartland --- Thanks for the update Marc, reopened. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@freebsd.org Tue Dec 1 09:48:28 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 97D46A3C761 for ; Tue, 1 Dec 2015 09:48:28 +0000 (UTC) (envelope-from jg@internetx.com) Received: from mx1.internetx.com (mx1.internetx.com [62.116.129.39]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1DCE31C10 for ; Tue, 1 Dec 2015 09:48:27 +0000 (UTC) (envelope-from jg@internetx.com) Received: from localhost (localhost [127.0.0.1]) by mx1.internetx.com (Postfix) with ESMTP id D69114C4C6E5; Tue, 1 Dec 2015 10:48:18 +0100 (CET) X-Virus-Scanned: InterNetX GmbH amavisd-new at ix-mailer.internetx.de Received: from mx1.internetx.com ([62.116.129.39]) by localhost (ix-mailer.internetx.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id o3QgRkOLH4Lc; Tue, 1 Dec 2015 10:48:13 +0100 (CET) Received: from ox-groupware-node01.internetx.de (ox.internetx.com [85.236.36.83]) by mx1.internetx.com (Postfix) with ESMTP id 732734C4C68C; Tue, 1 Dec 2015 10:48:13 +0100 (CET) Received: from ox-groupware-node01.internetx.de (localhost [127.0.0.1]) by ox-groupware-node01.internetx.de (Postfix) with ESMTP id 62B9AA12112; Tue, 1 Dec 2015 10:48:13 +0100 (CET) Date: Tue, 1 Dec 2015 10:48:13 +0100 (CET) From: InterNetX - Juergen Gotteswinter To: Kai Gallasch Cc: FreeBSD FS Message-ID: <598756590.434.8607f2eb-aa36-44cc-ba27-77d495c7da6d.open-xchange@ox.internetx.com> In-Reply-To: <565CD356.3010108@free.de> References: <565875A7.6060004@free.de> <20151130160949.GA7354@neutralgood.org> <565CD356.3010108@free.de> Subject: Re: High fragmentation on zpool log MIME-Version: 1.0 X-Priority: 3 Importance: Medium X-Mailer: Open-Xchange Mailer v7.8.0-Rev6 Organization: InterNetX GmbH X-Originating-Client: com.openexchange.ox.gui.dhtml Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 09:48:28 -0000 > Kai Gallasch hat am 30. November 2015 um 23:53 geschrieben: > > > On 30.11.2015 17:09 kpneal@pobox.com wrote: > > On Mon, Nov 30, 2015 at 09:17:33AM +0000, krad wrote: > >> Fragmentation isn't really a big issue on SSD's as there are no heads = to > >> move around like on magnetic drives. Also due to wear levelling, you > >> actually have no idea where a block actually is on a memory cell, as t= he > >> drive only gives a logical representation of the layout of blocks, not= an > >> actual true mapping. > > > > Well, all of this is true, but I'm not convinced that was the real ques= tion. > > My interpretation was that the OP was asking how a log device 8GB in si= ze > > can get to be 85% fragmented. > > Yes. I am wondering why fragmentation with a more than sufficient size > of the log and given the high throughput of SSDs is happening at all. > > Maybe because the log and cache are on the same pair of SSDs? > > > My guess was that 85% fragmentation of a log device may be a sign of a = log > > device that is too small. But I thought log devices only held a few sec= onds > > of activity, so I'm a little confused about how a log device can get to > > be 85% fragmented. 
Is this pool really moving a gigabyte a second or fa= ster? > > No, far from that. The pool is mostly read from and is used for local > storage of roundabout 50 jails. The write rate of the log is in the > region < 20 MB/s ave. > > >>> cache - - - - - > >>> gpt/cache-BTTV5234003K100FGN 85.2G 142G 16.0E 0% 166% > >>> gpt/cache-BTTV523401U4100FGN 85.2G 172G 16.0E 0% 202% > > Any theories why zpool list -v shows so funny values for FREE and CAP of > the cache? =20 afaik thats a side effect from l2arc compression. i=C2=B4ve seen this on fr= eebsd & illumos as well. but it went away with some updates durinjg the last 2-3 mo= nths or so. not sure, but if i remember correctly there was something related to= this and data corruption caused by a bug in the l2arc compression (afaik only il= lumos distros where affected by corruption). =20 > > Kai. > > -- > PGP-KeyID =3D 0x70654D7C4FB1F588 > > > From owner-freebsd-fs@freebsd.org Tue Dec 1 11:29:42 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9DFECA3C23F for ; Tue, 1 Dec 2015 11:29:42 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 89D0D1185 for ; Tue, 1 Dec 2015 11:29:42 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB1BTgQ9046794 for ; Tue, 1 Dec 2015 11:29:42 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 194513] zfs recv hangs in state kmem arena Date: Tue, 01 Dec 2015 11:29:40 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.0-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: smh@FreeBSD.org X-Bugzilla-Status: Open X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: attachments.isobsolete attachments.created Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 11:29:42 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513 Steven Hartland changed: What |Removed |Added ---------------------------------------------------------------------------- Attachment #148687|0 |1 is obsolete| | --- Comment #15 from Steven Hartland --- Created attachment 163702 --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=163702&action=edit utility to allocate memory to force defragmentation This is a little utility that allocates memory, which creates memory pressure in an attempt to force de-fragmentation as ARC free's up aggressively. 
Its based on James Van Artsdalen initial utility with fixes for compilation as well as adding progress output and unbounding the allocation which is now done in 1GB chunks. -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-fs@freebsd.org Tue Dec 1 14:39:01 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6B8C9A3EA25 for ; Tue, 1 Dec 2015 14:39:01 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: from mail-io0-x230.google.com (mail-io0-x230.google.com [IPv6:2607:f8b0:4001:c06::230]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 342EE152B; Tue, 1 Dec 2015 14:39:01 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: by iouu10 with SMTP id u10so11327193iou.0; Tue, 01 Dec 2015 06:39:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=GrelPFeqFc2JyjbYRU31ul80xKukKWzUd126BSxFHxA=; b=TS9DHnb/pncRH8ZBhjewt9hrT1FDSpmCJnKYlLIZzHzb5w2iBnwrsjM/wpp/z7kKLR 1C2az0w9nNPIrL4Gr/Wte2tGJiONnryhCVNjoL97Sel0sjYARCcMULdNyOdeCR3hJr2o PXK2afOArWTLATD7jFhDRR1njIw5197ktyrFNRrmnGNn7IxVgBkKIHUlInT4Ti/Zxzm2 PgyL1WbZhv4cHXwK5Iqdwuop40n5Z3nGN84h3RfrJwSTtv0INTED/91GrtU+OeHx1dxt jWnlIFTvZvaSxhmZFLI9c8HVQf9GqQTBUBlb6s+VV+aAw7v9GnV9dYYIMQFgTKbsfPuZ ACew== MIME-Version: 1.0 X-Received: by 10.107.137.226 with SMTP id t95mr61959341ioi.188.1448980740704; Tue, 01 Dec 2015 06:39:00 -0800 (PST) Received: by 10.36.159.67 with HTTP; Tue, 1 Dec 2015 06:39:00 -0800 (PST) In-Reply-To: <54F88DEA.2070301@hotplug.ru> References: <54F88DEA.2070301@hotplug.ru> Date: Tue, 1 Dec 2015 10:39:00 -0400 Message-ID: Subject: Re: CAM Target over FC and UNMAP problem From: Christopher Forgeron To: Emil Muratov Cc: FreeBSD Filesystems , Alexander Motin Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 14:39:01 -0000 Did this ever progress further? I'm load testing 10.2 zvol / UNMAP, and having similar lockup issues. On Thu, Mar 5, 2015 at 1:10 PM, Emil Muratov wrote: > > I've got an issue with CTL UNMAP and zvol backends. > Seems that UNMAP from the initiator passed to the underlying disks > (without trim support) causes IO blocking to the whole pool. Not sure > where to address this problem. > > My setup: > - plain SATA 7.2 krpm drives attached to Adaptec aacraid SAS controller > - zfs raidz pool over plain drives, no partitioning > - zvol created with volmode=dev > - Qlogic ISP 2532 FC HBA in target mode > - FreeBSD 10.1-STABLE #1 r279593 > > Create a new LUN with a zvol backend > > ctladm realsync off > ctladm port -o on -p 5 > ctladm create -b block -o file=/dev/zvol/wd/tst1 -o unmap=on -l 0 -d > wd.tst1 -S tst1 > > Both target an initiator hosts connected to the FC fabric. Initiator is > Win2012 server, actually it is a VM with RDM LUN to the guest OS. > Formating, reading and writing large amounts of data (file copy/IOmeter) > - so far so good. 
> But as soon as I've tried to delete large files all IO to the LUN > blocks, initiator system just iowaits. gstat on target shows that > underlying disk load bumped to 100%, queue up to 10, but no iowrites > actually in progress, only decent amount of ioreads. After a minute or > so IO unblocks for a second or two than blocks again and so on again > until all UNMAPs are done, it could take up to 5 minutes to delete 10Gb > file. I can see that 'logicalused' property of a zvol shows that the > deleted space was actually released. System log is filled with CTL msgs: > > > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12aaf4 discarded > kernel: (2:5:3/3): WRITE(10). CDB: 2a 00 2f d4 74 b8 00 00 08 00 > kernel: (2:5:3/3): Tag: 0x12ab24, type 1 > kernel: (2:5:3/3): ctl_process_done: 96 seconds > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afa4 discarded > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afd4 discarded > kernel: ctlfedone: got XPT_IMMEDIATE_NOTIFY status 0x36 tag 0xffffffff > seq 0x121104 > kernel: (ctl2:isp1:0:0:3): ctlfe_done: returning task I/O tag 0xffffffff > seq 0x1210d4 > > > I've tried to tackle some sysctls, but no success so far. > > vfs.zfs.vdev.bio_flush_disable: 1 > vfs.zfs.vdev.bio_delete_disable: 1 > vfs.zfs.trim.enabled=0 > > > Disabling UNMAP in CTL (-o unmap=off) resolves the issue completely but > than there is no space reclamation for zvol. > > Any hints would be appreciated. > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Tue Dec 1 17:34:14 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AF186A3E5AC for ; Tue, 1 Dec 2015 17:34:14 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: from mail-pa0-x235.google.com (mail-pa0-x235.google.com [IPv6:2607:f8b0:400e:c03::235]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 7F7D315FF for ; Tue, 1 Dec 2015 17:34:14 +0000 (UTC) (envelope-from mavbsd@gmail.com) Received: by pacej9 with SMTP id ej9so11448386pac.2 for ; Tue, 01 Dec 2015 09:34:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=OxBxRCN2ZF7QV7KJr5gLzqRvvmFVx+4KtIeDs1zZjg0=; b=ltOWYTajnSbQbuyQ1AGdzD6ATn99TYiJ9otadgWLEWpHDl1v6vRLm8W+rsxpi/LZPX 3obq7Ssd9gcsq8wG4k1caiMLM90AEjgHjrJjoNjf3BV2kvE9ouYhgE34wWATzskAcxoN g5Hgy0ZSPT5oOXc9Kap2lefwr/kSwvTfS/JFO+O6Z46u5BuDv2OpwwK7qTb9aTjHM7g5 WL6oCOANP1o38o5mMo1oMAtUpd9g/h+wJYIX8wg45/Ec+U2eosnR6d5I5KXZn0fbZ5ty FpYT4ivWZwzNqCMNyadkY6TFdLQpl6wNQY81dXLo2jwe4r9moE8Jv6CBUqh7YCrvp+9r NAbA== X-Received: by 10.98.43.67 with SMTP id r64mr81886505pfr.3.1448991254043; Tue, 01 Dec 2015 09:34:14 -0800 (PST) Received: from mavbook.mavhome.dp.ua ([12.229.62.2]) by smtp.googlemail.com with ESMTPSA id u76sm59131101pfa.88.2015.12.01.09.34.13 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 01 Dec 2015 09:34:13 -0800 (PST) Sender: Alexander Motin Message-ID: <565DDA14.4010006@FreeBSD.org> Date: Tue, 01 Dec 2015 19:34:12 +0200 From: 
Alexander Motin User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0 MIME-Version: 1.0 To: Christopher Forgeron , Emil Muratov CC: FreeBSD Filesystems Subject: Re: CAM Target over FC and UNMAP problem References: <54F88DEA.2070301@hotplug.ru> In-Reply-To: Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 01 Dec 2015 17:34:14 -0000 Not really. But just as an idea you may try to set tloader unable vfs.zfs.trim.enabled=0 . Aside of obvious disabling TRIM, that is no-op for non-SSD ZFS pool, it also changes the day deletes are handled, possibly making them less blocking. I see no issue here from CTL side -- it does well. It indeed does not limit size of single UNMAP operation, that is not good, but that is because it has no idea about performance of backing store. On 01.12.2015 16:39, Christopher Forgeron wrote: > Did this ever progress further? I'm load testing 10.2 zvol / UNMAP, and > having similar lockup issues. > > On Thu, Mar 5, 2015 at 1:10 PM, Emil Muratov > wrote: > > > I've got an issue with CTL UNMAP and zvol backends. > Seems that UNMAP from the initiator passed to the underlying disks > (without trim support) causes IO blocking to the whole pool. Not sure > where to address this problem. > > My setup: > - plain SATA 7.2 krpm drives attached to Adaptec aacraid SAS controller > - zfs raidz pool over plain drives, no partitioning > - zvol created with volmode=dev > - Qlogic ISP 2532 FC HBA in target mode > - FreeBSD 10.1-STABLE #1 r279593 > > Create a new LUN with a zvol backend > > ctladm realsync off > ctladm port -o on -p 5 > ctladm create -b block -o file=/dev/zvol/wd/tst1 -o unmap=on -l 0 -d > wd.tst1 -S tst1 > > Both target an initiator hosts connected to the FC fabric. Initiator is > Win2012 server, actually it is a VM with RDM LUN to the guest OS. > Formating, reading and writing large amounts of data (file copy/IOmeter) > - so far so good. > But as soon as I've tried to delete large files all IO to the LUN > blocks, initiator system just iowaits. gstat on target shows that > underlying disk load bumped to 100%, queue up to 10, but no iowrites > actually in progress, only decent amount of ioreads. After a minute or > so IO unblocks for a second or two than blocks again and so on again > until all UNMAPs are done, it could take up to 5 minutes to delete 10Gb > file. I can see that 'logicalused' property of a zvol shows that the > deleted space was actually released. System log is filled with CTL msgs: > > > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12aaf4 > discarded > kernel: (2:5:3/3): WRITE(10). CDB: 2a 00 2f d4 74 b8 00 00 08 00 > kernel: (2:5:3/3): Tag: 0x12ab24, type 1 > kernel: (2:5:3/3): ctl_process_done: 96 seconds > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afa4 > discarded > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afd4 > discarded > kernel: ctlfedone: got XPT_IMMEDIATE_NOTIFY status 0x36 tag 0xffffffff > seq 0x121104 > kernel: (ctl2:isp1:0:0:3): ctlfe_done: returning task I/O tag 0xffffffff > seq 0x1210d4 > > > I've tried to tackle some sysctls, but no success so far. 
> > vfs.zfs.vdev.bio_flush_disable: 1 > vfs.zfs.vdev.bio_delete_disable: 1 > vfs.zfs.trim.enabled=0 > > > Disabling UNMAP in CTL (-o unmap=off) resolves the issue completely but > than there is no space reclamation for zvol. > > Any hints would be appreciated. > > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org > " > > -- Alexander Motin From owner-freebsd-fs@freebsd.org Wed Dec 2 00:32:51 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 3D74BA3EEA2 for ; Wed, 2 Dec 2015 00:32:51 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: from mail-ig0-x231.google.com (mail-ig0-x231.google.com [IPv6:2607:f8b0:4001:c05::231]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 02F91139D; Wed, 2 Dec 2015 00:32:51 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: by igcto18 with SMTP id to18so20013029igc.0; Tue, 01 Dec 2015 16:32:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=zRNcGdFapbUWUMgRs38FTeDVJiTZl9n0QnWOJxO+WQU=; b=aSs+oAmB/icC9iaHGzihhnM2tW/2d005RpsHriQ19WJNBvaW4yu3i56KsStqpL0Lib hptSLknbIwXLpHUBskFmul77yI9nS07os68ABGQdR93sFxLHCWDgIk6tA7L/19H+kkGn ETJAthoj582ttc1eJO3etnkUsFoxZ0roVUAgtsGJHlI2gcGKBICobN1zoxMHgD4a+doc uEw/oCiHnv3+dEPBmgk12rVA6wuO3R3YvtXo977JQdPsoQFban+zs3zt+tGWccqvyPSk X/kbeqLm4t0zf6Jmct9IhoShYnwjNZtDmLbUEv+NT/iUuZlzoyv76hl0F114CrU+hDMP pKxQ== MIME-Version: 1.0 X-Received: by 10.50.161.33 with SMTP id xp1mr30129620igb.4.1449016370415; Tue, 01 Dec 2015 16:32:50 -0800 (PST) Received: by 10.36.159.67 with HTTP; Tue, 1 Dec 2015 16:32:50 -0800 (PST) In-Reply-To: <565DDA14.4010006@FreeBSD.org> References: <54F88DEA.2070301@hotplug.ru> <565DDA14.4010006@FreeBSD.org> Date: Tue, 1 Dec 2015 20:32:50 -0400 Message-ID: Subject: Re: CAM Target over FC and UNMAP problem From: Christopher Forgeron To: Alexander Motin Cc: Emil Muratov , FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 00:32:51 -0000 Thanks for the update. Perhaps my situation is different. I have a zpool (vPool175) that is made up of iSCSI disks. Those iSCSIi disks target a zvol on another machine (pool92) made of real disks. The performance of the UNMAP system is excellent when we're talking bulk UNMAPs - I can UNMAP a 5TiB zvol in 50 seconds or so. zpool create and destroy are fairly fast in this situation. However, once my vPool175 is into random writes, the UNMAP performance is terrible. 5 minutes of random writes (averaging 1000iops) will result in 50 MINUTES of UNMAP's after the test run. And it often will hang on I\O before the full 5 min of rrnd write is up. I feel like the UNMAP buffer/count is being exceeded (10,000 pending operations bu default). Sync writes don't have this issue. 
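While such a backlog is draining, the TRIM/delete counters give a rough view of it on the target side; and the earlier suggestion about vfs.zfs.trim.enabled appears to refer to the loader tunable, which has to go into /boot/loader.conf and only takes effect after a reboot. A minimal sketch for the pool92 side, with the kstat name as it appears on 10.x (worth verifying it exists on the running kernel):

  # /boot/loader.conf -- turns off ZFS TRIM and, per the suggestion
  # above, also changes how frees are processed
  vfs.zfs.trim.enabled="0"

  # while UNMAPs are being worked off: cumulative TRIM/delete statistics
  sysctl kstat.zfs.misc.zio_trim

  # per-provider delete ops (the d/s column), as in the gstat output below
  gstat -d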
5 min of 800iops sync writes will result in ~3 mintutes of UNMAP operations after the test is finished. It's not a ctl issue I would think - It's to do with the way ZFS needs to read metadata to then write out the UNMAPs. HOWEVER: I do notice that my remote zpool that is the iSCSI initiator (vPool175), can keep a queue depth of 64 for the UNMAP operations, but on the iSCSI target machine (pool92), the queue depth for the UNMAP operation on the zvol is never more than 1. I've tried modifying the various vfs.zfs.vdev. write controls, but none of them are set to a value of 1, so perhaps CTL is only passing a queue depth of 1 on for UNMAP operations? The zvol should be UNMAPing at the same queue depth as the remote machine - 1-64. I've tried setting the trim min_active on the zvol machine, but also no luck: root@pool92:~ # sysctl vfs.zfs.vdev.trim_min_active=10 vfs.zfs.vdev.trim_min_active: 1 -> 10 The zvol queue stays at 0/1 depth during the 50 minutes of small-block UNMAP. In fact, in general, the queue of the zvol on the iSCSI target machine (pool92) looks to be run at a very low queue depth I feel if we could at least get that queue depth up, we'd have a chance to keep up with the remote system asking for UNMAP. I'm curious to experiment with deeper TAG depths - say 4096, to see if UNMAP aggregation will help out - That may be for tomorrow. On Tue, Dec 1, 2015 at 1:34 PM, Alexander Motin wrote: > Not really. But just as an idea you may try to set tloader unable > vfs.zfs.trim.enabled=0 . Aside of obvious disabling TRIM, that is no-op > for non-SSD ZFS pool, it also changes the day deletes are handled, > possibly making them less blocking. > > I see no issue here from CTL side -- it does well. It indeed does not > limit size of single UNMAP operation, that is not good, but that is > because it has no idea about performance of backing store. > > On 01.12.2015 16:39, Christopher Forgeron wrote: > > Did this ever progress further? I'm load testing 10.2 zvol / UNMAP, and > > having similar lockup issues. > > > > On Thu, Mar 5, 2015 at 1:10 PM, Emil Muratov > > wrote: > > > > > > I've got an issue with CTL UNMAP and zvol backends. > > Seems that UNMAP from the initiator passed to the underlying disks > > (without trim support) causes IO blocking to the whole pool. Not sure > > where to address this problem. > > > > My setup: > > - plain SATA 7.2 krpm drives attached to Adaptec aacraid SAS > controller > > - zfs raidz pool over plain drives, no partitioning > > - zvol created with volmode=dev > > - Qlogic ISP 2532 FC HBA in target mode > > - FreeBSD 10.1-STABLE #1 r279593 > > > > Create a new LUN with a zvol backend > > > > ctladm realsync off > > ctladm port -o on -p 5 > > ctladm create -b block -o file=/dev/zvol/wd/tst1 -o unmap=on -l 0 -d > > wd.tst1 -S tst1 > > > > Both target an initiator hosts connected to the FC fabric. Initiator > is > > Win2012 server, actually it is a VM with RDM LUN to the guest OS. > > Formating, reading and writing large amounts of data (file > copy/IOmeter) > > - so far so good. > > But as soon as I've tried to delete large files all IO to the LUN > > blocks, initiator system just iowaits. gstat on target shows that > > underlying disk load bumped to 100%, queue up to 10, but no iowrites > > actually in progress, only decent amount of ioreads. After a minute > or > > so IO unblocks for a second or two than blocks again and so on again > > until all UNMAPs are done, it could take up to 5 minutes to delete > 10Gb > > file. 
I can see that 'logicalused' property of a zvol shows that the > > deleted space was actually released. System log is filled with CTL > msgs: > > > > > > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12aaf4 > > discarded > > kernel: (2:5:3/3): WRITE(10). CDB: 2a 00 2f d4 74 b8 00 00 08 00 > > kernel: (2:5:3/3): Tag: 0x12ab24, type 1 > > kernel: (2:5:3/3): ctl_process_done: 96 seconds > > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afa4 > > discarded > > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afd4 > > discarded > > kernel: ctlfedone: got XPT_IMMEDIATE_NOTIFY status 0x36 tag > 0xffffffff > > seq 0x121104 > > kernel: (ctl2:isp1:0:0:3): ctlfe_done: returning task I/O tag > 0xffffffff > > seq 0x1210d4 > > > > > > I've tried to tackle some sysctls, but no success so far. > > > > vfs.zfs.vdev.bio_flush_disable: 1 > > vfs.zfs.vdev.bio_delete_disable: 1 > > vfs.zfs.trim.enabled=0 > > > > > > Disabling UNMAP in CTL (-o unmap=off) resolves the issue completely > but > > than there is no space reclamation for zvol. > > > > Any hints would be appreciated. > > > > > > > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org > > " > > > > > > > -- > Alexander Motin > From owner-freebsd-fs@freebsd.org Wed Dec 2 00:39:49 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5462FA3EF79 for ; Wed, 2 Dec 2015 00:39:49 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: from mail-ig0-x22b.google.com (mail-ig0-x22b.google.com [IPv6:2607:f8b0:4001:c05::22b]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1DED4153F; Wed, 2 Dec 2015 00:39:49 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: by igcmv3 with SMTP id mv3so106915231igc.0; Tue, 01 Dec 2015 16:39:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=+Wu+z+M4nrPZ7rV9nkoZ/KMua+wW0321XITeMCcF8Vw=; b=LEp7X11HqRw6qvszVSlOZdfutAZrwho4h5Cdct4GbqbiI45gPT7sB04IHzjz4zFCnK OvkhNalHG2CGh4E02DvmJUMYxIfWq6wGj8PG3VhJmEj8LVRJ64LKRHVhQNTM8/FI43Ut lB0+MW4eTii9rodjEOxL6AoXrMb0vz+5axniPTFYEkx+wGI9TjCQiy4BPxUutNE4hQhk bF5Oyw6rbxm/yXx57+cVRy0c20REt/i1LoCPJboqGwF+EkhH0JrzR7ZQMVlsopUuoCHv 8eacIPtajk7deVdqXoSRpVx2ri480OjuI2qAXHja3//ioqVN7S8oNfkwSs5yWvIqe2Pk ILiA== MIME-Version: 1.0 X-Received: by 10.50.30.6 with SMTP id o6mr31346781igh.94.1449016788463; Tue, 01 Dec 2015 16:39:48 -0800 (PST) Received: by 10.36.159.67 with HTTP; Tue, 1 Dec 2015 16:39:48 -0800 (PST) In-Reply-To: References: <54F88DEA.2070301@hotplug.ru> <565DDA14.4010006@FreeBSD.org> Date: Tue, 1 Dec 2015 20:39:48 -0400 Message-ID: Subject: Re: CAM Target over FC and UNMAP problem From: Christopher Forgeron To: Alexander Motin Cc: Emil Muratov , FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 00:39:49 -0000 
Example of the difference between the machines: vPool175 (it's zpool drives are all iSCSI) dT: 1.060s w: 1.000s L(q) ops/s r/s kBps ms/r w/s kBps ms/w d/s kBps ms/d o/s ms/o %busy Name * 64 97 0 0 0.0 0 0 0.0 97 6643 713.4 0 0.0 109.0| da1* pool92 (it's the zvol target for vPool175's iSCSI connection) dT: 1.003s w: 1.000s L(q) ops/s r/s kBps ms/r w/s kBps ms/w d/s kBps ms/d o/s ms/o %busy Name 0 0 0 0 0.0 0 0 0.0 0 0 0.0 0 0.0 0.0| cd0 0 54 54 3447 2.8 0 0 0.0 0 0 0.0 0 0.0 14.9| da0 0 62 62 3958 3.6 0 0 0.0 0 0 0.0 0 0.0 22.0| da1 0 42 42 2681 3.3 0 0 0.0 0 0 0.0 0 0.0 13.9| da2 0 44 44 2809 2.4 0 0 0.0 0 0 0.0 0 0.0 10.5| da3 0 57 57 3638 3.8 0 0 0.0 0 0 0.0 0 0.0 21.7| da4 0 39 39 2489 3.9 0 0 0.0 0 0 0.0 0 0.0 15.1| da5 1 159 0 0 0.0 0 0 0.0 159 10405 6.3 0 0.0 100.2| zvol/pool92/iscsi0 There is a slight time lag bewteen my copies, but you can see the ~100 delete ops per sec on both sides. Perhaps the queue depth is nothing, but as you can see on the pool92 side with the physical disks, we're hardly working anything. On Tue, Dec 1, 2015 at 8:32 PM, Christopher Forgeron wrote: > Thanks for the update. > > Perhaps my situation is different. > > I have a zpool (vPool175) that is made up of iSCSI disks. > > Those iSCSIi disks target a zvol on another machine (pool92) made of real > disks. > > The performance of the UNMAP system is excellent when we're talking bulk > UNMAPs - I can UNMAP a 5TiB zvol in 50 seconds or so. zpool create and > destroy are fairly fast in this situation. > > However, once my vPool175 is into random writes, the UNMAP performance is > terrible. > > 5 minutes of random writes (averaging 1000iops) will result in 50 MINUTES > of UNMAP's after the test run. And it often will hang on I\O before the > full 5 min of rrnd write is up. I feel like the UNMAP buffer/count is > being exceeded (10,000 pending operations bu default). > > Sync writes don't have this issue. 5 min of 800iops sync writes will > result in ~3 mintutes of UNMAP operations after the test is finished. > > It's not a ctl issue I would think - It's to do with the way ZFS needs to > read metadata to then write out the UNMAPs. > > HOWEVER: > > I do notice that my remote zpool that is the iSCSI initiator (vPool175), > can keep a queue depth of 64 for the UNMAP operations, but on the iSCSI > target machine (pool92), the queue depth for the UNMAP operation on the > zvol is never more than 1. I've tried modifying the various vfs.zfs.vdev. > write controls, but none of them are set to a value of 1, so perhaps CTL is > only passing a queue depth of 1 on for UNMAP operations? The zvol should be > UNMAPing at the same queue depth as the remote machine - 1-64. > > I've tried setting the trim min_active on the zvol machine, but also no > luck: > > root@pool92:~ # sysctl vfs.zfs.vdev.trim_min_active=10 > vfs.zfs.vdev.trim_min_active: 1 -> 10 > > The zvol queue stays at 0/1 depth during the 50 minutes of small-block > UNMAP. > > In fact, in general, the queue of the zvol on the iSCSI target machine > (pool92) looks to be run at a very low queue depth > > I feel if we could at least get that queue depth up, we'd have a chance to > keep up with the remote system asking for UNMAP. > > I'm curious to experiment with deeper TAG depths - say 4096, to see if > UNMAP aggregation will help out - That may be for tomorrow. > > On Tue, Dec 1, 2015 at 1:34 PM, Alexander Motin wrote: > >> Not really. But just as an idea you may try to set tloader unable >> vfs.zfs.trim.enabled=0 . 
Aside of obvious disabling TRIM, that is no-op >> for non-SSD ZFS pool, it also changes the day deletes are handled, >> possibly making them less blocking. >> >> I see no issue here from CTL side -- it does well. It indeed does not >> limit size of single UNMAP operation, that is not good, but that is >> because it has no idea about performance of backing store. >> >> On 01.12.2015 16:39, Christopher Forgeron wrote: >> > Did this ever progress further? I'm load testing 10.2 zvol / UNMAP, and >> > having similar lockup issues. >> > >> > On Thu, Mar 5, 2015 at 1:10 PM, Emil Muratov > > > wrote: >> > >> > >> > I've got an issue with CTL UNMAP and zvol backends. >> > Seems that UNMAP from the initiator passed to the underlying disks >> > (without trim support) causes IO blocking to the whole pool. Not >> sure >> > where to address this problem. >> > >> > My setup: >> > - plain SATA 7.2 krpm drives attached to Adaptec aacraid SAS >> controller >> > - zfs raidz pool over plain drives, no partitioning >> > - zvol created with volmode=dev >> > - Qlogic ISP 2532 FC HBA in target mode >> > - FreeBSD 10.1-STABLE #1 r279593 >> > >> > Create a new LUN with a zvol backend >> > >> > ctladm realsync off >> > ctladm port -o on -p 5 >> > ctladm create -b block -o file=/dev/zvol/wd/tst1 -o unmap=on -l 0 -d >> > wd.tst1 -S tst1 >> > >> > Both target an initiator hosts connected to the FC fabric. >> Initiator is >> > Win2012 server, actually it is a VM with RDM LUN to the guest OS. >> > Formating, reading and writing large amounts of data (file >> copy/IOmeter) >> > - so far so good. >> > But as soon as I've tried to delete large files all IO to the LUN >> > blocks, initiator system just iowaits. gstat on target shows that >> > underlying disk load bumped to 100%, queue up to 10, but no iowrites >> > actually in progress, only decent amount of ioreads. After a minute >> or >> > so IO unblocks for a second or two than blocks again and so on again >> > until all UNMAPs are done, it could take up to 5 minutes to delete >> 10Gb >> > file. I can see that 'logicalused' property of a zvol shows that the >> > deleted space was actually released. System log is filled with CTL >> msgs: >> > >> > >> > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12aaf4 >> > discarded >> > kernel: (2:5:3/3): WRITE(10). CDB: 2a 00 2f d4 74 b8 00 00 08 00 >> > kernel: (2:5:3/3): Tag: 0x12ab24, type 1 >> > kernel: (2:5:3/3): ctl_process_done: 96 seconds >> > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afa4 >> > discarded >> > kernel: (ctl2:isp1:0:0:3): ctlfestart: aborted command 0x12afd4 >> > discarded >> > kernel: ctlfedone: got XPT_IMMEDIATE_NOTIFY status 0x36 tag >> 0xffffffff >> > seq 0x121104 >> > kernel: (ctl2:isp1:0:0:3): ctlfe_done: returning task I/O tag >> 0xffffffff >> > seq 0x1210d4 >> > >> > >> > I've tried to tackle some sysctls, but no success so far. >> > >> > vfs.zfs.vdev.bio_flush_disable: 1 >> > vfs.zfs.vdev.bio_delete_disable: 1 >> > vfs.zfs.trim.enabled=0 >> > >> > >> > Disabling UNMAP in CTL (-o unmap=off) resolves the issue completely >> but >> > than there is no space reclamation for zvol. >> > >> > Any hints would be appreciated. 
>> > >> > >> > >> > _______________________________________________ >> > freebsd-fs@freebsd.org mailing list >> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> > To unsubscribe, send any mail to " >> freebsd-fs-unsubscribe@freebsd.org >> > " >> > >> > >> >> >> -- >> Alexander Motin >> > > From owner-freebsd-fs@freebsd.org Wed Dec 2 11:38:55 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id CF112A3E818 for ; Wed, 2 Dec 2015 11:38:55 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: from smtp.new-ukraine.org (smtp.new-ukraine.org [148.251.53.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "smtp.new-ukraine.org", Issuer "smtp.new-ukraine.org" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 5968F18E0 for ; Wed, 2 Dec 2015 11:38:54 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: on behalf of honored client by smtp.new-ukraine.org with ESMTP id tB2BYcLr035825 for on Wed, 2 Dec 2015 13:34:44 +0200 (EET) Message-ID: <20151202133428.35820@smtp.new-ukraine.org> Date: Wed, 02 Dec 2015 13:34:28 +0200 From: "Zeus Panchenko" To: "FreeBSD Filesystems" Subject: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) Organization: I.B.S. LLC Reply-To: "Zeus Panchenko" X-Attribution: zeus Face: iVBORw0KGgoAAAANSUhEUgAAADAAAAAwBAMAAAClLOS0AAAAFVBMVEWxsbGdnZ3U1NQTExN cXFzx8fG/v7+f8hyWAAACXUlEQVQ4jUWSwXYiIRBFi4yyhtjtWpmRdTL0ZC3TJOukDa6Rc+T/P2F eFepwtFvr8upVFVDua8mLWw6La4VIKTuMdAPOebdU55sQs3n/D1xFFPFGVGh4AHKttr5K0bS6g7N ZCge7qpVLB+f1Z2WAj2OKXwIWt/bXpdXSiu8KXbviWkHxF5td9+lg2e3xlI2SCvatK8YLfHyh9lw 15yrad8Va5eXg4Llr7QmAaC+dL9sDt9iad/DX3OKvLMBf+dm0A0QuMrTvYIevSik1IaSVvgjIHt5 lSCG2ynNRpEcBZ8cgDWk+Ns99qzsYYV3MZoppWzGtYlTO9+meG6m/g92iNO9LfQB2JZsMpoJs7QG ku2KtabRK0bZRwDLyBDvwlxTm6ZlP7qyOqLcfqtLexpDSB4M0H3I/PQy1emvjjzgK+A0LmMKl6Lq zlqzh0VGAw440F6MJd8cY0nI7wiF/fVIBGY7UNCAXy6DmfYGCLLI0wtDbVcDUMqtJLmAhLqODQAe riERAxXJ1/QYGpa0ymqyytpKC19MNXHjvFmEsfcHIrncFR4xdbYWgmfEGLCcZokpGbGj1egMR+6M 1BkNX1pDdhPcOXpAnAeLQUwQLYepgQoZVNGS61yaE8CYA7gYAcWKzwGstACY2HTFvvOwk4FXAG/a mKHni/EcA/GkOk7I0IK7UMIf3+SahU8/FJdiE7KcuWdM3MFocUDEEIX9LfJoo4xV5tnNKc3jJuSs SZWgnnhepgU1zN4Hii18yW4RwDX52CXUtk0Hqz6cHOIUkWaX8fDcB+J7y1y2xDHwjv/8Buu8Ekz6 7tXQAAAAASUVORK5CYII= X-Mailer: MH-E 8.3.1; nil; GNU Emacs 24.3.1 MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: quoted-printable X-NewUkraine-Agent: mailfromd (7.99.92) X-NewUkraine-URL: https://mail.prozora-kraina.org/smtp.html X-NewUkraine-VirStat: NO X-NewUkraine-VirScan: ScanPE, ScanELF, ScanOLE2, ScanMail, PhishingSignatures, ScanHTML, ScanPDF X-NewUkraine-SpamStat: NO X-NewUkraine-SpamScore: -1.700 of 3.500 X-NewUkraine-SpamKeys: AWL,BAYES_00,NO_RECEIVED,NO_RELAYS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 11:38:56 -0000 =2D----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 greetings, we deployed storage, and as it was filling until now, I see I need an advice regarding the configuration and optimization/s ... the main cause I decided to ask for an advice is this: once per month (or even more frequently, depends on the load I suggest) host hangs and only power reset helps, nothing helpful in log files though ... 
just the fact of restart logged and usual ctld activity after reboot, `zpool import' lasts 40min and more, and during this time no resource of the host is used much ... neither CPU nor memory ... top and systat shows no load (I need to export pool first since I need to attach geli first, and if I attach geli with zpool still imported, I receive in the end a lot of "absent/damaged" disks in zpool which disappears after export/import) so, I'm wondering what can I do to trace the cause of hangs? what to monito= re to understand what to expect and how to prevent ...=20 so, please, advise =2D -----------------------------------------------------------------------= ----------- bellow the details are: =2D -----------------------------------------------------------------------= ----------- the box is Supermicro X9DRD-7LN4F with: CPU: Intel(R) Xeon(R) CPU E5-2630L (2 package(s) x 6 core(s) x 2 SMT thre= ads) RAM: 128Gb STOR: 3 x LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (jbod) 60 x HDD 2T (ATA WDC WD20EFRX-68A 0A80, Fixed Direct Access SCSI-6 d= evice 600.000MB/s) OS: FreeBSD 10.1-RELEASE #0 r274401 amd64 to avoid OS memory shortage sysctl vfs.zfs.arc_max is set to 120275861504 to clients, storage is provided via iSCSI by ctld (each target is file back= ed) zpool created of 10 x raidz2, each raidz2 consists of 6 geli devices and now looks so (yes, deduplication is on): > zpool list storage NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH = ALTROOT storage 109T 33.5T 75.2T - - 30% 1.57x ONLINE - > zpool history storage 2013-10-21.01:31:14 zpool create storage=20 raidz2 gpt/c0s00 gpt/c0s01 gpt/c1s00 gpt/c1s01 gpt/c2s00 gpt/c2s01 raidz2 gpt/c0s02 gpt/c0s03 gpt/c1s02 gpt/c1s03 gpt/c2s02 gpt/c2s03 ... raidz2 gpt/c0s18 gpt/c0s19 gpt/c1s18 gpt/c1s19 gpt/c2s18 gpt/c2s19 log mirror gpt/log0 gpt/log1 cache gpt/cache0 gpt/cache1 > zdb storage Cached configuration: version: 5000 name: 'storage' state: 0 txg: 13340514 pool_guid: 11994995707440773547 hostid: 1519855013 hostname: 'storage.foo.bar' vdev_children: 11 vdev_tree: type: 'root' id: 0 guid: 11994995707440773547 children[0]: type: 'raidz' id: 0 guid: 12290021428260525074 nparity: 2 metaslab_array: 46 metaslab_shift: 36 ashift: 12 asize: 12002364751872 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 3897093815971447961 path: '/dev/gpt/c0s00' phys_path: '/dev/gpt/c0s00' whole_disk: 1 DTL: 9133 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 1036685341766239763 path: '/dev/gpt/c0s01' phys_path: '/dev/gpt/c0s01' whole_disk: 1 DTL: 9132 create_txg: 4 ... each geli is created on one HDD > geli list da50.eli Geom name: da50.eli State: ACTIVE EncryptionAlgorithm: AES-XTS KeyLength: 256 Crypto: hardware Version: 6 UsedKey: 0 Flags: (null) KeysAllocated: 466 KeysTotal: 466 Providers: 1. Name: da50.eli Mediasize: 2000398929920 (1.8T) Sectorsize: 4096 Mode: r1w1e3 Consumers: 1. 
Name: da50 Mediasize: 2000398934016 (1.8T) Sectorsize: 512 Stripesize: 4096 Stripeoffset: 0 Mode: r1w1e1 each raidz2 disk configured as: > gpart show da50.eli=20=20=20=20=20 =3D> 6 488378634 da50.eli GPT (1.8T) 6 488378634 1 freebsd-zfs (1.8T) > zfs-stats -a =2D -----------------------------------------------------------------------= --- ZFS Subsystem Report Wed Dec 2 09:59:27 2015 =2D -----------------------------------------------------------------------= --- System Information: Kernel Version: 1001000 (osreldate) Hardware Platform: amd64 Processor Architecture: amd64 FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014 root 9:59AM up 1 day, 46 mins, 10 users, load averages: 1.03, 0.46, 0.75 =2D -----------------------------------------------------------------------= --- System Memory Statistics: Physical Memory: 131012.88M Kernel Memory: 1915.37M DATA: 98.62% 1888.90M TEXT: 1.38% 26.47M =2D -----------------------------------------------------------------------= --- ZFS pool information: Storage pool Version (spa): 5000 Filesystem Version (zpl): 5 =2D -----------------------------------------------------------------------= --- ARC Misc: Deleted: 1961248 Recycle Misses: 127014 Mutex Misses: 5973 Evict Skips: 5973 ARC Size: Current Size (arcsize): 100.00% 114703.88M Target Size (Adaptive, c): 100.00% 114704.00M Min Size (Hard Limit, c_min): 12.50% 14338.00M Max Size (High Water, c_max): ~8:1 114704.00M ARC Size Breakdown: Recently Used Cache Size (p): 93.75% 107535.69M Freq. Used Cache Size (c-p): 6.25% 7168.31M ARC Hash Breakdown: Elements Max: 6746532 Elements Current: 100.00% 6746313 Collisions: 9651654 Chain Max: 0 Chains: 1050203 ARC Eviction Statistics: Evicts Total: 194298918912 Evicts Eligible for L2: 81.00% 157373345280 Evicts Ineligible for L2: 19.00% 36925573632 Evicts Cached to L2: 97939090944 ARC Efficiency Cache Access Total: 109810376 Cache Hit Ratio: 91.57% 100555148 Cache Miss Ratio: 8.43% 9255228 Actual Hit Ratio: 90.54% 99423922 Data Demand Efficiency: 76.64% Data Prefetch Efficiency: 48.46% CACHE HITS BY CACHE LIST: Anonymously Used: 0.88% 881966 Most Recently Used (mru): 23.11% 23236902 Most Frequently Used (mfu): 75.77% 76187020 MRU Ghost (mru_ghost): 0.03% 26449 MFU Ghost (mfu_ghost): 0.22% 222811 CACHE HITS BY DATA TYPE: Demand Data: 10.17% 10227867 Prefetch Data: 0.45% 455126 Demand Metadata: 88.69% 89184329 Prefetch Metadata: 0.68% 687826 CACHE MISSES BY DATA TYPE: Demand Data: 33.69% 3117808 Prefetch Data: 5.23% 484140 Demand Metadata: 56.55% 5233984 Prefetch Metadata: 4.53% 419296 =2D -----------------------------------------------------------------------= --- L2 ARC Summary: Low Memory Aborts: 77 R/W Clashes: 13 Free on Write: 523 L2 ARC Size: Current Size: (Adaptive) 91988.13M Header Size: 0.13% 120.08M L2 ARC Read/Write Activity: Bytes Written: 97783.99M Bytes Read: 2464.81M L2 ARC Breakdown: Access Total: 8110124 Hit Ratio: 2.89% 234616 Miss Ratio: 97.11% 7875508 Feeds: 85129 WRITES: Sent Total: 100.00% 18448 =2D -----------------------------------------------------------------------= --- VDEV Cache Summary: Access Total: 0 Hits Ratio: 0.00% 0 Miss Ratio: 0.00% 0 Delegations: 0 =2D -----------------------------------------------------------------------= --- File-Level Prefetch Stats (DMU): DMU Efficiency: Access Total: 162279162 Hit Ratio: 91.69% 148788486 Miss Ratio: 8.31% 13490676 Colinear Access Total: 13490676 Colinear Hit Ratio: 0.06% 8166 Colinear Miss Ratio: 99.94% 13482510 Stride Access Total: 146863482 Stride Hit Ratio: 99.31% 145846806 
Stride Miss Ratio: 0.69% 1016676 DMU misc: Reclaim successes: 124372 Reclaim failures: 13358138 Stream resets: 618 Stream noresets: 2938602 Bogus streams: 0 =2D -----------------------------------------------------------------------= --- ZFS Tunable (sysctl): kern.maxusers=3D8524 vfs.zfs.arc_max=3D120275861504 vfs.zfs.arc_min=3D15034482688 vfs.zfs.arc_average_blocksize=3D8192 vfs.zfs.arc_meta_used=3D24838283936 vfs.zfs.arc_meta_limit=3D30068965376 vfs.zfs.l2arc_write_max=3D8388608 vfs.zfs.l2arc_write_boost=3D8388608 vfs.zfs.l2arc_headroom=3D2 vfs.zfs.l2arc_feed_secs=3D1 vfs.zfs.l2arc_feed_min_ms=3D200 vfs.zfs.l2arc_noprefetch=3D1 vfs.zfs.l2arc_feed_again=3D1 vfs.zfs.l2arc_norw=3D1 vfs.zfs.anon_size=3D27974656 vfs.zfs.anon_metadata_lsize=3D0 vfs.zfs.anon_data_lsize=3D0 vfs.zfs.mru_size=3D112732930560 vfs.zfs.mru_metadata_lsize=3D18147921408 vfs.zfs.mru_data_lsize=3D92690379776 vfs.zfs.mru_ghost_size=3D7542758400 vfs.zfs.mru_ghost_metadata_lsize=3D1262705664 vfs.zfs.mru_ghost_data_lsize=3D6280052736 vfs.zfs.mfu_size=3D3748620800 vfs.zfs.mfu_metadata_lsize=3D1014886912 vfs.zfs.mfu_data_lsize=3D2723481600 vfs.zfs.mfu_ghost_size=3D24582345728 vfs.zfs.mfu_ghost_metadata_lsize=3D682512384 vfs.zfs.mfu_ghost_data_lsize=3D23899833344 vfs.zfs.l2c_only_size=3D66548531200 vfs.zfs.dedup.prefetch=3D1 vfs.zfs.nopwrite_enabled=3D1 vfs.zfs.mdcomp_disable=3D0 vfs.zfs.dirty_data_max=3D4294967296 vfs.zfs.dirty_data_max_max=3D4294967296 vfs.zfs.dirty_data_max_percent=3D10 vfs.zfs.dirty_data_sync=3D67108864 vfs.zfs.delay_min_dirty_percent=3D60 vfs.zfs.delay_scale=3D500000 vfs.zfs.prefetch_disable=3D0 vfs.zfs.zfetch.max_streams=3D8 vfs.zfs.zfetch.min_sec_reap=3D2 vfs.zfs.zfetch.block_cap=3D256 vfs.zfs.zfetch.array_rd_sz=3D1048576 vfs.zfs.top_maxinflight=3D32 vfs.zfs.resilver_delay=3D2 vfs.zfs.scrub_delay=3D4 vfs.zfs.scan_idle=3D50 vfs.zfs.scan_min_time_ms=3D1000 vfs.zfs.free_min_time_ms=3D1000 vfs.zfs.resilver_min_time_ms=3D3000 vfs.zfs.no_scrub_io=3D0 vfs.zfs.no_scrub_prefetch=3D0 vfs.zfs.metaslab.gang_bang=3D131073 vfs.zfs.metaslab.fragmentation_threshold=3D70 vfs.zfs.metaslab.debug_load=3D0 vfs.zfs.metaslab.debug_unload=3D0 vfs.zfs.metaslab.df_alloc_threshold=3D131072 vfs.zfs.metaslab.df_free_pct=3D4 vfs.zfs.metaslab.min_alloc_size=3D10485760 vfs.zfs.metaslab.load_pct=3D50 vfs.zfs.metaslab.unload_delay=3D8 vfs.zfs.metaslab.preload_limit=3D3 vfs.zfs.metaslab.preload_enabled=3D1 vfs.zfs.metaslab.fragmentation_factor_enabled=3D1 vfs.zfs.metaslab.lba_weighting_enabled=3D1 vfs.zfs.metaslab.bias_enabled=3D1 vfs.zfs.condense_pct=3D200 vfs.zfs.mg_noalloc_threshold=3D0 vfs.zfs.mg_fragmentation_threshold=3D85 vfs.zfs.check_hostid=3D1 vfs.zfs.spa_load_verify_maxinflight=3D10000 vfs.zfs.spa_load_verify_metadata=3D1 vfs.zfs.spa_load_verify_data=3D1 vfs.zfs.recover=3D0 vfs.zfs.deadman_synctime_ms=3D1000000 vfs.zfs.deadman_checktime_ms=3D5000 vfs.zfs.deadman_enabled=3D1 vfs.zfs.spa_asize_inflation=3D24 vfs.zfs.txg.timeout=3D5 vfs.zfs.vdev.cache.max=3D16384 vfs.zfs.vdev.cache.size=3D0 vfs.zfs.vdev.cache.bshift=3D16 vfs.zfs.vdev.trim_on_init=3D1 vfs.zfs.vdev.mirror.rotating_inc=3D0 vfs.zfs.vdev.mirror.rotating_seek_inc=3D5 vfs.zfs.vdev.mirror.rotating_seek_offset=3D1048576 vfs.zfs.vdev.mirror.non_rotating_inc=3D0 vfs.zfs.vdev.mirror.non_rotating_seek_inc=3D1 vfs.zfs.vdev.max_active=3D1000 vfs.zfs.vdev.sync_read_min_active=3D10 vfs.zfs.vdev.sync_read_max_active=3D10 vfs.zfs.vdev.sync_write_min_active=3D10 vfs.zfs.vdev.sync_write_max_active=3D10 vfs.zfs.vdev.async_read_min_active=3D1 vfs.zfs.vdev.async_read_max_active=3D3 
vfs.zfs.vdev.async_write_min_active=3D1 vfs.zfs.vdev.async_write_max_active=3D10 vfs.zfs.vdev.scrub_min_active=3D1 vfs.zfs.vdev.scrub_max_active=3D2 vfs.zfs.vdev.trim_min_active=3D1 vfs.zfs.vdev.trim_max_active=3D64 vfs.zfs.vdev.aggregation_limit=3D131072 vfs.zfs.vdev.read_gap_limit=3D32768 vfs.zfs.vdev.write_gap_limit=3D4096 vfs.zfs.vdev.bio_flush_disable=3D0 vfs.zfs.vdev.bio_delete_disable=3D0 vfs.zfs.vdev.trim_max_bytes=3D2147483648 vfs.zfs.vdev.trim_max_pending=3D64 vfs.zfs.max_auto_ashift=3D13 vfs.zfs.min_auto_ashift=3D9 vfs.zfs.zil_replay_disable=3D0 vfs.zfs.cache_flush_disable=3D0 vfs.zfs.zio.use_uma=3D1 vfs.zfs.zio.exclude_metadata=3D0 vfs.zfs.sync_pass_deferred_free=3D2 vfs.zfs.sync_pass_dont_compress=3D5 vfs.zfs.sync_pass_rewrite=3D2 vfs.zfs.snapshot_list_prefetch=3D0 vfs.zfs.super_owner=3D0 vfs.zfs.debug=3D0 vfs.zfs.version.ioctl=3D4 vfs.zfs.version.acl=3D1 vfs.zfs.version.spa=3D5000 vfs.zfs.version.zpl=3D5 vfs.zfs.vol.mode=3D1 vfs.zfs.trim.enabled=3D1 vfs.zfs.trim.txg_delay=3D32 vfs.zfs.trim.timeout=3D30 vfs.zfs.trim.max_interval=3D1 vm.kmem_size=3D133823901696 vm.kmem_size_scale=3D1 vm.kmem_size_min=3D0 vm.kmem_size_max=3D1319413950874 =2D --=20 Zeus V. Panchenko jid:zeus@im.ibs.dn.ua IT Dpt., I.B.S. LLC GMT+2 (EET) =2D----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iEYEARECAAYFAlZe10QACgkQr3jpPg/3oyqVAwCdHeRra+H9ac/+HCiQ80DhthlZ SSUAnjucvvosNjcUzTqKgGe+LlLctaoV =3DWPge =2D----END PGP SIGNATURE----- From owner-freebsd-fs@freebsd.org Wed Dec 2 11:45:42 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 615A1A3E9E2 for ; Wed, 2 Dec 2015 11:45:42 +0000 (UTC) (envelope-from jg@internetx.com) Received: from mx1.internetx.com (mx1.internetx.com [62.116.129.39]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D57C61C97 for ; Wed, 2 Dec 2015 11:45:41 +0000 (UTC) (envelope-from jg@internetx.com) Received: from localhost (localhost [127.0.0.1]) by mx1.internetx.com (Postfix) with ESMTP id 6EC2C1472005; Wed, 2 Dec 2015 12:45:33 +0100 (CET) X-Virus-Scanned: InterNetX GmbH amavisd-new at ix-mailer.internetx.de Received: from mx1.internetx.com ([62.116.129.39]) by localhost (ix-mailer.internetx.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id LD-+g22QtbFM; Wed, 2 Dec 2015 12:45:30 +0100 (CET) Received: from [192.168.100.26] (pizza.internetx.de [62.116.129.3]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx1.internetx.com (Postfix) with ESMTPSA id 387211472002; Wed, 2 Dec 2015 12:45:30 +0100 (CET) Subject: Re: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) References: <20151202133428.35820@smtp.new-ukraine.org> To: Zeus Panchenko , FreeBSD Filesystems Reply-To: jg@internetx.com From: InterNetX - Juergen Gotteswinter Message-ID: <565ED9D4.5050202@internetx.com> Date: Wed, 2 Dec 2015 12:45:24 +0100 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0 MIME-Version: 1.0 In-Reply-To: <20151202133428.35820@smtp.new-ukraine.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 11:45:42 -0000 Hi, 2 things i whould 
consider suspicious. probably 3 SATA Disks on SAS Controller, Dedup and probably the HBA Firmware Version Am 02.12.2015 um 12:34 schrieb Zeus Panchenko: > greetings, > > we deployed storage, and as it was filling until now, I see I need > an advice regarding the configuration and optimization/s ... > > the main cause I decided to ask for an advice is this: > > once per month (or even more frequently, depends on the load I > suggest) host hangs and only power reset helps, nothing helpful in log > files though ... just the fact of restart logged and usual ctld activity > > after reboot, `zpool import' lasts 40min and more, and during this time > no resource of the host is used much ... neither CPU nor memory ... top > and systat shows no load (I need to export pool first since I need to > attach geli first, and if I attach geli with zpool still imported, I > receive in the end a lot of "absent/damaged" disks in zpool which > disappears after export/import) > > > so, I'm wondering what can I do to trace the cause of hangs? what to monitore to > understand what to expect and how to prevent ... > > > so, please, advise > > > > ---------------------------------------------------------------------------------- > bellow the details are: > ---------------------------------------------------------------------------------- > > the box is Supermicro X9DRD-7LN4F with: > > CPU: Intel(R) Xeon(R) CPU E5-2630L (2 package(s) x 6 core(s) x 2 SMT threads) > RAM: 128Gb > STOR: 3 x LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (jbod) > 60 x HDD 2T (ATA WDC WD20EFRX-68A 0A80, Fixed Direct Access SCSI-6 device 600.000MB/s) > > OS: FreeBSD 10.1-RELEASE #0 r274401 amd64 > > to avoid OS memory shortage sysctl vfs.zfs.arc_max is set to 120275861504 > > to clients, storage is provided via iSCSI by ctld (each target is file backed) > > zpool created of 10 x raidz2, each raidz2 consists of 6 geli devices and > now looks so (yes, deduplication is on): > >> zpool list storage > NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT > storage 109T 33.5T 75.2T - - 30% 1.57x ONLINE - > > >> zpool history storage > 2013-10-21.01:31:14 zpool create storage > raidz2 gpt/c0s00 gpt/c0s01 gpt/c1s00 gpt/c1s01 gpt/c2s00 gpt/c2s01 > raidz2 gpt/c0s02 gpt/c0s03 gpt/c1s02 gpt/c1s03 gpt/c2s02 gpt/c2s03 > ... > raidz2 gpt/c0s18 gpt/c0s19 gpt/c1s18 gpt/c1s19 gpt/c2s18 gpt/c2s19 > log mirror gpt/log0 gpt/log1 > cache gpt/cache0 gpt/cache1 > > >> zdb storage > Cached configuration: > version: 5000 > name: 'storage' > state: 0 > txg: 13340514 > pool_guid: 11994995707440773547 > hostid: 1519855013 > hostname: 'storage.foo.bar' > vdev_children: 11 > vdev_tree: > type: 'root' > id: 0 > guid: 11994995707440773547 > children[0]: > type: 'raidz' > id: 0 > guid: 12290021428260525074 > nparity: 2 > metaslab_array: 46 > metaslab_shift: 36 > ashift: 12 > asize: 12002364751872 > is_log: 0 > create_txg: 4 > children[0]: > type: 'disk' > id: 0 > guid: 3897093815971447961 > path: '/dev/gpt/c0s00' > phys_path: '/dev/gpt/c0s00' > whole_disk: 1 > DTL: 9133 > create_txg: 4 > children[1]: > type: 'disk' > id: 1 > guid: 1036685341766239763 > path: '/dev/gpt/c0s01' > phys_path: '/dev/gpt/c0s01' > whole_disk: 1 > DTL: 9132 > create_txg: 4 > ... > > > each geli is created on one HDD >> geli list da50.eli > Geom name: da50.eli > State: ACTIVE > EncryptionAlgorithm: AES-XTS > KeyLength: 256 > Crypto: hardware > Version: 6 > UsedKey: 0 > Flags: (null) > KeysAllocated: 466 > KeysTotal: 466 > Providers: > 1. 
Name: da50.eli > Mediasize: 2000398929920 (1.8T) > Sectorsize: 4096 > Mode: r1w1e3 > Consumers: > 1. Name: da50 > Mediasize: 2000398934016 (1.8T) > Sectorsize: 512 > Stripesize: 4096 > Stripeoffset: 0 > Mode: r1w1e1 > > > > each raidz2 disk configured as: >> gpart show da50.eli > => 6 488378634 da50.eli GPT (1.8T) > 6 488378634 1 freebsd-zfs (1.8T) > > >> zfs-stats -a > -------------------------------------------------------------------------- > ZFS Subsystem Report Wed Dec 2 09:59:27 2015 > -------------------------------------------------------------------------- > System Information: > > Kernel Version: 1001000 (osreldate) > Hardware Platform: amd64 > Processor Architecture: amd64 > > FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014 root > 9:59AM up 1 day, 46 mins, 10 users, load averages: 1.03, 0.46, 0.75 > -------------------------------------------------------------------------- > System Memory Statistics: > Physical Memory: 131012.88M > Kernel Memory: 1915.37M > DATA: 98.62% 1888.90M > TEXT: 1.38% 26.47M > -------------------------------------------------------------------------- > ZFS pool information: > Storage pool Version (spa): 5000 > Filesystem Version (zpl): 5 > -------------------------------------------------------------------------- > ARC Misc: > Deleted: 1961248 > Recycle Misses: 127014 > Mutex Misses: 5973 > Evict Skips: 5973 > > ARC Size: > Current Size (arcsize): 100.00% 114703.88M > Target Size (Adaptive, c): 100.00% 114704.00M > Min Size (Hard Limit, c_min): 12.50% 14338.00M > Max Size (High Water, c_max): ~8:1 114704.00M > > ARC Size Breakdown: > Recently Used Cache Size (p): 93.75% 107535.69M > Freq. Used Cache Size (c-p): 6.25% 7168.31M > > ARC Hash Breakdown: > Elements Max: 6746532 > Elements Current: 100.00% 6746313 > Collisions: 9651654 > Chain Max: 0 > Chains: 1050203 > > ARC Eviction Statistics: > Evicts Total: 194298918912 > Evicts Eligible for L2: 81.00% 157373345280 > Evicts Ineligible for L2: 19.00% 36925573632 > Evicts Cached to L2: 97939090944 > > ARC Efficiency > Cache Access Total: 109810376 > Cache Hit Ratio: 91.57% 100555148 > Cache Miss Ratio: 8.43% 9255228 > Actual Hit Ratio: 90.54% 99423922 > > Data Demand Efficiency: 76.64% > Data Prefetch Efficiency: 48.46% > > CACHE HITS BY CACHE LIST: > Anonymously Used: 0.88% 881966 > Most Recently Used (mru): 23.11% 23236902 > Most Frequently Used (mfu): 75.77% 76187020 > MRU Ghost (mru_ghost): 0.03% 26449 > MFU Ghost (mfu_ghost): 0.22% 222811 > > CACHE HITS BY DATA TYPE: > Demand Data: 10.17% 10227867 > Prefetch Data: 0.45% 455126 > Demand Metadata: 88.69% 89184329 > Prefetch Metadata: 0.68% 687826 > > CACHE MISSES BY DATA TYPE: > Demand Data: 33.69% 3117808 > Prefetch Data: 5.23% 484140 > Demand Metadata: 56.55% 5233984 > Prefetch Metadata: 4.53% 419296 > -------------------------------------------------------------------------- > L2 ARC Summary: > Low Memory Aborts: 77 > R/W Clashes: 13 > Free on Write: 523 > > L2 ARC Size: > Current Size: (Adaptive) 91988.13M > Header Size: 0.13% 120.08M > > L2 ARC Read/Write Activity: > Bytes Written: 97783.99M > Bytes Read: 2464.81M > > L2 ARC Breakdown: > Access Total: 8110124 > Hit Ratio: 2.89% 234616 > Miss Ratio: 97.11% 7875508 > Feeds: 85129 > > WRITES: > Sent Total: 100.00% 18448 > -------------------------------------------------------------------------- > VDEV Cache Summary: > Access Total: 0 > Hits Ratio: 0.00% 0 > Miss Ratio: 0.00% 0 > Delegations: 0 > -------------------------------------------------------------------------- > File-Level 
Prefetch Stats (DMU): > > DMU Efficiency: > Access Total: 162279162 > Hit Ratio: 91.69% 148788486 > Miss Ratio: 8.31% 13490676 > > Colinear Access Total: 13490676 > Colinear Hit Ratio: 0.06% 8166 > Colinear Miss Ratio: 99.94% 13482510 > > Stride Access Total: 146863482 > Stride Hit Ratio: 99.31% 145846806 > Stride Miss Ratio: 0.69% 1016676 > > DMU misc: > Reclaim successes: 124372 > Reclaim failures: 13358138 > Stream resets: 618 > Stream noresets: 2938602 > Bogus streams: 0 > -------------------------------------------------------------------------- > ZFS Tunable (sysctl): > kern.maxusers=8524 > vfs.zfs.arc_max=120275861504 > vfs.zfs.arc_min=15034482688 > vfs.zfs.arc_average_blocksize=8192 > vfs.zfs.arc_meta_used=24838283936 > vfs.zfs.arc_meta_limit=30068965376 > vfs.zfs.l2arc_write_max=8388608 > vfs.zfs.l2arc_write_boost=8388608 > vfs.zfs.l2arc_headroom=2 > vfs.zfs.l2arc_feed_secs=1 > vfs.zfs.l2arc_feed_min_ms=200 > vfs.zfs.l2arc_noprefetch=1 > vfs.zfs.l2arc_feed_again=1 > vfs.zfs.l2arc_norw=1 > vfs.zfs.anon_size=27974656 > vfs.zfs.anon_metadata_lsize=0 > vfs.zfs.anon_data_lsize=0 > vfs.zfs.mru_size=112732930560 > vfs.zfs.mru_metadata_lsize=18147921408 > vfs.zfs.mru_data_lsize=92690379776 > vfs.zfs.mru_ghost_size=7542758400 > vfs.zfs.mru_ghost_metadata_lsize=1262705664 > vfs.zfs.mru_ghost_data_lsize=6280052736 > vfs.zfs.mfu_size=3748620800 > vfs.zfs.mfu_metadata_lsize=1014886912 > vfs.zfs.mfu_data_lsize=2723481600 > vfs.zfs.mfu_ghost_size=24582345728 > vfs.zfs.mfu_ghost_metadata_lsize=682512384 > vfs.zfs.mfu_ghost_data_lsize=23899833344 > vfs.zfs.l2c_only_size=66548531200 > vfs.zfs.dedup.prefetch=1 > vfs.zfs.nopwrite_enabled=1 > vfs.zfs.mdcomp_disable=0 > vfs.zfs.dirty_data_max=4294967296 > vfs.zfs.dirty_data_max_max=4294967296 > vfs.zfs.dirty_data_max_percent=10 > vfs.zfs.dirty_data_sync=67108864 > vfs.zfs.delay_min_dirty_percent=60 > vfs.zfs.delay_scale=500000 > vfs.zfs.prefetch_disable=0 > vfs.zfs.zfetch.max_streams=8 > vfs.zfs.zfetch.min_sec_reap=2 > vfs.zfs.zfetch.block_cap=256 > vfs.zfs.zfetch.array_rd_sz=1048576 > vfs.zfs.top_maxinflight=32 > vfs.zfs.resilver_delay=2 > vfs.zfs.scrub_delay=4 > vfs.zfs.scan_idle=50 > vfs.zfs.scan_min_time_ms=1000 > vfs.zfs.free_min_time_ms=1000 > vfs.zfs.resilver_min_time_ms=3000 > vfs.zfs.no_scrub_io=0 > vfs.zfs.no_scrub_prefetch=0 > vfs.zfs.metaslab.gang_bang=131073 > vfs.zfs.metaslab.fragmentation_threshold=70 > vfs.zfs.metaslab.debug_load=0 > vfs.zfs.metaslab.debug_unload=0 > vfs.zfs.metaslab.df_alloc_threshold=131072 > vfs.zfs.metaslab.df_free_pct=4 > vfs.zfs.metaslab.min_alloc_size=10485760 > vfs.zfs.metaslab.load_pct=50 > vfs.zfs.metaslab.unload_delay=8 > vfs.zfs.metaslab.preload_limit=3 > vfs.zfs.metaslab.preload_enabled=1 > vfs.zfs.metaslab.fragmentation_factor_enabled=1 > vfs.zfs.metaslab.lba_weighting_enabled=1 > vfs.zfs.metaslab.bias_enabled=1 > vfs.zfs.condense_pct=200 > vfs.zfs.mg_noalloc_threshold=0 > vfs.zfs.mg_fragmentation_threshold=85 > vfs.zfs.check_hostid=1 > vfs.zfs.spa_load_verify_maxinflight=10000 > vfs.zfs.spa_load_verify_metadata=1 > vfs.zfs.spa_load_verify_data=1 > vfs.zfs.recover=0 > vfs.zfs.deadman_synctime_ms=1000000 > vfs.zfs.deadman_checktime_ms=5000 > vfs.zfs.deadman_enabled=1 > vfs.zfs.spa_asize_inflation=24 > vfs.zfs.txg.timeout=5 > vfs.zfs.vdev.cache.max=16384 > vfs.zfs.vdev.cache.size=0 > vfs.zfs.vdev.cache.bshift=16 > vfs.zfs.vdev.trim_on_init=1 > vfs.zfs.vdev.mirror.rotating_inc=0 > vfs.zfs.vdev.mirror.rotating_seek_inc=5 > vfs.zfs.vdev.mirror.rotating_seek_offset=1048576 > 
vfs.zfs.vdev.mirror.non_rotating_inc=0 > vfs.zfs.vdev.mirror.non_rotating_seek_inc=1 > vfs.zfs.vdev.max_active=1000 > vfs.zfs.vdev.sync_read_min_active=10 > vfs.zfs.vdev.sync_read_max_active=10 > vfs.zfs.vdev.sync_write_min_active=10 > vfs.zfs.vdev.sync_write_max_active=10 > vfs.zfs.vdev.async_read_min_active=1 > vfs.zfs.vdev.async_read_max_active=3 > vfs.zfs.vdev.async_write_min_active=1 > vfs.zfs.vdev.async_write_max_active=10 > vfs.zfs.vdev.scrub_min_active=1 > vfs.zfs.vdev.scrub_max_active=2 > vfs.zfs.vdev.trim_min_active=1 > vfs.zfs.vdev.trim_max_active=64 > vfs.zfs.vdev.aggregation_limit=131072 > vfs.zfs.vdev.read_gap_limit=32768 > vfs.zfs.vdev.write_gap_limit=4096 > vfs.zfs.vdev.bio_flush_disable=0 > vfs.zfs.vdev.bio_delete_disable=0 > vfs.zfs.vdev.trim_max_bytes=2147483648 > vfs.zfs.vdev.trim_max_pending=64 > vfs.zfs.max_auto_ashift=13 > vfs.zfs.min_auto_ashift=9 > vfs.zfs.zil_replay_disable=0 > vfs.zfs.cache_flush_disable=0 > vfs.zfs.zio.use_uma=1 > vfs.zfs.zio.exclude_metadata=0 > vfs.zfs.sync_pass_deferred_free=2 > vfs.zfs.sync_pass_dont_compress=5 > vfs.zfs.sync_pass_rewrite=2 > vfs.zfs.snapshot_list_prefetch=0 > vfs.zfs.super_owner=0 > vfs.zfs.debug=0 > vfs.zfs.version.ioctl=4 > vfs.zfs.version.acl=1 > vfs.zfs.version.spa=5000 > vfs.zfs.version.zpl=5 > vfs.zfs.vol.mode=1 > vfs.zfs.trim.enabled=1 > vfs.zfs.trim.txg_delay=32 > vfs.zfs.trim.timeout=30 > vfs.zfs.trim.max_interval=1 > vm.kmem_size=133823901696 > vm.kmem_size_scale=1 > vm.kmem_size_min=0 > vm.kmem_size_max=1319413950874 > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Wed Dec 2 13:09:55 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D726FA3E14B for ; Wed, 2 Dec 2015 13:09:55 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: from smtp.new-ukraine.org (smtp.new-ukraine.org [148.251.53.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "smtp.new-ukraine.org", Issuer "smtp.new-ukraine.org" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 65AA31A9C for ; Wed, 2 Dec 2015 13:09:54 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: on behalf of honored client by smtp.new-ukraine.org with ESMTP id tB2D9gPD049863 on Wed, 2 Dec 2015 15:09:48 +0200 (EET) Message-ID: <20151202150932.49820@smtp.new-ukraine.org> Date: Wed, 02 Dec 2015 15:09:32 +0200 From: "Zeus Panchenko" To: Cc: "FreeBSD Filesystems" Subject: Re: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) In-reply-to: Your message of Wed, 2 Dec 2015 12:45:24 +0100 <565ED9D4.5050202@internetx.com> References: <20151202133428.35820@smtp.new-ukraine.org> <565ED9D4.5050202@internetx.com> Organization: I.B.S. 
LLC Reply-To: "Zeus Panchenko" X-Attribution: zeus Face: iVBORw0KGgoAAAANSUhEUgAAADAAAAAwBAMAAAClLOS0AAAAFVBMVEWxsbGdnZ3U1NQTExN cXFzx8fG/v7+f8hyWAAACXUlEQVQ4jUWSwXYiIRBFi4yyhtjtWpmRdTL0ZC3TJOukDa6Rc+T/P2F eFepwtFvr8upVFVDua8mLWw6La4VIKTuMdAPOebdU55sQs3n/D1xFFPFGVGh4AHKttr5K0bS6g7N ZCge7qpVLB+f1Z2WAj2OKXwIWt/bXpdXSiu8KXbviWkHxF5td9+lg2e3xlI2SCvatK8YLfHyh9lw 15yrad8Va5eXg4Llr7QmAaC+dL9sDt9iad/DX3OKvLMBf+dm0A0QuMrTvYIevSik1IaSVvgjIHt5 lSCG2ynNRpEcBZ8cgDWk+Ns99qzsYYV3MZoppWzGtYlTO9+meG6m/g92iNO9LfQB2JZsMpoJs7QG ku2KtabRK0bZRwDLyBDvwlxTm6ZlP7qyOqLcfqtLexpDSB4M0H3I/PQy1emvjjzgK+A0LmMKl6Lq zlqzh0VGAw440F6MJd8cY0nI7wiF/fVIBGY7UNCAXy6DmfYGCLLI0wtDbVcDUMqtJLmAhLqODQAe riERAxXJ1/QYGpa0ymqyytpKC19MNXHjvFmEsfcHIrncFR4xdbYWgmfEGLCcZokpGbGj1egMR+6M 1BkNX1pDdhPcOXpAnAeLQUwQLYepgQoZVNGS61yaE8CYA7gYAcWKzwGstACY2HTFvvOwk4FXAG/a mKHni/EcA/GkOk7I0IK7UMIf3+SahU8/FJdiE7KcuWdM3MFocUDEEIX9LfJoo4xV5tnNKc3jJuSs SZWgnnhepgU1zN4Hii18yW4RwDX52CXUtk0Hqz6cHOIUkWaX8fDcB+J7y1y2xDHwjv/8Buu8Ekz6 7tXQAAAAASUVORK5CYII= X-Mailer: MH-E 8.3.1; nil; GNU Emacs 24.3.1 MIME-Version: 1.0 Content-Type: multipart/signed; boundary="=-=-="; micalg=pgp-sha1; protocol="application/pgp-signature" X-NewUkraine-Agent: mailfromd (7.99.92) X-NewUkraine-URL: https://mail.prozora-kraina.org/smtp.html X-NewUkraine-VirStat: NO X-NewUkraine-VirScan: ScanPE, ScanELF, ScanOLE2, ScanMail, PhishingSignatures, ScanHTML, ScanPDF X-NewUkraine-SpamStat: NO X-NewUkraine-SpamScore: -1.700 of 3.500 X-NewUkraine-SpamKeys: AWL,BAYES_00,NO_RECEIVED,NO_RELAYS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 13:09:55 -0000 --=-=-= Content-Type: text/plain Content-Transfer-Encoding: quoted-printable thanks for soon reply InterNetX - Juergen Gotteswinter wrote: > 2 things i whould consider suspicious. probably 3 >=20 > SATA Disks on SAS Controller, no way to change that > Dedup it is needed > and probably the HBA Firmware Version so, is it worth to try to upgrade it then? =2D-=20 Zeus V. Panchenko jid:zeus@im.ibs.dn.ua IT Dpt., I.B.S. 
LLC GMT+2 (EET) --=-=-= Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iEYEARECAAYFAlZe7YsACgkQr3jpPg/3oyrBHQCdGNFH/whiQE5Pb6yNX8Y6y2oB g6MAnjC/+OIL51xItVlAqqPYko+BILa7 =50TF -----END PGP SIGNATURE----- --=-=-=-- From owner-freebsd-fs@freebsd.org Wed Dec 2 13:10:11 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 8AD4CA3E18D for ; Wed, 2 Dec 2015 13:10:11 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: from smtp.new-ukraine.org (smtp.new-ukraine.org [148.251.53.51]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "smtp.new-ukraine.org", Issuer "smtp.new-ukraine.org" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 18D671B0A for ; Wed, 2 Dec 2015 13:10:10 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: on behalf of honored client by smtp.new-ukraine.org with ESMTP id tB2D9wCI049918 on Wed, 2 Dec 2015 15:10:04 +0200 (EET) Message-ID: <20151202150948.49903@smtp.new-ukraine.org> Date: Wed, 02 Dec 2015 15:09:48 +0200 From: "Zeus Panchenko" To: Cc: "FreeBSD Filesystems" Subject: Re: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) In-reply-to: Your message of Wed, 2 Dec 2015 12:45:24 +0100 <565ED9D4.5050202@internetx.com> References: <20151202133428.35820@smtp.new-ukraine.org> <565ED9D4.5050202@internetx.com> Organization: I.B.S. LLC Reply-To: "Zeus Panchenko" X-Attribution: zeus Face: iVBORw0KGgoAAAANSUhEUgAAADAAAAAwBAMAAAClLOS0AAAAFVBMVEWxsbGdnZ3U1NQTExN cXFzx8fG/v7+f8hyWAAACXUlEQVQ4jUWSwXYiIRBFi4yyhtjtWpmRdTL0ZC3TJOukDa6Rc+T/P2F eFepwtFvr8upVFVDua8mLWw6La4VIKTuMdAPOebdU55sQs3n/D1xFFPFGVGh4AHKttr5K0bS6g7N ZCge7qpVLB+f1Z2WAj2OKXwIWt/bXpdXSiu8KXbviWkHxF5td9+lg2e3xlI2SCvatK8YLfHyh9lw 15yrad8Va5eXg4Llr7QmAaC+dL9sDt9iad/DX3OKvLMBf+dm0A0QuMrTvYIevSik1IaSVvgjIHt5 lSCG2ynNRpEcBZ8cgDWk+Ns99qzsYYV3MZoppWzGtYlTO9+meG6m/g92iNO9LfQB2JZsMpoJs7QG ku2KtabRK0bZRwDLyBDvwlxTm6ZlP7qyOqLcfqtLexpDSB4M0H3I/PQy1emvjjzgK+A0LmMKl6Lq zlqzh0VGAw440F6MJd8cY0nI7wiF/fVIBGY7UNCAXy6DmfYGCLLI0wtDbVcDUMqtJLmAhLqODQAe riERAxXJ1/QYGpa0ymqyytpKC19MNXHjvFmEsfcHIrncFR4xdbYWgmfEGLCcZokpGbGj1egMR+6M 1BkNX1pDdhPcOXpAnAeLQUwQLYepgQoZVNGS61yaE8CYA7gYAcWKzwGstACY2HTFvvOwk4FXAG/a mKHni/EcA/GkOk7I0IK7UMIf3+SahU8/FJdiE7KcuWdM3MFocUDEEIX9LfJoo4xV5tnNKc3jJuSs SZWgnnhepgU1zN4Hii18yW4RwDX52CXUtk0Hqz6cHOIUkWaX8fDcB+J7y1y2xDHwjv/8Buu8Ekz6 7tXQAAAAASUVORK5CYII= X-Mailer: MH-E 8.3.1; nil; GNU Emacs 24.3.1 MIME-Version: 1.0 Content-Type: multipart/signed; boundary="=-=-="; micalg=pgp-sha1; protocol="application/pgp-signature" X-NewUkraine-Agent: mailfromd (7.99.92) X-NewUkraine-URL: https://mail.prozora-kraina.org/smtp.html X-NewUkraine-VirStat: NO X-NewUkraine-VirScan: ScanPE, ScanELF, ScanOLE2, ScanMail, PhishingSignatures, ScanHTML, ScanPDF X-NewUkraine-SpamStat: NO X-NewUkraine-SpamScore: -1.700 of 3.500 X-NewUkraine-SpamKeys: AWL,BAYES_00,NO_RECEIVED,NO_RELAYS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 13:10:11 -0000 --=-=-= Content-Type: text/plain Content-Transfer-Encoding: quoted-printable thanks for soon reply InterNetX - Juergen Gotteswinter wrote: > 2 things i whould consider suspicious. 
probably 3 >=20 > SATA Disks on SAS Controller, no way to change that > Dedup it is needed > and probably the HBA Firmware Version so, is it worth to try to upgrade it then? =2D-=20 Zeus V. Panchenko jid:zeus@im.ibs.dn.ua IT Dpt., I.B.S. LLC GMT+2 (EET) --=-=-= Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iEYEARECAAYFAlZe7YsACgkQr3jpPg/3oyrBHQCdGNFH/whiQE5Pb6yNX8Y6y2oB g6MAnjC/+OIL51xItVlAqqPYko+BILa7 =50TF -----END PGP SIGNATURE----- --=-=-=-- From owner-freebsd-fs@freebsd.org Wed Dec 2 13:13:16 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4DE24A3E3A0 for ; Wed, 2 Dec 2015 13:13:16 +0000 (UTC) (envelope-from tevans.uk@googlemail.com) Received: from mail-lf0-x22e.google.com (mail-lf0-x22e.google.com [IPv6:2a00:1450:4010:c07::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EDDBB1ED0 for ; Wed, 2 Dec 2015 13:13:15 +0000 (UTC) (envelope-from tevans.uk@googlemail.com) Received: by lfaz4 with SMTP id z4so49871604lfa.0 for ; Wed, 02 Dec 2015 05:13:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=googlemail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=iMs1HLx3R7R2J1V+XqDFp6rVyU/30o4fD+F+DkcAj+Q=; b=q5cZd6IVmms9/dAX01maomA4e67oqKy32qQ+kkZzi69VblG5+XCXtK0IBKMK6S51hW uKGCyxaqLmkeZaFer8pwUK9o0woTGca76GbSlyHWZS6PizInCQvyyn1ZobbNkt5Zhqcx nDCguvmiVZyINhnf8So587sDrs3BDHt0w3viAw1jlkFOF2vs0J7LBz8TlPT5HmWul95k dc52/e1OEgNQ/Rd8DaSjLyE3rzVjd47vUOwE3GDXQ0BbUtfSCEDdC5x/q33MVVEOcpGR uCQ6h5YdkpvBFQiOKoYlemaD0Y1UJP6Hme52botH7W3Is+EjYhhElYWYcELJTVxtMnRz NqrA== MIME-Version: 1.0 X-Received: by 10.112.173.134 with SMTP id bk6mr2807342lbc.34.1449061993880; Wed, 02 Dec 2015 05:13:13 -0800 (PST) Received: by 10.25.84.134 with HTTP; Wed, 2 Dec 2015 05:13:13 -0800 (PST) In-Reply-To: <20151202133428.35820@smtp.new-ukraine.org> References: <20151202133428.35820@smtp.new-ukraine.org> Date: Wed, 2 Dec 2015 13:13:13 +0000 Message-ID: Subject: Re: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) From: Tom Evans To: Zeus Panchenko Cc: FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 13:13:16 -0000 On Wed, Dec 2, 2015 at 11:34 AM, Zeus Panchenko wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > greetings, > > we deployed storage, and as it was filling until now, I see I need > an advice regarding the configuration and optimization/s ... > > the main cause I decided to ask for an advice is this: > > once per month (or even more frequently, depends on the load I > suggest) host hangs and only power reset helps, nothing helpful in log > files though ... 
just the fact of restart logged and usual ctld activity > > CPU: Intel(R) Xeon(R) CPU E5-2630L (2 package(s) x 6 core(s) x 2 SMT threads) > RAM: 128Gb > STOR: 3 x LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (jbod) > 60 x HDD 2T (ATA WDC WD20EFRX-68A 0A80, Fixed Direct Access SCSI-6 device 600.000MB/s) > > OS: FreeBSD 10.1-RELEASE #0 r274401 amd64 > > to avoid OS memory shortage sysctl vfs.zfs.arc_max is set to 120275861504 > > to clients, storage is provided via iSCSI by ctld (each target is file backed) > > zpool created of 10 x raidz2, each raidz2 consists of 6 geli devices and > now looks so (yes, deduplication is on): > >> zpool list storage > NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT > storage 109T 33.5T 75.2T - - 30% 1.57x ONLINE - You will need to examine zdb output to correctly determine the size of your dedupe table. Assuming an average block size of 64kb, your DDT will be approximately 167GB, ie well outside your RAM (my maths may be off of course - ((33.5*(2**40)/(64*1024))*320)/(2**30)). This article explains in detail: http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe Cheers Tom From owner-freebsd-fs@freebsd.org Wed Dec 2 14:00:21 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B9B05A3EE72 for ; Wed, 2 Dec 2015 14:00:21 +0000 (UTC) (envelope-from kraduk@gmail.com) Received: from mail-wm0-x233.google.com (mail-wm0-x233.google.com [IPv6:2a00:1450:400c:c09::233]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 46FCC13BC for ; Wed, 2 Dec 2015 14:00:21 +0000 (UTC) (envelope-from kraduk@gmail.com) Received: by wmww144 with SMTP id w144so216023294wmw.1 for ; Wed, 02 Dec 2015 06:00:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=Bdth3bVacNKb0WP78Ih/WiTJpcGQY1jzplwiEERk/IQ=; b=ei++WbwFT3htKBEKN2ZDg0MbRrzseNwjfmwF8l7kUWkcnispdxSYv+LM6BRJFb8TWH vYM5b/pXIPv8zsofZbZlinJJQKbcEHIhUIthdD0s8/H1Whxl8EtT2fr5TlAm3iJOUmZx 0iSLPqskqCR4qkIFuZ6X56t0gf4Y0kJ5Q60WEKhua7qvz19lcJu3ZcB+PztRJihkAgPT KBazslylWIosKePeMRIFTYwaoFVzo7EVJfwo8/ehL5SubYWcYSIwB2UbUkXZi5oACjCq tBLEVt/Iwel/NJ2n6PGJccRXYw9TzxfR5gF2ZvlulCeu/Y1UUg8Ro4aEOGEL623aKDtz CqRQ== MIME-Version: 1.0 X-Received: by 10.194.204.202 with SMTP id la10mr5455004wjc.81.1449064819665; Wed, 02 Dec 2015 06:00:19 -0800 (PST) Received: by 10.28.181.213 with HTTP; Wed, 2 Dec 2015 06:00:19 -0800 (PST) In-Reply-To: References: <20151202133428.35820@smtp.new-ukraine.org> Date: Wed, 2 Dec 2015 14:00:19 +0000 Message-ID: Subject: Re: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) From: krad To: Tom Evans Cc: Zeus Panchenko , FreeBSD Filesystems Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 14:00:21 -0000 If this is the case, put more ram in and a good amount of SSD for l2arc. SSD is fairly cheap, so put in a TB of the best you can afford. RAM is always best though if you can afford it and fit it onto the board. 
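As a rough sketch of the arithmetic in Tom's reply above -- the 64 KiB average block size and ~320 bytes per DDT entry are assumptions, not values measured on this pool -- the estimate and the command that gives the real numbers look like this:

  # back-of-the-envelope DDT size: allocated bytes / avg block size * bytes per entry
  awk 'BEGIN { alloc_tib = 33.5; blksz = 64 * 1024; entry = 320;
               printf "DDT estimate: ~%.0f GiB\n",
                      alloc_tib * 2^40 / blksz * entry / 2^30 }'

  # the actual deduplication table histogram (entry counts and their
  # on-disk / in-core sizes) comes from the pool itself
  zdb -DD storage

With the figures quoted above this prints roughly 168 GiB, in line with Tom's ~167 GB estimate and clearly more than the 128 GB of RAM in the box.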
It's probably worth going to 10.2 as well, and maybe even stable as there have been a lot of zfs related patches since 10.1. On 2 December 2015 at 13:13, Tom Evans via freebsd-fs < freebsd-fs@freebsd.org> wrote: > On Wed, Dec 2, 2015 at 11:34 AM, Zeus Panchenko wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > greetings, > > > > we deployed storage, and as it was filling until now, I see I need > > an advice regarding the configuration and optimization/s ... > > > > the main cause I decided to ask for an advice is this: > > > > once per month (or even more frequently, depends on the load I > > suggest) host hangs and only power reset helps, nothing helpful in log > > files though ... just the fact of restart logged and usual ctld activity > > > > CPU: Intel(R) Xeon(R) CPU E5-2630L (2 package(s) x 6 core(s) x 2 SMT > threads) > > RAM: 128Gb > > STOR: 3 x LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (jbod) > > 60 x HDD 2T (ATA WDC WD20EFRX-68A 0A80, Fixed Direct Access > SCSI-6 device 600.000MB/s) > > > > OS: FreeBSD 10.1-RELEASE #0 r274401 amd64 > > > > to avoid OS memory shortage sysctl vfs.zfs.arc_max is set to 120275861504 > > > > to clients, storage is provided via iSCSI by ctld (each target is file > backed) > > > > zpool created of 10 x raidz2, each raidz2 consists of 6 geli devices and > > now looks so (yes, deduplication is on): > > > >> zpool list storage > > NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP > HEALTH ALTROOT > > storage 109T 33.5T 75.2T - - 30% 1.57x > ONLINE - > > You will need to examine zdb output to correctly determine the size of > your dedupe table. Assuming an average block size of 64kb, your DDT > will be approximately 167GB, ie well outside your RAM (my maths may be > off of course - ((33.5*(2**40)/(64*1024))*320)/(2**30)). 
> > This article explains in detail: > > http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe > > Cheers > > Tom > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@freebsd.org Wed Dec 2 21:31:32 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5F48EA3F24D for ; Wed, 2 Dec 2015 21:31:32 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from smtp.digiware.nl (unknown [IPv6:2001:4cb8:90:ffff::3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 23EFE18E6 for ; Wed, 2 Dec 2015 21:31:32 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from rack1.digiware.nl (unknown [127.0.0.1]) by smtp.digiware.nl (Postfix) with ESMTP id 257FE153431; Wed, 2 Dec 2015 22:31:27 +0100 (CET) X-Virus-Scanned: amavisd-new at digiware.nl Received: from smtp.digiware.nl ([127.0.0.1]) by rack1.digiware.nl (rack1.digiware.nl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id mEQh1Dl6p8-9; Wed, 2 Dec 2015 22:31:17 +0100 (CET) Received: from [IPv6:2001:4cb8:3:1:f108:af9:b0c4:8855] (unknown [IPv6:2001:4cb8:3:1:f108:af9:b0c4:8855]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.digiware.nl (Postfix) with ESMTPSA id 4EF2F153408; Wed, 2 Dec 2015 22:31:17 +0100 (CET) Subject: Re: advice needed: zpool of 10 x (raidz2 on (4+2) x 2T HDD) To: krad , Tom Evans References: <20151202133428.35820@smtp.new-ukraine.org> Cc: FreeBSD Filesystems From: Willem Jan Withagen Message-ID: <565F6323.1090803@digiware.nl> Date: Wed, 2 Dec 2015 22:31:15 +0100 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 02 Dec 2015 21:31:32 -0000 On 2-12-2015 15:00, krad wrote: > If this is the case, put more ram in and a good amount of SSD for l2arc. > SSD is fairly cheap, so put in a TB of the best you can afford. RAM is > always best though if you can afford it and fit it onto the board. > > It's probably worth going to 10.2 as well, and maybe even stable as th > have been a lot of zfs related patches since 10.1. I used to have the odd system hangup now and then during rsync-time. When several host start sending their backups. They all have gone since my last upgrade. FreeBSD zfs.digiware.nl 10.2-STABLE FreeBSD 10.2-STABLE #3 r289060M: Fri Oct 9 11:46:21 CEST 2015 Even hangs attributed to using PCI-X bus freezes have gone. uptime is now 54 days, where before it hardly got over 14 days. So I think some things have changed for the better with recent patches. 
--WjW From owner-freebsd-fs@freebsd.org Fri Dec 4 00:23:29 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4FECAA3F441 for ; Fri, 4 Dec 2015 00:23:29 +0000 (UTC) (envelope-from case@SDF.ORG) Received: from sdf.lonestar.org (mx.sdf.org [192.94.73.23]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "mx.sdf.org", Issuer "SDF.ORG" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 384BB1A7E for ; Fri, 4 Dec 2015 00:23:28 +0000 (UTC) (envelope-from case@SDF.ORG) Received: from otaku.freeshell.org (IDENT:case@otaku.freeshell.org [192.94.73.9]) by sdf.lonestar.org (8.15.2/8.14.5) with ESMTPS id tB40KuN3020410 (using TLSv1 with cipher DHE-RSA-AES256-SHA (256 bits) verified NO) for ; Fri, 4 Dec 2015 00:23:09 GMT Date: Fri, 4 Dec 2015 00:20:56 +0000 (UTC) From: John Case X-X-Sender: case@faeroes.freeshell.org To: freebsd-fs@freebsd.org Subject: How much data loss do these 3ware 9650SE errors represent ? Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 00:23:29 -0000 I know 3ware 9650SE cards are a bit old, but a lot of people still have and use them ... I have an 8 drive, raid6 array that lost one drive, then another died during rebuild, and then a third drive created some errors. So, I set the card to ignoreECC during rebuild ... Things went well and the array is rebuilt and healthy, BUT I got 21 of these errors: c0 [date] ERROR Source drive ECC error overwritten: port=4 So I have data loss in 21 places in the array. But how much ? I cannot find any documentation of the size of this data loss - does each error message represent a secor ? a block ? I know it doesn't represent "files" or "directories" since the raid card doesn't know anything about the UFS2 filesystem running on it. Does anyone have any idea how much data each of these errors represents ? Thanks. 
From owner-freebsd-fs@freebsd.org Fri Dec 4 02:39:10 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 01EF2A401C2 for ; Fri, 4 Dec 2015 02:39:10 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E49341362 for ; Fri, 4 Dec 2015 02:39:09 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB42d93v060938 for ; Fri, 4 Dec 2015 02:39:09 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204997] experienced multiple processes in suspfs with complete blackout of the filesystem Date: Fri, 04 Dec 2015 02:39:08 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: linimon@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 02:39:10 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204997 Mark Linimon changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@freebsd.org Fri Dec 4 02:39:20 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 613DAA401F4 for ; Fri, 4 Dec 2015 02:39:20 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 506DA1451 for ; Fri, 4 Dec 2015 02:39:20 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB42dKDV066034 for ; Fri, 4 Dec 2015 02:39:20 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204976] zfs root fails to boot raidz1 raidz2 raidz3 Date: Fri, 04 Dec 2015 02:39:20 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: linimon@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 02:39:20 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204976 Mark Linimon changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-fs@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-fs@freebsd.org Fri Dec 4 02:58:48 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0B106A40738 for ; Fri, 4 Dec 2015 02:58:48 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EDB8E1F6E for ; Fri, 4 Dec 2015 02:58:47 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB42wl6K057290 for ; Fri, 4 Dec 2015 02:58:47 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204997] experienced multiple processes in suspfs with complete blackout of the filesystem Date: Fri, 04 Dec 2015 02:58:48 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: mckusick@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 02:58:48 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204997 Kirk McKusick changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mckusick@FreeBSD.org --- Comment #1 from Kirk McKusick --- Which filesystem was the one that hung? Assuming you know, if the problem happens again try disabling journaled soft-updates on that filesystem to see if that feature is what is causing your problem. Specifically run this command on the unmounted filesystem: tunefs -j disable / where is the filesystem to be disabled (such as /data, /l, etc). -- You are receiving this mail because: You are the assignee for the bug. 
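A minimal sketch of the sequence Kirk describes, with /data and /dev/ada0p4 standing in for the affected mount point and its backing device (they are placeholders, not values from this report):

  # unmount the filesystem that was hanging in suspfs
  umount /data
  # turn off soft-updates journaling on it (soft updates themselves stay enabled)
  tunefs -j disable /dev/ada0p4
  # mount it again and keep watching for further suspfs hangs
  mount /data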
From owner-freebsd-fs@freebsd.org Fri Dec 4 05:27:27 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BFE2AA40362 for ; Fri, 4 Dec 2015 05:27:27 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A11061386 for ; Fri, 4 Dec 2015 05:27:27 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB45RRZZ026317 for ; Fri, 4 Dec 2015 05:27:27 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204997] experienced multiple processes in suspfs with complete blackout of the filesystem Date: Fri, 04 Dec 2015 05:27:27 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: linas@in.spb.ru X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 05:27:27 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204997 --- Comment #2 from Polina Soloviova --- (In reply to Kirk McKusick from comment #1) Hi! It was /1 mount point Thank you for your comment. We now monitoring server's processes in state suspfs, and will send a feedback as soon as it happens again. -- You are receiving this mail because: You are the assignee for the bug. 
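One way to do that monitoring from the shell -- a sketch only, with 1234 standing in for whatever PID turns up stuck:

  # list processes whose wait channel is suspfs (filesystem suspended)
  ps -axo pid,state,wchan,command | grep -w suspfs
  # dump the kernel stack of a stuck process to see where it is blocked
  procstat -kk 1234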
From owner-freebsd-fs@freebsd.org Fri Dec 4 11:13:01 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 56D06A4009A for ; Fri, 4 Dec 2015 11:13:01 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from smtp.digiware.nl (unknown [IPv6:2001:4cb8:90:ffff::3]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 18C2D1B65 for ; Fri, 4 Dec 2015 11:13:00 +0000 (UTC) (envelope-from wjw@digiware.nl) Received: from rack1.digiware.nl (unknown [127.0.0.1]) by smtp.digiware.nl (Postfix) with ESMTP id 71298153416; Fri, 4 Dec 2015 12:12:55 +0100 (CET) X-Virus-Scanned: amavisd-new at digiware.nl Received: from smtp.digiware.nl ([127.0.0.1]) by rack1.digiware.nl (rack1.digiware.nl [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id LdewbwnMFCwu; Fri, 4 Dec 2015 12:12:53 +0100 (CET) Received: from [IPv6:2001:4cb8:3:1:64aa:8bd1:11fd:867b] (unknown [IPv6:2001:4cb8:3:1:64aa:8bd1:11fd:867b]) by smtp.digiware.nl (Postfix) with ESMTP id 45BCD153413; Fri, 4 Dec 2015 12:12:53 +0100 (CET) Subject: Re: CEPH + FreeBSD To: Jordan Hubbard , Rakshith Venkatesh References: Cc: freebsd-fs@freebsd.org From: Willem Jan Withagen Organization: Digiware Management b.v. Message-ID: <5661752C.1090200@digiware.nl> Date: Fri, 4 Dec 2015 12:12:44 +0100 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.4.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 11:13:01 -0000 On 3-9-2015 08:51, Jordan Hubbard wrote: > >> On Sep 2, 2015, at 10:44 PM, Rakshith Venkatesh >> wrote: >> >> Thanks for the reply Mike. Yeah, i think its time for Ceph folks to >> take this up. > > Not to rain on your parade, but the last time I checked, there was no > interest on that side. Their needs are adequately served by Linux, > and I got the feeling that they felt more than a little burned by the > drive-by FreeBSD port that never actually produced a working FreeBSD > port but did cruft their code up with lots of pointless #ifdef BSD > constructs. > > If you or anyone else really wants to move that ball forward, you’re > going to have to volunteer to do the port yourself and also agree to > maintain it for, I dunno, at least 20 years so they don’t feel like > they’re getting into that same situation again. :-) Well Thusfar the Ceph people are more than willing to accept patches. In the tree are already diffs and WIPs for: freebsd, osx, AIX, solaris and it sounds like that the AIX port is working. Also there have been a lot of attempts to get FreeBSD up and running, but mostly it dies because the "porter" needs to start doing different things. Talking to Sage Weill, he said that one of the main things to keep FreeBSD-Ceph up and running, is the possibility to actually run the automated builds and tests on a FreeBSD system. So that is something I/we need to think about. 
--WjW From owner-freebsd-fs@freebsd.org Fri Dec 4 13:51:38 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 17991A419A2 for ; Fri, 4 Dec 2015 13:51:38 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 03A7F10FB for ; Fri, 4 Dec 2015 13:51:38 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id tB4DpbCh040560 for ; Fri, 4 Dec 2015 13:51:37 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-fs@FreeBSD.org Subject: [Bug 204976] zfs root fails to boot raidz1 raidz2 raidz3 Date: Fri, 04 Dec 2015 13:51:38 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: fk@fabiankeil.de X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 13:51:38 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204976 Fabian Keil changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |fk@fabiankeil.de --- Comment #1 from Fabian Keil --- I'm not sure I understand you correctly, but I would not expect the installer to modify devices that weren't selected as install devices. I also would not expect the system to boot reliably if some of the disks (that were not modified during the installation) contain invalid boot code. It could be argued that the installer should warn about the issue but that's not really a ZFS-specific problem. To get the attention from the people working on the installer it might help to adjust the subject and to clarify the report. -- You are receiving this mail because: You are the assignee for the bug. 
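Since stale boot blocks on the unmodified disks are one plausible cause here, a sketch of refreshing them on a GPT/zfsboot layout follows; the device names ada0/ada1/ada2 and the freebsd-boot partition index (-i 1) are assumptions about the installer's usual layout, not details taken from the report:

  # rewrite the protective MBR and the ZFS-aware GPT boot code on each
  # pool member so booting no longer depends on which disk the BIOS picks
  for d in ada0 ada1 ada2; do
      gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $d
  done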
From owner-freebsd-fs@freebsd.org Fri Dec 4 14:47:55 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2EDECA4058D for ; Fri, 4 Dec 2015 14:47:55 +0000 (UTC) (envelope-from murf@perftech.com) Received: from mail.pt.net (mail.pt.net [IPv6:2001:4870:610e:2:4::11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 0A53D1DDA for ; Fri, 4 Dec 2015 14:47:54 +0000 (UTC) (envelope-from murf@perftech.com) Received: from localhost (localhost [IPv6:::1]) by mail.pt.net (Postfix) with ESMTP id E9681840717 for ; Fri, 4 Dec 2015 08:47:46 -0600 (CST) Received: from mail.pt.net ([IPv6:::1]) by localhost (mail.pt.net [IPv6:::1]) (amavisd-new, port 10032) with ESMTP id LNmlsQ3-Fqko for ; Fri, 4 Dec 2015 08:47:46 -0600 (CST) Received: from localhost (localhost [IPv6:::1]) by mail.pt.net (Postfix) with ESMTP id C5BC8840723 for ; Fri, 4 Dec 2015 08:47:46 -0600 (CST) X-Virus-Scanned: amavisd-new at mail.pt.net Received: from mail.pt.net ([IPv6:::1]) by localhost (mail.pt.net [IPv6:::1]) (amavisd-new, port 10026) with ESMTP id xwdJlB41e9_w for ; Fri, 4 Dec 2015 08:47:46 -0600 (CST) Received: from [127.0.0.1] (murfhome-dhcp-251.pt.net [206.210.205.251]) (Authenticated sender: murf@perftech.com) by mail.pt.net (Postfix) with ESMTPA id A480F840717 for ; Fri, 4 Dec 2015 08:47:46 -0600 (CST) Message-ID: <5661A792.4040504@perftech.com> Date: Fri, 04 Dec 2015 08:47:46 -0600 From: "John A. Murphy" User-Agent: Thunderbird 2.0.0.24 (Windows/20100228) MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: fuse libfuse and teh fuse_mount call with options References: <54090215.4080006@freebsd.org> In-Reply-To: <54090215.4080006@freebsd.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Antivirus: avast! (VPS 151204-2, 12/04/2015), Outbound message X-Antivirus-Status: Clean X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 04 Dec 2015 14:47:55 -0000 Did you ever find an answer to your FUSE options question. I'm running around in circles chasing the same issue.... Murf From owner-freebsd-fs@freebsd.org Sat Dec 5 02:04:59 2015 Return-Path: Delivered-To: freebsd-fs@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 52DDAA3FAE6 for ; Sat, 5 Dec 2015 02:04:59 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 2C98110AF for ; Sat, 5 Dec 2015 02:04:58 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from Julian-MBP3.local (50-196-156-133-static.hfc.comcastbusiness.net [50.196.156.133]) (authenticated bits=0) by vps1.elischer.org (8.15.2/8.15.2) with ESMTPSA id tB524r3B023578 (version=TLSv1.2 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Fri, 4 Dec 2015 18:04:57 -0800 (PST) (envelope-from julian@freebsd.org) Subject: Re: fuse libfuse and teh fuse_mount call with options To: "John A. 
Murphy" , freebsd-fs@freebsd.org References: <54090215.4080006@freebsd.org> <5661A792.4040504@perftech.com> From: Julian Elischer Message-ID: <56624640.4070804@freebsd.org> Date: Sat, 5 Dec 2015 10:04:48 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:38.0) Gecko/20100101 Thunderbird/38.4.0 MIME-Version: 1.0 In-Reply-To: <5661A792.4040504@perftech.com> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 05 Dec 2015 02:04:59 -0000 On 4/12/2015 10:47 PM, John A. Murphy wrote: > Did you ever find an answer to your FUSE options question. I'm > running around in circles chasing the same issue.... nope,,it remains 'hacked off' We have only one use of fuse in our appliance so... > > Murf > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >