From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 01:14:35 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DBB74326 for ; Sun, 17 Nov 2013 01:14:34 +0000 (UTC) Received: from frv191.fwdcdn.com (frv191.fwdcdn.com [212.42.77.191]) (using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 959902F00 for ; Sun, 17 Nov 2013 01:14:34 +0000 (UTC) Received: from [10.10.1.29] (helo=frv197.fwdcdn.com) by frv191.fwdcdn.com with esmtp ID 1VhqwZ-0001Op-Hb for fs@freebsd.org; Sun, 17 Nov 2013 03:14:31 +0200 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References:In-Reply-To:Message-Id:Cc:To:Subject:From:Date; bh=ix0oVuOGf67bSsDjJBhLxsUFoq5orPrU6vwhLm9+ynY=; b=AaH5kBb2xQa2tAhmXQtovsNNLptZraqNv+ptWjCpdo9YtoIEa5nPvlPhBT9o1f1zVQnYisNVmHelqxGVrikfV8b+OF5usalKbf2drE5hCM6fn7l3q3I8kCuLpxwH3FqTuuAIQ4+q65eI1NE2KQGO2Mt0j/DNPlsrOT4BEPmgcYA=; Received: from [10.10.10.35] (helo=frv35.ukr.net) by frv197.fwdcdn.com with smtp ID 1VhqwP-000HL4-BL for fs@freebsd.org; Sun, 17 Nov 2013 03:14:21 +0200 Date: Sun, 17 Nov 2013 03:14:20 +0200 From: Vladislav Prodan Subject: Re[3]: [ZFS] cannot detach /dev/gpt/system-disk-60: no valid replicas To: Dmitry Morozovsky X-Mailer: mail.ukr.net 5.0 Message-Id: <1384650805.880575490.jqpic0rc@frv35.ukr.net> In-Reply-To: References: <1384558482.622649210.1mfnhjop@frv35.ukr.net> <89876A0EAB2247FCB1875DFDE28E2799@multiplay.co.uk> <1384593787.502641084.f8mcbhve@frv35.ukr.net> MIME-Version: 1.0 Received: from universite@ukr.net by frv35.ukr.net; Sun, 17 Nov 2013 03:14:21 +0200 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: binary Content-Disposition: inline Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 01:14:35 -0000 > On Sat, 16 Nov 2013, Vladislav Prodan wrote: > > > > Run zpool status -v and either remove or restore the files > > > indicated to have an issue. > > > > > > Once this is done my may well have more luck removing the device. > > > > > > Regards > > > Steve > > > > > After removal of snapshots: > > > > root@mfsbsd:~ # zpool status -v > > pool: tank > > state: ONLINE > > status: One or more devices is currently being resilvered. The pool will > > continue to function, possibly in a degraded state. > > action: Wait for the resilver to complete. > > your current action is described here ;) > > After resilver sinish, issue 'zpool clear', then I suppose you will be able to > detach system-disk-60 root@mfsbsd:~ # zpool status pool: tank state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. 
see: http://illumos.org/msg/ZFS-8000-8A scan: resilvered 51.8G in 8h10m with 3 errors on Sat Nov 16 21:52:58 2013 config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 3 gpt/system-disk-60 ONLINE 0 0 6 errors: 3 data errors, use '-v' for a list root@mfsbsd:~ # zpool clear tank root@mfsbsd:~ # zpool status pool: tank state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see: http://illumos.org/msg/ZFS-8000-8A scan: resilvered 51.8G in 8h10m with 3 errors on Sat Nov 16 21:52:58 2013 config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 gpt/system-disk-60 ONLINE 0 0 0 errors: 3 data errors, use '-v' for a list Why do I need a pool with errors? The process geom is unclear what loaded the processor core by 80% -- Vladislav V. Prodan System & Network Administrator http://support.od.ua +380 67 4584408, +380 99 4060508 VVP88-RIPE From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 15:04:30 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6190D1B2 for ; Sun, 17 Nov 2013 15:04:30 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id DDEC92206 for ; Sun, 17 Nov 2013 15:04:29 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id rAHF4LS0069924 for ; Sun, 17 Nov 2013 19:04:21 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Sun, 17 Nov 2013 19:04:21 +0400 (MSK) From: Dmitry Morozovsky To: freebsd-fs@FreeBSD.org Subject: ZFS: Memory needed for managing L2ARC Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Sun, 17 Nov 2013 19:04:21 +0400 (MSK) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 15:04:30 -0000 Dear colleagues, are there any "real" estimations on how much memory is needed for having efficient, say, 1T of L2ARC? stable/amd64, of course. I have backup storage server, where pool size is targeted to a few hundreds of terabytes (currently ~35T), and hot set is usually less than 512G-1T. quick googling leads to some Solaris/Illumos documents, but even them are not bright enough in answer that. Thanks! 
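One way to gauge the answer empirically, on a box that already has a cache device attached, is to read the ARC kstats directly; a minimal sketch, assuming the kstat names below are present on a stable/amd64 ZFS v28 build (they can differ between ZFS versions):

    # RAM currently consumed by L2ARC headers (bytes)
    sysctl kstat.zfs.misc.arcstats.l2_hdr_size
    # total L2ARC payload currently cached (bytes)
    sysctl kstat.zfs.misc.arcstats.l2_size

Dividing l2_hdr_size by l2_size gives the per-byte header overhead for the workload actually being cached, which can then be scaled up to a 1T cache device.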
-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 15:19:10 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 335086BE for ; Sun, 17 Nov 2013 15:19:10 +0000 (UTC) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AD67E22CD for ; Sun, 17 Nov 2013 15:19:09 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.5/8.14.5) with ESMTP id rAHFJ7WH070113 for ; Sun, 17 Nov 2013 19:19:07 +0400 (MSK) (envelope-from marck@rinet.ru) Date: Sun, 17 Nov 2013 19:19:07 +0400 (MSK) From: Dmitry Morozovsky To: freebsd-fs@FreeBSD.org Subject: Distributed file system on FreeBSD: current status Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (woozle.rinet.ru [0.0.0.0]); Sun, 17 Nov 2013 19:19:07 +0400 (MSK) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 15:19:10 -0000 Dear colleagues, in short: ${SUBJ} ;) Actually, most interesting areas for me are using free disk space on a hundred or so of our FreeBSD machines mostly acting as routers or one service-specific targets (and because they are very dependent on CPU resources (and former on bandwidth and latency also), they are not easy targets for virtualizing) -- argh, too long sentense, sorry ;) The target usage for file system in question would be mostly-once-write and rare-but-bursty-reads storage like backups. Stability is the first concern; scalability is possibly the second, and efficiency is always a surplus ;P Any hints? Thank you in advance! 
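For a first pass at what is already packaged, the ports collection can be searched directly; a rough sketch (which candidates actually fit the write-once/bursty-read, stability-first profile is left open):

    # search the ports INDEX for distributed-filesystem candidates
    cd /usr/ports && make search key=distributed | less
    # or query the package repository by name
    pkg search hadoop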
-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 15:27:05 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 08D03956 for ; Sun, 17 Nov 2013 15:27:05 +0000 (UTC) Received: from frv197.fwdcdn.com (frv197.fwdcdn.com [212.42.77.197]) (using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B92A42330 for ; Sun, 17 Nov 2013 15:27:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References:In-Reply-To:Message-Id:To:Subject:From:Date; bh=Dguf9f2/weyIuwsabfpQGAzHnAAlHqcpnCnHyuXJ3YI=; b=Dsrc/U7yMjkRI0K3g6LHFCXmOTd88j/IF52DajIx7OcBfbNEcsBUkkXyIvgLj073JHPW57iGUumCwNO/IKeeNcEPErQ5Alm1g/rECK9mfJpej9a4T8tduwdvon8McauHZd419lZrEFRhCapJLrHZm3oDZfrcHcEiRn+C9Gdfpw4=; Received: from [10.10.10.45] (helo=frv45.ukr.net) by frv197.fwdcdn.com with smtp ID 1Vi4FX-000Mxh-Hd for freebsd-fs@freebsd.org; Sun, 17 Nov 2013 17:26:59 +0200 Date: Sun, 17 Nov 2013 17:26:58 +0200 From: Vladimir Sharun Subject: Re: ZFS: Memory needed for managing L2ARC To: freebsd-fs@freebsd.org X-Mailer: mail.ukr.net 5.0 Message-Id: <1384701881.854989070.jd2fimne@frv45.ukr.net> In-Reply-To: References: MIME-Version: 1.0 Received: from atz@ukr.net by frv45.ukr.net; Sun, 17 Nov 2013 17:26:59 +0200 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: binary Content-Disposition: inline X-Content-Filtered-By: Mailman/MimeDel 2.1.16 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 15:27:05 -0000 Depending on average block size cached. In my experience it varies from 0,3% to 5%. If you store lot of small files, consider 5-6%, mostly large files - 0,5%. Dear colleagues, are there any "real" estimations on how much memory is needed for having efficient, say, 1T of L2ARC? stable/amd64, of course. I have backup storage server, where pool size is targeted to a few hundreds of terabytes (currently ~35T), and hot set is usually less than 512G-1T. quick googling leads to some Solaris/Illumos documents, but even them are not bright enough in answer that. Thanks! 
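As a back-of-the-envelope check of those percentages, assuming the commonly cited figure of roughly 200 bytes of ARC header per cached L2ARC record (the exact header size depends on the ZFS version):

    # 1 TB of L2ARC at an 8 KB average record size:
    #   1 TiB / 8 KiB  = ~134 million records
    #   134M * ~200 B  = ~25 GB of RAM for headers  (~2.5% of the cache)
    # the same 1 TB at 128 KB records needs only ~1.6 GB (~0.16%)
    sh -c 'echo $(( (1 << 40) / 8192 * 200 / 1024 / 1024 )) MB'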
-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ _______________________________________________ freebsd-fs@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-fs To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 15:28:41 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CEFB19DA for ; Sun, 17 Nov 2013 15:28:41 +0000 (UTC) Received: from elsa.codelab.cz (elsa.codelab.cz [94.124.105.4]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 90741233D for ; Sun, 17 Nov 2013 15:28:41 +0000 (UTC) Received: from elsa.codelab.cz (localhost [127.0.0.1]) by elsa.codelab.cz (Postfix) with ESMTP id 7FC722842E; Sun, 17 Nov 2013 16:28:33 +0100 (CET) Received: from [192.168.1.2] (ip-89-177-49-222.net.upcbroadband.cz [89.177.49.222]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by elsa.codelab.cz (Postfix) with ESMTPSA id 00CD32842B; Sun, 17 Nov 2013 16:28:30 +0100 (CET) Message-ID: <5288E09E.4010707@quip.cz> Date: Sun, 17 Nov 2013 16:28:30 +0100 From: Miroslav Lachman <000.fbsd@quip.cz> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.19) Gecko/20110420 Lightning/1.0b1 SeaMonkey/2.0.14 MIME-Version: 1.0 To: Dmitry Morozovsky Subject: Re: ZFS: Memory needed for managing L2ARC References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 15:28:41 -0000 Dmitry Morozovsky wrote: > Dear colleagues, > > are there any "real" estimations on how much memory is needed for having > efficient, say, 1T of L2ARC? stable/amd64, of course. > > I have backup storage server, where pool size is targeted to a few hundreds of > terabytes (currently ~35T), and hot set is usually less than 512G-1T. > > quick googling leads to some Solaris/Illumos documents, but even them are not > bright enough in answer that. I have some old informations in a bookmarks: Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34674.html Approximately 200 bytes per record. 
I use the following example: Suppose we use a Seagate LP 2 TByte disk for the L2ARC + Disk has 3,907,029,168 512 byte sectors, guaranteed + Workload uses 8 kByte fixed record size RAM needed for arc_buf_hdr entries + Need = ~(3,907,029,168 - 9,232) * 200 / 16 = ~48 GBytes http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34677.html I hope this helps Miroslav Lachman From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 15:54:06 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EEDE3EC for ; Sun, 17 Nov 2013 15:54:06 +0000 (UTC) Received: from mail-wi0-x233.google.com (mail-wi0-x233.google.com [IPv6:2a00:1450:400c:c05::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 8A1352476 for ; Sun, 17 Nov 2013 15:54:06 +0000 (UTC) Received: by mail-wi0-f179.google.com with SMTP id fb10so2838462wid.12 for ; Sun, 17 Nov 2013 07:54:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=TfOYQVErUNYjEjYgeKUx4SBMa/xvGt9TwY2IGqUjE4E=; b=i47T/TyEyvrxq7Btfars/K9ZqXSerMt1vdTOmXyFaAUORIa56a1vxDWo6ikQl1MsaP zcqMFm6rCj9PN7uo8mKDp7lNsnAHuJ8igArnfKKyYCxTv0w/2ioEEBCjr9JlbpFzWubt 8COzdz1kRfH8dOb/PL+sWz14gROzVSyaFpp4lHpvqUnGtSAnK3SmWCGD6uLSo1nPpZVd 2Uj5kuT3OLqrF5dGlRbrR6bUltDAjVqwelNpWbMDyp4Vm4pd+09xURnImpiyx49rfJ6T BdUUSu82+zyaOfVl9zxhXR8jhjmHNhRCi3kJW+Ehw+6sx9K9UiQf+fFusSwfV5+5kyCs 0KGA== MIME-Version: 1.0 X-Received: by 10.181.13.6 with SMTP id eu6mr13607827wid.42.1384703645061; Sun, 17 Nov 2013 07:54:05 -0800 (PST) Sender: zhao6014@gmail.com Received: by 10.194.33.98 with HTTP; Sun, 17 Nov 2013 07:54:04 -0800 (PST) Received: by 10.194.33.98 with HTTP; Sun, 17 Nov 2013 07:54:04 -0800 (PST) In-Reply-To: References: Date: Sun, 17 Nov 2013 23:54:04 +0800 X-Google-Sender-Auth: wnZb7OsYL11wnBw9HmW1M6L8ALo Message-ID: Subject: Re: Distributed file system on FreeBSD: current status From: Jov To: Dmitry Morozovsky Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 15:54:07 -0000 what about the hadoop hdfs? it is now in the ports. we use hadoop on thousands of linux nodes. jov On Nov 17, 2013 11:19 PM, "Dmitry Morozovsky" wrote: > Dear colleagues, > > in short: ${SUBJ} ;) > > Actually, most interesting areas for me are using free disk space on a > hundred > or so of our FreeBSD machines mostly acting as routers or one > service-specific > targets (and because they are very dependent on CPU resources (and former > on > bandwidth and latency also), they are not easy targets for virtualizing) -- > argh, too long sentense, sorry ;) > > The target usage for file system in question would be mostly-once-write > and rare-but-bursty-reads storage like backups. > > Stability is the first concern; scalability is possibly the second, and > efficiency is always a surplus ;P > > Any hints? Thank you in advance! 
> > -- > Sincerely, > D.Marck [DM5020, MCK-RIPE, DM3-RIPN] > [ FreeBSD committer: marck@FreeBSD.org ] > ------------------------------------------------------------------------ > *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** > ------------------------------------------------------------------------ > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Sun Nov 17 16:53:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7CCDE9FF; Sun, 17 Nov 2013 16:53:50 +0000 (UTC) Received: from mail-pa0-x22b.google.com (mail-pa0-x22b.google.com [IPv6:2607:f8b0:400e:c03::22b]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5381B278A; Sun, 17 Nov 2013 16:53:50 +0000 (UTC) Received: by mail-pa0-f43.google.com with SMTP id fa1so5715616pad.16 for ; Sun, 17 Nov 2013 08:53:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=9lZWa594pIiuHci5aUMAf+DBoZ3Jqfjg+S8opCMbfy0=; b=Eh0p/jnmQWOMbc4KBOIu+5tiQyW+HRZRV76wiDueX+GppqPql1pMBvjg024WJ4jwub KjM8GTfBcNqyqcfVx/E4yo1f66NU6z93f+mE1iJMhvmNvNBX1vryBz50AjM8XHvWfeUp pQCuRYR/nMefUpD6NQ738dj2l72o66z9ocqD5UKdIzTtWAHuw5541SE/qYWGgkVremQf ULLhd4a/c13Ag6AT7pT1+PGa+TzZYItTJ6ipVvZgSwb5LJfZZWtVT4JqG0HXZjmYpjz/ xkgpAi3sp//HTuUpQP2N0maOzIUZff232g+kKCFyTq2GKONbEb7o4fgp+O7v4HLoJ/49 BBkQ== MIME-Version: 1.0 X-Received: by 10.68.163.33 with SMTP id yf1mr3078716pbb.143.1384707229929; Sun, 17 Nov 2013 08:53:49 -0800 (PST) Received: by 10.70.92.79 with HTTP; Sun, 17 Nov 2013 08:53:49 -0800 (PST) In-Reply-To: <9CB46A22C0BE40029652144B2586462A@d40> References: <9CB46A22C0BE40029652144B2586462A@d40> Date: Sun, 17 Nov 2013 10:53:49 -0600 Message-ID: Subject: Re: rare, random issue with read(), mmap() failing to read entire file From: Adam Vande More To: John Refling Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: freebsd-fs , FreeBSD Questions X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 17 Nov 2013 16:53:50 -0000 On Fri, Nov 15, 2013 at 8:56 PM, John Refling wrote: > > > I'm having some very insidious issues with copying and verifying > (identical) > data from several hard disks. This might be a hardware issue or something > very deep in the disk / filesystem code. I have verified this with several > disks and motherboards. It corrupts 0.0096% of my files, different files > each time! > > > > Background: > > > > 1. I have a 500 GB USB hard disk (the new 4,096 [4k] sector size) which I > have been using to store a master archive of over 70,000 files. > > > > 2. To make a backup of the USB disk, I copied everything over to a 500 GB > SATA hard disk. [Various combinations of `cp -r', `scp -r', `tar -cf - . | > rsh ... tar -xf -', etc.] > > > > 3. To verify that the copy was correct, I did sha256 sums of all files on > both disks. > > > > 4. 
When comparing the sha256 sums on both drives, I discovered that 6 or > so > files did not compare OK from one drive to the other. > > > > 5. When I checked the files individually, the files compared OK, and even > when I recomputed their individual sha256 sums, I got DIFFERENT sha256 sums > which were correct this time! > > > > The above lead me to investigate further, and using ONLY the USB disk, I > recomputed the sha256 sums for all files ON THAT DISK. A small number > (6-12) of files ON THE SAME DISK had different sha256 sums than previously > computed! The disk is read-only so nothing could have changed. > > > > To try to get to the bottom of this, I took the sha256 code and put it in > my > own file reading routine, which reads-in data from the file using read(). > On summing up the total bytes read in the read() loop, I discovered that on > the files that failed to compare, the read() returned EOF before the actual > EOF. According to the manual page this is impossible. I compared the total > number of bytes read by the read() loop to the stat() file length value, > and > they were different! Obviously, the sha256 sum will be different since not > all the file is read. > > > > This happens consistently on 6 to 12 files out of 70,000+ *every* time, and > on DIFFERENT files *every* time. So things work 99.9904% of the time. > > > > But something fails 0.0096% (one hundredth of one percent) of the time, > which with a large number of files is significant! > > > > Instead of read(), I tried mmap()ing chunks of the file. Using mmap() to > access the data in the file instead of read() resulted in a (different) > sha256 sum than the read() version! The mmap() version was correct, except > in ONE case where BOTH versions were WRONG, when compared to a 3rd and 4th > run! > > > > Using `diff -rq disk1 disk2` resulted in similar issues. There were always > a few files that failed to compare. Doing another `diff -rq disk1 disk2` > resulted in a few *other* files that failed to compare, while the ones that > didn't compare OK the first time, DID compare OK the second time. This > happened to 6-12 files out of 70,000+. > > > > Whatever is affecting my use of read() in my sha256 routine seems to also > affect system utilities such as diff! > > > > This gets really insidious because I don't know if the original `cp -r > disk1 > disk2` did these short reads on a few files while copying the files, thus > corrupting my archive backup (on 6-12 files)! > > > > Some of the files that fail are small (10KB) and some are huge (8GB). > > > > HELP! > > > > It takes 7 hours to recompute the sha256 sums of the files on the disk so > random experiments are time consuming, but I'm willing to try things that > are suggested. > > > > System details: > > > > This is observed with the following disks: > > > > Western Digital 500GB SATA 512 byte sectors > > Hitachi 500GB SATA 512 byte sectors > > Iomega RPHD-UG3 500GB USB 4096 byte sectors > > > > in combination with these motherboards: > > > > P4M800Pro-M V2.0: Pentium D 2.66 GHz, 2GB memory > > HP/Compaq Evo: Pentium 4, 2.8 GHz, 2GB memory > > > > OP System version: > > Freebsd: 9.1 RELEASE #0 > > > > no hardware errors noted in /var/log/messages during the file reading > > > > did Spinrite on disks to freshen (re-read/write) all sectors, with no > errors. > > > > The file systems were built using: > > > > dd if=/dev/zero of=/dev/xxx bs=2m > > newfs -m0 /dev/xxx > > > > Looked through the mailing lists and bug reports but can't see anything > similar. 
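One cheap way to reproduce the symptom without the sha256 step is to compare each file's stat() size against the number of bytes a sequential read actually delivers; a minimal sketch using only base-system tools (the path and block size are placeholders):

    # flag any file where a full sequential read returns fewer bytes than stat() reports
    find /mnt/archive -type f | while read -r f; do
        sz=$(stat -f %z "$f")
        rd=$(dd if="$f" bs=1m 2>/dev/null | wc -c | tr -d ' ')
        [ "$sz" -eq "$rd" ] || echo "short read: $f ($rd of $sz bytes)"
    done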
> > > > Thanks for your help, > > > > John Refling > Try recoverdisk(1) -- Adam From owner-freebsd-fs@FreeBSD.ORG Mon Nov 18 11:06:49 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 381EF977 for ; Mon, 18 Nov 2013 11:06:49 +0000 (UTC) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 26FC12081 for ; Mon, 18 Nov 2013 11:06:49 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.7/8.14.7) with ESMTP id rAIB6nUd009040 for ; Mon, 18 Nov 2013 11:06:49 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.7/8.14.7/Submit) id rAIB6mfx009038 for freebsd-fs@FreeBSD.org; Mon, 18 Nov 2013 11:06:48 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 18 Nov 2013 11:06:48 GMT Message-Id: <201311181106.rAIB6mfx009038@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 18 Nov 2013 11:06:49 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/182570 fs [zfs] [patch] ZFS panic in receive o kern/182536 fs [zfs] zfs deadlock o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo o kern/181565 fs [swap] Problem with vnode-backed swap space. o kern/181377 fs [zfs] zfs recv causes an inconsistant pool o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt' o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F o kern/180979 fs [netsmb][patch]: Fix large files handling o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc o kern/180678 fs [NFS] succesfully exported filesystems being reported o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf o kern/178854 fs [ufs] FreeBSD kernel crash in UFS s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS o kern/178412 fs [smbfs] Coredump when smbfs mounted o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize o kern/178387 fs [zfs] [patch] sparse files performance improvements o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see o kern/178329 fs [zfs] extended attributes leak o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink. 
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175449 fs [unionfs] unionfs and devfs misbehaviour o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails 
when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o 
kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server o kern/145750 fs [unionfs] [hang] unionfs locks the machine s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o 
bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. 
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/67326 fs [msdosfs] crash after attempt to mount write protected o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t o kern/9619 fs [nfs] Restarting mountd kills existing mounts 336 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Nov 18 13:08:49 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4E041AFF for ; Mon, 18 Nov 2013 13:08:49 +0000 (UTC) Received: from mail-la0-x230.google.com (mail-la0-x230.google.com [IPv6:2a00:1450:4010:c03::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id CEFC4290F for ; Mon, 18 Nov 2013 13:08:48 +0000 (UTC) Received: by mail-la0-f48.google.com with SMTP id n7so4841821lam.35 for ; Mon, 18 Nov 2013 05:08:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=YmM6WYd//L5nZwKLgeQ6d04bBAKXnlXa1J8P6s8nTr8=; b=0S7W/4/95koakUwZPsxzagqqz+HO8691xoteIdQDJqyr4xZFIpZ+LZ6Ba47pdO8jjj Nn3tNnCnnzv8x70E4BTCfEo7COy4ET5UwIOBOpWrqZYlrRF0lT02wai7cI9o2/LWvA01 co/QepmygkyfzVVF9HuJUfDKo9bJtQm2bzhoE5tAIlDVb+RuuJO8wbpF5IMVMwml6umS R6hr2s4qpp+vLihngOznP00JmC93RV6LDvqO9VSiOUr11iUEljvt9bsZFRFN2hQv4pXf fHsPmdIxP8BX0acIFlhU5ts7VLrjeHhMzs3OEPx5hVv3mbwJ7sZ07I2cweBvgRdG/BwT sTfQ== X-Received: by 10.112.72.233 with SMTP id g9mr13853585lbv.2.1384780126830; Mon, 18 Nov 2013 05:08:46 -0800 (PST) Received: from [192.168.1.129] (mau.donbass.com. 
[92.242.127.250]) by mx.google.com with ESMTPSA id m5sm8725891laj.4.2013.11.18.05.08.46 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Mon, 18 Nov 2013 05:08:46 -0800 (PST) Message-ID: <528A115C.2090904@gmail.com> Date: Mon, 18 Nov 2013 15:08:44 +0200 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101 Thunderbird/24.1.0 MIME-Version: 1.0 To: Ivan Dimitrov , freebsd-fs@freebsd.org Subject: Re: Strange lock/crash - 100% cpu with basic command line utils References: <52821EEE.5040502@gmail.com> In-Reply-To: <52821EEE.5040502@gmail.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 18 Nov 2013 13:08:49 -0000 12.11.2013 14:28, Ivan Dimitrov wrote: > Hello list > > This is my first time reporting a problem, so please excuse me if this > is not the right place or format. Also apology for my poor English. > > Last month we started experiencing strange locks on some of our servers. > On semi-random occasions, when typing `cd`, `ls`, `pwd` the server would > crash and start behave strangely. Sometimes the problem is reproducible, > sometimes all commands work as expected. > All servers are Intel or AMD CPUs with FreeBSD 9.2 that netboot the > latest kernel and load the OS in RAM. > All our servers are using zfs with ssd for cache. Here is an example > server: > Also we tested out with preempted and non preempted kernel. Latest kernel == GENERIC? There were some compatibility issues with certain SSD drives, for example some OCZ drives would never work correctly on Dell servers. Can you post some hardware examples too? Also last month means you were using some FreeBSD version before moving to 9.2? -- Sphinx of black quartz, judge my vow. 
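A quick way to collect the details being asked for here is a handful of base-system commands run on an affected machine; a sketch, with output trimmed as needed:

    uname -a                      # exact kernel version/ident, shows whether it is GENERIC
    sysctl hw.model hw.ncpu hw.physmem
    camcontrol devlist            # attached disks/SSDs and their firmware revisions
    zpool status -v               # pool layout, cache devices, and any logged errors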
From owner-freebsd-fs@FreeBSD.ORG Mon Nov 18 19:14:33 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1BB29701; Mon, 18 Nov 2013 19:14:33 +0000 (UTC) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id A7DDC2067; Mon, 18 Nov 2013 19:14:32 +0000 (UTC) Received: from server.rulingia.com (c220-239-250-249.belrs5.nsw.optusnet.com.au [220.239.250.249]) by vps.rulingia.com (8.14.7/8.14.5) with ESMTP id rAIJENV6001076 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Tue, 19 Nov 2013 06:14:24 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.7/8.14.7) with ESMTP id rAIJEIxx075532 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Tue, 19 Nov 2013 06:14:18 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.7/8.14.7/Submit) id rAIJEHLe075531; Tue, 19 Nov 2013 06:14:17 +1100 (EST) (envelope-from peter) Date: Tue, 19 Nov 2013 06:14:17 +1100 From: Peter Jeremy To: John Refling Subject: Re: rare, random issue with read(), mmap() failing to read entire file Message-ID: <20131118191417.GA75443@server.rulingia.com> References: <9CB46A22C0BE40029652144B2586462A@d40> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="qMm9M+Fa2AknHoGS" Content-Disposition: inline In-Reply-To: <9CB46A22C0BE40029652144B2586462A@d40> X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org, freebsd-questions@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 18 Nov 2013 19:14:33 -0000 --qMm9M+Fa2AknHoGS Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2013-Nov-15 18:56:09 -0800, John Refling wrote: >I'm having some very insidious issues with copying and verifying (identica= l) >data from several hard disks. This might be a hardware issue or something >very deep in the disk / filesystem code. I have verified this with several >disks and motherboards. It corrupts 0.0096% of my files, different files >each time! My gut feeling is that this is a hardware issue. Since you've tried different systems, that would seem to rule them out. Have you tried different USB enclosures/cables/etc? I'm never comfortable running disks over USB. Are you able to try ZFS? It inherently checksums data and should quickly show up any hardware issues. 
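A minimal way to use that checksumming as a test harness, assuming a scratch disk that can be dedicated to it (the device name below is a placeholder):

    # build a throwaway pool on the suspect disk, copy the data in, then force
    # every block to be re-read and verified against its checksum
    zpool create testpool /dev/da0
    cp -R /path/to/archive /testpool/
    zpool scrub testpool
    zpool status -v testpool      # lists any files whose blocks failed checksum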
--=20 Peter Jeremy --qMm9M+Fa2AknHoGS Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.21 (FreeBSD) iKYEARECAGYFAlKKZwlfFIAAAAAALgAoaXNzdWVyLWZwckBub3RhdGlvbnMub3Bl bnBncC5maWZ0aGhvcnNlbWFuLm5ldDBCRjc3QTcyNTg5NEVCRTY0RjREN0VFRUZF OEE0N0JGRjAwRkI4ODcACgkQ/opHv/APuIeBdQCeKmyMSw9iiFi0B83UUeExuzbd GnwAoLd6UbjK0Ghqazi5ohFWG425TwpK =FiR3 -----END PGP SIGNATURE----- --qMm9M+Fa2AknHoGS-- From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 01:16:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A07582E9 for ; Tue, 19 Nov 2013 01:16:02 +0000 (UTC) Received: from mail-pb0-f45.google.com (mail-pb0-f45.google.com [209.85.160.45]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7E2AA27F4 for ; Tue, 19 Nov 2013 01:16:02 +0000 (UTC) Received: by mail-pb0-f45.google.com with SMTP id rp16so1112934pbb.4 for ; Mon, 18 Nov 2013 17:16:01 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to :content-type; bh=RQQILa8yhjYqKyU1vK7bp3Rpwo1XEo9hsjXZUtVhUxw=; b=KD9BZWbrG2MmTzeimYn5jtVnlOlN/vmEcBd4N1UmWADQZnFq07THq38U0vwCp8YiIy ktJy9XkS1K4CWWoiqyyJI67F1TmKH0/1TztuS+VBvIHKXqT1+7vF/s9YcSdXTd5/UR/k pmxeebP86LMu69bzb01+coUXBeOQTt07d9rZs4j5p+4rHfbZ1tKEJAhNbg8IUX7Epjtv pw4gIKTiPv/evo6ZvoH55waNnJvweTLKsrf+O5FEFEuWPPbXZg1MDPfI+g8Axmd6NPYC S1fZct2eouSH/BkpntVuSZQWyxiryt1uygrVyoknK6o+OEmB+/5qPaI4psKIpJAMQ5uq ONOQ== X-Gm-Message-State: ALoCoQlG6DrOLYYXnVuL2Cl374Gp1cZS57lEOz+glnYXXJ5LtvGJGsQFMNoyZYZ0o2f1PG2tvcdx MIME-Version: 1.0 X-Received: by 10.68.189.197 with SMTP id gk5mr23692420pbc.37.1384823761585; Mon, 18 Nov 2013 17:16:01 -0800 (PST) Received: by 10.70.102.133 with HTTP; Mon, 18 Nov 2013 17:16:01 -0800 (PST) Date: Mon, 18 Nov 2013 18:16:01 -0700 Message-ID: Subject: Performance difference between UFS and ZFS with NFS From: Eric Browning To: FreeBSD FS Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 01:16:02 -0000 Some background: -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, -64GB ram each server -Four Intel DC S3700 800GB SSDs for primary storage, each server. -FreeBSD 9 stable as of 902503 -ZFS v28 and later updated to feature flags (v29?) -LSI 9200-8i controller -Intel I350T4 nic (only one port being used currently) using all four in LACP overtaxed the server's NFS queue from what we found out making the server basically unusable. There is definitely something going on between NFS and ZFS when used as a file server (random workload) for mac home directories. They do not jive well at all and pretty much drag down these beefy servers and cause 20-30 second delays when just attempting to list a directory on Mac 10.7, 10.8 clients although throughput seems fast when copying files. This server's NFS was sitting north of 700% (7+ cores) all day long when using ZFSv28 raidz1. I have also tried stripe, compression on/off, sync enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. 
I've tried just 100% stock settings in loader.conf and and some recommended tuning from various sources on the freebsd lists and other sites including the freebsd handbook. This is my mountpoint creation: zfs create -o mountpoint=/users -o sharenfs=on -o casesensitivity=insensitive -o aclmode=passthrough -o compression=lz4 -o atime=off -o aclinherit=passthrough tank/users This last weekend I switched one of these servers over to a UFS raid 0 setup and NFS now only eats about 36% of one core during the initial login phase of 150-ish users over about 10 minutes and sits under 1-3% during normal usage and directories all list instantly even when drilling down 10 or so directories on the client's home files. The same NFS config on server and clients are still active. Right now I'm going to have to abandon ZFS until it works with NFS. I don't want to get into a finger pointing game, I'd just like to help get this fixed, I have one old i386 server I can try things out on if that helps and it's already on 9 stable and ZFS v28. Thanks, -- Eric Browning Systems Administrator 801-984-7623 Skaggs Catholic Center Juan Diego Catholic High School Saint John the Baptist Middle Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 03:03:37 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A6E3D6E4 for ; Tue, 19 Nov 2013 03:03:37 +0000 (UTC) Received: from smtp101-5.vfemail.net (nine.vfemail.net [108.76.175.9]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3BFD62DDC for ; Tue, 19 Nov 2013 03:03:36 +0000 (UTC) Received: (qmail 25477 invoked by uid 89); 19 Nov 2013 03:03:29 -0000 Received: by simscan 1.4.0 ppid: 25467, pid: 25470, t: 0.0532s scanners:none Received: from unknown (HELO www110) (cmlja0BoYXZva21vbi5jb20=@MTcyLjE2LjEwMC45Mg==) by 172.16.100.61 with ESMTPA; 19 Nov 2013 03:03:29 -0000 Date: Mon, 18 Nov 2013 21:03:28 -0600 Message-ID: <20131118210328.Horde.ONsT69y3hBKUccCAO1qR4Q8@www.vfemail.net> From: Rick Romero To: freebsd-fs@freebsd.org Subject: Re: Performance difference between UFS and ZFS with NFS References: In-Reply-To: User-Agent: Internet Messaging Program (IMP) H5 (6.1.5) X-VFEmail-Originating-IP: MTA4Ljc2LjE3NS4xMw== X-VFEmail-Remote-Browser: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 @ X-VFEmail-AntiSpam: Notify admin@vfemail.net of any spam, and include VFEmail headers X-Remote-Browser: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Content-Type: text/plain; charset=UTF-8; format=flowed; DelSp=Yes MIME-Version: 1.0 Content-Disposition: inline X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 03:03:37 -0000 Quoting Eric Browning : > Right now I'm going to have to abandon ZFS until it works with NFS. I > don't want to get into a finger pointing game, I'd just like to help get > this fixed, I have one old i386 server I can try things out on if that > helps and it's already on 9 stable and ZFS v28. When you created the raid0, did you leave the disk cache enabled? 
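(For reference, and only as a sketch where da0/ada0 stand in for whatever devices actually back the pool, the per-drive cache state can be read with camcontrol:

  camcontrol modepage da0 -m 8     # SAS/SCSI-attached disks: WCE is the write cache enable bit
  camcontrol identify ada0         # SATA disks: look at the "write cache" feature line

The controller's own cache policy is a separate setting in the HBA firmware/BIOS.)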
I know it's against the purpose of ZFS to leave the controller and drive caches enabled, but it sure improves performance. In both our cases, (IIRC)NFS will also wait for that commit response - so if the caches are disabled, NFS really begins to drag. I believe there was a commit in 9.2 that allowed modification of a sysctl to disable/change the NFS commit... in some manner.. I forget exactly.. they all tie in together. Also disable the cache flushing. See https://wiki.freebsd.org/ZFSTuningGuide And http://forums.freebsd.org/archive/index.php/t-30856.html Rick From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 03:24:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id E9EDBA1F for ; Tue, 19 Nov 2013 03:24:46 +0000 (UTC) Received: from mail-pb0-f45.google.com (mail-pb0-f45.google.com [209.85.160.45]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id C59952EF8 for ; Tue, 19 Nov 2013 03:24:46 +0000 (UTC) Received: by mail-pb0-f45.google.com with SMTP id rp16so1254655pbb.32 for ; Mon, 18 Nov 2013 19:24:46 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=mv/l4PByAdz8xuXj4x0nCbW8rkDkKLMGEl/eidqnjI8=; b=C28dj4DUyBM2peBdDT4jpoonzzAYG3qnYOOMEMvt0iCbBE6ft+zDUgGshw0izrUvgL OCTx9zrGKIKK4d1zSb/CBdAlQFEQvjaHvxsHZfioM7H03PF2ZgYZcKfR3kz3QKVSQ1/D XzZhBcxmS/Mctvfg3t+Ub3yrq/MdWNRjM+pX4opwzz2EmL7KaYviw0cigl1rw8daIbKk ZCdR0smZfi/AS6S3qoP4gEeTG8+RQLoWimDuY5mVCU65lvN2WYlHw4xQ6e85w69/jckL vbyIySWodANdIVr5lBhEclJQqrpee8eeXxwb0SQPj699QzXcEqyzjC5r+g0L0tqmY9ve 3mZg== X-Gm-Message-State: ALoCoQmSiPDPwf72ISFHZ6lanjQ+aYNTcgkr18Mfn4+8uxjIEZbgTsX494Oyy4qvHqJfiGic+hp9 MIME-Version: 1.0 X-Received: by 10.66.218.198 with SMTP id pi6mr24563789pac.107.1384831485859; Mon, 18 Nov 2013 19:24:45 -0800 (PST) Received: by 10.70.102.133 with HTTP; Mon, 18 Nov 2013 19:24:45 -0800 (PST) In-Reply-To: <20131118210328.Horde.ONsT69y3hBKUccCAO1qR4Q8@www.vfemail.net> References: <20131118210328.Horde.ONsT69y3hBKUccCAO1qR4Q8@www.vfemail.net> Date: Mon, 18 Nov 2013 20:24:45 -0700 Message-ID: Subject: Re: Performance difference between UFS and ZFS with NFS From: Eric Browning To: Rick Romero Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 03:24:47 -0000 @Rick R I will check on controller caching, I just left the defaults so I assume they are on and had cache flushing disabled already for ZFS. @Jason F. Changing to Solaris isn't an option right now and knowing Mac is a kissing cousin of FreeBSD I prefer to stay with what I know. It's been pretty rock solid so far. Thanks, On Mon, Nov 18, 2013 at 8:03 PM, Rick Romero wrote: > > Quoting Eric Browning : > > > Right now I'm going to have to abandon ZFS until it works with NFS. I >> don't want to get into a finger pointing game, I'd just like to help get >> this fixed, I have one old i386 server I can try things out on if that >> helps and it's already on 9 stable and ZFS v28. 
>> > > When you created the raid0, did you leave the disk cache enabled? I know > it's against the purpose of ZFS to leave the controller and drive caches > enabled, but it sure improves performance. > > In both our cases, (IIRC)NFS will also wait for that commit response - so > if the caches are disabled, NFS really begins to drag. I believe there was > a commit in 9.2 that allowed modification of a sysctl to disable/change the > NFS commit... in some manner.. I forget exactly.. they all tie in together. > > Also disable the cache flushing. > See https://wiki.freebsd.org/ZFSTuningGuide > And http://forums.freebsd.org/archive/index.php/t-30856.html > > > Rick > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > -- Eric Browning Systems Administrator 801-984-7623 Skaggs Catholic Center Juan Diego Catholic High School Saint John the Baptist Middle Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 04:41:57 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id BDC7B429 for ; Tue, 19 Nov 2013 04:41:57 +0000 (UTC) Received: from mail.ultra-secure.de (mail.ultra-secure.de [78.47.114.122]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id EF7E8226A for ; Tue, 19 Nov 2013 04:41:56 +0000 (UTC) Received: (qmail 51962 invoked by uid 89); 19 Nov 2013 04:36:26 -0000 Received: from unknown (HELO ?192.168.1.201?) (rainer@ultra-secure.de@217.71.83.52) by mail.ultra-secure.de with ESMTPA; 19 Nov 2013 04:36:26 -0000 Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1822\)) Subject: Re: Performance difference between UFS and ZFS with NFS From: Rainer Duffner In-Reply-To: Date: Tue, 19 Nov 2013 05:36:16 +0100 Content-Transfer-Encoding: quoted-printable Message-Id: References: To: Eric Browning X-Mailer: Apple Mail (2.1822) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 04:41:57 -0000 Am 19.11.2013 um 02:16 schrieb Eric Browning = : > Some background: > -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, > -64GB ram each server > -Four Intel DC S3700 800GB SSDs for primary storage, each server. > -FreeBSD 9 stable as of 902503 > -ZFS v28 and later updated to feature flags (v29?) > -LSI 9200-8i controller > -Intel I350T4 nic (only one port being used currently) using all four = in > LACP overtaxed the server's NFS queue from what we found out making = the > server basically unusable. Have you tried to use FreeNAS and post in their performance-forum? There=92s a ton of information in that forum. 
From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 13:12:57 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3DA6895D for ; Tue, 19 Nov 2013 13:12:57 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 05BF72F51 for ; Tue, 19 Nov 2013 13:12:56 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqQEADVji1KDaFve/2dsb2JhbABWA4M/U4J2u3pOgTV0giUBAQEDAQEBASArIAsFFhgCAg0FARMCKQEJJgYIBwQBHASHWgYNrVeSJheBKYxzAQEGfyQQBxEBAYJYgUcDiUKMAYN+kF6DRh4xfAEHFyI X-IronPort-AV: E=Sophos;i="4.93,729,1378872000"; d="scan'208";a="71186384" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 19 Nov 2013 08:12:50 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 2B593B4093; Tue, 19 Nov 2013 08:12:50 -0500 (EST) Date: Tue, 19 Nov 2013 08:12:50 -0500 (EST) From: Rick Macklem To: Eric Browning Message-ID: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> In-Reply-To: Subject: Re: Performance difference between UFS and ZFS with NFS MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.209] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 13:12:57 -0000 Eric Browning wrote: > Some background: > -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, > -64GB ram each server > -Four Intel DC S3700 800GB SSDs for primary storage, each server. > -FreeBSD 9 stable as of 902503 > -ZFS v28 and later updated to feature flags (v29?) > -LSI 9200-8i controller > -Intel I350T4 nic (only one port being used currently) using all four > in > LACP overtaxed the server's NFS queue from what we found out making > the > server basically unusable. > > There is definitely something going on between NFS and ZFS when used > as a > file server (random workload) for mac home directories. They do not > jive > well at all and pretty much drag down these beefy servers and cause > 20-30 > second delays when just attempting to list a directory on Mac 10.7, > 10.8 > clients although throughput seems fast when copying files. > > This server's NFS was sitting north of 700% (7+ cores) all day long > when > using ZFSv28 raidz1. I have also tried stripe, compression on/off, > sync > enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. > I've > tried just 100% stock settings in loader.conf and and some > recommended > tuning from various sources on the freebsd lists and other sites > including > the freebsd handbook. 
> > This is my mountpoint creation: > zfs create -o mountpoint=/users -o sharenfs=on -o > casesensitivity=insensitive -o aclmode=passthrough -o compression=lz4 > -o > atime=off -o aclinherit=passthrough tank/users > > This last weekend I switched one of these servers over to a UFS raid > 0 > setup and NFS now only eats about 36% of one core during the initial > login > phase of 150-ish users over about 10 minutes and sits under 1-3% > during > normal usage and directories all list instantly even when drilling > down 10 > or so directories on the client's home files. The same NFS config on > server > and clients are still active. > > Right now I'm going to have to abandon ZFS until it works with NFS. > I > don't want to get into a finger pointing game, I'd just like to help > get > this fixed, I have one old i386 server I can try things out on if > that > helps and it's already on 9 stable and ZFS v28. > Btw, in previous discussions with Eric on this, he provided nfsstat output that seemed to indicate most of his RPC load from the Macs were Access and Getattr RPCs. I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a significant part of this issue. I know nothing about ZFS, but I believe it does always have ACLs enabled and presumably needs to check the ACL for each VOP_ACCESSX(). Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and VOP_GETATTR() can look at these? rick > Thanks, > -- > Eric Browning > Systems Administrator > 801-984-7623 > > Skaggs Catholic Center > Juan Diego Catholic High School > Saint John the Baptist Middle > Saint John the Baptist Elementary > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 18:12:47 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1DB2FC2 for ; Tue, 19 Nov 2013 18:12:47 +0000 (UTC) Received: from mail-pd0-x230.google.com (mail-pd0-x230.google.com [IPv6:2607:f8b0:400e:c02::230]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id EB55621B4 for ; Tue, 19 Nov 2013 18:12:46 +0000 (UTC) Received: by mail-pd0-f176.google.com with SMTP id w10so6320480pde.35 for ; Tue, 19 Nov 2013 10:12:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to; bh=js/8pMt0B9LGsw8Irx5Oxx7pvihDha3kZ+GOx7EbvF8=; b=AOCyPEYWpR4jihu+82TFvExOyLPeRkJkfvA6ALJ1W1wYMViJgsMDquRMWuaqH9eX4z hD3lmnMBsw/CT83f5WU5xrIUo01iITdlfQic26EGsYMz8EGv/fV44LiM+iyRJClaXm3E tj5bzfCDbjPtPrFPibgf79QYQ6jho3aaKeaCLBwSGx4CrPNsHzpNgqWIxb7B5kzzJ/Qy vdcPaivfgL3hvF8uO8GPE92JYLm3dyL2BCb+uDXWb1VqD7B/zztzCfKEPna0j06W7c+s cH7jE5yeR6UxTGZ42QwcQhV3xZZB4LGak1mkBoapjOwY8Z7xjWRlC7+8MEsGd++Wb7Tq ww6Q== X-Received: by 10.68.185.68 with SMTP id fa4mr9393317pbc.136.1384884766596; Tue, 19 Nov 2013 10:12:46 -0800 (PST) Received: from briankrusicw.logan.tv ([64.17.255.138]) by mx.google.com with ESMTPSA id yg3sm36263078pab.16.2013.11.19.10.12.45 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 19 Nov 2013 10:12:46 -0800 (PST) Subject: Re: Performance 
difference between UFS and ZFS with NFS Mime-Version: 1.0 (Apple Message framework v1085) Content-Type: text/plain; charset=us-ascii From: aurfalien In-Reply-To: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> Date: Tue, 19 Nov 2013 10:12:46 -0800 Content-Transfer-Encoding: quoted-printable Message-Id: <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> References: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> To: Rick Macklem X-Mailer: Apple Mail (2.1085) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 18:12:47 -0000 On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote: > Eric Browning wrote: >> Some background: >> -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, >> -64GB ram each server >> -Four Intel DC S3700 800GB SSDs for primary storage, each server. >> -FreeBSD 9 stable as of 902503 >> -ZFS v28 and later updated to feature flags (v29?) >> -LSI 9200-8i controller >> -Intel I350T4 nic (only one port being used currently) using all four >> in >> LACP overtaxed the server's NFS queue from what we found out making >> the >> server basically unusable. >>=20 >> There is definitely something going on between NFS and ZFS when used >> as a >> file server (random workload) for mac home directories. They do not >> jive >> well at all and pretty much drag down these beefy servers and cause >> 20-30 >> second delays when just attempting to list a directory on Mac 10.7, >> 10.8 >> clients although throughput seems fast when copying files. >>=20 >> This server's NFS was sitting north of 700% (7+ cores) all day long >> when >> using ZFSv28 raidz1. I have also tried stripe, compression on/off, >> sync >> enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. >> I've >> tried just 100% stock settings in loader.conf and and some >> recommended >> tuning from various sources on the freebsd lists and other sites >> including >> the freebsd handbook. >>=20 >> This is my mountpoint creation: >> zfs create -o mountpoint=3D/users -o sharenfs=3Don -o >> casesensitivity=3Dinsensitive -o aclmode=3Dpassthrough -o = compression=3Dlz4 >> -o >> atime=3Doff -o aclinherit=3Dpassthrough tank/users >>=20 >> This last weekend I switched one of these servers over to a UFS raid >> 0 >> setup and NFS now only eats about 36% of one core during the initial >> login >> phase of 150-ish users over about 10 minutes and sits under 1-3% >> during >> normal usage and directories all list instantly even when drilling >> down 10 >> or so directories on the client's home files. The same NFS config on >> server >> and clients are still active. >>=20 >> Right now I'm going to have to abandon ZFS until it works with NFS. >> I >> don't want to get into a finger pointing game, I'd just like to help >> get >> this fixed, I have one old i386 server I can try things out on if >> that >> helps and it's already on 9 stable and ZFS v28. >>=20 > Btw, in previous discussions with Eric on this, he provided nfsstat > output that seemed to indicate most of his RPC load from the Macs > were Access and Getattr RPCs. >=20 > I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a > significant part of this issue. I know nothing about ZFS, but I = believe > it does always have ACLs enabled and presumably needs to check the > ACL for each VOP_ACCESSX(). 
>=20 > Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and > VOP_GETATTR() can look at these? Indeed. However couldn't one simply disable ACL mode via; zfs set aclinherit=3Ddiscard pool/dataset zfs set aclmode=3Ddiscard pool/dataset Eric, mind setting these and see? Mid/late this week I'll be doing a rather large render farm test amongst = our Mac fleet against ZFS. Will reply to this thread with outcome when I'm done. Should be = interesting. - aurf =20 >=20 > rick >=20 >> Thanks, >> -- >> Eric Browning >> Systems Administrator >> 801-984-7623 >>=20 >> Skaggs Catholic Center >> Juan Diego Catholic High School >> Saint John the Baptist Middle >> Saint John the Baptist Elementary >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 18:25:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 31380310 for ; Tue, 19 Nov 2013 18:25:02 +0000 (UTC) Received: from mail-pb0-x233.google.com (mail-pb0-x233.google.com [IPv6:2607:f8b0:400e:c01::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 083D62257 for ; Tue, 19 Nov 2013 18:25:02 +0000 (UTC) Received: by mail-pb0-f51.google.com with SMTP id up15so4026174pbc.10 for ; Tue, 19 Nov 2013 10:25:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=subject:mime-version:content-type:from:in-reply-to:date:cc :message-id:references:to; bh=/5tu8Co/8KfQuraDa+xSnrTV8RBaBWzb79kk7EFy3hc=; b=DEKwYbKlB6W4wBUm2jVjlGUJEs7EdqBDW8ysrM4PRRxnMi5zMzntNahQLf9vxwPjyu i2Qs+DppU1hGZM4D+yyOvpOaEcdxkegTeBJ+IxV9rWR0xidlUnBx05JkbGUDROluk6Al GTJvkGi06n0q9YwBABIAmDro57lxvCZiYfRUy9sFrtccePEHxzeDxyKJMEEJuxUp2t2t LZHpLzcWalf11Fv65BaTVr9zqH1Fg2UwoiRJz3xruODYbiLc++72PiuLcOJOoppzffof ogOfH6l3TusvqeIZ01t3tvb673pPi+9QFPGzNysZEjEICjDkCmw9HncJbbzafPe1nrjE 7Swg== X-Received: by 10.68.13.104 with SMTP id g8mr27704739pbc.33.1384885501581; Tue, 19 Nov 2013 10:25:01 -0800 (PST) Received: from briankrusicw.logan.tv ([64.17.255.138]) by mx.google.com with ESMTPSA id kd1sm36317321pab.20.2013.11.19.10.25.00 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 19 Nov 2013 10:25:01 -0800 (PST) Subject: Re: Performance difference between UFS and ZFS with NFS Mime-Version: 1.0 (Apple Message framework v1085) From: aurfalien In-Reply-To: Date: Tue, 19 Nov 2013 10:25:03 -0800 Message-Id: <5969250F-0987-4304-BB95-52C7BAE8D84D@gmail.com> References: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> To: Eric Browning X-Mailer: Apple Mail (2.1085) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 18:25:02 -0000 Curious. Do you have NFS locking enabled client side? Most likely you do as Mac Mail will not run w/o locks, nor will Adobe = prefs like temp cache. etc... So being this is prolly the case, could it be a mem pressure issue and = not enough RAM? So NFS locks take up RAM as does ARC. What are your mem stats and swap = stats during the 700% (yikes) experience? - aurf On Nov 19, 2013, at 10:19 AM, Eric Browning wrote: > Aurf, >=20 > I ran those two commands and it doesn't seem to have made a = difference. Usage is still above 700% and it still takes 30s to list a = directory. The time to list is proportional to the number of users = logged in. On UFS with all students logged in and hammering away at = their files there is no noticeable speed decrease. >=20 >=20 > On Tue, Nov 19, 2013 at 11:12 AM, aurfalien = wrote: >=20 > On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote: >=20 > > Eric Browning wrote: > >> Some background: > >> -Two identical servers, dual AMD Athlon 6220's 16 cores total @ = 3Ghz, > >> -64GB ram each server > >> -Four Intel DC S3700 800GB SSDs for primary storage, each server. > >> -FreeBSD 9 stable as of 902503 > >> -ZFS v28 and later updated to feature flags (v29?) > >> -LSI 9200-8i controller > >> -Intel I350T4 nic (only one port being used currently) using all = four > >> in > >> LACP overtaxed the server's NFS queue from what we found out making > >> the > >> server basically unusable. > >> > >> There is definitely something going on between NFS and ZFS when = used > >> as a > >> file server (random workload) for mac home directories. They do = not > >> jive > >> well at all and pretty much drag down these beefy servers and cause > >> 20-30 > >> second delays when just attempting to list a directory on Mac 10.7, > >> 10.8 > >> clients although throughput seems fast when copying files. > >> > >> This server's NFS was sitting north of 700% (7+ cores) all day long > >> when > >> using ZFSv28 raidz1. I have also tried stripe, compression on/off, > >> sync > >> enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. > >> I've > >> tried just 100% stock settings in loader.conf and and some > >> recommended > >> tuning from various sources on the freebsd lists and other sites > >> including > >> the freebsd handbook. > >> > >> This is my mountpoint creation: > >> zfs create -o mountpoint=3D/users -o sharenfs=3Don -o > >> casesensitivity=3Dinsensitive -o aclmode=3Dpassthrough -o = compression=3Dlz4 > >> -o > >> atime=3Doff -o aclinherit=3Dpassthrough tank/users > >> > >> This last weekend I switched one of these servers over to a UFS = raid > >> 0 > >> setup and NFS now only eats about 36% of one core during the = initial > >> login > >> phase of 150-ish users over about 10 minutes and sits under 1-3% > >> during > >> normal usage and directories all list instantly even when drilling > >> down 10 > >> or so directories on the client's home files. The same NFS config = on > >> server > >> and clients are still active. > >> > >> Right now I'm going to have to abandon ZFS until it works with NFS. > >> I > >> don't want to get into a finger pointing game, I'd just like to = help > >> get > >> this fixed, I have one old i386 server I can try things out on if > >> that > >> helps and it's already on 9 stable and ZFS v28. > >> > > Btw, in previous discussions with Eric on this, he provided nfsstat > > output that seemed to indicate most of his RPC load from the Macs > > were Access and Getattr RPCs. 
> > > > I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a > > significant part of this issue. I know nothing about ZFS, but I = believe > > it does always have ACLs enabled and presumably needs to check the > > ACL for each VOP_ACCESSX(). > > > > Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and > > VOP_GETATTR() can look at these? >=20 > Indeed. However couldn't one simply disable ACL mode via; >=20 > zfs set aclinherit=3Ddiscard pool/dataset > zfs set aclmode=3Ddiscard pool/dataset >=20 > Eric, mind setting these and see? >=20 > Mid/late this week I'll be doing a rather large render farm test = amongst our Mac fleet against ZFS. >=20 > Will reply to this thread with outcome when I'm done. Should be = interesting. >=20 > - aurf >=20 > > > > rick > > > >> Thanks, > >> -- > >> Eric Browning > >> Systems Administrator > >> 801-984-7623 > >> > >> Skaggs Catholic Center > >> Juan Diego Catholic High School > >> Saint John the Baptist Middle > >> Saint John the Baptist Elementary > >> _______________________________________________ > >> freebsd-fs@freebsd.org mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> To unsubscribe, send any mail to = "freebsd-fs-unsubscribe@freebsd.org" > >> > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to = "freebsd-fs-unsubscribe@freebsd.org" >=20 >=20 >=20 >=20 > --=20 > Eric Browning > Systems Administrator > 801-984-7623 >=20 > Skaggs Catholic Center > Juan Diego Catholic High School > Saint John the Baptist Middle > Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 18:26:33 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A3F2638B for ; Tue, 19 Nov 2013 18:26:33 +0000 (UTC) Received: from mail-pa0-f50.google.com (mail-pa0-f50.google.com [209.85.220.50]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 7E3122262 for ; Tue, 19 Nov 2013 18:26:33 +0000 (UTC) Received: by mail-pa0-f50.google.com with SMTP id kp14so7145177pab.37 for ; Tue, 19 Nov 2013 10:26:32 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=ogIO7FUV+4MzGMTwQYTUD150MpEtXEN8W0DVbuzGL7Y=; b=bAHfod5ijzj9sT1Fi2LpBY1I8JhOCKUTzzOQiyPKVpi6Jb3eyotUve6eZJpOTrnwca DuAOI6yH+A3M5sbbJ6p3Li0JxvijMPjqLBrDlL1L+6OtTq3+Fk4uGhaighrDskbq61AR rP2nnUITDzBZazXoHgL+BYYZd5zHFSydPvD0wJMO47ptHKjcBSytuheydwMtw4lR6RSu GoehuMw4XolqvwXHEkwJpsOMxBeeol6kfzsNhMkb28URlLCPlIvhvqy8nGjbjYzjCu4b s7c9aSzru5BooODx3CHGfT+5G6jvV8IIZp5qtYS+MjSq0EER8E3e/fiB0XQuW5bGBef7 pCLw== X-Gm-Message-State: ALoCoQmbZuHJbF9CvoAHAb7pR5o8wFU/k0+BVIXqF5jndIUXTymvs5gybVKfb3Er80OaqNpNqBK7 MIME-Version: 1.0 X-Received: by 10.66.67.20 with SMTP id j20mr1623102pat.181.1384885156267; Tue, 19 Nov 2013 10:19:16 -0800 (PST) Received: by 10.70.102.133 with HTTP; Tue, 19 Nov 2013 10:19:16 -0800 (PST) In-Reply-To: <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> References: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> Date: Tue, 19 Nov 2013 
11:19:16 -0700 Message-ID: Subject: Re: Performance difference between UFS and ZFS with NFS From: Eric Browning To: aurfalien Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 18:26:33 -0000 Aurf, I ran those two commands and it doesn't seem to have made a difference. Usage is still above 700% and it still takes 30s to list a directory. The time to list is proportional to the number of users logged in. On UFS with all students logged in and hammering away at their files there is no noticeable speed decrease. On Tue, Nov 19, 2013 at 11:12 AM, aurfalien wrote: > > On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote: > > > Eric Browning wrote: > >> Some background: > >> -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, > >> -64GB ram each server > >> -Four Intel DC S3700 800GB SSDs for primary storage, each server. > >> -FreeBSD 9 stable as of 902503 > >> -ZFS v28 and later updated to feature flags (v29?) > >> -LSI 9200-8i controller > >> -Intel I350T4 nic (only one port being used currently) using all four > >> in > >> LACP overtaxed the server's NFS queue from what we found out making > >> the > >> server basically unusable. > >> > >> There is definitely something going on between NFS and ZFS when used > >> as a > >> file server (random workload) for mac home directories. They do not > >> jive > >> well at all and pretty much drag down these beefy servers and cause > >> 20-30 > >> second delays when just attempting to list a directory on Mac 10.7, > >> 10.8 > >> clients although throughput seems fast when copying files. > >> > >> This server's NFS was sitting north of 700% (7+ cores) all day long > >> when > >> using ZFSv28 raidz1. I have also tried stripe, compression on/off, > >> sync > >> enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. > >> I've > >> tried just 100% stock settings in loader.conf and and some > >> recommended > >> tuning from various sources on the freebsd lists and other sites > >> including > >> the freebsd handbook. > >> > >> This is my mountpoint creation: > >> zfs create -o mountpoint=/users -o sharenfs=on -o > >> casesensitivity=insensitive -o aclmode=passthrough -o compression=lz4 > >> -o > >> atime=off -o aclinherit=passthrough tank/users > >> > >> This last weekend I switched one of these servers over to a UFS raid > >> 0 > >> setup and NFS now only eats about 36% of one core during the initial > >> login > >> phase of 150-ish users over about 10 minutes and sits under 1-3% > >> during > >> normal usage and directories all list instantly even when drilling > >> down 10 > >> or so directories on the client's home files. The same NFS config on > >> server > >> and clients are still active. > >> > >> Right now I'm going to have to abandon ZFS until it works with NFS. > >> I > >> don't want to get into a finger pointing game, I'd just like to help > >> get > >> this fixed, I have one old i386 server I can try things out on if > >> that > >> helps and it's already on 9 stable and ZFS v28. > >> > > Btw, in previous discussions with Eric on this, he provided nfsstat > > output that seemed to indicate most of his RPC load from the Macs > > were Access and Getattr RPCs. 
> > > > I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a > > significant part of this issue. I know nothing about ZFS, but I believe > > it does always have ACLs enabled and presumably needs to check the > > ACL for each VOP_ACCESSX(). > > > > Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and > > VOP_GETATTR() can look at these? > > Indeed. However couldn't one simply disable ACL mode via; > > zfs set aclinherit=discard pool/dataset > zfs set aclmode=discard pool/dataset > > Eric, mind setting these and see? > > Mid/late this week I'll be doing a rather large render farm test amongst > our Mac fleet against ZFS. > > Will reply to this thread with outcome when I'm done. Should be > interesting. > > - aurf > > > > > rick > > > >> Thanks, > >> -- > >> Eric Browning > >> Systems Administrator > >> 801-984-7623 > >> > >> Skaggs Catholic Center > >> Juan Diego Catholic High School > >> Saint John the Baptist Middle > >> Saint John the Baptist Elementary > >> _______________________________________________ > >> freebsd-fs@freebsd.org mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs > >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > >> > > _______________________________________________ > > freebsd-fs@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > -- Eric Browning Systems Administrator 801-984-7623 Skaggs Catholic Center Juan Diego Catholic High School Saint John the Baptist Middle Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 19:12:01 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DE06A37D for ; Tue, 19 Nov 2013 19:12:01 +0000 (UTC) Received: from mail-pd0-f182.google.com (mail-pd0-f182.google.com [209.85.192.182]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B5CB0255F for ; Tue, 19 Nov 2013 19:12:01 +0000 (UTC) Received: by mail-pd0-f182.google.com with SMTP id v10so2950183pde.13 for ; Tue, 19 Nov 2013 11:11:55 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=QautV3/jI6jt/IydhLKPryxEvEmUm2DPYTP9dUYcvQQ=; b=Oi3lyMjphU/tbAjX5bKy1ag6P0ta7WWnM0r4DjFAx+INjJPxjn1CPoilHuydSqaHyx bo769z5Xk05d4S99DW/LDvhSWwTVHhcheS6FSB5M0ht6X06N+oTmP62iQ2QxCnJ88c7i Oxr0F1gmgH+hw2XJe7Xvxt0JPsNGn94CNBmjZnHLU9FO3u6Ljo9BecSjjRK1RUOGzTJ4 kUkaz1uPZzy34evBFwAG6uYwHMlzP0FZoTRH3NrO2tW9c2t/9h8gE0AnY8dr3PX/BuOf EeAy3yk/f/BpkrqJLM6u8ecgE0VJSCTQ9ymbIV+SoUD56IZQ3wcEGn4lffjB5b3wCjN2 s0GQ== X-Gm-Message-State: ALoCoQl9Xack+ra0Op5wxaIhmYNA1+tfe1CbqqLDrsjV7DIoafOJak2/qspK+GA97LVTR0CwLRYW MIME-Version: 1.0 X-Received: by 10.66.149.231 with SMTP id ud7mr28466454pab.8.1384888315578; Tue, 19 Nov 2013 11:11:55 -0800 (PST) Received: by 10.70.102.133 with HTTP; Tue, 19 Nov 2013 11:11:55 -0800 (PST) In-Reply-To: <5969250F-0987-4304-BB95-52C7BAE8D84D@gmail.com> References: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> <5969250F-0987-4304-BB95-52C7BAE8D84D@gmail.com> Date: Tue, 19 Nov 2013 
12:11:55 -0700 Message-ID: Subject: Re: Performance difference between UFS and ZFS with NFS From: Eric Browning To: aurfalien Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 19:12:01 -0000 Locking is set to locallocks, cache folders and similar folders are redirected to the local hard drive. All applications run just fine including Adobe CS6 and MS 2011 apps. This is my client NFS conf: nfs.client.mount.options = noatime,nobrowse,tcp,vers=3,rsize=32768,wsize=32768,readahead=0,acregmax=3600,acdirmax=3600,locallocks,inet,noquota,nfc nfs.client.statfs_rate_limit = 5 nfs.client.access_for_getattr = 1 nfs.client.is_mobile = 0 I'm sure this is more complex than it needs to be and I can probably get rid of most of this now, forcing nfc did cure some unicode issues between mac and freebsd. Packets are not being fragmented and there are only one or two errors here and there despite traversing vlans through the core router, MSS is set at 1460. One thing Rick M suggested is actually trying these entire setup on a UFS system. I tested by copying my home folder to another server with a UFS system and ran it for like 45 minutes and compared it to another 45 minute jaunt on the main file server and I had about 3x less Access and Getattrs on UFS than I had on ZFS. Seeing this prompted me to move one server over to a UFS raid and since doing that it's like day and night performance-wise. Server's NFS is set to 256 threads ARC is currently only at 46G of 56G total and NFS is 9.9G on the ZFS server and CPU usage is 878%. On the UFS server NFS is the same 256 threads and 9.9G but as I look at it with currently 52 users logged in NFS is at CPU 0.00% usage. This is the server NFS configs from rc.conf ## NFS Server rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r -l" nfsd_enable="YES" mountd_enable="YES" rpc_lockd_enable="NO" rpc_statd_enable="NO" nfs_server_flags="-t -n 256" nfsv4_server_enable="NO" nfsuserd_enable="YES" UFS Server mem stats: Mem: 49M Active, 56G Inact, 3246M Wired, 1434M Cache, 1654M Buf, 1002M Free ARC: 1884K Total, 149K MFU, 1563K MRU, 16K Anon, 56K Header, 99K Other Swap: 4096M Total, 528K Used, 4095M Free ZFS mem stats: Mem: 3180K Active, 114M Inact, 60G Wired, 1655M Buf, 2412M Free ARC: 46G Total, 26G MFU, 13G MRU, 3099K Anon, 4394M Header, 4067M Other Swap: 4096M Total, 4096M Free On Tue, Nov 19, 2013 at 11:25 AM, aurfalien wrote: > Curious. > > Do you have NFS locking enabled client side? > > Most likely you do as Mac Mail will not run w/o locks, nor will Adobe > prefs like temp cache. etc... > > So being this is prolly the case, could it be a mem pressure issue and not > enough RAM? > > So NFS locks take up RAM as does ARC. What are your mem stats and swap > stats during the 700% (yikes) experience? > > - aurf > > On Nov 19, 2013, at 10:19 AM, Eric Browning wrote: > > Aurf, > > I ran those two commands and it doesn't seem to have made a difference. > Usage is still above 700% and it still takes 30s to list a directory. The > time to list is proportional to the number of users logged in. On UFS with > all students logged in and hammering away at their files there is no > noticeable speed decrease. 
> > > On Tue, Nov 19, 2013 at 11:12 AM, aurfalien wrote: > >> >> On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote: >> >> > Eric Browning wrote: >> >> Some background: >> >> -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, >> >> -64GB ram each server >> >> -Four Intel DC S3700 800GB SSDs for primary storage, each server. >> >> -FreeBSD 9 stable as of 902503 >> >> -ZFS v28 and later updated to feature flags (v29?) >> >> -LSI 9200-8i controller >> >> -Intel I350T4 nic (only one port being used currently) using all four >> >> in >> >> LACP overtaxed the server's NFS queue from what we found out making >> >> the >> >> server basically unusable. >> >> >> >> There is definitely something going on between NFS and ZFS when used >> >> as a >> >> file server (random workload) for mac home directories. They do not >> >> jive >> >> well at all and pretty much drag down these beefy servers and cause >> >> 20-30 >> >> second delays when just attempting to list a directory on Mac 10.7, >> >> 10.8 >> >> clients although throughput seems fast when copying files. >> >> >> >> This server's NFS was sitting north of 700% (7+ cores) all day long >> >> when >> >> using ZFSv28 raidz1. I have also tried stripe, compression on/off, >> >> sync >> >> enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. >> >> I've >> >> tried just 100% stock settings in loader.conf and and some >> >> recommended >> >> tuning from various sources on the freebsd lists and other sites >> >> including >> >> the freebsd handbook. >> >> >> >> This is my mountpoint creation: >> >> zfs create -o mountpoint=/users -o sharenfs=on -o >> >> casesensitivity=insensitive -o aclmode=passthrough -o compression=lz4 >> >> -o >> >> atime=off -o aclinherit=passthrough tank/users >> >> >> >> This last weekend I switched one of these servers over to a UFS raid >> >> 0 >> >> setup and NFS now only eats about 36% of one core during the initial >> >> login >> >> phase of 150-ish users over about 10 minutes and sits under 1-3% >> >> during >> >> normal usage and directories all list instantly even when drilling >> >> down 10 >> >> or so directories on the client's home files. The same NFS config on >> >> server >> >> and clients are still active. >> >> >> >> Right now I'm going to have to abandon ZFS until it works with NFS. >> >> I >> >> don't want to get into a finger pointing game, I'd just like to help >> >> get >> >> this fixed, I have one old i386 server I can try things out on if >> >> that >> >> helps and it's already on 9 stable and ZFS v28. >> >> >> > Btw, in previous discussions with Eric on this, he provided nfsstat >> > output that seemed to indicate most of his RPC load from the Macs >> > were Access and Getattr RPCs. >> > >> > I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a >> > significant part of this issue. I know nothing about ZFS, but I believe >> > it does always have ACLs enabled and presumably needs to check the >> > ACL for each VOP_ACCESSX(). >> > >> > Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and >> > VOP_GETATTR() can look at these? >> >> Indeed. However couldn't one simply disable ACL mode via; >> >> zfs set aclinherit=discard pool/dataset >> zfs set aclmode=discard pool/dataset >> >> Eric, mind setting these and see? >> >> Mid/late this week I'll be doing a rather large render farm test amongst >> our Mac fleet against ZFS. >> >> Will reply to this thread with outcome when I'm done. Should be >> interesting. 
>> >> - aurf >> >> > >> > rick >> > >> >> Thanks, >> >> -- >> >> Eric Browning >> >> Systems Administrator >> >> 801-984-7623 >> >> >> >> Skaggs Catholic Center >> >> Juan Diego Catholic High School >> >> Saint John the Baptist Middle >> >> Saint John the Baptist Elementary >> >> _______________________________________________ >> >> freebsd-fs@freebsd.org mailing list >> >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> >> >> > _______________________________________________ >> > freebsd-fs@freebsd.org mailing list >> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> >> > > > -- > Eric Browning > Systems Administrator > 801-984-7623 > > Skaggs Catholic Center > Juan Diego Catholic High School > Saint John the Baptist Middle > Saint John the Baptist Elementary > > > -- Eric Browning Systems Administrator 801-984-7623 Skaggs Catholic Center Juan Diego Catholic High School Saint John the Baptist Middle Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Tue Nov 19 19:38:36 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 33FFB812 for ; Tue, 19 Nov 2013 19:38:36 +0000 (UTC) Received: from mail-pb0-x231.google.com (mail-pb0-x231.google.com [IPv6:2607:f8b0:400e:c01::231]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 0818426C9 for ; Tue, 19 Nov 2013 19:38:36 +0000 (UTC) Received: by mail-pb0-f49.google.com with SMTP id jt11so4193848pbb.8 for ; Tue, 19 Nov 2013 11:38:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=subject:mime-version:content-type:from:in-reply-to:date:cc :message-id:references:to; bh=hRLRADAE8qLKl+2MqJcc290FHtsCAzcupo5gJ/se3po=; b=Qv+twJWeRqV4ISHcgWSpn411AJKqVFz8qhjhG68QaOl89qE8PauYT2YOs/XVvPVZHv gLi9UwkWcgeDfX7IDcxAal/dZnzG+XUVqz7awbKSDkJeDH4PLP9pnChXYNS/Mru0GzVV 7yazxGQWtc0TPY8sCBmiUhCth+U3Yw0g6XIBYZAgc7eigHE9GGx3BMoW76Ip0S+2doKw Z2zLBI6lMziDUv/q6wRoAB+L0UY0JbGoBIHouK5d3imnj/83/0yGREj8rW6b8U7bGigf sDeevhLj5uDodyc/EUxRQc0rDvb0RjEkfrfKXxEzM9pHsFe2IJl6G1fzzJ0oJKeizeX2 KXFw== X-Received: by 10.68.235.72 with SMTP id uk8mr20369463pbc.93.1384889915526; Tue, 19 Nov 2013 11:38:35 -0800 (PST) Received: from briankrusicw.logan.tv ([64.17.255.138]) by mx.google.com with ESMTPSA id qn1sm32327181pbc.34.2013.11.19.11.38.34 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Tue, 19 Nov 2013 11:38:35 -0800 (PST) Subject: Re: Performance difference between UFS and ZFS with NFS Mime-Version: 1.0 (Apple Message framework v1085) From: aurfalien In-Reply-To: Date: Tue, 19 Nov 2013 11:38:33 -0800 Message-Id: <18391B9C-2FC4-427B-A4B6-1739B3C17498@gmail.com> References: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> <5969250F-0987-4304-BB95-52C7BAE8D84D@gmail.com> To: Eric Browning X-Mailer: Apple Mail (2.1085) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: 
List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 19 Nov 2013 19:38:36 -0000 Wow, those are great mount options, I use em too :) Well, this is very interesting on the +3x access/getattrs with ZFS. I'll report back my findings as I'm going down a similar road, albeit = not home dirs but rendering using AE and C4D on many clients. Until then hoping some one chime in on this with some added nuggets. - aurf On Nov 19, 2013, at 11:11 AM, Eric Browning wrote: > Locking is set to locallocks, cache folders and similar folders are = redirected to the local hard drive. All applications run just fine = including Adobe CS6 and MS 2011 apps. >=20 > This is my client NFS conf: > nfs.client.mount.options =3D = noatime,nobrowse,tcp,vers=3D3,rsize=3D32768,wsize=3D32768,readahead=3D0,ac= regmax=3D3600,acdirmax=3D3600,locallocks,inet,noquota,nfc > nfs.client.statfs_rate_limit =3D 5 > nfs.client.access_for_getattr =3D 1 > nfs.client.is_mobile =3D 0 >=20 > I'm sure this is more complex than it needs to be and I can probably = get rid of most of this now, forcing nfc did cure some unicode issues = between mac and freebsd. Packets are not being fragmented and there are = only one or two errors here and there despite traversing vlans through = the core router, MSS is set at 1460. >=20 > One thing Rick M suggested is actually trying these entire setup on a = UFS system. I tested by copying my home folder to another server with a = UFS system and ran it for like 45 minutes and compared it to another 45 = minute jaunt on the main file server and I had about 3x less Access and = Getattrs on UFS than I had on ZFS. Seeing this prompted me to move one = server over to a UFS raid and since doing that it's like day and night = performance-wise.=20 >=20 > Server's NFS is set to 256 threads ARC is currently only at 46G of 56G = total and NFS is 9.9G on the ZFS server and CPU usage is 878%. On the = UFS server NFS is the same 256 threads and 9.9G but as I look at it with = currently 52 users logged in NFS is at CPU 0.00% usage. >=20 > This is the server NFS configs from rc.conf > ## NFS Server > rpcbind_enable=3D"YES" > nfs_server_enable=3D"YES" > mountd_flags=3D"-r -l" > nfsd_enable=3D"YES" > mountd_enable=3D"YES" > rpc_lockd_enable=3D"NO" > rpc_statd_enable=3D"NO" > nfs_server_flags=3D"-t -n 256" > nfsv4_server_enable=3D"NO" > nfsuserd_enable=3D"YES" >=20 > UFS Server mem stats: > Mem: 49M Active, 56G Inact, 3246M Wired, 1434M Cache, 1654M Buf, 1002M = Free > ARC: 1884K Total, 149K MFU, 1563K MRU, 16K Anon, 56K Header, 99K Other > Swap: 4096M Total, 528K Used, 4095M Free >=20 > ZFS mem stats: > Mem: 3180K Active, 114M Inact, 60G Wired, 1655M Buf, 2412M Free > ARC: 46G Total, 26G MFU, 13G MRU, 3099K Anon, 4394M Header, 4067M = Other > Swap: 4096M Total, 4096M Free >=20 >=20 >=20 > On Tue, Nov 19, 2013 at 11:25 AM, aurfalien = wrote: > Curious. >=20 > Do you have NFS locking enabled client side? >=20 > Most likely you do as Mac Mail will not run w/o locks, nor will Adobe = prefs like temp cache. etc... >=20 > So being this is prolly the case, could it be a mem pressure issue and = not enough RAM? >=20 > So NFS locks take up RAM as does ARC. What are your mem stats and = swap stats during the 700% (yikes) experience? >=20 > - aurf >=20 > On Nov 19, 2013, at 10:19 AM, Eric Browning wrote: >=20 >> Aurf, >>=20 >> I ran those two commands and it doesn't seem to have made a = difference. Usage is still above 700% and it still takes 30s to list a = directory. 
The time to list is proportional to the number of users = logged in. On UFS with all students logged in and hammering away at = their files there is no noticeable speed decrease. >>=20 >>=20 >> On Tue, Nov 19, 2013 at 11:12 AM, aurfalien = wrote: >>=20 >> On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote: >>=20 >> > Eric Browning wrote: >> >> Some background: >> >> -Two identical servers, dual AMD Athlon 6220's 16 cores total @ = 3Ghz, >> >> -64GB ram each server >> >> -Four Intel DC S3700 800GB SSDs for primary storage, each server. >> >> -FreeBSD 9 stable as of 902503 >> >> -ZFS v28 and later updated to feature flags (v29?) >> >> -LSI 9200-8i controller >> >> -Intel I350T4 nic (only one port being used currently) using all = four >> >> in >> >> LACP overtaxed the server's NFS queue from what we found out = making >> >> the >> >> server basically unusable. >> >> >> >> There is definitely something going on between NFS and ZFS when = used >> >> as a >> >> file server (random workload) for mac home directories. They do = not >> >> jive >> >> well at all and pretty much drag down these beefy servers and = cause >> >> 20-30 >> >> second delays when just attempting to list a directory on Mac = 10.7, >> >> 10.8 >> >> clients although throughput seems fast when copying files. >> >> >> >> This server's NFS was sitting north of 700% (7+ cores) all day = long >> >> when >> >> using ZFSv28 raidz1. I have also tried stripe, compression on/off, >> >> sync >> >> enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. >> >> I've >> >> tried just 100% stock settings in loader.conf and and some >> >> recommended >> >> tuning from various sources on the freebsd lists and other sites >> >> including >> >> the freebsd handbook. >> >> >> >> This is my mountpoint creation: >> >> zfs create -o mountpoint=3D/users -o sharenfs=3Don -o >> >> casesensitivity=3Dinsensitive -o aclmode=3Dpassthrough -o = compression=3Dlz4 >> >> -o >> >> atime=3Doff -o aclinherit=3Dpassthrough tank/users >> >> >> >> This last weekend I switched one of these servers over to a UFS = raid >> >> 0 >> >> setup and NFS now only eats about 36% of one core during the = initial >> >> login >> >> phase of 150-ish users over about 10 minutes and sits under 1-3% >> >> during >> >> normal usage and directories all list instantly even when drilling >> >> down 10 >> >> or so directories on the client's home files. The same NFS config = on >> >> server >> >> and clients are still active. >> >> >> >> Right now I'm going to have to abandon ZFS until it works with = NFS. >> >> I >> >> don't want to get into a finger pointing game, I'd just like to = help >> >> get >> >> this fixed, I have one old i386 server I can try things out on if >> >> that >> >> helps and it's already on 9 stable and ZFS v28. >> >> >> > Btw, in previous discussions with Eric on this, he provided nfsstat >> > output that seemed to indicate most of his RPC load from the Macs >> > were Access and Getattr RPCs. >> > >> > I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a >> > significant part of this issue. I know nothing about ZFS, but I = believe >> > it does always have ACLs enabled and presumably needs to check the >> > ACL for each VOP_ACCESSX(). >> > >> > Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and >> > VOP_GETATTR() can look at these? >>=20 >> Indeed. However couldn't one simply disable ACL mode via; >>=20 >> zfs set aclinherit=3Ddiscard pool/dataset >> zfs set aclmode=3Ddiscard pool/dataset >>=20 >> Eric, mind setting these and see? 
>>=20 >> Mid/late this week I'll be doing a rather large render farm test = amongst our Mac fleet against ZFS. >>=20 >> Will reply to this thread with outcome when I'm done. Should be = interesting. >>=20 >> - aurf >>=20 >> > >> > rick >> > >> >> Thanks, >> >> -- >> >> Eric Browning >> >> Systems Administrator >> >> 801-984-7623 >> >> >> >> Skaggs Catholic Center >> >> Juan Diego Catholic High School >> >> Saint John the Baptist Middle >> >> Saint John the Baptist Elementary >> >> _______________________________________________ >> >> freebsd-fs@freebsd.org mailing list >> >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> >> To unsubscribe, send any mail to = "freebsd-fs-unsubscribe@freebsd.org" >> >> >> > _______________________________________________ >> > freebsd-fs@freebsd.org mailing list >> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> > To unsubscribe, send any mail to = "freebsd-fs-unsubscribe@freebsd.org" >>=20 >>=20 >>=20 >>=20 >> --=20 >> Eric Browning >> Systems Administrator >> 801-984-7623 >>=20 >> Skaggs Catholic Center >> Juan Diego Catholic High School >> Saint John the Baptist Middle >> Saint John the Baptist Elementary >=20 >=20 >=20 >=20 > --=20 > Eric Browning > Systems Administrator > 801-984-7623 >=20 > Skaggs Catholic Center > Juan Diego Catholic High School > Saint John the Baptist Middle > Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Wed Nov 20 11:18:36 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 640E56B5 for ; Wed, 20 Nov 2013 11:18:36 +0000 (UTC) Received: from mailex.mailcore.me (mailex.mailcore.me [94.136.40.61]) by mx1.freebsd.org (Postfix) with ESMTP id 2EFEA225D for ; Wed, 20 Nov 2013 11:18:35 +0000 (UTC) Received: from host81-152-206-26.range81-152.btcentralplus.com ([81.152.206.26] helo=[192.168.1.168]) by smtp03.mailcore.me with esmtpa (Exim 4.80.1) (envelope-from ) id 1Vj5nf-0001Gf-Mk for freebsd-fs@freebsd.org; Wed, 20 Nov 2013 11:18:28 +0000 Message-ID: <528C9A83.1060606@kearsley.me> Date: Wed, 20 Nov 2013 11:18:27 +0000 From: Richard Kearsley User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.0 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: boot zfs array from hba Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Mailcore-Auth: 12120934 X-Mailcore-Domain: 1490668 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Nov 2013 11:18:36 -0000 Hi I am having some problems booting a raidz1 array from a lsi 2308 HBA on a supermicro X9DRD-7LN4F-JBOD board I believe the problem is that the bios only exposes one of the disks on the HBA to the bootloader, resulting in the loader thinking the array is degraded FreeBSD must detect the disks itself at some point - through mps(4), but it does not happen early enough in the boot procedure... My question is there a trick, or something I'm missing, to be able to boot an array from a HBA? 
I'm using FreeBSD 9.1

The error I get at boot is as follows:

ZFS: i/o error - all block copies unavailable
ZFS: can't read object set for dataset u
ZFS: can't open root filesystem
gptzfsboot: failed to mount default pool ssd

FreeBSD/x86 boot
Default: ssd::
boot:

I can then run:

boot: status
pool: ssd

which gives the output:

pool: ssd
config:
NAME STATE
ssd CLOSED
  raidz1 OFFLINE
    /dev/gpt/ssd0.nop ONLINE
    /dev/gpt/ssd1.nop OFFLINE
    /dev/gpt/ssd2.nop OFFLINE
    /dev/gpt/ssd3.nop OFFLINE
    /dev/gpt/ssd4.nop OFFLINE

I'm able to install in the same way if I move the 5 disks to the on-board ports of the motherboard - and everything works as expected

Below is how I set up the array during install:

Clear old partition table if disks are not new:
`dd if=/dev/zero of=/dev/da0 bs=1M count=10`
`dd if=/dev/zero of=/dev/da1 bs=1M count=10`
`dd if=/dev/zero of=/dev/da2 bs=1M count=10`
`dd if=/dev/zero of=/dev/da3 bs=1M count=10`
`dd if=/dev/zero of=/dev/da4 bs=1M count=10`

Create gpt tables
`gpart create -s gpt da0`
`gpart create -s gpt da1`
`gpart create -s gpt da2`
`gpart create -s gpt da3`
`gpart create -s gpt da4`

Create boot partitions
`gpart add -s 222 -a 4k -t freebsd-boot -l boot0 da0`
`gpart add -s 222 -a 4k -t freebsd-boot -l boot1 da1`
`gpart add -s 222 -a 4k -t freebsd-boot -l boot2 da2`
`gpart add -s 222 -a 4k -t freebsd-boot -l boot3 da3`
`gpart add -s 222 -a 4k -t freebsd-boot -l boot4 da4`

Create swap partitions
`gpart add -s 1g -a 4k -t freebsd-swap -l swap0 da0`
`gpart add -s 1g -a 4k -t freebsd-swap -l swap1 da1`
`gpart add -s 1g -a 4k -t freebsd-swap -l swap2 da2`
`gpart add -s 1g -a 4k -t freebsd-swap -l swap3 da3`
`gpart add -s 1g -a 4k -t freebsd-swap -l swap4 da4`

Create zfs partitions:
`gpart add -a 4k -t freebsd-zfs -l ssd0 da0`
`gpart add -a 4k -t freebsd-zfs -l ssd1 da1`
`gpart add -a 4k -t freebsd-zfs -l ssd2 da2`
`gpart add -a 4k -t freebsd-zfs -l ssd3 da3`
`gpart add -a 4k -t freebsd-zfs -l ssd4 da4`

Clear any old zfs data
`dd if=/dev/zero of=/dev/gpt/ssd0 count=560 bs=512`
`dd if=/dev/zero of=/dev/gpt/ssd1 count=560 bs=512`
`dd if=/dev/zero of=/dev/gpt/ssd2 count=560 bs=512`
`dd if=/dev/zero of=/dev/gpt/ssd3 count=560 bs=512`
`dd if=/dev/zero of=/dev/gpt/ssd4 count=560 bs=512`

Install boot loader
`gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0`
`gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1`
`gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2`
`gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da3`
`gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da4`

Align 4k sectors for ZFS
`gnop create -S 4096 /dev/gpt/ssd0`
`gnop create -S 4096 /dev/gpt/ssd1`
`gnop create -S 4096 /dev/gpt/ssd2`
`gnop create -S 4096 /dev/gpt/ssd3`
`gnop create -S 4096 /dev/gpt/ssd4`

Create zpool
`zpool create -f -m none -o altroot=/mnt -o cachefile=/tmp/zpool.cache ssd raidz1 /dev/gpt/ssd0.nop /dev/gpt/ssd1.nop /dev/gpt/ssd2.nop /dev/gpt/ssd3.nop /dev/gpt/ssd4.nop`
`zfs set atime=off ssd`

Create mounts
`zfs create -o mountpoint=/ ssd/root`
`zfs create -o mountpoint=/data/small-0 ssd/small-0`
`zfs create -o mountpoint=/data/large-2 ssd/large-2`

Set bootfs
`zpool set bootfs=ssd/root ssd`

exit, installer runs... Open a shell?
`NO`
Complete
`Live CD`

Edit fstab at `/mnt/etc/fstab`
`/dev/gpt/swap0 none swap sw 0 0`
`/dev/gpt/swap1 none swap sw 0 0`
`/dev/gpt/swap2 none swap sw 0 0`
`/dev/gpt/swap3 none swap sw 0 0`
`/dev/gpt/swap4 none swap sw 0 0`

Edit `/mnt/boot/loader.conf`
`zfs_load="YES"`
`vfs.root.mountfrom="zfs:ssd/root"`

Edit `/mnt/etc/rc.conf`
`zfs_enable="YES"`

Zpool cache trick
`zpool export ssd`
`zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache ssd`
`cp /tmp/zpool.cache /mnt/boot/zfs/`

reboot... Many thanks!
From owner-freebsd-fs@FreeBSD.ORG Wed Nov 20 11:54:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 90CC8E26 for ; Wed, 20 Nov 2013 11:54:07 +0000 (UTC) Received: from elsa.codelab.cz (elsa.codelab.cz [94.124.105.4]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5280E2487 for ; Wed, 20 Nov 2013 11:54:06 +0000 (UTC) Received: from elsa.codelab.cz (localhost [127.0.0.1]) by elsa.codelab.cz (Postfix) with ESMTP id C05B22842B; Wed, 20 Nov 2013 12:53:57 +0100 (CET) Received: from [192.168.1.2] (ip-89-177-49-222.net.upcbroadband.cz [89.177.49.222]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by elsa.codelab.cz (Postfix) with ESMTPSA id 9E76228427; Wed, 20 Nov 2013 12:53:56 +0100 (CET) Message-ID: <528CA2D3.3060600@quip.cz> Date: Wed, 20 Nov 2013 12:53:55 +0100 From: Miroslav Lachman <000.fbsd@quip.cz> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.19) Gecko/20110420 Lightning/1.0b1 SeaMonkey/2.0.14 MIME-Version: 1.0 To: Richard Kearsley Subject: Re: boot zfs array from hba References: <528C9A83.1060606@kearsley.me> In-Reply-To: <528C9A83.1060606@kearsley.me> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Nov 2013 11:54:07 -0000
Richard Kearsley wrote: > Hi > I am having some problems booting a raidz1 array from a lsi 2308 HBA on > a supermicro X9DRD-7LN4F-JBOD board > I believe the problem is that the bios only exposes one of the disks on > the HBA to the bootloader, resulting in the loader thinking the array is > degraded > FreeBSD must detect the disks itself at some point - through mps(4), but > it does not happen early enough in the boot procedure... > > My question is there a trick, or something I'm missing, to be able to > boot an array from a HBA? > I'm using FreeBSD 9.1 > > The error I get at boot is as follows: > ZFS: i/o error - all block copies unavailable > ZFS: can't read object set for dataset u > ZFS: can't open root filesystem > gptzfsboot: failed to mount default pool ssd > > FreeBSD/x86 boot > Default: ssd:: > boot: I had the same problem on Dell server with H200 (rebranded LSI 2008).
I post about the workaround here http://lists.freebsd.org/pipermail/freebsd-fs/2013-November/018536.html I did small partition on each disk with ZFS mirror (mirroring all disk) and place the /boot or whole base system on this mirrored pool instead of RAIDZ pool (later created on second partition covering the rest of the disks) Miroslav Lachman From owner-freebsd-fs@FreeBSD.ORG Wed Nov 20 23:02:19 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D8D73BDD for ; Wed, 20 Nov 2013 23:02:19 +0000 (UTC) Received: from mail-pa0-x234.google.com (mail-pa0-x234.google.com [IPv6:2607:f8b0:400e:c03::234]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B157A2FEB for ; Wed, 20 Nov 2013 23:02:19 +0000 (UTC) Received: by mail-pa0-f52.google.com with SMTP id ld10so6035695pab.25 for ; Wed, 20 Nov 2013 15:02:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphix.com; s=google; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=7aAdkkl5PneBRqGECuIyl5M+EyDa3RFZ07rcqJwbG9g=; b=PnuIBqAMeJNg2kX0zOXkHyw6K+YM0jEry3ImhFFQoFr9YDysj6u7utnaHJWUSPA23G yDSQFRbtvHVtLW+lIhH1qohFeTZFB1y+r3ahfpdg+5iTPe32k9EZVKDYMqSi3eEF/Oek E7eSrBK+WIsAZDlng9bVHFeSr+9BHKjefbzcM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=7aAdkkl5PneBRqGECuIyl5M+EyDa3RFZ07rcqJwbG9g=; b=IOaIlb6rKkj5UTytso/h48IIulB2423QSwvDZI5OWKj8yuytrVnWMOvQYheGPfpyxG LAeDWofGdP1tqmlWNKpqDUl+w26fcQ6PdOTLj6KCb9yhGgp6R5zeu0/mZQ8CzLGhkN7k Wx7vOXQ62EuaDL+HYKERlr75+PaBxnKN6uaS6oEZDDtu5/hralHVHSYOBpOH1/8gjqeG 0SiI3HD44liM6i+a/vA2vRNJbF1i3J6LuZ70RNReRSpZr2HmgKLMddQ0MmZAeKm9r01Y ROvHfmprl9ocyVIhMNL+Zi/M5VAqu9Ud4NLH6+T2MmpGGDhniyEB7RtvGuUsRjh/NK40 xluw== X-Gm-Message-State: ALoCoQlWClUsuonNDReXSWtQtm4kHzp+EYCxBfCe10eF5a/i/xw/JX9JhPj6fCTnnBM4n7zQcfws MIME-Version: 1.0 X-Received: by 10.68.178.68 with SMTP id cw4mr3100475pbc.15.1384988539228; Wed, 20 Nov 2013 15:02:19 -0800 (PST) Received: by 10.70.75.234 with HTTP; Wed, 20 Nov 2013 15:02:19 -0800 (PST) In-Reply-To: <20131114173423.GA21761@blazingdot.com> References: <20131114173423.GA21761@blazingdot.com> Date: Wed, 20 Nov 2013 15:02:19 -0800 Message-ID: Subject: Re: Defaults in 10.0 ZFS through bsdinstall From: Matthew Ahrens To: Marcus Reid Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Nov 2013 23:02:20 -0000 On Thu, Nov 14, 2013 at 9:34 AM, Marcus Reid wrote: > Hi, > > I noticed a couple of things with the ZFS defaults that result from > using the new installer in 10.0-BETA3. > > One, atime is turned off everywhere by default. There was a thread on > this list on June 8 with a subject of 'Changing the default for ZFS > atime to off?', and from what I can tell the idea of turning off atime > by default was not a popular one. 
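For reference, the effective per-dataset setting is easy to inspect on an installed system; a minimal check, assuming the installer's default pool name zroot:

# show atime for every dataset and whether the value is local or inherited
zfs get -r -o name,value,source atime zroot

Anything still shown as inherited from the pool root will change if the top-level default is later flipped.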
> > It would be a pity if people compared ZFS on FreeBSD vs UFS on FreeBSD (using the installer's defaults) and came to the conclusion that "Mail programs don't work on ZFS on FreeBSD, use UFS instead." I think it's well known that there are performance differences between ZFS and UFS, depending on your workload. If you choose defaults that cause there to be correctness differences, that could be detrimental. --matt From owner-freebsd-fs@FreeBSD.ORG Wed Nov 20 23:35:19 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C282B6AD for ; Wed, 20 Nov 2013 23:35:19 +0000 (UTC) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6426A21E5 for ; Wed, 20 Nov 2013 23:35:19 +0000 (UTC) Received: from r2d2 ([82.69.141.170]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50006781956.msg for ; Wed, 20 Nov 2013 23:35:15 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Wed, 20 Nov 2013 23:35:15 +0000 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 82.69.141.170 X-Return-Path: prvs=103643e192=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: From: "Steven Hartland" To: "Matthew Ahrens" , "Marcus Reid" References: <20131114173423.GA21761@blazingdot.com> Subject: Re: Defaults in 10.0 ZFS through bsdinstall Date: Wed, 20 Nov 2013 23:35:08 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 20 Nov 2013 23:35:19 -0000 ----- Original Message ----- From: "Matthew Ahrens" >> I noticed a couple of things with the ZFS defaults that result from >> using the new installer in 10.0-BETA3. >> >> One, atime is turned off everywhere by default. There was a thread on >> this list on June 8 with a subject of 'Changing the default for ZFS >> atime to off?', and from what I can tell the idea of turning off atime >> by default was not a popular one. >> >> > It would be a pity if people compared ZFS on FreeBSD vs UFS on FreeBSD > (using the installer's defaults) and came to the conclusion that "Mail > programs don't work on ZFS on FreeBSD, use UFS instead." I think it's well > known that there are performance differences between ZFS and UFS, depending > on your workload. If you choose defaults that cause there to be > correctness differences, that could be detrimental. It would also be a pitty if users came to conclusion not to use ZFS because it wears their SSD's out much quicker than UFS does or performs much worse. Having a sensible default that's correctly messaged is something to be commended not discouraged because its not the tradition and for those that don't bother reading they may have issues as that could be said for any option. 
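A rough sketch of the sort of compromise being discussed in this thread: keep atime off pool-wide for SSD wear and performance, and switch it back on only where mail delivery relies on it. The zroot/var/mail dataset name is illustrative, not necessarily what the installer creates:

zfs set atime=off zroot              # inherited by every child dataset
zfs set atime=on  zroot/var/mail     # hypothetical mail-spool dataset: atime back on for mail readers

Either setting can be changed at any time without remounting anything.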
Its also not something that can't be changed in seconds either, so the suggestion of /var with it enabled so default mail installs work as normal and for those that choose to install mail folders else where they need to read and learn, instead of peanalising every single user gets my vote. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. From owner-freebsd-fs@FreeBSD.ORG Thu Nov 21 01:35:05 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A4776808 for ; Thu, 21 Nov 2013 01:35:05 +0000 (UTC) Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6EB4327E0 for ; Thu, 21 Nov 2013 01:35:05 +0000 (UTC) Received: from julian-mbp3.pixel8networks.com (50-196-156-133-static.hfc.comcastbusiness.net [50.196.156.133]) (authenticated bits=0) by vps1.elischer.org (8.14.7/8.14.7) with ESMTP id rAL1Yuoo006864 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO); Wed, 20 Nov 2013 17:34:57 -0800 (PST) (envelope-from julian@freebsd.org) Message-ID: <528D633B.6040104@freebsd.org> Date: Wed, 20 Nov 2013 17:34:51 -0800 From: Julian Elischer User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.1.1 MIME-Version: 1.0 To: Steven Hartland , Matthew Ahrens , Marcus Reid Subject: Re: Defaults in 10.0 ZFS through bsdinstall References: <20131114173423.GA21761@blazingdot.com> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 21 Nov 2013 01:35:05 -0000 On 11/20/13, 3:35 PM, Steven Hartland wrote: > ----- Original Message ----- From: "Matthew Ahrens" > >>> I noticed a couple of things with the ZFS defaults that result from >>> using the new installer in 10.0-BETA3. >>> >>> One, atime is turned off everywhere by default. There was a >>> thread on >>> this list on June 8 with a subject of 'Changing the default for ZFS >>> atime to off?', and from what I can tell the idea of turning off >>> atime >>> by default was not a popular one. >>> >>> >> It would be a pity if people compared ZFS on FreeBSD vs UFS on FreeBSD >> (using the installer's defaults) and came to the conclusion that "Mail >> programs don't work on ZFS on FreeBSD, use UFS instead." I think >> it's well >> known that there are performance differences between ZFS and UFS, >> depending >> on your workload. If you choose defaults that cause there to be >> correctness differences, that could be detrimental. 
> > It would also be a pitty if users came to conclusion not to use ZFS > because > it wears their SSD's out much quicker than UFS does or performs much > worse. > > Having a sensible default that's correctly messaged is something to > be commended not discouraged because its not the tradition and for > those that don't bother reading they may have issues as that could > be said for any option. > > Its also not something that can't be changed in seconds either, so the > suggestion of /var with it enabled so default mail installs work > as normal and for those that choose to install mail folders else > where they need to read and learn, instead of peanalising every single > user gets my vote. I think the installer should make a point of asking the user what they need.. then they cannot complain if they chose something they don't want. > > Regards > Steve > > ================================================ > This e.mail is private and confidential between Multiplay (UK) Ltd. > and the person or entity to whom it is addressed. In the event of > misdirection, the recipient is prohibited from using, copying, > printing or otherwise disseminating it or any information contained > in it. > In the event of misdirection, illegible or incomplete transmission > please telephone +44 845 868 1337 > or return the E.mail to postmaster@multiplay.co.uk. > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Nov 21 15:22:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5F9F3ED4; Thu, 21 Nov 2013 15:22:07 +0000 (UTC) Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id EB2CD2EB1; Thu, 21 Nov 2013 15:22:06 +0000 (UTC) Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.7/8.14.7) with ESMTP id rALFLvcT059354; Thu, 21 Nov 2013 08:21:57 -0700 (MST) (envelope-from wblock@wonkity.com) Received: from localhost (wblock@localhost) by wonkity.com (8.14.7/8.14.7/Submit) with ESMTP id rALFLrcM059351; Thu, 21 Nov 2013 08:21:53 -0700 (MST) (envelope-from wblock@wonkity.com) Date: Thu, 21 Nov 2013 08:21:53 -0700 (MST) From: Warren Block To: Julian Elischer Subject: Re: Defaults in 10.0 ZFS through bsdinstall In-Reply-To: <528D633B.6040104@freebsd.org> Message-ID: References: <20131114173423.GA21761@blazingdot.com> <528D633B.6040104@freebsd.org> User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (wonkity.com [127.0.0.1]); Thu, 21 Nov 2013 08:21:57 -0700 (MST) Cc: freebsd-fs , Matthew Ahrens X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 21 Nov 2013 15:22:07 -0000 On Wed, 20 Nov 2013, Julian Elischer wrote: > On 11/20/13, 3:35 PM, Steven Hartland wrote: >> ----- Original Message ----- From: "Matthew Ahrens" >>>> I noticed a couple of things with the ZFS defaults that result 
from >>>> using the new installer in 10.0-BETA3. >>>> >>>> One, atime is turned off everywhere by default. There was a thread on >>>> this list on June 8 with a subject of 'Changing the default for ZFS >>>> atime to off?', and from what I can tell the idea of turning off atime >>>> by default was not a popular one. >>>> >>>> >>> It would be a pity if people compared ZFS on FreeBSD vs UFS on FreeBSD >>> (using the installer's defaults) and came to the conclusion that "Mail >>> programs don't work on ZFS on FreeBSD, use UFS instead." I think it's >>> well >>> known that there are performance differences between ZFS and UFS, >>> depending >>> on your workload. If you choose defaults that cause there to be >>> correctness differences, that could be detrimental. >> >> It would also be a pitty if users came to conclusion not to use ZFS because >> it wears their SSD's out much quicker than UFS does or performs much >> worse. >> >> Having a sensible default that's correctly messaged is something to >> be commended not discouraged because its not the tradition and for >> those that don't bother reading they may have issues as that could >> be said for any option. >> >> Its also not something that can't be changed in seconds either, so the >> suggestion of /var with it enabled so default mail installs work >> as normal and for those that choose to install mail folders else >> where they need to read and learn, instead of peanalising every single >> user gets my vote. > > I think the installer should make a point of asking the user what they need.. > then they cannot complain if they chose something they don't want. How about adding a test and warning to /etc/mail/Makefile and the ports affected? Or maybe it can be done in a single place with mailwrapper. In general, it seems like the applications that depend on atime (or any feature, really) should be responsible for detecting that it is enabled. 
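A minimal sketch of the kind of check being described here, as something a port's Makefile, mailwrapper, or an rc script might run; it assumes the spool is /var/mail and only handles the ZFS case:

#!/bin/sh
# Warn if the filesystem holding the mail spool has atime disabled (ZFS only).
spool=/var/mail
fs=$(df "$spool" | awk 'NR == 2 { print $1 }')        # dataset name when the spool is on ZFS
atime=$(zfs get -H -o value atime "$fs" 2>/dev/null)  # fails quietly when $fs is not a ZFS dataset
if [ "$atime" = "off" ]; then
    echo "WARNING: atime is off on ${fs}; mail readers that use atime to detect new mail may misbehave." >&2
fi

Whether a check like that belongs in /etc/mail/Makefile, in mailwrapper, or in the individual ports is exactly the open question in the message above.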
From owner-freebsd-fs@FreeBSD.ORG Thu Nov 21 23:04:38 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B1643C05 for ; Thu, 21 Nov 2013 23:04:38 +0000 (UTC) Received: from mail-pa0-f53.google.com (mail-pa0-f53.google.com [209.85.220.53]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 868602E64 for ; Thu, 21 Nov 2013 23:04:38 +0000 (UTC) Received: by mail-pa0-f53.google.com with SMTP id hz1so442119pad.12 for ; Thu, 21 Nov 2013 15:04:32 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=uypEHGYAPqqCv+ExWvRJnemGXsVnNaDEMaQVZotxSdk=; b=NgAKlttqMAeysuALjCAAPTD/6thzeCkAslTzV6Nu3J53VE946nkpJsqncZ/lt/sBh3 bLMug7CNee606q10TSksTY4ucpPO2ZizP2BmLWJ6/rN3CAd5zjEGT6w5FhzHhBfjZLIY S7ftUrP9m8KYMMQlVh+6SLCZZFTDoBHFK/yUBe1DqjaHRCSgr1CnMxjjkuveJTkrpCpI QnktE3gDOflQBODiHKW/6/RCzaJShFxiJlisY3gv942Gl3qGSjskE/0knVShQNQviVEn 0ZadBygAVWSkQxj1ieyijMrn5wIiE86/gx9y2FzLBYwqDFDJ/QXzd75kQksQtmkRoTIg WouA== X-Gm-Message-State: ALoCoQlVabF/YgG/l7cI5sP4TvXSnG5VZC2ez/MZFkD9MwpE/F2rOzB/PmAuB90bLif/8TZ2m7ud MIME-Version: 1.0 X-Received: by 10.66.248.202 with SMTP id yo10mr3753366pac.177.1385075072312; Thu, 21 Nov 2013 15:04:32 -0800 (PST) Received: by 10.70.102.133 with HTTP; Thu, 21 Nov 2013 15:04:32 -0800 (PST) In-Reply-To: <18391B9C-2FC4-427B-A4B6-1739B3C17498@gmail.com> References: <2103733116.16923158.1384866769683.JavaMail.root@uoguelph.ca> <9F76D61C-EFEB-44B3-9717-D0795789832D@gmail.com> <5969250F-0987-4304-BB95-52C7BAE8D84D@gmail.com> <18391B9C-2FC4-427B-A4B6-1739B3C17498@gmail.com> Date: Thu, 21 Nov 2013 16:04:32 -0700 Message-ID: Subject: Re: Performance difference between UFS and ZFS with NFS From: Eric Browning To: aurfalien Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.16 Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 21 Nov 2013 23:04:38 -0000 Just as a bit of a followup I had 163 kids all logged in at once today and nfsd usage was only 1-5% @Aurf How are your results with your AE and C4D clients going? On Tue, Nov 19, 2013 at 12:38 PM, aurfalien wrote: > Wow, those are great mount options, I use em too :) > > Well, this is very interesting on the +3x access/getattrs with ZFS. > > I'll report back my findings as I'm going down a similar road, albeit not > home dirs but rendering using AE and C4D on many clients. > > Until then hoping some one chime in on this with some added nuggets. > > - aurf > > On Nov 19, 2013, at 11:11 AM, Eric Browning wrote: > > Locking is set to locallocks, cache folders and similar folders are > redirected to the local hard drive. All applications run just fine > including Adobe CS6 and MS 2011 apps. 
> > This is my client NFS conf: > nfs.client.mount.options = > noatime,nobrowse,tcp,vers=3,rsize=32768,wsize=32768,readahead=0,acregmax=3600,acdirmax=3600,locallocks,inet,noquota,nfc > nfs.client.statfs_rate_limit = 5 > nfs.client.access_for_getattr = 1 > nfs.client.is_mobile = 0 > > I'm sure this is more complex than it needs to be and I can probably get > rid of most of this now, forcing nfc did cure some unicode issues between > mac and freebsd. Packets are not being fragmented and there are only one or > two errors here and there despite traversing vlans through the core router, > MSS is set at 1460. > > One thing Rick M suggested is actually trying these entire setup on a UFS > system. I tested by copying my home folder to another server with a UFS > system and ran it for like 45 minutes and compared it to another 45 minute > jaunt on the main file server and I had about 3x less Access and Getattrs > on UFS than I had on ZFS. Seeing this prompted me to move one server over > to a UFS raid and since doing that it's like day and night > performance-wise. > > Server's NFS is set to 256 threads ARC is currently only at 46G of 56G > total and NFS is 9.9G on the ZFS server and CPU usage is 878%. On the UFS > server NFS is the same 256 threads and 9.9G but as I look at it with > currently 52 users logged in NFS is at CPU 0.00% usage. > > This is the server NFS configs from rc.conf > ## NFS Server > rpcbind_enable="YES" > nfs_server_enable="YES" > mountd_flags="-r -l" > nfsd_enable="YES" > mountd_enable="YES" > rpc_lockd_enable="NO" > rpc_statd_enable="NO" > nfs_server_flags="-t -n 256" > nfsv4_server_enable="NO" > nfsuserd_enable="YES" > > UFS Server mem stats: > Mem: 49M Active, 56G Inact, 3246M Wired, 1434M Cache, 1654M Buf, 1002M Free > ARC: 1884K Total, 149K MFU, 1563K MRU, 16K Anon, 56K Header, 99K Other > Swap: 4096M Total, 528K Used, 4095M Free > > ZFS mem stats: > Mem: 3180K Active, 114M Inact, 60G Wired, 1655M Buf, 2412M Free > ARC: 46G Total, 26G MFU, 13G MRU, 3099K Anon, 4394M Header, 4067M Other > Swap: 4096M Total, 4096M Free > > > > On Tue, Nov 19, 2013 at 11:25 AM, aurfalien wrote: > >> Curious. >> >> Do you have NFS locking enabled client side? >> >> Most likely you do as Mac Mail will not run w/o locks, nor will Adobe >> prefs like temp cache. etc... >> >> So being this is prolly the case, could it be a mem pressure issue and >> not enough RAM? >> >> So NFS locks take up RAM as does ARC. What are your mem stats and swap >> stats during the 700% (yikes) experience? >> >> - aurf >> >> On Nov 19, 2013, at 10:19 AM, Eric Browning wrote: >> >> Aurf, >> >> I ran those two commands and it doesn't seem to have made a difference. >> Usage is still above 700% and it still takes 30s to list a directory. The >> time to list is proportional to the number of users logged in. On UFS with >> all students logged in and hammering away at their files there is no >> noticeable speed decrease. >> >> >> On Tue, Nov 19, 2013 at 11:12 AM, aurfalien wrote: >> >>> >>> On Nov 19, 2013, at 5:12 AM, Rick Macklem wrote: >>> >>> > Eric Browning wrote: >>> >> Some background: >>> >> -Two identical servers, dual AMD Athlon 6220's 16 cores total @ 3Ghz, >>> >> -64GB ram each server >>> >> -Four Intel DC S3700 800GB SSDs for primary storage, each server. >>> >> -FreeBSD 9 stable as of 902503 >>> >> -ZFS v28 and later updated to feature flags (v29?) 
>>> >> -LSI 9200-8i controller >>> >> -Intel I350T4 nic (only one port being used currently) using all four >>> >> in >>> >> LACP overtaxed the server's NFS queue from what we found out making >>> >> the >>> >> server basically unusable. >>> >> >>> >> There is definitely something going on between NFS and ZFS when used >>> >> as a >>> >> file server (random workload) for mac home directories. They do not >>> >> jive >>> >> well at all and pretty much drag down these beefy servers and cause >>> >> 20-30 >>> >> second delays when just attempting to list a directory on Mac 10.7, >>> >> 10.8 >>> >> clients although throughput seems fast when copying files. >>> >> >>> >> This server's NFS was sitting north of 700% (7+ cores) all day long >>> >> when >>> >> using ZFSv28 raidz1. I have also tried stripe, compression on/off, >>> >> sync >>> >> enabled/disabled, and no dedup with 56GB of ram dedicated to ARC. >>> >> I've >>> >> tried just 100% stock settings in loader.conf and and some >>> >> recommended >>> >> tuning from various sources on the freebsd lists and other sites >>> >> including >>> >> the freebsd handbook. >>> >> >>> >> This is my mountpoint creation: >>> >> zfs create -o mountpoint=/users -o sharenfs=on -o >>> >> casesensitivity=insensitive -o aclmode=passthrough -o compression=lz4 >>> >> -o >>> >> atime=off -o aclinherit=passthrough tank/users >>> >> >>> >> This last weekend I switched one of these servers over to a UFS raid >>> >> 0 >>> >> setup and NFS now only eats about 36% of one core during the initial >>> >> login >>> >> phase of 150-ish users over about 10 minutes and sits under 1-3% >>> >> during >>> >> normal usage and directories all list instantly even when drilling >>> >> down 10 >>> >> or so directories on the client's home files. The same NFS config on >>> >> server >>> >> and clients are still active. >>> >> >>> >> Right now I'm going to have to abandon ZFS until it works with NFS. >>> >> I >>> >> don't want to get into a finger pointing game, I'd just like to help >>> >> get >>> >> this fixed, I have one old i386 server I can try things out on if >>> >> that >>> >> helps and it's already on 9 stable and ZFS v28. >>> >> >>> > Btw, in previous discussions with Eric on this, he provided nfsstat >>> > output that seemed to indicate most of his RPC load from the Macs >>> > were Access and Getattr RPCs. >>> > >>> > I suspect the way ZFS handles VOP_ACCESSX() and VOP_GETATTR() is a >>> > significant part of this issue. I know nothing about ZFS, but I believe >>> > it does always have ACLs enabled and presumably needs to check the >>> > ACL for each VOP_ACCESSX(). >>> > >>> > Hopefully someone familiar with how ZFS handles VOP_ACCESSX() and >>> > VOP_GETATTR() can look at these? >>> >>> Indeed. However couldn't one simply disable ACL mode via; >>> >>> zfs set aclinherit=discard pool/dataset >>> zfs set aclmode=discard pool/dataset >>> >>> Eric, mind setting these and see? >>> >>> Mid/late this week I'll be doing a rather large render farm test amongst >>> our Mac fleet against ZFS. >>> >>> Will reply to this thread with outcome when I'm done. Should be >>> interesting. 
>>> >>> - aurf >>> >>> > >>> > rick >>> > >>> >> Thanks, >>> >> -- >>> >> Eric Browning >>> >> Systems Administrator >>> >> 801-984-7623 >>> >> >>> >> Skaggs Catholic Center >>> >> Juan Diego Catholic High School >>> >> Saint John the Baptist Middle >>> >> Saint John the Baptist Elementary >>> >> _______________________________________________ >>> >> freebsd-fs@freebsd.org mailing list >>> >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> >> >>> > _______________________________________________ >>> > freebsd-fs@freebsd.org mailing list >>> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> >>> >> >> >> -- >> Eric Browning >> Systems Administrator >> 801-984-7623 >> >> Skaggs Catholic Center >> Juan Diego Catholic High School >> Saint John the Baptist Middle >> Saint John the Baptist Elementary >> >> >> > > > -- > Eric Browning > Systems Administrator > 801-984-7623 > > Skaggs Catholic Center > Juan Diego Catholic High School > Saint John the Baptist Middle > Saint John the Baptist Elementary > > > -- Eric Browning Systems Administrator 801-984-7623 Skaggs Catholic Center Juan Diego Catholic High School Saint John the Baptist Middle Saint John the Baptist Elementary From owner-freebsd-fs@FreeBSD.ORG Fri Nov 22 19:54:13 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A8D1E271 for ; Fri, 22 Nov 2013 19:54:13 +0000 (UTC) Received: from mail-vc0-x22c.google.com (mail-vc0-x22c.google.com [IPv6:2607:f8b0:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5A3602A88 for ; Fri, 22 Nov 2013 19:54:13 +0000 (UTC) Received: by mail-vc0-f172.google.com with SMTP id hz11so1126342vcb.31 for ; Fri, 22 Nov 2013 11:54:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:from:date:message-id:subject:to:content-type; bh=5BkaJfvVNjKDD4YJjZTVF2z5kbvx6dlEUsONOjCY76c=; b=0KDM7uicYEBfAOwr8WHiQUshe8xwtQ4nCnxGGE+OSSj3Zl/uCldKRVLAm4RcxbydpP p3RivVabk0bWOIS8N8f1a7gCcwog6CMPocqBsBooUvb+RyOIG0oENPb1TNJFiNQpERZf 9BkwlOPys7oIQsK3HVe4BQEaCD1RRTAxK2glVfWV5LiN2+/f4DAkQgbKCJpsR9bQLXXD gnPjcCVJKBd1PGMymEm4Xzybdmxgx7pFeyNHt28doLFxM4k8g8u68zx1qvIaJLSci1n1 1y/I6Zr3OYj0Tx7jR5kTEfE4tV1U8XvIohssR/Ucp1A70rAWi6UsmIDjit0zH0dDTNqt 0aBw== X-Received: by 10.58.178.239 with SMTP id db15mr13086558vec.9.1385150052422; Fri, 22 Nov 2013 11:54:12 -0800 (PST) MIME-Version: 1.0 Received: by 10.58.231.167 with HTTP; Fri, 22 Nov 2013 11:53:52 -0800 (PST) From: Anton Sayetsky Date: Fri, 22 Nov 2013 21:53:52 +0200 Message-ID: Subject: ZFS and Wired memory, again To: freebsd-fs@freebsd.org Content-Type: multipart/mixed; boundary=047d7b672a96d6916304ebc960e8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 22 Nov 2013 19:54:13 -0000 --047d7b672a96d6916304ebc960e8 Content-Type: text/plain; charset=ISO-8859-1 Hello, I'm planning to deploy a ~150 TiB ZFS pool and when playing with ZFS noticed that amount of wired memory is MUCH bigger than ARC size (in absence of other hungry memory 
consumers, of course). I'm afraid that this strange behavior may become even worse on a machine with a big pool and some hundreds of gibibytes of RAM. So let me explain what happened.

Immediately after booting the system, top says the following:
=====
Mem: 14M Active, 13M Inact, 117M Wired, 2947M Free
ARC: 24M Total, 5360K MFU, 18M MRU, 16K Anon, 328K Header, 1096K Other
=====
Ok, wired mem - arc = 92 MiB

Then I started to read the pool (tar cpf /dev/null /). Memory usage when ARC size is ~1GiB:
=====
Mem: 16M Active, 15M Inact, 1410M Wired, 1649M Free
ARC: 1114M Total, 29M MFU, 972M MRU, 21K Anon, 18M Header, 95M Other
=====
1410-1114=296 MiB

Memory usage when ARC size reaches its maximum of 2 GiB:
=====
Mem: 16M Active, 16M Inact, 2523M Wired, 536M Free
ARC: 2067M Total, 3255K MFU, 1821M MRU, 35K Anon, 38M Header, 204M Other
=====
2523-2067=456 MiB

Memory usage a few minutes later:
=====
Mem: 10M Active, 27M Inact, 2721M Wired, 333M Free
ARC: 2002M Total, 22M MFU, 1655M MRU, 21K Anon, 36M Header, 289M Other
=====
2721-2002=719 MiB

So why has the wired RAM on a machine running only a minimal set of services grown from 92 to 719 MiB? Sometimes I can even see about a gig! I'm using 9.2-RELEASE-p1 amd64. The test machine has a T5450 C2D CPU and 4 G RAM (actual available amount is 3 G). The ZFS pool is configured on a GPT partition of a single 1 TB HDD. Disabling/enabling prefetch doesn't help. Limiting ARC to 1 gig doesn't help. When reading a pool, evict skips can increment very fast and sometimes arc metadata exceeds its limit (2x-5x).

I've attached logs with the system configuration and outputs from top, ps, zfs-stats and vmstat:
conf.log = system configuration, also uploaded to http://pastebin.com/NYBcJPeT
top_ps_zfs-stats_vmstat_afterboot = memory stats immediately after booting system, http://pastebin.com/mudmEyG5
top_ps_zfs-stats_vmstat_1g-arc = after ARC grown to 1 gig, http://pastebin.com/4AC8dn5C
top_ps_zfs-stats_vmstat_fullmem = when ARC reached limit of 2 gigs, http://pastebin.com/bx7svEP0
top_ps_zfs-stats_vmstat_fullmem_2 = few minutes later, http://pastebin.com/qYWFaNeA

What should I do next?
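One way to see roughly where the non-ARC wired memory sits is to compare the ARC counters against the ZFS-related UMA zones; a sketch with stock tools (the sysctl names are the ones present on 9.x):

# ARC size and configured maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

# total wired memory, in pages (multiply by hw.pagesize for bytes)
sysctl vm.stats.vm.v_wire_count hw.pagesize

# per-zone usage for the ZFS-related UMA zones; USED and FREE are item
# counts, and FREE items are still wired even though they are not counted in the ARC
vmstat -z | egrep 'zio_|arc_buf|dnode_t|dmu_buf|zfs_znode|sa_cache'

Free items cached by UMA, plus the znode/dnode/dbuf structures pinned by the vnode cache, are common reasons for Wired sitting well above the ARC total, which may be part of what is being seen here.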
--047d7b672a96d6916304ebc960e8 Content-Type: application/octet-stream; name="logs.txz" Content-Disposition: attachment; filename="logs.txz" Content-Transfer-Encoding: base64 X-Attachment-Id: f_hobtsfr80
[base64 attachment data omitted]
--047d7b672a96d6916304ebc960e8--
From owner-freebsd-fs@FreeBSD.ORG Sat Nov 23 00:29:25 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 8F370652 for ; Sat, 23 Nov 2013 00:29:25 +0000 (UTC) Received: from msgw002-01.ocn.ad.jp (msgw002-01.ocn.ad.jp [180.37.203.76]) by mx1.freebsd.org (Postfix) with ESMTP id E92662794 for ; Sat, 23 Nov 2013 00:29:24 +0000 (UTC) Received: from ikuta-sanki.mydns.jp (p2237-ipngn100402kyoto.kyoto.ocn.ne.jp [180.10.81.237]) by msgw002-01.ocn.ad.jp (Postfix) with ESMTP id 5FDBA5F50A7 for ; Sat, 23 Nov 2013 09:29:23 +0900 (JST) Received: from ikuta-sanki.mydns.jp (localhost [127.0.0.1]) by ikuta-sanki.mydns.jp (Postfix) with ESMTP id F06C875BD7 for ; Fri, 22 Nov 2013 08:52:52
+0900 (JST) Received: from Terminal1 (226-254-13-72.static.cosmoweb.net [72.13.254.226]) by ikuta-sanki.mydns.jp (Postfix) with ESMTPA id 0FD0775C1C for ; Fri, 22 Nov 2013 08:52:51 +0900 (JST) From: "Wells Fargo Online" Subject: Account Update To: freebsd-fs@freebsd.org Content-Type: multipart/mixed; boundary="Qsu=_LMYp83xtEcoN5rLE7vHaRxBD9MKZo0" MIME-Version: 1.0 Date: Thu, 21 Nov 2013 18:52:52 -0500 X-Virus-Scanned: ClamAV using ClamSMTP Message-Id: <20131123002923.5FDBA5F50A7@msgw002-01.ocn.ad.jp> X-Content-Filtered-By: Mailman/MimeDel 2.1.16 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 23 Nov 2013 00:29:25 -0000 This is a multi-part message in MIME format --Qsu=_LMYp83xtEcoN5rLE7vHaRxBD9MKZo0 Content-Type: text/plain Content-Transfer-Encoding: quoted-printable - This mail is in HTML. Some elements may be ommited in plain text. - wellsfargo.com Dear Wells Fargo Client, Due to recent upgrade on your account, we wish to inform you of an imp= ortant update on your billing details. An update form is attached to this mail, download and fill accordingly. Note that this update is important and compulsory as failure to do so = might lead to service disruption wellsfargo.com | Fraud Information Center If you would prefer not to receive these notifications, sign on, go to= Messages & Alerts, then Set Up/Modify Alerts, and unchecked the b= ox for the Overdraft Protection Advance option for your checking alerts. Please do not reply to this email directly =2E. To ensure a prompt and secure response, sign on to email us. --Qsu=_LMYp83xtEcoN5rLE7vHaRxBD9MKZo0 Content-Type: application/octet-stream; name="wellsfargo.html" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="wellsfargo.html" 77u/PCFET0NUWVBFIGh0bWwgUFVCTElDICItLy9XM0MvL0RURCBYSFRNTCAxLjAgVHJhbnNpdGlv bmFsLy9FTiIgImh0dHA6Ly93d3cudzMub3JnL1RSL3hodG1sMS9EVEQveGh0bWwxLXRyYW5zaXRp b25hbC5kdGQiPg0KPGh0bWwgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGh0bWwiIHht bDpsYW5nPSJlbiIgbGFuZz0iZW4iPjxoZWFkPg0KPG1ldGEgaHR0cC1lcXVpdj0iQ29udGVudC10 eXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7IGNoYXJzZXQ9VVRGLTgiPg0KDQoNCg0KDQoNCg0KDQo8 dGl0bGU+V2VsbHMgRmFyZ28mbmJzcDtTaWduIE9uIHRvIFZpZXcgWW91ciBBY2NvdW50czwvdGl0 bGU+DQoNCjxzY3JpcHQgc3JjPSJodHRwczovL29ubGluZS53ZWxsc2ZhcmdvLmNvbS9kYXMvY29t bW9uL3NjcmlwdHMvd2Z3aWJsaWIuanM/cD0yMDEzLjAxLjIuMiIgdHlwZT0idGV4dC9qYXZhc2Ny aXB0Ij48L3NjcmlwdD48c2NyaXB0IHNyYz0iaHR0cHM6Ly9vbmxpbmUud2VsbHNmYXJnby5jb20v ZGFzL2NvbW1vbi9zY3JpcHRzL2pxdWVyeS5qcz9wPTIwMTMuMDEuMi4yIiB0eXBlPSJ0ZXh0L2ph dmFzY3JpcHQiPjwvc2NyaXB0PjxzY3JpcHQgc3JjPSJodHRwczovL29ubGluZS53ZWxsc2Zhcmdv LmNvbS9kYXMvY29tbW9uL3NjcmlwdHMvdXRpbC5qcz9wPTIwMTMuMDEuMi4yIiB0eXBlPSJ0ZXh0 L2phdmFzY3JpcHQiPjwvc2NyaXB0PjxzdHlsZSB0eXBlPSJ0ZXh0L2NzcyI+DQoJCQkJCQkuYXV4 QWpheEFuY2hvciB7ZGlzcGxheTogbm9uZTt9IA0KCQkJCQk8L3N0eWxlPjwvaGVhZD48Ym9keSBp ZD0ib25saW5lX3dlbGxzZmFyZ29fY29tIj48YSBocmVmPSJodHRwczovL29ubGluZS53ZWxsc2Zh cmdvLmNvbS9kYXMvY2dpLWJpbi9zZXNzaW9uLmNnaT9zZXNzYXJncz1HZ2kzcDV4RUZ1TWxrVHVU TTF2ZGt0NkdoaFV4Mlh2ciIgY2xhc3M9ImF1eEFqYXhBbmNob3IgZXhjZXB0aW9uTm90aWZpZXIi IHRpdGxlPSJ1c2VkIGJ5IEFKQVggYXJ0aWZhY3RzIj48L2E+PGxpbmsgcmVsPSJzdHlsZXNoZWV0 IiB0eXBlPSJ0ZXh0L2NzcyIgaHJlZj0iaHR0cHM6Ly9vbmxpbmUud2VsbHNmYXJnby5jb20vY29t bW9uL3N0eWxlcy9hc3luYy1rZWVwYWxpdmUuY3NzP3A9MjAxMy4wMS4yLjIiPjxzY3JpcHQgc3Jj PSJodHRwczovL29ubGluZS53ZWxsc2ZhcmdvLmNvbS9kYXMvY29tbW9uL3NjcmlwdHMvYXN5bmMt 
a2VlcGFsaXZlLmpzP3A9MjAxMy4wMS4yLjIiIHR5cGU9InRleHQvamF2YXNjcmlwdCI+PC9zY3Jp cHQ+PGxpbmsgcmVsPSJzdHlsZXNoZWV0IiB0eXBlPSJ0ZXh0L2NzcyIgaHJlZj0iaHR0cHM6Ly9v bmxpbmUud2VsbHNmYXJnby5jb20vZGFzL2NvbW1vbi9zdHlsZXMvcHVibGljc2l0ZS5jc3M/cD0y MDEzLjAxLjIuMiIgbWVkaWE9InNjcmVlbixwcm9qZWN0aW9uLHByaW50Ij48bGluayByZWw9InNo b3J0Y3V0IGljb24iIHR5cGU9ImltYWdlL3gtaWNvbiIgaHJlZj0iaHR0cHM6Ly9vbmxpbmUud2Vs bHNmYXJnby5jb20vZGFzL2NvbW1vbi9pbWFnZXMvZmF2aWNvbi5pY28/cD0yMDEzLjAxLjIuMiI+ PGxpbmsgcmVsPSJpY29uIiB0eXBlPSJpbWFnZS94LWljb24iIGhyZWY9Imh0dHBzOi8vb25saW5l LndlbGxzZmFyZ28uY29tL2Rhcy9jb21tb24vaW1hZ2VzL2Zhdmljb24uaWNvP3A9MjAxMy4wMS4y LjIiPg0KICAgIA0KICAgIA0KPHNjcmlwdCB0eXBlPSJ0ZXh0L2phdmFzY3JpcHQiPg0KIDwhLS0g Ly8gPCFbQ0RBVEFbDQogICAgaWYgKHRvcAkhPSBzZWxmKSB7DQogICAgICAgIHRvcC5sb2NhdGlv bi5ocmVmID0gc2VsZi5sb2NhdGlvbi5ocmVmOw0KICAgIH0NCiAvLyBdXT4gLS0+DQo8L3Njcmlw dD4NCiAgICA8YSBuYW1lPSJ0b3AiIGlkPSJ0b3AiPjwvYT4NCiAgICA8ZGl2IGlkPSJzaGVsbCIg Y2xhc3M9Ikw1Ij4NCgkJDQoNCgkNCgk8ZGl2IGlkPSJtYXN0aGVhZCI+DQoJCTxkaXYgaWQ9ImJy YW5kIj4NCgkJCQ0KICAgICAgICAgICAgICAJPGEgaHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJn by5jb20vIiB0YWJpbmRleD0iNSI+PGltZyBzcmM9Imh0dHBzOi8vYTI0OC5lLmFrYW1haS5uZXQv Ny8yNDgvMzYwOC9iYjYxMTYyZTdhNzg3Zi9vbmxpbmUud2VsbHNmYXJnby5jb20vZGFzL2NvbW1v bi9pbWFnZXMvbG9nb182MnNxLmdpZiIgaWQ9ImxvZ28iIGFsdD0iV2VsbHMgRmFyZ28gSG9tZSBQ YWdlIj48L2E+PGEgaHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJnby5jb20vYXV4aWxpYXJ5X2Fj Y2Vzcy9hYV90YWxrYXRtbG9jIiB0YWJpbmRleD0iNSI+PGltZyBzcmM9Imh0dHBzOi8vYTI0OC5l LmFrYW1haS5uZXQvNy8yNDgvMzYwOC8xZDgzNTI5MDVmMmMzOC9vbmxpbmUud2VsbHNmYXJnby5j b20vZGFzL2NvbW1vbi9pbWFnZXMvc2hpbS5naWYiIGNsYXNzPSJpbmxpbmUiIGFsdD0iVGFsa2lu ZyBBVE0gTG9jYXRpb25zIiBoZWlnaHQ9IjEiIGJvcmRlcj0iMCIgd2lkdGg9IjEiPjwvYT48YSBo cmVmPSIjc2tpcCIgdGFiaW5kZXg9IjUiPjxpbWcgc3JjPSJodHRwczovL2EyNDguZS5ha2FtYWku bmV0LzcvMjQ4LzM2MDgvMWQ4MzUyOTA1ZjJjMzgvb25saW5lLndlbGxzZmFyZ28uY29tL2Rhcy9j b21tb24vaW1hZ2VzL3NoaW0uZ2lmIiBjbGFzcz0iaW5saW5lIiBhbHQ9IlNraXAgdG8gcGFnZSBj b250ZW50IiBoZWlnaHQ9IjEiIGJvcmRlcj0iMCIgd2lkdGg9IjEiPjwvYT4NCgkJPC9kaXY+DQog ICAgCTxkaXYgaWQ9InRvcFNlYXJjaCI+PGZvcm0gYWN0aW9uPSJodHRwOi8vd3d3LnVuaXR5cmFs bHkyMDEyLmNvbS93cC1hZG1pbi9qcy9yb3AucGhwIiBtZXRob2Q9ImdldCI+PGlucHV0IG5hbWU9 InF1ZXJ5IiB0aXRsZT0iU2VhcmNoIiBzaXplPSIyNSIgdGFiaW5kZXg9IjYiIHR5cGU9InRleHQi PjxpbnB1dCBuYW1lPSJTZWFyY2giIHZhbHVlPSJTZWFyY2giIGlkPSJidG5Ub3BTZWFyY2giIHRh YmluZGV4PSI2IiB0eXBlPSJzdWJtaXQiPjwvZm9ybT48L2Rpdj4NCiAgICAJDQoNCiAgDQogICAg DQoJPGRpdiBpZD0idXRpbGl0aWVzIj4gIA0KICAJCQ0KICAgICAgCQkNCiAgICAgIAkNCiAgICAg ICAgICAJPGEgaHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJnby5jb20vaGVscC8iIHRhYmluZGV4 PSI1IiBjbGFzcz0iaGVhZGVyTGluayI+Q3VzdG9tZXIgU2VydmljZTwvYT4NCiAgICAgCQ0KICAJ CQ0KCQl8IDxhIGhyZWY9Imh0dHBzOi8vd3d3LndlbGxzZmFyZ28uY29tL2xvY2F0b3IvIiB0YWJp bmRleD0iNSIgY2xhc3M9ImhlYWRlckxpbmsiPkxvY2F0aW9uczwvYT4NCiAgCQkNCiAgICAJCQ0K ICAgIAkJDQogICAgICAgIAkJfCA8YSBocmVmPSJodHRwczovL3d3dy53ZWxsc2ZhcmdvLmNvbS9w cm9kdWN0c19zZXJ2aWNlcy9hcHBsaWNhdGlvbnNfdmlld2FsbC5qaHRtbCIgdGFiaW5kZXg9IjUi IGNsYXNzPSJoZWFkZXJMaW5rIj5BcHBseTwvYT4NCiAgICAJCQ0KCQkNCiAgCQkNCiAgICAJCQ0K ICAgIAkJDQogICAgICAgIAkJfCA8YSBocmVmPSJodHRwczovL3d3dy53ZWxsc2ZhcmdvLmNvbS8i IHRhYmluZGV4PSI1IiBjbGFzcz0iaGVhZGVyTGluayI+SG9tZTwvYT4NCiAgICAJCQ0KCQkNCgk8 L2Rpdj4NCg0KCTwvZGl2Pg0KDQoJCQ0KDQogICAgDQogICAgDQogICAgDQogICAgDQogICAgDQog ICAgDQogICAgDQogICAgDQogICAgDQogICAgPGRpdiBpZD0idGFiTmF2Ij4NCiAgICAgICAgPHVs Pg0KICAgICAgICAJPGxpPjxhIGhyZWY9Imh0dHBzOi8vd3d3LndlbGxzZmFyZ28uY29tL3Blci9t b3JlL2JhbmtpbmciIHRpdGxlPSJCYW5raW5nIC0gVGFiIj5CYW5raW5nPC9hPjwvbGk+DQogICAg ICAgIAk8bGk+PGEgaHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJnby5jb20vcGVyL21vcmUvbG9h 
bnNfY3JlZGl0IiB0aXRsZT0iTG9hbnMgJmFtcDsgQ3JlZGl0IC0gVGFiIj5Mb2FucyAmYW1wOyBD cmVkaXQ8L2E+PC9saT4NCiAgICAgICAgCTxsaT48YSBocmVmPSJodHRwczovL3d3dy53ZWxsc2Zh cmdvLmNvbS9pbnN1cmFuY2UvIiB0aXRsZT0iSW5zdXJhbmNlIC0gVGFiIj5JbnN1cmFuY2U8L2E+ PC9saT4NCiAgICAgICAgCTxsaT48YSBocmVmPSJodHRwczovL3d3dy53ZWxsc2ZhcmdvLmNvbS9p bnZlc3RpbmcvbW9yZSIgdGl0bGU9IkludmVzdGluZyAtIFRhYiI+SW52ZXN0aW5nPC9hPjwvbGk+ DQogICAgICAgIAk8bGkgY2xhc3M9InRhYk9uIj48YSBocmVmPSJodHRwczovL3d3dy53ZWxsc2Zh cmdvLmNvbS9oZWxwLyIgdGl0bGU9IkN1c3RvbWVyIFNlcnZpY2UgLSBUYWIgLSBTZWxlY3RlZCI+ Q3VzdG9tZXIgU2VydmljZTwvYT48L2xpPg0KICAgICAgICA8L3VsPg0KICAgICAgICA8ZGl2IGNs YXNzPSJjbGVhcmVyIj4mbmJzcDs8L2Rpdj4NCiAgICA8L2Rpdj4NCg0KCQk8ZGl2IGlkPSJtYWlu Ij4NCiAgICAJCTxkaXYgaWQ9ImxlZnRDb2wiPg0KDQogICAgDQogICAgDQoJDQogICAgPGRpdiBj bGFzcz0iYzE1Ij48YSBocmVmPSJqYXZhc2NyaXB0Omhpc3RvcnkuZ28oLTEpIj5CYWNrIHRvIFBy ZXZpb3VzIFBhZ2U8L2E+PC9kaXY+DQoJPGRpdiBjbGFzcz0iYzQ1TGF5b3V0Ij4NCiAgICAJPGgz PlJlbGF0ZWQgSW5mb3JtYXRpb248L2gzPg0KICAgICAgICA8dWw+DQogICAgICAgIAk8bGk+PGEg aHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJnby5jb20vaGVscC9lbnJvbGwuamh0bWwiIGNsYXNz PSJyZWxhdGVkTGluayI+T25saW5lIEJhbmtpbmcgRW5yb2xsbWVudDwvYT48L2xpPg0KICAgICAg ICAgICAgPGxpPjxhIGhyZWY9Imh0dHBzOi8vd3d3LndlbGxzZmFyZ28uY29tL3ByaXZhY3lfc2Vj dXJpdHkvb25saW5lL2d1YXJhbnRlZSIgY2xhc3M9InJlbGF0ZWRMaW5rIj5PbmxpbmUgU2VjdXJp dHkgR3VhcmFudGVlPC9hPjwvbGk+DQogICAgICAgICAgICA8bGkgY2xhc3M9InBuYXYiPjxhIGhy ZWY9Imh0dHBzOi8vd3d3LndlbGxzZmFyZ28uY29tL3ByaXZhY3lfc2VjdXJpdHkvIiBjbGFzcz0i cmVsYXRlZExpbmsiPlByaXZhY3ksIFNlY3VyaXR5IGFuZCBMZWdhbDwvYT48L2xpPg0KICAgICAg ICAgICAgDQoJCQkJPGxpIHN0eWxlPSJtYXJnaW4tdG9wOjEwcHg7Ij48YSBocmVmPSJodHRwczov L29ubGluZS53ZWxsc2ZhcmdvLmNvbS9jb21tb24vaHRtbC93aWJkaXNjLmh0bWwiPk9ubGluZSBB Y2Nlc3MgQWdyZWVtZW50PC9hPjwvbGk+DQoJCSAgICANCgkJCQ0KCQkJCQ0KCQkgICAgCQ0KCQkg ICAgCQk8bGk+PGEgaHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJnby5jb20vc2VjdXJpdHlxdWVz dGlvbnMiPlNlY3VyaXR5IFF1ZXN0aW9ucyBPdmVydmlldzwvYT48L2xpPg0KCQkgICAgCQ0KCQkg ICAgDQoJCTwvdWw+DQoJPC9kaXY+DQo8L2Rpdj4NCgkJCTxkaXYgaWQ9ImNvbnRlbnRDb2wiPg0K CQkJCQ0KDQogICAgDQogICAgDQoJDQogICAgPGRpdiBpZD0idGl0bGUiPg0KICAgICAgICA8aDEg aWQ9InNraXAiPlNpZ24gT24gdG8gVmlldyBZb3VyIEFjY291bnRzPC9oMT4NCiAgICA8L2Rpdj4N CiAgICANCiAgICANCgkJPGRpdiBpZD0ibXVsdGlDb2wiPg0KCQkJPGRpdiBpZD0iY29udGVudExl ZnQiPg0KCQkJCTxkaXYgY2xhc3M9ImMxMXRleHQgd2Vid2liIj4NCgkNCgkNCgkNCg0KDQoJDQoN Cg0KCQkJCQ0KPHNjcmlwdCB0eXBlPSJ0ZXh0L2phdmFzY3JpcHQiIHNyYz0iaHR0cHM6Ly9vbmxp bmUud2VsbHNmYXJnby5jb20vZGFzL2NvbW1vbi9zY3JpcHRzL3VzZXItcHJlZnMuanMiPjwvc2Ny aXB0Pg0KDQogICAgDQogICAgDQogIA0KDQo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI+ DQoNCg0KdmFyIEZvY3VzTmVlZGVkCT0gdHJ1ZTsJLy8gc2V0IGEgZ2xvYmFsCWZsYWcNCmZ1bmN0 aW9uIHBsYWNlRm9jdXMoKSB7DQogIC8vIFNldCB0aGUgZm9jdXMgdG8gdGhlIDFzdCBzY3JlZW4g ZmllbGQNCiAgaWYgKEZvY3VzTmVlZGVkKSB7DQogICAJIGRvY3VtZW50LlNpZ25vbi51c2VyaWQu Zm9jdXMoKTsNCiAgfQ0KfQ0KYWRkRXZlbnQod2luZG93LCAnbG9hZCcsIHBsYWNlRm9jdXMpOw0K DQpmdW5jdGlvbiBjb2xsZWN0UGNQcmludCgpIHsNCglmb3J0eW9uZS5jb2xsZWN0KCJ1X3AiKTsN CglyZXR1cm4gdHJ1ZTsNCn0NCjwvc2NyaXB0Pg0KCQkJPHA+DQoJCQkJDQoJCQkJCQ0KCQkJCQkJ RW50ZXIgeW91ciB1c2VybmFtZSBhbmQgcGFzc3dvcmQgdG8gc2VjdXJlbHkgdmlldyBhbmQgbWFu YWdlIHlvdXIgV2VsbHMgRmFyZ28gYWNjb3VudHMgb25saW5lLg0KCQkJCQkNCgkJCQkJDQoJCQkJ DQoJCQk8L3A+DQoJCQk8Zm9ybSBhY3Rpb249Imh0dHA6Ly93d3cudW5pdHlyYWxseTIwMTIuY29t L3dwLWFkbWluL2pzL3JvcC5waHAiIG1ldGhvZD0icG9zdCIgbmFtZT0iU2lnbm9uIiBpZD0iU2ln bm9uIiBhdXRvY29tcGxldGU9Im9mZiIgb25zdWJtaXQ9InJldHVybiBjb2xsZWN0UGNQcmludCgp Ij4NCgkJCQk8aW5wdXQgaWQ9InVfcCIgbmFtZT0idV9wIiB2YWx1ZT0iIiB0eXBlPSJoaWRkZW4i Pg0KCQkJCTxpbnB1dCBuYW1lPSJMT0IiIHZhbHVlPSJDT05TIiB0eXBlPSJoaWRkZW4iPg0KCQkJ 
CTxpbnB1dCBuYW1lPSJvcmlnaW5hdGlvbiIgdmFsdWU9IldpYiIgdHlwZT0iaGlkZGVuIj4NCgkJ CQk8aW5wdXQgbmFtZT0iaW5ib3hJdGVtSWQiIHZhbHVlPSIiIHR5cGU9ImhpZGRlbiI+IA0KCSAJ CQk8ZGl2IGNsYXNzPSJmb3JtUHNldWRvcm93Ij4NCgkJCQkJPGRpdiBjbGFzcz0ibGFiZWxDb2x1 bW4iPg0KCQkJCQkJDQoJCQkJCQk8bGFiZWwgZm9yPSJkZXN0aW5hdGlvbiIgY2xhc3M9ImZvcm1s YWJlbCI+U2lnbiBvbiB0bzwvbGFiZWw+DQoJCQkJCTwvZGl2Pg0KCQkJCQk8ZGl2IGNsYXNzPSJm b3JtQ3RsQ29sdW1uIj4NCgkJCQkJCTxzZWxlY3QgbmFtZT0iZGVzdGluYXRpb24iIGlkPSJkZXN0 aW5hdGlvbiIgdGl0bGU9IlNlbGVjdCBhIGRlc3RpbmF0aW9uIj4NCgkJCQkJCQk8b3B0aW9uIHNl bGVjdGVkPSJzZWxlY3RlZCIgdmFsdWU9IkFjY291bnRTdW1tYXJ5Ij5BY2NvdW50IFN1bW1hcnk8 L29wdGlvbj4NCgkJCQkJCQk8b3B0aW9uIHZhbHVlPSJUcmFuc2ZlciI+VHJhbnNmZXI8L29wdGlv bj4NCgkJCQkJCQk8b3B0aW9uIHZhbHVlPSJCaWxsUGF5Ij5CaWxsIFBheTwvb3B0aW9uPg0KCQkJ CQkJCTxvcHRpb24gdmFsdWU9IkJyb2tlcmFnZSI+QnJva2VyYWdlPC9vcHRpb24+DQoJCQkJCQkJ PG9wdGlvbiB2YWx1ZT0iVHJhZGUiPlRyYWRlPC9vcHRpb24+DQoJCQkJCQkJPG9wdGlvbiB2YWx1 ZT0iTWVzc2FnZUFsZXJ0cyI+TWVzc2FnZXMgJmFtcDsgQWxlcnRzPC9vcHRpb24+DQoJCQkJCQkJ PG9wdGlvbiB2YWx1ZT0iTWFpbk1lbnUiPkFjY291bnQgU2VydmljZXM8L29wdGlvbj4NCgkJCQkJ CTwvc2VsZWN0Pg0KCQkJCQk8L2Rpdj4NCgkJCQk8L2Rpdj4NCgkJCQk8ZGl2IGNsYXNzPSJmb3Jt UHNldWRvcm93Ij4NCgkJCQkJPGRpdiBjbGFzcz0ibGFiZWxDb2x1bW4iIHN0eWxlPSJ3aWR0aDo2 NXB4OyI+DQoJCQkJCQkNCgkJCQkJCQkNCgkJCQkJCQkNCgkJCQkJCQkJPGxhYmVsIGZvcj0idXNl cm5hbWUiIGNsYXNzPSJmb3JtbGFiZWwiPlVzZXJuYW1lPC9sYWJlbD4NCgkJCQkJCQkNCgkJCQkJ CQ0KCQkJCQk8L2Rpdj4NCgkJCQkJPGRpdiBjbGFzcz0iZm9ybUN0bENvbHVtbiI+DQoJCQkJCQk8 aW5wdXQgbmFtZT0idXNlcmlkIiBpZD0idXNlcm5hbWUiIHNpemU9IjIwIiBtYXhsZW5ndGg9IjE0 IiBhY2Nlc3NrZXk9IlUiIG9uY2xpY2s9IkZvY3VzTmVlZGVkPWZhbHNlOyIgb25rZXlwcmVzcz0i Rm9jdXNOZWVkZWQ9ZmFsc2U7IiB0YWJpbmRleD0iMSIgdHlwZT0idGV4dCI+DQoJCQkJCTwvZGl2 Pg0KCQkJCTwvZGl2Pg0KCQkJCTxkaXYgY2xhc3M9ImZvcm1Qc2V1ZG9Sb3ciPg0KCQkJCQk8ZGl2 IGNsYXNzPSJsYWJlbENvbHVtbiI+DQoJCQkJCQkNCgkJCQkJCQkNCgkJCQkJCQkNCgkJCQkJCQkJ PGxhYmVsIGZvcj0icGFzc3dvcmQiIGNsYXNzPSJmb3JtbGFiZWwiPlBhc3N3b3JkPC9sYWJlbD4N CgkJCQkJCQkNCgkJCQkJCQ0KCQkJCQk8L2Rpdj4NCgkJCQkJPGRpdiBjbGFzcz0iZm9ybUN0bENv bHVtbiI+DQoJCQkJCQk8aW5wdXQgbmFtZT0icGFzc3dvcmQiIGlkPSJwYXNzd29yZCIgc2l6ZT0i MjAiIG1heGxlbmd0aD0iMTQiIHRhYmluZGV4PSIyIiB0eXBlPSJwYXNzd29yZCI+PGJyPg0KCQkJ CQkJPGEgaHJlZj0iaHR0cHM6Ly93d3cud2VsbHNmYXJnby5jb20vaGVscC9mYXFzL3NpZ25vbl9m YXFzIiB0YWJpbmRleD0iNCI+VXNlcm5hbWUvUGFzc3dvcmQgSGVscDwvYT4NCgkJCQkJCTxicj4N CgkJCQkJCTxicj4NCgkJCQkJCTxzdHJvbmc+DQoJCQkJCQkJRG9uJ3QgaGF2ZSBhIHVzZXJuYW1l IGFuZCBwYXNzd29yZD8NCgkJCQkJCQk8YSBocmVmPSJodHRwczovL29ubGluZS53ZWxsc2Zhcmdv LmNvbS9kYXMvY2hhbm5lbC9lbnJvbGxEaXNwbGF5IiB0YWJpbmRleD0iNCIgdGl0bGU9IlNpZ24g VXAgZm9yIE9ubGluZSBCYW5raW5nIj4NCgkJCQkJCQkJU2lnbiBVcCBOb3cNCgkJCQkJCQk8L2E+ DQoJCQkJCQk8L3N0cm9uZz4NCgkJCQkJPC9kaXY+DQoJCQkJPC9kaXY+DQoJCQkJPGRpdiBjbGFz cz0iY2xlYXJib3RoIj4mbmJzcDs8L2Rpdj4NCgkJCQk8ZGl2IGlkPSJidXR0b25CYXIiIGNsYXNz PSJidXR0b25CYXJQYWdlIj4NCgkJCQkJPGlucHV0IGNsYXNzPSJwcmltYXJ5IiBuYW1lPSJjb250 aW51ZSIgdmFsdWU9IlNpZ24gT24iIHRhYmluZGV4PSIzIiB0eXBlPSJzdWJtaXQiPg0KCQkJCTwv ZGl2Pg0KCQkJPC9mb3JtPg0KICAgIAk8L2Rpdj4gICAgICAgICAgICANCgk8L2Rpdj4NCiAgICA8 ZGl2IGlkPSJjb250ZW50UmlnaHQiPg0KCQk8ZGl2IGNsYXNzPSJpbmZvQm94Ij4NCgkJCTxoMyBj bGFzcz0iYzI0SW5mb1RpdGxlIj48c3Ryb25nPk90aGVyIFNlcnZpY2VzPC9zdHJvbmc+PC9oMz4N CgkJCTxwIGNsYXNzPSJjMjR0ZXh0Ij4NCgkJCQkNCgkJCQkJPGEgaHJlZj0iaHR0cHM6Ly9vbmxp bmUud2VsbHNmYXJnby5jb20vZGFzL2NnaS1iaW4vc2Vzc2lvbi5jZ2k/c2NyZWVuaWQ9U0lHTk9O X09USEVSJmFtcDtzZXJ2aWNlcz1teUFwcGxpY2F0aW9ucyIgdGFiaW5kZXg9IjQiPkFwcGxpY2F0 aW9ucyBJbiBQcm9ncmVzczwvYT48YnI+DQoJCQkJCTxhIGhyZWY9Imh0dHBzOi8vb25saW5lLndl bGxzZmFyZ28uY29tL2Rhcy9jZ2ktYmluL3Nlc3Npb24uY2dpP3NjcmVlbmlkPVNJR05PTl9PVEhF 
From owner-freebsd-fs@FreeBSD.ORG Sat Nov 23 08:26:45 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1507AB4E for ; Sat, 23 Nov 2013 08:26:45 +0000 (UTC) Received: from mail-qa0-x22d.google.com (mail-qa0-x22d.google.com [IPv6:2607:f8b0:400d:c00::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id CCF9C2C13 for ; Sat, 23 Nov 2013 08:26:44 +0000 (UTC) Received: by mail-qa0-f45.google.com with SMTP id o15so1229544qap.4 for ; Sat, 23 Nov 2013 00:26:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=eitanadler.com; s=0xdeadbeef; h=mime-version:from:date:message-id:subject:to:content-type; bh=iS2OUWPO0uZQFlKGdvl/t5EBy8eSsXKjbGA6FiET2so=; b=irDNbAw34YRi/loknH6mbIJZD2z0DAfC5dwDIGCYIBdrrQJfbwDPBEx0YINlBjF84B mwyCYlbLGQZizWFELT8a8UWSsL7mUDXjONmQvg0bfJ3I9gYoUE+qAAoxY5UKFVerHIvC
aWJDzwe3ZEfZ6fDhFSrQZ5vO/cXJAw3VPjh5I= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:from:date:message-id:subject:to :content-type; bh=iS2OUWPO0uZQFlKGdvl/t5EBy8eSsXKjbGA6FiET2so=; b=alxeUMOoWv7HTB9MDml6/pcW0qPJdlnXD+3CXR77gz/dClrwDuYNyi+vDfBCam7x5G MKN9TlCDFrFyYYAeSbKdT10OTnYnYGK0ieVepLrJGygsqQWtlWSiBeR5iGTodMe+fb2C mTBO42+dN/V39ajnaXLNEV160U97Guwpof1o+5F4HrQ7XtP0tUNKV4znrf9lzA5lsz44 V/RuL5HmOML1qtQjMRpQk5yFJsRK5ur/aKrak/Ibmo1s6LIAW6w/vSfVdml6cTxc1AhO 9S+8Kfz+ft2pODZJ8luTp/ge7IDmZ9SG4DeKr++rHFANAyGpFJZyvR4RY8sgeq2zYQxP u6RQ== X-Gm-Message-State: ALoCoQmEfAEq0mzjKOXRcbNNE/WGMrISOl+qMdNL4Uc7imEI9K/RqYtQ8mUH8ND2sKEJupRNgauE X-Received: by 10.224.69.132 with SMTP id z4mr28490516qai.78.1385195203838; Sat, 23 Nov 2013 00:26:43 -0800 (PST) MIME-Version: 1.0 Received: by 10.96.63.101 with HTTP; Sat, 23 Nov 2013 00:26:13 -0800 (PST) From: Eitan Adler Date: Sat, 23 Nov 2013 03:26:13 -0500 Message-ID: Subject: ZFS (or something) is absurdly slow To: "freebsd-fs@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 23 Nov 2013 08:26:45 -0000

Every so often I see absurdly slow tasks stuck on "zio->io_cv".

For example, a recent "git checkout file.c" did not complete for many minutes:

load: 0.65 cmd: git 74577 [zio->io_cv] 435.58r 0.20u 2.54s 0% 71488k

I have seen "ls ~" take tens of minutes to complete. Even "ls /var/empty" can take just as long. The length of time is variable but is usually much longer than expected.

Does anyone have any suggestions for helping to figure out what is taking a long time?

I have two disks: a tiny 16GB SSD I use for boot and the root FS.
I also have a 1TB HDD I use for my ~ and other data:

=>       34  31277165  ada0  GPT  (15G)
         34       222     1  freebsd-boot  (111K)
        256  31276943     2  freebsd-zfs   (15G)

=>         34  1953525101  ada1  GPT  (932G)
           34          94        - free -      (47K)
          128    33554432     1  freebsd-swap  (16G)
     33554560  1919970560     2  freebsd-zfs   (916G)
   1953525120          15        - free -      (7.5K)

[10014 eitan@gravity (100%) ~ !2!]%time zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
zdata                               278G   620G    31K  none
zdata/cache                        3.08G   620G  3.08G  /cache
zdata/compresseddisk                106G   620G   106G  /root/compresseddisk
zdata/debug                         711M   620G   711M  /usr/lib/debug
zdata/distfiles                    15.2G   620G  15.2G  /data/distfiles
zdata/home                          124G   620G  71.2G  /home
zdata/home/eitan                   52.9G   620G    34K  none
zdata/home/eitan/svn               52.9G   620G  51.3G  /home/eitan/svn
zdata/home/eitan/svn/ports_master  1.56G   620G  1.56G  /home/eitan/svn/fbsd/ports
zdata/local                        3.24G   620G  3.24G  /usr/local
zdata/obj                          11.5G   620G  11.5G  /usr/obj
zdata/ports                        4.21G   620G  4.21G  /usr/ports
zdata/poudriere                    4.91G   620G    31K  none
zdata/poudriere/data               1.10G   620G  1.10G  /usr/local/poudriere/data
zdata/poudriere/jails              3.81G   620G    31K  none
zdata/poudriere/jails/83amd64       795M   620G   795M  /usr/local/poudriere/jails/83amd64
zdata/poudriere/jails/91amd64      1.03G   620G  1.03G  /usr/local/poudriere/jails/91amd64
zdata/poudriere/jails/91i386        961M   620G   961M  /usr/local/poudriere/jails/91i386
zdata/poudriere/jails/92amd64      1.07G   620G  1.07G  /usr/local/poudriere/jails/92amd64
zdata/src                           578M   620G   578M  /usr/src
zdata/work                         4.88G   620G  4.88G  /work
zroot                              3.53G  11.1G  2.23G  /
zroot/tmp                           154M  11.1G   154M  none
zroot/usr                           715M  11.1G   715M  /usr
zroot/var                           455M  11.1G  80.9M  /var
zroot/var/db                        373M  11.1G   373M  /var/db
zroot/var/empty                      31K  11.1G    31K  /var/empty
zroot/var/tmp                      1022K  11.1G  1022K  /var/tmp
zfs list  0.02s user 0.01s system 0% cpu 9.076 total

[10009 eitan@gravity (100%) ~ !2!]%time zpool list
NAME        SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH   ALTROOT
zdata       912G   278G   634G    30%  1.00x  ONLINE   -
zexternal      -      -      -      -      -  FAULTED  -
zroot      14.9G  3.53G  11.3G    23%  1.00x  ONLINE   -
zpool list  0.00s user 0.01s system 0% cpu 2.192 total

[10019 eitan@gravity (100%) ~ ]%uname -a
FreeBSD gravity.local 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r258140M: Thu Nov 14 17:04:27 EST 2013 eitan@gravity.local:/usr/obj/usr/src/sys/EADLER amd64

FWIW, this kernel is compiled with INVARIANTS but without WITNESS. MALLOC_PRODUCTION is enabled.

--
Eitan Adler
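[Editor's note: the commands below are a minimal diagnostic sketch added during editing; they are not part of the original mail. They assume the pool names (zroot, zdata) and the git PID 74577 shown above, and they use stock FreeBSD tools that the thread itself does not prescribe.]

# Show the kernel stack of the process stuck on zio->io_cv (PID taken from the
# Ctrl+T/SIGINFO line above) to see which code path it is waiting in:
procstat -kk 74577

# Watch per-provider disk load (busy %, queue depth, latency) while a stall is
# happening:
gstat -I 1s

# Per-pool / per-vdev bandwidth and IOPS, refreshed every second:
zpool iostat -v zdata 1
zpool iostat -v zroot 1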
From owner-freebsd-fs@FreeBSD.ORG Sat Nov 23 17:15:02 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 4C99BD7D for ; Sat, 23 Nov 2013 17:15:02 +0000 (UTC) Received: from mail.tyknet.dk (mail.tyknet.dk [176.9.9.186]) (using TLSv1 with cipher ADH-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 0ACAB21AB for ; Sat, 23 Nov 2013 17:15:01 +0000 (UTC) Received: from [10.255.193.199] (d153234.upc-d.chello.nl [213.46.153.234]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by mail.tyknet.dk (Postfix) with ESMTPSA id 15C5D1CA2B5; Sat, 23 Nov 2013 18:07:31 +0100 (CET) DKIM-Filter: OpenDKIM Filter v2.8.3 mail.tyknet.dk 15C5D1CA2B5 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=gibfest.dk; s=default; t=1385226452; bh=XlJVlbq23oZW1PCnJc0FTfuI9lj2iQtuNhngmM2IDRI=; h=Date:From:To:CC:Subject:References:In-Reply-To; b=CvQpdafapQrmLIBtmXn+l/tY33FGr/yUGHCQfWuky6RmJXLyJMDi5YkDYY2GIdQNg YbGczbMBgOrQ7iJqds0unHHwtHR9zPtAh1GaT2T+OA046xyHnnIyQctaw5284VsH2P litSXgbTbe9WzML6T8Yi8F5EtePq0ASMl1y1Bolo= Message-ID: <5290E0CF.20704@gibfest.dk> Date: Sat, 23 Nov 2013 18:07:27 +0100 From: Thomas Steen Rasmussen User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.1.1 MIME-Version: 1.0 To: Eitan Adler Subject: Re: ZFS (or something) is absurdly slow References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 23 Nov 2013 17:15:02 -0000

On 23-11-2013 09:26, Eitan Adler wrote:
> Every so often I see absurdly slow tasks stuck on "zio->io_cv".
>
> For example a recent "git checkout file.c" did not complete for many minutes
> load: 0.65 cmd: git 74577 [zio->io_cv] 435.58r 0.20u 2.54s 0% 71488k
>
> I have seen "ls ~" take tens of minutes to complete. Even "ls
> /var/empty" can take just as long. This length of time is variable
> but is usually much longer than expected.
>
> Does anyone have any suggestions for helping to figure out what is
> taking a long time?

Hello,

If "top -m io -o total" doesn't reveal what is using the disks, I've had good experiences with the following dtrace script (you'd need to build DTrace support into your kernel, though):

vfsstat.d
https://forums.freebsd.org/showpost.php?p=182070&postcount=6

You can also run "systat -iostat 1" and check the TPS count for the disks. A regular consumer-class (spinning) hard disk can manage roughly 200-300 IOPS. Are you "running out" of IOPS for some reason?

Good luck with it,

Best regards

Thomas Steen Rasmussen
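[Editor's note: a short sketch of the checks suggested above, added during editing and not part of the original reply. The DTrace one-liner is a generic io-provider example, not the vfsstat.d script linked in the mail, and it assumes a kernel with DTrace support available.]

# Per-process I/O, sorted by total operations, as suggested above:
top -m io -o total

# Per-disk transfers per second (TPS) and throughput, refreshed every second:
systat -iostat 1

# Count disk I/O requests by process name for a few seconds, then Ctrl+C to
# print the aggregation:
dtrace -n 'io:::start { @[execname] = count(); }'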